diff --git "a/database/tawos/tfidf/MESOS_tfidf-se.csv" "b/database/tawos/tfidf/MESOS_tfidf-se.csv" new file mode 100644--- /dev/null +++ "b/database/tawos/tfidf/MESOS_tfidf-se.csv" @@ -0,0 +1,1514 @@ +"issuekey","created","storypoint","context","codesnippet","t_Epic","t_Improvement","t_Bug","t_Task","t_Documentation","t_Story","t_Wish","c_agent","c_documentation","c_allocation","c_test","c_containerization","c_build","c_python.api","c_master","c_framework","c_webui","c_libprocess","c_statistics","c_reviewbot","c_replicated.log","c_stout","c_json.api","c_scheduler.driver","c_security","c_c...api","c_fetcher","c_java.api","c_docker","c_leader.election","c_modules","c_cmake","c_project.website","c_HTTP.API","c_cli","c_flaky","c_storage","c_network","c_executor","c_scheduler.api","c_provisioner","c_release","c_image.gc","c_gpu","c_cni","c_resource.provider","c_metrics" +"MESOS-336","01/18/2013 19:19:36",5,"Mesos slave should cache executors ""The slave should be smarter about how it handles pulling down executors. In our environment, executors rarely change but the slave will always pull it down from regardless HDFS. This puts undue stress on our HDFS clusters, and is not resilient to reduced HDFS availability.""","",1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-564","07/17/2013 20:41:59",3,"Update Contribution Documentation ""Our contribution guide is currently fairly verbose, and it focuses on the ReviewBoard workflow for making code contributions. It would be helpful for new contributors to have a first-time contribution guide which focuses on using GitHub PRs to make small contributions, since that workflow has a smaller barrier to entry for new users.""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-621","08/06/2013 23:49:54",5,"`HierarchicalAllocatorProcess::removeSlave` doesn't properly handle framework allocations/resources ""Currently a slaveRemoved() simply removes the slave from 'slaves' map and slave's resources from 'roleSorter'. Looking at resourcesRecovered(), more things need to be done when a slave is removed (e.g., framework unallocations). It would be nice to fix this and have a test for this.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-752","10/18/2013 23:09:37",1,"SlaveRecoveryTest/0.ReconcileTasksMissingFromSlave test is flaky ""[ RUN ] SlaveRecoveryTest/0.ReconcileTasksMissingFromSlave Checkpointing executor's forked pid 32281 to '/tmp/SlaveRecoveryTest_0_ReconcileTasksMissingFromSlave_NT1btb/meta/slaves/201310151913-16777343-35153-31491-0/frameworks/201310151913-16777343-35153-31491-0000/executors/0514b52f-3c17-4ee5-ba16-635198701ca2/runs/97c9e2cc-ceea-40a8-a915-aed5fed1dcb3/pids/forked.pid' Fetching resources into '/tmp/SlaveRecoveryTest_0_ReconcileTasksMissingFromSlave_NT1btb/slaves/201310151913-16777343-35153-31491-0/frameworks/201310151913-16777343-35153-31491-0000/executors/0514b52f-3c17-4ee5-ba16-635198701ca2/runs/97c9e2cc-ceea-40a8-a915-aed5fed1dcb3' Registered executor on localhost.localdomain Starting task 0514b52f-3c17-4ee5-ba16-635198701ca2 Forked command at 32317 sh -c 'sleep 10' tests/slave_recovery_tests.cpp:1927: Failure Mock function called more times than expected - returning directly. 
Function call: statusUpdate(0x7fffae636eb0, @0x7f1590027a00 64-byte object ) Expected: to be called once Actual: called twice - over-saturated and active Command exited with status 0 (pid: 32317) ""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-830","11/20/2013 23:14:32",8,"ExamplesTest.JavaFramework is flaky ""Identify the cause of the following test failure: [ RUN ] ExamplesTest.JavaFramework Using temporary directory '/tmp/ExamplesTest_JavaFramework_wSc7u8' Enabling authentication for the framework I1120 15:13:39.820032 1681264640 master.cpp:285] Master started on 172.25.133.171:52576 I1120 15:13:39.820180 1681264640 master.cpp:299] Master ID: 201311201513-2877626796-52576-3234 I1120 15:13:39.820194 1681264640 master.cpp:302] Master only allowing authenticated frameworks to register! I1120 15:13:39.821197 1679654912 slave.cpp:112] Slave started on 1)@172.25.133.171:52576 I1120 15:13:39.821795 1679654912 slave.cpp:212] Slave resources: cpus(*):4; mem(*):7168; disk(*):481998; ports(*):[31000-32000] I1120 15:13:39.822855 1682337792 slave.cpp:112] Slave started on 2)@172.25.133.171:52576 I1120 15:13:39.823652 1682337792 slave.cpp:212] Slave resources: cpus(*):4; mem(*):7168; disk(*):481998; ports(*):[31000-32000] I1120 15:13:39.825330 1679118336 master.cpp:744] The newly elected leader is master@172.25.133.171:52576 I1120 15:13:39.825445 1679118336 master.cpp:748] Elected as the leading master! I1120 15:13:39.825907 1681264640 state.cpp:33] Recovering state from '/tmp/ExamplesTest_JavaFramework_wSc7u8/0/meta' I1120 15:13:39.826127 1681264640 status_update_manager.cpp:180] Recovering status update manager I1120 15:13:39.826331 1681801216 process_isolator.cpp:317] Recovering isolator I1120 15:13:39.826738 1682874368 slave.cpp:2743] Finished recovery I1120 15:13:39.827747 1682337792 state.cpp:33] Recovering state from '/tmp/ExamplesTest_JavaFramework_wSc7u8/1/meta' I1120 15:13:39.827945 1680191488 slave.cpp:112] Slave started on 3)@172.25.133.171:52576 I1120 15:13:39.828415 1682337792 status_update_manager.cpp:180] Recovering status update manager I1120 15:13:39.828608 1680728064 sched.cpp:260] Authenticating with master master@172.25.133.171:52576 I1120 15:13:39.828606 1680191488 slave.cpp:212] Slave resources: cpus(*):4; mem(*):7168; disk(*):481998; ports(*):[31000-32000] I1120 15:13:39.828680 1682874368 slave.cpp:497] New master detected at master@172.25.133.171:52576 I1120 15:13:39.828765 1682337792 process_isolator.cpp:317] Recovering isolator I1120 15:13:39.829828 1680728064 sched.cpp:229] Detecting new master I1120 15:13:39.830288 1679654912 authenticatee.hpp:100] Initializing client SASL I1120 15:13:39.831635 1680191488 state.cpp:33] Recovering state from '/tmp/ExamplesTest_JavaFramework_wSc7u8/2/meta' I1120 15:13:39.831991 1679118336 status_update_manager.cpp:158] New master detected at master@172.25.133.171:52576 I1120 15:13:39.832042 1682874368 slave.cpp:524] Detecting new master I1120 15:13:39.832314 1682337792 slave.cpp:2743] Finished recovery I1120 15:13:39.832309 1681264640 master.cpp:1266] Attempting to register slave on vkone.local at slave(1)@172.25.133.171:52576 I1120 15:13:39.832929 1680728064 status_update_manager.cpp:180] Recovering status update manager I1120 15:13:39.833371 1681801216 slave.cpp:497] New master detected at master@172.25.133.171:52576 I1120 15:13:39.833273 1681264640 master.cpp:2513] Adding slave 201311201513-2877626796-52576-3234-0 at vkone.local with cpus(*):4; mem(*):7168; disk(*):481998; 
ports(*):[31000-32000] I1120 15:13:39.833595 1680728064 process_isolator.cpp:317] Recovering isolator I1120 15:13:39.833859 1681801216 slave.cpp:524] Detecting new master I1120 15:13:39.833861 1682874368 status_update_manager.cpp:158] New master detected at master@172.25.133.171:52576 I1120 15:13:39.834092 1680191488 slave.cpp:542] Registered with master master@172.25.133.171:52576; given slave ID 201311201513-2877626796-52576-3234-0 I1120 15:13:39.834486 1681264640 master.cpp:1266] Attempting to register slave on vkone.local at slave(2)@172.25.133.171:52576 I1120 15:13:39.834549 1681264640 master.cpp:2513] Adding slave 201311201513-2877626796-52576-3234-1 at vkone.local with cpus(*):4; mem(*):7168; disk(*):481998; ports(*):[31000-32000] I1120 15:13:39.834750 1680191488 slave.cpp:555] Checkpointing SlaveInfo to '/tmp/ExamplesTest_JavaFramework_wSc7u8/0/meta/slaves/201311201513-2877626796-52576-3234-0/slave.info' I1120 15:13:39.834875 1682874368 hierarchical_allocator_process.hpp:445] Added slave 201311201513-2877626796-52576-3234-0 (vkone.local) with cpus(*):4; mem(*):7168; disk(*):481998; ports(*):[31000-32000] (and cpus(*):4; mem(*):7168; disk(*):481998; ports(*):[31000-32000] available) I1120 15:13:39.835155 1680728064 slave.cpp:542] Registered with master master@172.25.133.171:52576; given slave ID 201311201513-2877626796-52576-3234-1 I1120 15:13:39.835458 1679118336 slave.cpp:2743] Finished recovery I1120 15:13:39.835739 1680728064 slave.cpp:555] Checkpointing SlaveInfo to '/tmp/ExamplesTest_JavaFramework_wSc7u8/1/meta/slaves/201311201513-2877626796-52576-3234-1/slave.info' I1120 15:13:39.835922 1682874368 hierarchical_allocator_process.hpp:445] Added slave 201311201513-2877626796-52576-3234-1 (vkone.local) with cpus(*):4; mem(*):7168; disk(*):481998; ports(*):[31000-32000] (and cpus(*):4; mem(*):7168; disk(*):481998; ports(*):[31000-32000] available) I1120 15:13:39.836120 1681264640 slave.cpp:497] New master detected at master@172.25.133.171:52576 I1120 15:13:39.836340 1679118336 status_update_manager.cpp:158] New master detected at master@172.25.133.171:52576 I1120 15:13:39.836436 1681264640 slave.cpp:524] Detecting new master I1120 15:13:39.836629 1682874368 master.cpp:1266] Attempting to register slave on vkone.local at slave(3)@172.25.133.171:52576 I1120 15:13:39.836653 1682874368 master.cpp:2513] Adding slave 201311201513-2877626796-52576-3234-2 at vkone.local with cpus(*):4; mem(*):7168; disk(*):481998; ports(*):[31000-32000] I1120 15:13:39.836804 1680728064 slave.cpp:542] Registered with master master@172.25.133.171:52576; given slave ID 201311201513-2877626796-52576-3234-2 I1120 15:13:39.837190 1680728064 slave.cpp:555] Checkpointing SlaveInfo to '/tmp/ExamplesTest_JavaFramework_wSc7u8/2/meta/slaves/201311201513-2877626796-52576-3234-2/slave.info' I1120 15:13:39.837569 1682874368 hierarchical_allocator_process.hpp:445] Added slave 201311201513-2877626796-52576-3234-2 (vkone.local) with cpus(*):4; mem(*):7168; disk(*):481998; ports(*):[31000-32000] (and cpus(*):4; mem(*):7168; disk(*):481998; ports(*):[31000-32000] available) I1120 15:13:39.852011 1679654912 authenticatee.hpp:124] Creating new client SASL connection I1120 15:13:39.852219 1680191488 master.cpp:1734] Authenticating framework at scheduler(1)@172.25.133.171:52576 I1120 15:13:39.852577 1682337792 authenticator.hpp:83] Initializing server SASL I1120 15:13:39.856160 1682337792 authenticator.hpp:140] Creating new server SASL connection I1120 15:13:39.856334 1681264640 authenticatee.hpp:212] Received SASL 
authentication mechanisms: CRAM-MD5 I1120 15:13:39.856360 1681264640 authenticatee.hpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I1120 15:13:39.856421 1681264640 authenticator.hpp:243] Received SASL authentication start I1120 15:13:39.856487 1681264640 authenticator.hpp:325] Authentication requires more steps I1120 15:13:39.856531 1681264640 authenticatee.hpp:258] Received SASL authentication step I1120 15:13:39.856576 1681264640 authenticator.hpp:271] Received SASL authentication step I1120 15:13:39.856643 1681264640 authenticator.hpp:317] Authentication success I1120 15:13:39.856724 1681264640 authenticatee.hpp:298] Authentication success I1120 15:13:39.856768 1681264640 master.cpp:1774] Successfully authenticated framework at scheduler(1)@172.25.133.171:52576 I1120 15:13:39.857028 1681264640 sched.cpp:334] Successfully authenticated with master master@172.25.133.171:52576 I1120 15:13:39.857139 1681264640 master.cpp:798] Received registration request from scheduler(1)@172.25.133.171:52576 I1120 15:13:39.857306 1681264640 master.cpp:816] Registering framework 201311201513-2877626796-52576-3234-0000 at scheduler(1)@172.25.133.171:52576 I1120 15:13:39.862296 1680191488 hierarchical_allocator_process.hpp:332] Added framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:39.863867 1680191488 master.cpp:1700] Sending 3 offers to framework 201311201513-2877626796-52576-3234-0000 Registered! ID = 201311201513-2877626796-52576-3234-0000 Launching task 0 Launching task 1 Launching task 2 I1120 15:13:39.905390 1680191488 master.cpp:2026] Processing reply for offer 201311201513-2877626796-52576-3234-0 on slave 201311201513-2877626796-52576-3234-1 (vkone.local) for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:39.905825 1680191488 master.hpp:400] Adding task 0 with resources cpus(*):1; mem(*):128 on slave 201311201513-2877626796-52576-3234-1 (vkone.local) I1120 15:13:39.905886 1680191488 master.cpp:2150] Launching task 0 of framework 201311201513-2877626796-52576-3234-0000 with resources cpus(*):1; mem(*):128 on slave 201311201513-2877626796-52576-3234-1 (vkone.local) I1120 15:13:39.906422 1680191488 master.cpp:2026] Processing reply for offer 201311201513-2877626796-52576-3234-1 on slave 201311201513-2877626796-52576-3234-2 (vkone.local) for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:39.906664 1680191488 master.hpp:400] Adding task 1 with resources cpus(*):1; mem(*):128 on slave 201311201513-2877626796-52576-3234-2 (vkone.local) I1120 15:13:39.906721 1680191488 master.cpp:2150] Launching task 1 of framework 201311201513-2877626796-52576-3234-0000 with resources cpus(*):1; mem(*):128 on slave 201311201513-2877626796-52576-3234-2 (vkone.local) I1120 15:13:39.907171 1680191488 master.cpp:2026] Processing reply for offer 201311201513-2877626796-52576-3234-2 on slave 201311201513-2877626796-52576-3234-0 (vkone.local) for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:39.907419 1680191488 master.hpp:400] Adding task 2 with resources cpus(*):1; mem(*):128 on slave 201311201513-2877626796-52576-3234-0 (vkone.local) I1120 15:13:39.907480 1680191488 master.cpp:2150] Launching task 2 of framework 201311201513-2877626796-52576-3234-0000 with resources cpus(*):1; mem(*):128 on slave 201311201513-2877626796-52576-3234-0 (vkone.local) I1120 15:13:39.907938 1680191488 slave.cpp:722] Got assigned task 0 for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:39.908473 1680191488 slave.cpp:833] Launching task 0 for framework 
201311201513-2877626796-52576-3234-0000 I1120 15:13:39.914427 1682874368 slave.cpp:722] Got assigned task 1 for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:39.914594 1680728064 slave.cpp:722] Got assigned task 2 for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:39.914844 1681801216 hierarchical_allocator_process.hpp:590] Framework 201311201513-2877626796-52576-3234-0000 filtered slave 201311201513-2877626796-52576-3234-1 for 1secs I1120 15:13:39.915292 1682874368 slave.cpp:833] Launching task 1 for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:39.915424 1681801216 hierarchical_allocator_process.hpp:590] Framework 201311201513-2877626796-52576-3234-0000 filtered slave 201311201513-2877626796-52576-3234-2 for 1secs I1120 15:13:39.915685 1681801216 hierarchical_allocator_process.hpp:590] Framework 201311201513-2877626796-52576-3234-0000 filtered slave 201311201513-2877626796-52576-3234-0 for 1secs I1120 15:13:39.915828 1680728064 slave.cpp:833] Launching task 2 for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:39.917840 1680191488 slave.cpp:943] Queuing task '0' for executor default of framework '201311201513-2877626796-52576-3234-0000 I1120 15:13:39.917935 1679118336 process_isolator.cpp:100] Launching default (/Users/vinod/workspace/apache/mesos/build/src/examples/java/test-executor) in /tmp/ExamplesTest_JavaFramework_wSc7u8/1/slaves/201311201513-2877626796-52576-3234-1/frameworks/201311201513-2877626796-52576-3234-0000/executors/default/runs/375b31a9-7093-4db1-964d-e6b425b1e4b4 with resources ' for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:39.922019 1679118336 process_isolator.cpp:163] Forked executor at 3268 I1120 15:13:39.922703 1679118336 slave.cpp:2073] Monitoring executor default of framework 201311201513-2877626796-52576-3234-0000 forked at pid 3268 I1120 15:13:39.929134 1682874368 slave.cpp:943] Queuing task '1' for executor default of framework '201311201513-2877626796-52576-3234-0000 I1120 15:13:39.929323 1682874368 process_isolator.cpp:100] Launching default (/Users/vinod/workspace/apache/mesos/build/src/examples/java/test-executor) in /tmp/ExamplesTest_JavaFramework_wSc7u8/2/slaves/201311201513-2877626796-52576-3234-2/frameworks/201311201513-2877626796-52576-3234-0000/executors/default/runs/2bd0e75d-a2b9-4ae6-be08-9782612309a5 with resources ' for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:39.931243 1682874368 process_isolator.cpp:163] Forked executor at 3269 I1120 15:13:39.931612 1681801216 slave.cpp:2073] Monitoring executor default of framework 201311201513-2877626796-52576-3234-0000 forked at pid 3269 E1120 15:13:39.931836 1681801216 slave.cpp:2099] Failed to watch executor default of framework 201311201513-2877626796-52576-3234-0000: Already watched I1120 15:13:39.936460 1680728064 slave.cpp:943] Queuing task '2' for executor default of framework '201311201513-2877626796-52576-3234-0000 I1120 15:13:39.936619 1681801216 process_isolator.cpp:100] Launching default (/Users/vinod/workspace/apache/mesos/build/src/examples/java/test-executor) in /tmp/ExamplesTest_JavaFramework_wSc7u8/0/slaves/201311201513-2877626796-52576-3234-0/frameworks/201311201513-2877626796-52576-3234-0000/executors/default/runs/16d600da-da86-4614-91cb-58a7b27ab534 with resources ' for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:39.941299 1681801216 process_isolator.cpp:163] Forked executor at 3270 I1120 15:13:39.942179 1681801216 slave.cpp:2073] Monitoring executor default of framework 
201311201513-2877626796-52576-3234-0000 forked at pid 3270 E1120 15:13:39.942395 1681801216 slave.cpp:2099] Failed to watch executor default of framework 201311201513-2877626796-52576-3234-0000: Already watched Fetching resources into '/tmp/ExamplesTest_JavaFramework_wSc7u8/2/slaves/201311201513-2877626796-52576-3234-2/frameworks/201311201513-2877626796-52576-3234-0000/executors/default/runs/2bd0e75d-a2b9-4ae6-be08-9782612309a5' Fetching resources into '/tmp/ExamplesTest_JavaFramework_wSc7u8/1/slaves/201311201513-2877626796-52576-3234-1/frameworks/201311201513-2877626796-52576-3234-0000/executors/default/runs/375b31a9-7093-4db1-964d-e6b425b1e4b4' Fetching resources into '/tmp/ExamplesTest_JavaFramework_wSc7u8/0/slaves/201311201513-2877626796-52576-3234-0/frameworks/201311201513-2877626796-52576-3234-0000/executors/default/runs/16d600da-da86-4614-91cb-58a7b27ab534' I1120 15:13:40.372573 1681801216 slave.cpp:1406] Got registration for executor 'default' of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.373258 1681801216 slave.cpp:1527] Flushing queued task 1 for executor 'default' of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.388317 1681801216 slave.cpp:1406] Got registration for executor 'default' of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.388983 1681801216 slave.cpp:1527] Flushing queued task 0 for executor 'default' of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.398084 1679654912 slave.cpp:1406] Got registration for executor 'default' of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.399344 1679654912 slave.cpp:1527] Flushing queued task 2 for executor 'default' of framework 201311201513-2877626796-52576-3234-0000 Registered executor on vkone.local I1120 15:13:40.491843 1679654912 slave.cpp:1740] Handling status update TASK_RUNNING (UUID: f04b1852-3669-444a-906f-3675f784c14f) for task 1 of framework 201311201513-2877626796-52576-3234-0000 from executor(1)@172.25.133.171:52577 I1120 15:13:40.492202 1679654912 status_update_manager.cpp:305] Received status update TASK_RUNNING (UUID: f04b1852-3669-444a-906f-3675f784c14f) for task 1 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.492424 1679654912 status_update_manager.cpp:356] Forwarding status update TASK_RUNNING (UUID: f04b1852-3669-444a-906f-3675f784c14f) for task 1 of framework 201311201513-2877626796-52576-3234-0000 to master@172.25.133.171:52576 Registered executor on vkone.local I1120 15:13:40.492671 1682337792 master.cpp:1452] Status update TASK_RUNNING (UUID: f04b1852-3669-444a-906f-3675f784c14f) for task 1 of framework 201311201513-2877626796-52576-3234-0000 from slave(3)@172.25.133.171:52576 I1120 15:13:40.492735 1682337792 slave.cpp:1865] Sending acknowledgement for status update TASK_RUNNING (UUID: f04b1852-3669-444a-906f-3675f784c14f) for task 1 of framework 201311201513-2877626796-52576-3234-0000 to executor(1)@172.25.133.171:52577 Status update: task 1 is in state TASK_RUNNING I1120 15:13:40.502235 1679654912 status_update_manager.cpp:380] Received status update acknowledgement (UUID: f04b1852-3669-444a-906f-3675f784c14f) for task 1 of framework 201311201513-2877626796-52576-3234-0000 Registered executor on vkone.local I1120 15:13:40.531292 1679654912 slave.cpp:1740] Handling status update TASK_RUNNING (UUID: c19b6a5a-19ce-4613-8a5a-08fe807ff27c) for task 2 of framework 201311201513-2877626796-52576-3234-0000 from executor(1)@172.25.133.171:52579 I1120 15:13:40.532091 1680728064 
status_update_manager.cpp:305] Received status update TASK_RUNNING (UUID: c19b6a5a-19ce-4613-8a5a-08fe807ff27c) for task 2 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.532305 1680728064 status_update_manager.cpp:356] Forwarding status update TASK_RUNNING (UUID: c19b6a5a-19ce-4613-8a5a-08fe807ff27c) for task 2 of framework 201311201513-2877626796-52576-3234-0000 to master@172.25.133.171:52576 I1120 15:13:40.532776 1682874368 slave.cpp:1865] Sending acknowledgement for status update TASK_RUNNING (UUID: c19b6a5a-19ce-4613-8a5a-08fe807ff27c) for task 2 of framework 201311201513-2877626796-52576-3234-0000 to executor(1)@172.25.133.171:52579 I1120 15:13:40.532951 1681801216 master.cpp:1452] Status update TASK_RUNNING (UUID: c19b6a5a-19ce-4613-8a5a-08fe807ff27c) for task 2 of framework 201311201513-2877626796-52576-3234-0000 from slave(1)@172.25.133.171:52576 Status update: task 2 is in state TASK_RUNNING I1120 15:13:40.538895 1682874368 status_update_manager.cpp:380] Received status update acknowledgement (UUID: c19b6a5a-19ce-4613-8a5a-08fe807ff27c) for task 2 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.541267 1682874368 slave.cpp:1740] Handling status update TASK_RUNNING (UUID: c218b0c3-d77c-4901-8570-391c330ba117) for task 0 of framework 201311201513-2877626796-52576-3234-0000 from executor(1)@172.25.133.171:52578 I1120 15:13:40.541555 1682874368 status_update_manager.cpp:305] Received status update TASK_RUNNING (UUID: c218b0c3-d77c-4901-8570-391c330ba117) for task 0 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.541725 1682874368 status_update_manager.cpp:356] Forwarding status update TASK_RUNNING (UUID: c218b0c3-d77c-4901-8570-391c330ba117) for task 0 of framework 201311201513-2877626796-52576-3234-0000 to master@172.25.133.171:52576 I1120 15:13:40.542196 1682874368 master.cpp:1452] Status update TASK_RUNNING (UUID: c218b0c3-d77c-4901-8570-391c330ba117) for task 0 of framework 201311201513-2877626796-52576-3234-0000 from slave(2)@172.25.133.171:52576 I1120 15:13:40.542251 1682874368 slave.cpp:1865] Sending acknowledgement for status update TASK_RUNNING (UUID: c218b0c3-d77c-4901-8570-391c330ba117) for task 0 of framework 201311201513-2877626796-52576-3234-0000 to executor(1)@172.25.133.171:52578 Status update: task 0 is in state TASK_RUNNING I1120 15:13:40.545537 1682874368 status_update_manager.cpp:380] Received status update acknowledgement (UUID: c218b0c3-d77c-4901-8570-391c330ba117) for task 0 of framework 201311201513-2877626796-52576-3234-0000 Running task value: """"1"""" I1120 15:13:40.764219 1682337792 slave.cpp:1740] Handling status update TASK_FINISHED (UUID: 4a163594-146a-46f7-bd43-f906e76ad84c) for task 1 of framework 201311201513-2877626796-52576-3234-0000 from executor(1)@172.25.133.171:52577 I1120 15:13:40.764629 1682337792 status_update_manager.cpp:305] Received status update TASK_FINISHED (UUID: 4a163594-146a-46f7-bd43-f906e76ad84c) for task 1 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.764698 1682337792 status_update_manager.cpp:356] Forwarding status update TASK_FINISHED (UUID: 4a163594-146a-46f7-bd43-f906e76ad84c) for task 1 of framework 201311201513-2877626796-52576-3234-0000 to master@172.25.133.171:52576 I1120 15:13:40.765043 1682337792 master.cpp:1452] Status update TASK_FINISHED (UUID: 4a163594-146a-46f7-bd43-f906e76ad84c) for task 1 of framework 201311201513-2877626796-52576-3234-0000 from slave(3)@172.25.133.171:52576 I1120 15:13:40.765192 1682337792 master.hpp:418] Removing 
task 1 with resources cpus(*):1; mem(*):128 on slave 2Status update: task 1 is in state TASK_FINISHED Finished tasks: 1 01311201513-2877626796-52576-3234-2 (vkone.local) I1120 15:13:40.765363 1679118336 slave.cpp:1865] Sending acknowledgement for status update TASK_FINISHED (UUID: 4a163594-146a-46f7-bd43-f906e76ad84c) for task 1 of framework 201311201513-2877626796-52576-3234-0000 to executor(1)@172.25.133.171:52577 I1120 15:13:40.772738 1682337792 hierarchical_allocator_process.hpp:637] Recovered cpus(*):1; mem(*):128 (total allocatable: cpus(*):4; mem(*):7168; disk(*):481998; ports(*):[31000-32000]) on slave 201311201513-2877626796-52576-3234-2 from framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.773190 1679118336 status_update_manager.cpp:380] Received status update acknowledgement (UUID: 4a163594-146a-46f7-bd43-f906e76ad84c) for task 1 of framework 201311201513-2877626796-52576-3234-0000 Running task value: """"0"""" Running task value: """"2"""" I1120 15:13:40.790068 1679118336 slave.cpp:1740] Handling status update TASK_FINISHED (UUID: 13265a94-50f1-4bc4-b2e2-9f60a1bb4086) for task 0 of framework 201311201513-2877626796-52576-3234-0000 from executor(1)@172.25.133.171:52578 I1120 15:13:40.790411 1680728064 status_update_manager.cpp:305] Received status update TASK_FINISHED (UUID: 13265a94-50f1-4bc4-b2e2-9f60a1bb4086) for task 0 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.790493 1680728064 status_update_manager.cpp:356] Forwarding status update TASK_FINISHED (UUID: 13265a94-50f1-4bc4-b2e2-9f60a1bb4086) for task 0 of framework 201311201513-2877626796-52576-3234-0000 to master@172.25.133.171:52576 I1120 15:13:40.790674 1679118336 master.cpp:1452] Status update TASK_FINISHED (UUID: 13265a94-50f1-4bc4-b2e2-9f60a1bb4086) for task 0 of framework 201311201513-2877626796-52576-3234-0000 from slave(2)@172.25.133.171:52576 I1120 15:13:40.790798 1679118336 master.hpp:418] Removing task 0 with resources cpus(*):1; mem(*):128 on slave 201311201513-2877626796-52576-3234-1 (vkone.local) I1120 15:13:40.790928 1679118336 slave.cpp:1865] Sending acknowledgement for status update TASK_FINISHED (UUID: 13265a94-50f1-4bc4-b2e2-9f60a1bb4086) for task 0 of framework 201311201513-2877626796-52576-3234-0000 to executor(1)@172.25.133.171:52578 Status update: task 0 is in state TASK_FINISHED Finished tasks: 2 I1120 15:13:40.791225 1680191488 hierarchical_allocator_process.hpp:637] Recovered cpus(*):1; mem(*):128 (total allocatable: cpus(*):4; mem(*):7168; disk(*):481998; ports(*):[31000-32000]) on slave 201311201513-2877626796-52576-3234-1 from framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.794234 1679118336 status_update_manager.cpp:380] Received status update acknowledgement (UUID: 13265a94-50f1-4bc4-b2e2-9f60a1bb4086) for task 0 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.795830 1681801216 slave.cpp:1740] Handling status update TASK_FINISHED (UUID: f6f28e88-5ea6-4519-ba92-65ead6236fff) for task 2 of framework 201311201513-2877626796-52576-3234-0000 from executor(1)@172.25.133.171:52579 I1120 15:13:40.796111 1679118336 status_update_manager.cpp:305] Received status update TASK_FINISHED (UUID: f6f28e88-5ea6-4519-ba92-65ead6236fff) for task 2 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.796182 1679118336 status_update_manager.cpp:356] Forwarding status update TASK_FINISHED (UUID: f6f28e88-5ea6-4519-ba92-65ead6236fff) for task 2 of framework 201311201513-2877626796-52576-3234-0000 to 
master@172.25.133.171:52576 I1120 15:13:40.796352 1680728064 master.cpp:1452] Status update TASK_FINISHED (UUID: f6f28e88-5ea6-4519-ba92-65ead6236fff) for task 2 of framework 201311201513-2877626796-52576-3234-0000 from slave(1)@172.25.133.171:52576 I1120 15:13:40.796398 1679118336 slave.cpp:1865] Sending acknowledgement for status update TASK_FINISHED (UUID: f6f28e88-5ea6-4519-ba92-65ead6236fff) for task 2 of framework 201311201513-2877626796-52576-3234-0000 to executor(1)@172.25.133.171:52579 I1120 15:13:40.796466 1680728064 master.hpp:418] Removing task 2 with resources cpus(*):1; mem(*):128 on slave 201311201513-2877626796-52576-3234-0 (vkone.local) I1120 15:13:40.796707 1679118336 hierarchical_allocator_process.hpp:637] Recovered cpus(*):1; mem(*):128 (total allocatable: cpus(*):4; mem(*):7168; disk(*):481998; ports(*):[31000-32000]) on slave 201311201513-2877626796-52576-3234-0 from framework 201311201513-2877626796-52576-3234-0000 Status update: task 2 is in state TASK_FINISHED Finished tasks: 3 I1120 15:13:40.797384 1680728064 status_update_manager.cpp:380] Received status update acknowledgement (UUID: f6f28e88-5ea6-4519-ba92-65ead6236fff) for task 2 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.824383 1681801216 master.cpp:1700] Sending 3 offers to framework 201311201513-2877626796-52576-3234-0000 Launching task 3 Launching task 4 I1120 15:13:40.826971 1679118336 master.cpp:2026] Processing reply for offer 201311201513-2877626796-52576-3234-3 on slave 201311201513-2877626796-52576-3234-1 (vkone.local) for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.827268 1679118336 master.hpp:400] Adding task 3 with resources cpus(*):1; mem(*):128 on slave 201311201513-2877626796-52576-3234-1 (vkone.local) I1120 15:13:40.827348 1679118336 master.cpp:2150] Launching task 3 of framework 201311201513-2877626796-52576-3234-0000 with resources cpus(*):1; mem(*):128 on slave 201311201513-2877626796-52576-3234-1 (vkone.local) I1120 15:13:40.827487 1680728064 slave.cpp:722] Got assigned task 3 for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.827857 1680728064 slave.cpp:833] Launching task 3 for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.827913 1679118336 master.cpp:2026] Processing reply for offer 201311201513-2877626796-52576-3234-4 on slave 201311201513-2877626796-52576-3234-2 (vkone.local) for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.827986 1680728064 slave.cpp:968] Sending task '3' to executor 'default' of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.828126 1679118336 master.hpp:400] Adding task 4 with resources cpus(*):1; mem(*):128 on slave 201311201513-2877626796-52576-3234-2 (vkone.local) I1120 15:13:40.828187 1679118336 master.cpp:2150] Launching task 4 of framework 201311201513-2877626796-52576-3234-0000 with resources cpus(*):1; mem(*):128 on slave 201311201513-2877626796-52576-3234-2 (vkone.local) I1120 15:13:40.828632 1679118336 master.cpp:2026] Processing reply for offer 201311201513-2877626796-52576-3234-5 on slave 201311201513-2877626796-52576-3234-0 (vkone.local) for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.828655 1680728064 hierarchical_allocator_process.hpp:590] Framework 201311201513-2877626796-52576-3234-0000 filtered slave 201311201513-2877626796-52576-3234-1 for 1secs I1120 15:13:40.829005 1679118336 slave.cpp:722] Got assigned task 4 for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.829027 1680728064 
hierarchical_allocator_process.hpp:590] Framework 201311201513-2877626796-52576-3234-0000 filtered slave 201311201513-2877626796-52576-3234-2 for 1secs I1120 15:13:40.829260 1679118336 slave.cpp:833] Launching task 4 for framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.829273 1680728064 hierarchical_allocator_process.hpp:590] Framework 201311201513-2877626796-52576-3234-0000 filtered slave 201311201513-2877626796-52576-3234-0 for 1secs I1120 15:13:40.829390 1679118336 slave.cpp:968] Sending task '4' to executor 'default' of framework 201311201513-2877626796-52576-3234-0000 Running task value: """"3"""" Running task value: """"4"""" I1120 15:13:40.839279 1682337792 slave.cpp:1740] Handling status update TASK_RUNNING (UUID: a8d02ae6-3138-441c-a004-465d879b1277) for task 3 of framework 201311201513-2877626796-52576-3234-0000 from executor(1)@172.25.133.171:52578 I1120 15:13:40.839534 1679118336 status_update_manager.cpp:305] Received status update TASK_RUNNING (UUID: a8d02ae6-3138-441c-a004-465d879b1277) for task 3 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.839705 1679118336 status_update_manager.cpp:356] Forwarding status update TASK_RUNNING (UUID: a8d02ae6-3138-441c-a004-465d879b1277) for task 3 of framework 201311201513-2877626796-52576-3234-0000 to master@172.25.133.171:52576 I1120 15:13:40.839944 1682337792 master.cpp:1452] Status update TASK_RUNNING (UUID: a8d02ae6-3138-441c-a004-465d879b1277) for task 3 of framework 201311201513-2877626796-52576-3234-0000 from slave(2)@172.25.133.171:52576 Status update: task 3 is in state TASK_RUNNING I1120 15:13:40.839947 1679118336 slave.cpp:1865] Sending acknowledgement for status update TASK_RUNNING (UUID: a8d02ae6-3138-441c-a004-465d879b1277) for task 3 of framework 201311201513-2877626796-52576-3234-0000 to executor(1)@172.25.133.171:52578 I1120 15:13:40.856334 1679118336 slave.cpp:1740] Handling status update TASK_FINISHED (UUID: 3fc45cb8-fd7f-4bed-a21d-76f234af6b36) for task 3 of framework 201311201513-2877626796-52576-3234-0000 from executor(1)@172.25.133.171:52578 I1120 15:13:40.856650 1679118336 status_update_manager.cpp:380] Received status update acknowledgement (UUID: a8d02ae6-3138-441c-a004-465d879b1277) for task 3 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.856818 1679118336 status_update_manager.cpp:305] Received status update TASK_FINISHED (UUID: 3fc45cb8-fd7f-4bed-a21d-76f234af6b36) for task 3 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.856875 1679118336 status_update_manager.cpp:356] Forwarding status update TASK_FINISHED (UUID: 3fc45cb8-fd7f-4bed-a21d-76f234af6b36) for task 3 of framework 201311201513-2877626796-52576-3234-0000 to master@172.25.133.171:52576 I1120 15:13:40.857105 1679118336 slave.cpp:1740] Handling status update TASK_RUNNING (UUID: 4f97c8df-1cc0-4eeb-8469-9d72df09ec73) for task 4 of framework 201311201513-2877626796-52576-3234-0000 from executor(1)@172.25.133.171:52577 I1120 15:13:40.857369 1679118336 slave.cpp:1865] Sending acknowledgement for status update TASK_FINISHED (UUID: 3fc45cb8-fd7f-4bed-a21d-76f234af6b36) for task 3 of framework 201311201513-2877626796-52576-3234-0000 to executor(1)@172.25.133.171:52578 I1120 15:13:40.857498 1680728064 status_update_manager.cpp:305] Received status update TASK_RUNNING (UUID: 4f97c8df-1cc0-4eeb-8469-9d72df09ec73) for task 4 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.857518 1682337792 master.cpp:1452] Status update TASK_FINISHED (UUID: 
3fc45cb8-fd7f-4bed-a21d-76f234af6b36) for task 3 of framework 201311201513-2877626796-52576-3234-0000 from slave(2)@172.25.133.171:52576 I1120 15:13:40.857635 1680728064 status_update_manager.cpp:356] Forwarding status update TASK_RUNNING (UUID: 4f97c8df-1cc0-4eeb-8469-9d72df09ec73) for task 4 of framework 201311201513-2877626796-52576-3234-0000 to master@172.25.133.171:52576 I1120 15:13:40.857630 1682337792 master.hpp:418] Removing task 3 with resources cpus(*):1; mem(*):128 on slave 201311201513-2877626796-52576-3234-1 (vkone.local) I1120 15:13:40.857843 1682337792 master.cpp:1452] Status update TASK_RUNNING (UUID: 4f97c8df-1cc0-4eeb-8469-9d72df09ec73) for task 4 of framework 201311201513-2877626796-52576-3234-0000 from slave(3)@172.25.133.171:52576 I1120 15:13:40.858043 1680728064 hierarchical_allocator_process.hpp:637] Recovered cpus(*):1; mem(*):128 (total allocatable: cpus(*):4; mem(*):7168; disk(*):481998; ports(*):[31000-32000]) on slave 201311201513-2877626796-52576-3234-1 from framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.858098 1680728064 slave.cpp:1865] Sending acknowledgement for status update TASK_RUNNING (UUID: 4f97c8df-1cc0-4eeb-8469-9d72df09ec73) for task 4 of framework 201311201513-2877626796-52576-3234-0000 to executor(1)@172.25.133.171:52577 Status update: task 3 is in state TASK_FINISHED Finished tasks: 4 Status update: task 4 is in state TASK_RUNNING I1120 15:13:40.858896 1682337792 status_update_manager.cpp:380] Received status update acknowledgement (UUID: 3fc45cb8-fd7f-4bed-a21d-76f234af6b36) for task 3 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.858957 1680728064 status_update_manager.cpp:380] Received status update acknowledgement (UUID: 4f97c8df-1cc0-4eeb-8469-9d72df09ec73) for task 4 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.859905 1679654912 slave.cpp:1740] Handling status update TASK_FINISHED (UUID: 876d7ddc-5d58-48df-a590-d82cf39d4978) for task 4 of framework 201311201513-2877626796-52576-3234-0000 from executor(1)@172.25.133.171:52577 I1120 15:13:40.860174 1680728064 status_update_manager.cpp:305] Received status update TASK_FINISHED (UUID: 876d7ddc-5d58-48df-a590-d82cf39d4978) for task 4 of framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.860245 1680728064 status_update_manager.cpp:356] Forwarding status update TASK_FINISHED (UUID: 876d7ddc-5d58-48df-a590-d82cf39d4978) for task 4 of framework 201311201513-2877626796-52576-3234-0000 to master@172.25.133.171:52576 I1120 15:13:40.860437 1679654912 master.cpp:1452] Status update TASK_FINISHED (UUID: 876d7ddc-5d58-48df-a590-d82cf39d4978) for task 4 of framework 201311201513-2877626796-52576-3234-0000 from slave(3)@172.25.133.171:52576 I1120 15:13:40.860486 1680728064 slave.cpp:1865] Sending acknowledgement for status update TASK_FINISHED (UUID: 876d7ddc-5d58-48df-a590-d82cf39d4978) for task 4 of framework 201311201513-2877626796-52576-3234-0000 to executor(1)@172.25.133.171:52577 I1120 15:13:40.860550 1679654912 master.hpp:418] Removing task 4 with resources cpus(*):1; mem(Status update: task 4 is in state TASK_FINISHED Finished tasks: 5 *):128 on slave 201311201513-2877626796-52576-3234-2 (vkone.local) I1120 15:13:40.863689 1679654912 master.cpp:996] Asked to unregister framework 201311201513-2877626796-52576-3234-0000 I1120 15:13:40.863750 1679654912 master.cpp:2385] Removing framework 201311201513-2877626796-52576-3234-0000 ../../src/tests/script.cpp:81: Failure Failed java_framework_test.sh terminated with signal 'Abort 
trap: 6' [ FAILED ] ExamplesTest.JavaFramework (2688 ms) [----------] 1 test from ExamplesTest (2688 ms total) [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. (2692 ms total) [ PASSED ] 0 tests. [ FAILED ] 1 test, listed below: [ FAILED ] ExamplesTest.JavaFramework ""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-934","01/21/2014 18:43:58",1,"'Logging and Debugging' document is out-of-date. ""The following is no longer correct: http://mesos.apache.org/documentation/latest/logging-and-debugging/ We should either delete this document or re-write it entirely.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-976","02/06/2014 20:25:17",1,"SlaveRecoveryTest/1.SchedulerFailover is flaky ""[==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from SlaveRecoveryTest/1, where TypeParam = mesos::internal::slave::CgroupsIsolator [ RUN ] SlaveRecoveryTest/1.SchedulerFailover I0206 20:18:31.525116 56447 master.cpp:239] Master ID: 2014-02-06-20:18:31-1740121354-55566-56447 Hostname: smfd-bkq-03-sr4.devel.twitter.com I0206 20:18:31.525295 56481 master.cpp:321] Master started on 10.37.184.103:55566 I0206 20:18:31.525315 56481 master.cpp:324] Master only allowing authenticated frameworks to register! I0206 20:18:31.527093 56481 master.cpp:756] The newly elected leader is master@10.37.184.103:55566 I0206 20:18:31.527122 56481 master.cpp:764] Elected as the leading master! I0206 20:18:31.530642 56473 slave.cpp:112] Slave started on 9)@10.37.184.103:55566 I0206 20:18:31.530802 56473 slave.cpp:212] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0206 20:18:31.531203 56473 slave.cpp:240] Slave hostname: smfd-bkq-03-sr4.devel.twitter.com I0206 20:18:31.531221 56473 slave.cpp:241] Slave checkpoint: true I0206 20:18:31.531991 56482 cgroups_isolator.cpp:225] Using /tmp/mesos_test_cgroup as cgroups hierarchy root I0206 20:18:31.532470 56478 state.cpp:33] Recovering state from '/tmp/SlaveRecoveryTest_1_SchedulerFailover_7dC2N1/meta' I0206 20:18:31.532698 56469 status_update_manager.cpp:188] Recovering status update manager I0206 20:18:31.533962 56472 sched.cpp:265] Authenticating with master master@10.37.184.103:55566 I0206 20:18:31.534102 56472 sched.cpp:234] Detecting new master I0206 20:18:31.534124 56484 authenticatee.hpp:124] Creating new client SASL connection I0206 20:18:31.534299 56473 master.cpp:2317] Authenticating framework at scheduler(9)@10.37.184.103:55566 I0206 20:18:31.534459 56461 authenticator.hpp:140] Creating new server SASL connection I0206 20:18:31.534572 56466 authenticatee.hpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0206 20:18:31.534595 56466 authenticatee.hpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0206 20:18:31.534667 56474 authenticator.hpp:243] Received SASL authentication start I0206 20:18:31.534732 56474 authenticator.hpp:325] Authentication requires more steps I0206 20:18:31.534814 56468 authenticatee.hpp:258] Received SASL authentication step I0206 20:18:31.534946 56466 authenticator.hpp:271] Received SASL authentication step I0206 20:18:31.535007 56466 authenticator.hpp:317] Authentication success I0206 20:18:31.535084 56471 authenticatee.hpp:298] Authentication success I0206 20:18:31.535107 56461 master.cpp:2357] Successfully authenticated framework at scheduler(9)@10.37.184.103:55566 I0206 20:18:31.535392 
56476 sched.cpp:339] Successfully authenticated with master master@10.37.184.103:55566 I0206 20:18:31.535512 56465 master.cpp:812] Received registration request from scheduler(9)@10.37.184.103:55566 I0206 20:18:31.535570 56465 master.cpp:830] Registering framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 at scheduler(9)@10.37.184.103:55566 I0206 20:18:31.535856 56465 hierarchical_allocator_process.hpp:332] Added framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.537802 56482 cgroups_isolator.cpp:840] Recovering isolator I0206 20:18:31.538462 56472 slave.cpp:2760] Finished recovery I0206 20:18:31.538910 56472 slave.cpp:508] New master detected at master@10.37.184.103:55566 I0206 20:18:31.539036 56478 status_update_manager.cpp:162] New master detected at master@10.37.184.103:55566 I0206 20:18:31.539223 56464 master.cpp:1834] Attempting to register slave on smfd-bkq-03-sr4.devel.twitter.com at slave(9)@10.37.184.103:55566 I0206 20:18:31.539271 56472 slave.cpp:533] Detecting new master I0206 20:18:31.539330 56464 master.cpp:2804] Adding slave 2014-02-06-20:18:31-1740121354-55566-56447-0 at smfd-bkq-03-sr4.devel.twitter.com with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0206 20:18:31.539454 56472 slave.cpp:551] Registered with master master@10.37.184.103:55566; given slave ID 2014-02-06-20:18:31-1740121354-55566-56447-0 I0206 20:18:31.539620 56472 slave.cpp:564] Checkpointing SlaveInfo to '/tmp/SlaveRecoveryTest_1_SchedulerFailover_7dC2N1/meta/slaves/2014-02-06-20:18:31-1740121354-55566-56447-0/slave.info' I0206 20:18:31.539834 56475 hierarchical_allocator_process.hpp:445] Added slave 2014-02-06-20:18:31-1740121354-55566-56447-0 (smfd-bkq-03-sr4.devel.twitter.com) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] available) I0206 20:18:31.540341 56472 master.cpp:2272] Sending 1 offers to framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.543433 56472 master.cpp:1568] Processing reply for offers: [ 2014-02-06-20:18:31-1740121354-55566-56447-0 ] on slave 2014-02-06-20:18:31-1740121354-55566-56447-0 (smfd-bkq-03-sr4.devel.twitter.com) for framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.543642 56472 master.hpp:411] Adding task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 2014-02-06-20:18:31-1740121354-55566-56447-0 (smfd-bkq-03-sr4.devel.twitter.com) I0206 20:18:31.543781 56472 master.cpp:2441] Launching task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 2014-02-06-20:18:31-1740121354-55566-56447-0 (smfd-bkq-03-sr4.devel.twitter.com) I0206 20:18:31.544002 56484 slave.cpp:736] Got assigned task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 for framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.544097 56484 slave.cpp:2899] Checkpointing FrameworkInfo to '/tmp/SlaveRecoveryTest_1_SchedulerFailover_7dC2N1/meta/slaves/2014-02-06-20:18:31-1740121354-55566-56447-0/frameworks/2014-02-06-20:18:31-1740121354-55566-56447-0000/framework.info' I0206 20:18:31.544272 56484 slave.cpp:2906] Checkpointing framework pid 'scheduler(9)@10.37.184.103:55566' to 
'/tmp/SlaveRecoveryTest_1_SchedulerFailover_7dC2N1/meta/slaves/2014-02-06-20:18:31-1740121354-55566-56447-0/frameworks/2014-02-06-20:18:31-1740121354-55566-56447-0000/framework.pid' I0206 20:18:31.544617 56484 slave.cpp:845] Launching task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 for framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.546721 56484 slave.cpp:3169] Checkpointing ExecutorInfo to '/tmp/SlaveRecoveryTest_1_SchedulerFailover_7dC2N1/meta/slaves/2014-02-06-20:18:31-1740121354-55566-56447-0/frameworks/2014-02-06-20:18:31-1740121354-55566-56447-0000/executors/d045a0bd-2ed2-410a-bd1f-5bd9219896e3/executor.info' I0206 20:18:31.547317 56484 slave.cpp:3257] Checkpointing TaskInfo to '/tmp/SlaveRecoveryTest_1_SchedulerFailover_7dC2N1/meta/slaves/2014-02-06-20:18:31-1740121354-55566-56447-0/frameworks/2014-02-06-20:18:31-1740121354-55566-56447-0000/executors/d045a0bd-2ed2-410a-bd1f-5bd9219896e3/runs/9adabe16-5d84-45c9-bc83-1a72a6d1c986/tasks/d045a0bd-2ed2-410a-bd1f-5bd9219896e3/task.info' I0206 20:18:31.547514 56484 slave.cpp:955] Queuing task 'd045a0bd-2ed2-410a-bd1f-5bd9219896e3' for executor d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework '2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.547590 56481 cgroups_isolator.cpp:517] Launching d045a0bd-2ed2-410a-bd1f-5bd9219896e3 (/home/vinod/mesos/build/src/mesos-executor) in /tmp/SlaveRecoveryTest_1_SchedulerFailover_7dC2N1/slaves/2014-02-06-20:18:31-1740121354-55566-56447-0/frameworks/2014-02-06-20:18:31-1740121354-55566-56447-0000/executors/d045a0bd-2ed2-410a-bd1f-5bd9219896e3/runs/9adabe16-5d84-45c9-bc83-1a72a6d1c986 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] for framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 in cgroup mesos_test/framework_2014-02-06-20:18:31-1740121354-55566-56447-0000_executor_d045a0bd-2ed2-410a-bd1f-5bd9219896e3_tag_9adabe16-5d84-45c9-bc83-1a72a6d1c986 I0206 20:18:31.548408 56481 cgroups_isolator.cpp:717] Changing cgroup controls for executor d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0206 20:18:31.548833 56481 cgroups_isolator.cpp:1007] Updated 'cpu.shares' to 2048 for executor d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.549294 56481 cgroups_isolator.cpp:1117] Updated 'memory.soft_limit_in_bytes' to 1GB for executor d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.550107 56481 cgroups_isolator.cpp:1147] Updated 'memory.limit_in_bytes' to 1GB for executor d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.550571 56481 cgroups_isolator.cpp:1174] Started listening for OOM events for executor d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.551553 56481 cgroups_isolator.cpp:569] Forked executor at = 56671 Checkpointing executor's forked pid 56671 to '/tmp/SlaveRecoveryTest_1_SchedulerFailover_7dC2N1/meta/slaves/2014-02-06-20:18:31-1740121354-55566-56447-0/frameworks/2014-02-06-20:18:31-1740121354-55566-56447-0000/executors/d045a0bd-2ed2-410a-bd1f-5bd9219896e3/runs/9adabe16-5d84-45c9-bc83-1a72a6d1c986/pids/forked.pid' I0206 20:18:31.552222 56472 slave.cpp:2098] Monitoring executor d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 
2014-02-06-20:18:31-1740121354-55566-56447-0000 forked at pid 56671 Fetching resources into '/tmp/SlaveRecoveryTest_1_SchedulerFailover_7dC2N1/slaves/2014-02-06-20:18:31-1740121354-55566-56447-0/frameworks/2014-02-06-20:18:31-1740121354-55566-56447-0000/executors/d045a0bd-2ed2-410a-bd1f-5bd9219896e3/runs/9adabe16-5d84-45c9-bc83-1a72a6d1c986' I0206 20:18:31.604012 56472 slave.cpp:1431] Got registration for executor 'd045a0bd-2ed2-410a-bd1f-5bd9219896e3' of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.604167 56472 slave.cpp:1516] Checkpointing executor pid 'executor(1)@10.37.184.103:46181' to '/tmp/SlaveRecoveryTest_1_SchedulerFailover_7dC2N1/meta/slaves/2014-02-06-20:18:31-1740121354-55566-56447-0/frameworks/2014-02-06-20:18:31-1740121354-55566-56447-0000/executors/d045a0bd-2ed2-410a-bd1f-5bd9219896e3/runs/9adabe16-5d84-45c9-bc83-1a72a6d1c986/pids/libprocess.pid' I0206 20:18:31.605183 56472 slave.cpp:1552] Flushing queued task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 for executor 'd045a0bd-2ed2-410a-bd1f-5bd9219896e3' of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 Registered executor on smfd-bkq-03-sr4.devel.twitter.com Starting task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 sh -c 'sleep 1000' Forked command at 56712 I0206 20:18:31.613098 56481 slave.cpp:1765] Handling status update TASK_RUNNING (UUID: fc151a46-751b-4c4b-b048-1727752f34e3) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 from executor(1)@10.37.184.103:46181 I0206 20:18:31.613628 56469 status_update_manager.cpp:314] Received status update TASK_RUNNING (UUID: fc151a46-751b-4c4b-b048-1727752f34e3) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.614006 56469 status_update_manager.hpp:342] Checkpointing UPDATE for status update TASK_RUNNING (UUID: fc151a46-751b-4c4b-b048-1727752f34e3) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.795529 56469 status_update_manager.cpp:367] Forwarding status update TASK_RUNNING (UUID: fc151a46-751b-4c4b-b048-1727752f34e3) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 to master@10.37.184.103:55566 I0206 20:18:31.795992 56480 slave.cpp:1890] Sending acknowledgement for status update TASK_RUNNING (UUID: fc151a46-751b-4c4b-b048-1727752f34e3) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 to executor(1)@10.37.184.103:46181 I0206 20:18:31.796131 56471 master.cpp:2020] Status update TASK_RUNNING (UUID: fc151a46-751b-4c4b-b048-1727752f34e3) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 from slave(9)@10.37.184.103:55566 I0206 20:18:31.797099 56483 status_update_manager.cpp:392] Received status update acknowledgement (UUID: fc151a46-751b-4c4b-b048-1727752f34e3) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.797165 56483 status_update_manager.hpp:342] Checkpointing ACK for status update TASK_RUNNING (UUID: fc151a46-751b-4c4b-b048-1727752f34e3) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.882767 56481 slave.cpp:394] Slave terminating I0206 20:18:31.883112 56481 master.cpp:641] Slave 2014-02-06-20:18:31-1740121354-55566-56447-0 
(smfd-bkq-03-sr4.devel.twitter.com) disconnected I0206 20:18:31.883200 56476 hierarchical_allocator_process.hpp:484] Slave 2014-02-06-20:18:31-1740121354-55566-56447-0 disconnected I0206 20:18:31.888206 56473 sched.cpp:265] Authenticating with master master@10.37.184.103:55566 I0206 20:18:31.888473 56473 sched.cpp:234] Detecting new master I0206 20:18:31.888556 56469 authenticatee.hpp:124] Creating new client SASL connection I0206 20:18:31.888978 56484 master.cpp:2317] Authenticating framework at scheduler(10)@10.37.184.103:55566 I0206 20:18:31.889348 56469 authenticator.hpp:140] Creating new server SASL connection I0206 20:18:31.889925 56469 authenticatee.hpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0206 20:18:31.889989 56469 authenticatee.hpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0206 20:18:31.890059 56469 authenticator.hpp:243] Received SASL authentication start I0206 20:18:31.890233 56469 authenticator.hpp:325] Authentication requires more steps I0206 20:18:31.890399 56468 authenticatee.hpp:258] Received SASL authentication step I0206 20:18:31.890554 56484 authenticator.hpp:271] Received SASL authentication step I0206 20:18:31.890630 56484 authenticator.hpp:317] Authentication success I0206 20:18:31.890728 56470 authenticatee.hpp:298] Authentication success I0206 20:18:31.890748 56484 master.cpp:2357] Successfully authenticated framework at scheduler(10)@10.37.184.103:55566 I0206 20:18:31.892210 56469 sched.cpp:339] Successfully authenticated with master master@10.37.184.103:55566 I0206 20:18:31.892410 56473 master.cpp:900] Re-registering framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 at scheduler(10)@10.37.184.103:55566 I0206 20:18:31.892460 56473 master.cpp:926] Framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 failed over W0206 20:18:31.892691 56465 master.cpp:1048] Ignoring deactivate framework message for framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 from 'scheduler(9)@10.37.184.103:55566' because it is not from the registered framework 'scheduler(10)@10.37.184.103:55566' I0206 20:18:31.897049 56466 slave.cpp:112] Slave started on 10)@10.37.184.103:55566 I0206 20:18:31.897207 56466 slave.cpp:212] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0206 20:18:31.897536 56466 slave.cpp:240] Slave hostname: smfd-bkq-03-sr4.devel.twitter.com I0206 20:18:31.897554 56466 slave.cpp:241] Slave checkpoint: true I0206 20:18:31.898388 56463 cgroups_isolator.cpp:225] Using /tmp/mesos_test_cgroup as cgroups hierarchy root I0206 20:18:31.898936 56472 state.cpp:33] Recovering state from '/tmp/SlaveRecoveryTest_1_SchedulerFailover_7dC2N1/meta' I0206 20:18:31.901702 56465 slave.cpp:2828] Recovering framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.901759 56465 slave.cpp:3020] Recovering executor 'd045a0bd-2ed2-410a-bd1f-5bd9219896e3' of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:31.902716 56464 status_update_manager.cpp:188] Recovering status update manager I0206 20:18:31.902884 56464 status_update_manager.cpp:196] Recovering executor 'd045a0bd-2ed2-410a-bd1f-5bd9219896e3' of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:34.475915 56463 cgroups_isolator.cpp:840] Recovering isolator I0206 20:18:34.476066 56463 cgroups_isolator.cpp:847] Recovering executor 'd045a0bd-2ed2-410a-bd1f-5bd9219896e3' of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:34.477478 56463 cgroups_isolator.cpp:1174] Started listening for OOM 
events for executor d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:34.478728 56463 slave.cpp:2700] Sending reconnect request to executor d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 at executor(1)@10.37.184.103:46181 I0206 20:18:34.480114 56476 slave.cpp:1597] Re-registering executor d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:34.480566 56476 cgroups_isolator.cpp:717] Changing cgroup controls for executor d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0206 20:18:34.481370 56476 cgroups_isolator.cpp:1007] Updated 'cpu.shares' to 2048 for executor d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:34.481827 56476 cgroups_isolator.cpp:1117] Updated 'memory.soft_limit_in_bytes' to 1GB for executor d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 Re-registered executor on smfd-bkq-03-sr4.devel.twitter.com I0206 20:18:34.489497 56471 slave.cpp:1713] Cleaning up un-reregistered executors I0206 20:18:34.489588 56471 slave.cpp:2760] Finished recovery I0206 20:18:34.490048 56463 slave.cpp:508] New master detected at master@10.37.184.103:55566 I0206 20:18:34.490257 56475 status_update_manager.cpp:162] New master detected at master@10.37.184.103:55566 I0206 20:18:34.490357 56463 slave.cpp:533] Detecting new master W0206 20:18:34.490603 56480 master.cpp:1878] Slave at slave(10)@10.37.184.103:55566 (smfd-bkq-03-sr4.devel.twitter.com) is being allowed to re-register with an already in use id (2014-02-06-20:18:31-1740121354-55566-56447-0) I0206 20:18:34.490927 56479 slave.cpp:601] Re-registered with master master@10.37.184.103:55566 I0206 20:18:34.491322 56461 hierarchical_allocator_process.hpp:498] Slave 2014-02-06-20:18:31-1740121354-55566-56447-0 reconnected I0206 20:18:34.491421 56468 slave.cpp:1312] Updating framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 pid to scheduler(10)@10.37.184.103:55566 I0206 20:18:34.491444 56480 master.cpp:1673] Asked to kill task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:34.491488 56468 slave.cpp:1320] Checkpointing framework pid 'scheduler(10)@10.37.184.103:55566' to '/tmp/SlaveRecoveryTest_1_SchedulerFailover_7dC2N1/meta/slaves/2014-02-06-20:18:31-1740121354-55566-56447-0/frameworks/2014-02-06-20:18:31-1740121354-55566-56447-0000/framework.pid' I0206 20:18:34.491497 56480 master.cpp:1707] Telling slave 2014-02-06-20:18:31-1740121354-55566-56447-0 (smfd-bkq-03-sr4.devel.twitter.com) to kill task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:34.491657 56468 slave.cpp:1013] Asked to kill task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 Shutting down Killing process tree at pid 56712 Killed the following process trees: [ --- 56712 sleep 1000 ] Command terminated with signal Killed (pid: 56712) I0206 20:18:34.615216 56463 slave.cpp:1765] Handling status update TASK_KILLED (UUID: d9d37827-3002-4a67-8659-fa36f1986fc7) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 from executor(1)@10.37.184.103:46181 I0206 
20:18:34.615556 56483 cgroups_isolator.cpp:717] Changing cgroup controls for executor d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 with resources I0206 20:18:34.615624 56476 status_update_manager.cpp:314] Received status update TASK_KILLED (UUID: d9d37827-3002-4a67-8659-fa36f1986fc7) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:34.615701 56476 status_update_manager.hpp:342] Checkpointing UPDATE for status update TASK_KILLED (UUID: d9d37827-3002-4a67-8659-fa36f1986fc7) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:34.706945 56476 status_update_manager.cpp:367] Forwarding status update TASK_KILLED (UUID: d9d37827-3002-4a67-8659-fa36f1986fc7) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 to master@10.37.184.103:55566 I0206 20:18:34.707263 56476 slave.cpp:1890] Sending acknowledgement for status update TASK_KILLED (UUID: d9d37827-3002-4a67-8659-fa36f1986fc7) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 to executor(1)@10.37.184.103:46181 I0206 20:18:34.707352 56469 master.cpp:2020] Status update TASK_KILLED (UUID: d9d37827-3002-4a67-8659-fa36f1986fc7) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 from slave(10)@10.37.184.103:55566 I0206 20:18:34.707620 56469 master.hpp:429] Removing task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 2014-02-06-20:18:31-1740121354-55566-56447-0 (smfd-bkq-03-sr4.devel.twitter.com) I0206 20:18:34.708348 56466 hierarchical_allocator_process.hpp:637] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) on slave 2014-02-06-20:18:31-1740121354-55566-56447-0 from framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:34.708673 56469 status_update_manager.cpp:392] Received status update acknowledgement (UUID: d9d37827-3002-4a67-8659-fa36f1986fc7) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:34.708749 56469 status_update_manager.hpp:342] Checkpointing ACK for status update TASK_KILLED (UUID: d9d37827-3002-4a67-8659-fa36f1986fc7) for task d045a0bd-2ed2-410a-bd1f-5bd9219896e3 of framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:34.709411 56470 master.cpp:2272] Sending 1 offers to framework 2014-02-06-20:18:31-1740121354-55566-56447-0000 I0206 20:18:34.809782 56447 master.cpp:583] Master terminating I0206 20:18:34.810066 56447 master.cpp:246] Shutting down master I0206 20:18:34.810134 56482 slave.cpp:1965] master@10.37.184.103:55566 exited W0206 20:18:34.810184 56482 slave.cpp:1968] Master disconnected! 
Waiting for a new master to be elected I0206 20:18:34.810652 56447 master.cpp:289] Removing slave 2014-02-06-20:18:31-1740121354-55566-56447-0 (smfd-bkq-03-sr4.devel.twitter.com) I0206 20:18:34.813144 56447 slave.cpp:394] Slave terminating I0206 20:18:34.821583 56467 cgroups.cpp:1209] Trying to freeze cgroup /tmp/mesos_test_cgroup/mesos_test I0206 20:18:34.821652 56467 cgroups.cpp:1248] Successfully froze cgroup /tmp/mesos_test_cgroup/mesos_test after 1 attempts I0206 20:18:34.823129 56471 cgroups.cpp:1224] Trying to thaw cgroup /tmp/mesos_test_cgroup/mesos_test I0206 20:18:34.823247 56471 cgroups.cpp:1334] Successfully thawed /tmp/mesos_test_cgroup/mesos_test I0206 20:18:34.923945 56470 cgroups.cpp:1209] Trying to freeze cgroup /tmp/mesos_test_cgroup/mesos_test/framework_2014-02-06-20:18:31-1740121354-55566-56447-0000_executor_d045a0bd-2ed2-410a-bd1f-5bd9219896e3_tag_9adabe16-5d84-45c9-bc83-1a72a6d1c986 I0206 20:18:34.924018 56470 cgroups.cpp:1248] Successfully froze cgroup /tmp/mesos_test_cgroup/mesos_test/framework_2014-02-06-20:18:31-1740121354-55566-56447-0000_executor_d045a0bd-2ed2-410a-bd1f-5bd9219896e3_tag_9adabe16-5d84-45c9-bc83-1a72a6d1c986 after 1 attempts I0206 20:18:34.925506 56461 cgroups.cpp:1224] Trying to thaw cgroup /tmp/mesos_test_cgroup/mesos_test/framework_2014-02-06-20:18:31-1740121354-55566-56447-0000_executor_d045a0bd-2ed2-410a-bd1f-5bd9219896e3_tag_9adabe16-5d84-45c9-bc83-1a72a6d1c986 I0206 20:18:34.925580 56461 cgroups.cpp:1334] Successfully thawed /tmp/mesos_test_cgroup/mesos_test/framework_2014-02-06-20:18:31-1740121354-55566-56447-0000_executor_d045a0bd-2ed2-410a-bd1f-5bd9219896e3_tag_9adabe16-5d84-45c9-bc83-1a72a6d1c986 [ OK ] SlaveRecoveryTest/1.SchedulerFailover (3408 ms) [----------] 1 test from SlaveRecoveryTest/1 (3409 ms total) [----------] Global test environment tear-down ../../src/tests/environment.cpp:247: Failure Failed Tests completed with child processes remaining: -+- 56447 /home/vinod/mesos/build/src/.libs/lt-mesos-tests --verbose --gtest_filter=*SlaveRecoveryTest/1.SchedulerFailover* --gtest_repeat=10 \--- 56671 () ""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-988","02/11/2014 22:25:26",3,"ExamplesTest.PythonFramework is flaky ""Looks like a SEGFAULT during shutdown. """," [ RUN ] ExamplesTest.PythonFramework Using temporary directory '/tmp/ExamplesTest_PythonFramework_RZ4yaf' WARNING: Logging before InitGoogleLogging() is written to STDERR I0211 21:14:47.861803 21045 process.cpp:1591] libprocess is initialized on 67.195.138.9:53443 for 8 cpus I0211 21:14:47.861884 21045 logging.cpp:140] Logging to STDERR I0211 21:14:47.862761 21045 master.cpp:240] Master ID: 2014-02-11-21:14:47-160088899-53443-21045 Hostname: vesta.apache.org I0211 21:14:47.862897 21054 master.cpp:322] Master started on 67.195.138.9:53443 I0211 21:14:47.862908 21054 master.cpp:325] Master only allowing authenticated frameworks to register! I0211 21:14:47.864362 21053 master.cpp:86] No whitelist given. 
Advertising offers for all slaves I0211 21:14:47.864506 21055 slave.cpp:112] Slave started on 1)@67.195.138.9:53443 I0211 21:14:47.864522 21059 slave.cpp:112] Slave started on 2)@67.195.138.9:53443 I0211 21:14:47.864749 21055 slave.cpp:212] Slave resources: cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] I0211 21:14:47.864778 21059 slave.cpp:212] Slave resources: cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] I0211 21:14:47.864819 21055 slave.cpp:240] Slave hostname: vesta.apache.org I0211 21:14:47.864827 21055 slave.cpp:241] Slave checkpoint: true I0211 21:14:47.864850 21059 slave.cpp:240] Slave hostname: vesta.apache.org I0211 21:14:47.864858 21059 slave.cpp:241] Slave checkpoint: true I0211 21:14:47.865329 21055 master.cpp:760] The newly elected leader is master@67.195.138.9:53443 with id 2014-02-11-21:14:47-160088899-53443-21045 I0211 21:14:47.865350 21055 master.cpp:770] Elected as the leading master! I0211 21:14:47.865399 21055 state.cpp:33] Recovering state from '/tmp/mesos-Z8v6cu/1/meta' I0211 21:14:47.865407 21059 state.cpp:33] Recovering state from '/tmp/mesos-Z8v6cu/0/meta' I0211 21:14:47.865502 21052 hierarchical_allocator_process.hpp:302] Initializing hierarchical allocator process with master : master@67.195.138.9:53443 I0211 21:14:47.865540 21054 status_update_manager.cpp:188] Recovering status update manager I0211 21:14:47.865619 21053 process_isolator.cpp:319] Recovering isolator I0211 21:14:47.865674 21057 status_update_manager.cpp:188] Recovering status update manager I0211 21:14:47.865699 21059 slave.cpp:2760] Finished recovery I0211 21:14:47.865733 21053 process_isolator.cpp:319] Recovering isolator I0211 21:14:47.865789 21053 slave.cpp:2760] Finished recovery I0211 21:14:47.865921 21059 slave.cpp:508] New master detected at master@67.195.138.9:53443 I0211 21:14:47.865958 21053 status_update_manager.cpp:162] New master detected at master@67.195.138.9:53443 I0211 21:14:47.865978 21059 slave.cpp:533] Detecting new master I0211 21:14:47.866019 21053 slave.cpp:508] New master detected at master@67.195.138.9:53443 I0211 21:14:47.866063 21053 slave.cpp:533] Detecting new master I0211 21:14:47.866070 21055 status_update_manager.cpp:162] New master detected at master@67.195.138.9:53443 I0211 21:14:47.866077 21059 master.cpp:1840] Attempting to register slave on vesta.apache.org at slave(2)@67.195.138.9:53443 I0211 21:14:47.866092 21059 master.cpp:2810] Adding slave 2014-02-11-21:14:47-160088899-53443-21045-0 at vesta.apache.org with cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] I0211 21:14:47.866216 21059 master.cpp:1840] Attempting to register slave on vesta.apache.org at slave(1)@67.195.138.9:53443 I0211 21:14:47.866225 21053 slave.cpp:551] Registered with master master@67.195.138.9:53443; given slave ID 2014-02-11-21:14:47-160088899-53443-21045-0 I0211 21:14:47.866228 21059 master.cpp:2810] Adding slave 2014-02-11-21:14:47-160088899-53443-21045-1 at vesta.apache.org with cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] I0211 21:14:47.866278 21055 hierarchical_allocator_process.hpp:445] Added slave 2014-02-11-21:14:47-160088899-53443-21045-0 (vesta.apache.org) with cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] (and cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] available) I0211 21:14:47.866297 21059 slave.cpp:551] Registered with master master@67.195.138.9:53443; given slave ID 2014-02-11-21:14:47-160088899-53443-21045-1 I0211 21:14:47.866327 21055 
hierarchical_allocator_process.hpp:708] Performed allocation for slave 2014-02-11-21:14:47-160088899-53443-21045-0 in 11us I0211 21:14:47.866330 21053 slave.cpp:564] Checkpointing SlaveInfo to '/tmp/mesos-Z8v6cu/1/meta/slaves/2014-02-11-21:14:47-160088899-53443-21045-0/slave.info' I0211 21:14:47.866400 21059 slave.cpp:564] Checkpointing SlaveInfo to '/tmp/mesos-Z8v6cu/0/meta/slaves/2014-02-11-21:14:47-160088899-53443-21045-1/slave.info' I0211 21:14:47.866399 21055 hierarchical_allocator_process.hpp:445] Added slave 2014-02-11-21:14:47-160088899-53443-21045-1 (vesta.apache.org) with cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] (and cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] available) I0211 21:14:47.866423 21055 hierarchical_allocator_process.hpp:708] Performed allocation for slave 2014-02-11-21:14:47-160088899-53443-21045-1 in 2505ns I0211 21:14:47.866636 21059 slave.cpp:112] Slave started on 3)@67.195.138.9:53443 I0211 21:14:47.866727 21059 slave.cpp:212] Slave resources: cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] I0211 21:14:47.866766 21059 slave.cpp:240] Slave hostname: vesta.apache.org I0211 21:14:47.866772 21059 slave.cpp:241] Slave checkpoint: true I0211 21:14:47.867300 21052 state.cpp:33] Recovering state from '/tmp/mesos-Z8v6cu/2/meta' I0211 21:14:47.867368 21052 status_update_manager.cpp:188] Recovering status update manager I0211 21:14:47.867419 21055 process_isolator.cpp:319] Recovering isolator I0211 21:14:47.867544 21052 slave.cpp:2760] Finished recovery I0211 21:14:47.867729 21052 slave.cpp:508] New master detected at master@67.195.138.9:53443 I0211 21:14:47.867770 21054 status_update_manager.cpp:162] New master detected at master@67.195.138.9:53443 I0211 21:14:47.867777 21052 slave.cpp:533] Detecting new master I0211 21:14:47.867815 21055 master.cpp:1840] Attempting to register slave on vesta.apache.org at slave(3)@67.195.138.9:53443 I0211 21:14:47.867827 21055 master.cpp:2810] Adding slave 2014-02-11-21:14:47-160088899-53443-21045-2 at vesta.apache.org with cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] I0211 21:14:47.867885 21052 slave.cpp:551] Registered with master master@67.195.138.9:53443; given slave ID 2014-02-11-21:14:47-160088899-53443-21045-2 I0211 21:14:47.867961 21055 hierarchical_allocator_process.hpp:445] Added slave 2014-02-11-21:14:47-160088899-53443-21045-2 (vesta.apache.org) with cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] (and cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] available) I0211 21:14:47.867985 21052 slave.cpp:564] Checkpointing SlaveInfo to '/tmp/mesos-Z8v6cu/2/meta/slaves/2014-02-11-21:14:47-160088899-53443-21045-2/slave.info' I0211 21:14:47.867987 21055 hierarchical_allocator_process.hpp:708] Performed allocation for slave 2014-02-11-21:14:47-160088899-53443-21045-2 in 3308ns I0211 21:14:47.868468 21045 sched.cpp:121] Version: 0.18.0 I0211 21:14:47.868633 21055 sched.cpp:217] New master detected at master@67.195.138.9:53443 I0211 21:14:47.868651 21055 sched.cpp:268] Authenticating with master master@67.195.138.9:53443 I0211 21:14:47.868696 21055 sched.cpp:237] Detecting new master I0211 21:14:47.868708 21054 authenticatee.hpp:100] Initializing client SASL I0211 21:14:47.869549 21054 authenticatee.hpp:124] Creating new client SASL connection I0211 21:14:47.869633 21055 master.cpp:2323] Authenticating framework at scheduler(1)@67.195.138.9:53443 I0211 21:14:47.869818 21059 authenticator.hpp:83] Initializing 
server SASL I0211 21:14:47.870029 21059 auxprop.cpp:45] Initialized in-memory auxiliary property plugin I0211 21:14:47.870040 21059 authenticator.hpp:140] Creating new server SASL connection I0211 21:14:47.870144 21057 authenticatee.hpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0211 21:14:47.870174 21057 authenticatee.hpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0211 21:14:47.870203 21057 authenticator.hpp:243] Received SASL authentication start I0211 21:14:47.870256 21057 authenticator.hpp:325] Authentication requires more steps I0211 21:14:47.870282 21057 authenticatee.hpp:258] Received SASL authentication step I0211 21:14:47.870348 21057 authenticator.hpp:271] Received SASL authentication step I0211 21:14:47.870376 21057 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'vesta.apache.org' server FQDN: 'vesta.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0211 21:14:47.870384 21057 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0211 21:14:47.870396 21057 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0211 21:14:47.870405 21057 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'vesta.apache.org' server FQDN: 'vesta.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0211 21:14:47.870411 21057 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0211 21:14:47.870415 21057 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0211 21:14:47.870425 21057 authenticator.hpp:317] Authentication success I0211 21:14:47.870445 21057 master.cpp:2363] Successfully authenticated framework at scheduler(1)@67.195.138.9:53443 I0211 21:14:47.870448 21055 authenticatee.hpp:298] Authentication success I0211 21:14:47.870492 21055 sched.cpp:342] Successfully authenticated with master master@67.195.138.9:53443 I0211 21:14:47.870538 21057 master.cpp:818] Received registration request from scheduler(1)@67.195.138.9:53443 I0211 21:14:47.870590 21057 master.cpp:836] Registering framework 2014-02-11-21:14:47-160088899-53443-21045-0000 at scheduler(1)@67.195.138.9:53443 I0211 21:14:47.870661 21055 sched.cpp:391] Framework registered with 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.870661 21057 hierarchical_allocator_process.hpp:332] Added framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.870707 21057 hierarchical_allocator_process.hpp:752] Offering cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-0 to framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.870798 21057 hierarchical_allocator_process.hpp:752] Offering cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-1 to framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.870869 21057 hierarchical_allocator_process.hpp:752] Offering cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-2 to framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.870894 21055 sched.cpp:405] Scheduler::registered took 222149ns I0211 21:14:47.871038 21057 hierarchical_allocator_process.hpp:688] Performed allocation for 3 slaves in 351098ns I0211 
21:14:47.871106 21058 master.hpp:439] Adding offer 2014-02-11-21:14:47-160088899-53443-21045-0 with resources cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-2 (vesta.apache.org) I0211 21:14:47.871215 21058 master.hpp:439] Adding offer 2014-02-11-21:14:47-160088899-53443-21045-1 with resources cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-1 (vesta.apache.org) I0211 21:14:47.871296 21058 master.hpp:439] Adding offer 2014-02-11-21:14:47-160088899-53443-21045-2 with resources cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-0 (vesta.apache.org) I0211 21:14:47.871333 21058 master.cpp:2278] Sending 3 offers to framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.873667 21055 sched.cpp:525] Scheduler::resourceOffers took 2.150843ms I0211 21:14:47.873884 21053 master.hpp:449] Removing offer 2014-02-11-21:14:47-160088899-53443-21045-0 with resources cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-2 (vesta.apache.org) I0211 21:14:47.873934 21053 master.cpp:1574] Processing reply for offers: [ 2014-02-11-21:14:47-160088899-53443-21045-0 ] on slave 2014-02-11-21:14:47-160088899-53443-21045-2 (vesta.apache.org) for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.874035 21053 master.hpp:411] Adding task 0 with resources cpus(*):1; mem(*):32 on slave 2014-02-11-21:14:47-160088899-53443-21045-2 (vesta.apache.org) I0211 21:14:47.874059 21053 master.cpp:2447] Launching task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 with resources cpus(*):1; mem(*):32 on slave 2014-02-11-21:14:47-160088899-53443-21045-2 (vesta.apache.org) I0211 21:14:47.874150 21059 slave.cpp:736] Got assigned task 0 for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.874200 21058 hierarchical_allocator_process.hpp:547] Framework 2014-02-11-21:14:47-160088899-53443-21045-0000 left cpus(*):7; mem(*):6929; disk(*):1.38501e+06; ports(*):[31000-32000] unused on slave 2014-02-11-21:14:47-160088899-53443-21045-2 I0211 21:14:47.874250 21053 master.hpp:449] Removing offer 2014-02-11-21:14:47-160088899-53443-21045-1 with resources cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-1 (vesta.apache.org) I0211 21:14:47.874307 21053 master.cpp:1574] Processing reply for offers: [ 2014-02-11-21:14:47-160088899-53443-21045-1 ] on slave 2014-02-11-21:14:47-160088899-53443-21045-1 (vesta.apache.org) for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.874322 21058 hierarchical_allocator_process.hpp:590] Framework 2014-02-11-21:14:47-160088899-53443-21045-0000 filtered slave 2014-02-11-21:14:47-160088899-53443-21045-2 for 5secs I0211 21:14:47.874354 21059 slave.cpp:845] Launching task 0 for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.874404 21053 master.hpp:411] Adding task 1 with resources cpus(*):1; mem(*):32 on slave 2014-02-11-21:14:47-160088899-53443-21045-1 (vesta.apache.org) I0211 21:14:47.874428 21053 master.cpp:2447] Launching task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 with resources cpus(*):1; mem(*):32 on slave 2014-02-11-21:14:47-160088899-53443-21045-1 (vesta.apache.org) I0211 21:14:47.874479 21058 slave.cpp:736] Got assigned task 1 for 
framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.874586 21053 master.hpp:449] Removing offer 2014-02-11-21:14:47-160088899-53443-21045-2 with resources cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-0 (vesta.apache.org) I0211 21:14:47.874646 21053 master.cpp:1574] Processing reply for offers: [ 2014-02-11-21:14:47-160088899-53443-21045-2 ] on slave 2014-02-11-21:14:47-160088899-53443-21045-0 (vesta.apache.org) for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.874690 21058 slave.cpp:845] Launching task 1 for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.874694 21053 master.hpp:411] Adding task 2 with resources cpus(*):1; mem(*):32 on slave 2014-02-11-21:14:47-160088899-53443-21045-0 (vesta.apache.org) I0211 21:14:47.874716 21053 master.cpp:2447] Launching task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 with resources cpus(*):1; mem(*):32 on slave 2014-02-11-21:14:47-160088899-53443-21045-0 (vesta.apache.org) I0211 21:14:47.874820 21053 hierarchical_allocator_process.hpp:547] Framework 2014-02-11-21:14:47-160088899-53443-21045-0000 left cpus(*):7; mem(*):6929; disk(*):1.38501e+06; ports(*):[31000-32000] unused on slave 2014-02-11-21:14:47-160088899-53443-21045-1 I0211 21:14:47.874892 21053 hierarchical_allocator_process.hpp:590] Framework 2014-02-11-21:14:47-160088899-53443-21045-0000 filtered slave 2014-02-11-21:14:47-160088899-53443-21045-1 for 5secs I0211 21:14:47.874922 21053 hierarchical_allocator_process.hpp:547] Framework 2014-02-11-21:14:47-160088899-53443-21045-0000 left cpus(*):7; mem(*):6929; disk(*):1.38501e+06; ports(*):[31000-32000] unused on slave 2014-02-11-21:14:47-160088899-53443-21045-0 I0211 21:14:47.874980 21053 hierarchical_allocator_process.hpp:590] Framework 2014-02-11-21:14:47-160088899-53443-21045-0000 filtered slave 2014-02-11-21:14:47-160088899-53443-21045-0 for 5secs I0211 21:14:47.875012 21053 slave.cpp:736] Got assigned task 2 for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.875151 21053 slave.cpp:845] Launching task 2 for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.875527 21059 slave.cpp:955] Queuing task '0' for executor default of framework '2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.875608 21059 process_isolator.cpp:102] Launching default (/home/hudson/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-In-Src-Set-JAVA_HOME/src/examples/python/test-executor) in /tmp/mesos-Z8v6cu/2/slaves/2014-02-11-21:14:47-160088899-53443-21045-2/frameworks/2014-02-11-21:14:47-160088899-53443-21045-0000/executors/default/runs/02cdf8bd-0757-4a40-8e77-af60bb202d71 with resources cpus(*):1; mem(*):32' for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.876787 21054 slave.cpp:469] Successfully attached file '/tmp/mesos-Z8v6cu/2/slaves/2014-02-11-21:14:47-160088899-53443-21045-2/frameworks/2014-02-11-21:14:47-160088899-53443-21045-0000/executors/default/runs/02cdf8bd-0757-4a40-8e77-af60bb202d71' I0211 21:14:47.876852 21059 process_isolator.cpp:165] Forked executor at 21061 I0211 21:14:47.876940 21058 slave.cpp:955] Queuing task '1' for executor default of framework '2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.877095 21057 process_isolator.cpp:102] Launching default (/home/hudson/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-In-Src-Set-JAVA_HOME/src/examples/python/test-executor) in 
/tmp/mesos-Z8v6cu/0/slaves/2014-02-11-21:14:47-160088899-53443-21045-1/frameworks/2014-02-11-21:14:47-160088899-53443-21045-0000/executors/default/runs/568b657d-839d-483f-aff1-4872fbfc27dc with resources cpus(*):1; mem(*):32' for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.877102 21052 slave.cpp:469] Successfully attached file '/tmp/mesos-Z8v6cu/0/slaves/2014-02-11-21:14:47-160088899-53443-21045-1/frameworks/2014-02-11-21:14:47-160088899-53443-21045-0000/executors/default/runs/568b657d-839d-483f-aff1-4872fbfc27dc' I0211 21:14:47.878783 21057 process_isolator.cpp:165] Forked executor at 21062 I0211 21:14:47.879032 21053 slave.cpp:955] Queuing task '2' for executor default of framework '2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.879192 21054 slave.cpp:2098] Monitoring executor default of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 forked at pid 21062 I0211 21:14:47.879192 21058 slave.cpp:469] Successfully attached file '/tmp/mesos-Z8v6cu/1/slaves/2014-02-11-21:14:47-160088899-53443-21045-0/frameworks/2014-02-11-21:14:47-160088899-53443-21045-0000/executors/default/runs/a7c4170a-f40b-4493-81b3-0ea8c70e3977' I0211 21:14:47.879166 21052 process_isolator.cpp:102] Launching default (/home/hudson/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-In-Src-Set-JAVA_HOME/src/examples/python/test-executor) in /tmp/mesos-Z8v6cu/1/slaves/2014-02-11-21:14:47-160088899-53443-21045-0/frameworks/2014-02-11-21:14:47-160088899-53443-21045-0000/executors/default/runs/a7c4170a-f40b-4493-81b3-0ea8c70e3977 with resources cpus(*):1; mem(*):32' for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:47.880775 21057 slave.cpp:2098] Monitoring executor default of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 forked at pid 21061 I0211 21:14:47.880959 21052 process_isolator.cpp:165] Forked executor at 21064 E0211 21:14:47.881386 21054 slave.cpp:2124] Failed to watch executor default of framework 2014-02-11-21:14:47-160088899-53443-21045-0000: Already watched I0211 21:14:47.881474 21055 slave.cpp:2098] Monitoring executor default of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 forked at pid 21064 E0211 21:14:47.881516 21055 slave.cpp:2124] Failed to watch executor default of framework 2014-02-11-21:14:47-160088899-53443-21045-0000: Already watched Fetching resources into '/tmp/mesos-Z8v6cu/2/slaves/2014-02-11-21:14:47-160088899-53443-21045-2/frameworks/2014-02-11-21:14:47-160088899-53443-21045-0000/executors/default/runs/02cdf8bd-0757-4a40-8e77-af60bb202d71' Fetching resources into '/tmp/mesos-Z8v6cu/0/slaves/2014-02-11-21:14:47-160088899-53443-21045-1/frameworks/2014-02-11-21:14:47-160088899-53443-21045-0000/executors/default/runs/568b657d-839d-483f-aff1-4872fbfc27dc' Fetching resources into '/tmp/mesos-Z8v6cu/1/slaves/2014-02-11-21:14:47-160088899-53443-21045-0/frameworks/2014-02-11-21:14:47-160088899-53443-21045-0000/executors/default/runs/a7c4170a-f40b-4493-81b3-0ea8c70e3977' WARNING: Logging before InitGoogleLogging() is written to STDERR I0211 21:14:48.154657 21117 process.cpp:1591] libprocess is initialized on 67.195.138.9:60148 for 8 cpus I0211 21:14:48.155632 21117 exec.cpp:131] Version: 0.18.0 WARNING: Logging before InitGoogleLogging() is written to STDERR I0211 21:14:48.156184 21116 process.cpp:1591] libprocess is initialized on 67.195.138.9:55901 for 8 cpus I0211 21:14:48.157078 21119 exec.cpp:181] Executor started at: executor(1)@67.195.138.9:60148 with pid 21117 I0211 21:14:48.157146 21116 exec.cpp:131] 
Version: 0.18.0 I0211 21:14:48.157536 21052 slave.cpp:1431] Got registration for executor 'default' of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.157784 21052 slave.cpp:1552] Flushing queued task 2 for executor 'default' of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.158042 21126 process.cpp:1010] Socket closed while receiving I0211 21:14:48.158088 21124 exec.cpp:205] Executor registered on slave 2014-02-11-21:14:47-160088899-53443-21045-0 WARNING: Logging before InitGoogleLogging() is written to STDERR I0211 21:14:48.158324 21113 process.cpp:1591] libprocess is initialized on 67.195.138.9:43514 for 8 cpus I0211 21:14:48.158526 21128 exec.cpp:181] Executor started at: executor(1)@67.195.138.9:55901 with pid 21116 I0211 21:14:48.158803 21055 slave.cpp:1431] Got registration for executor 'default' of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.159018 21055 slave.cpp:1552] Flushing queued task 1 for executor 'default' of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.159241 21133 exec.cpp:205] Executor registered on slave 2014-02-11-21:14:47-160088899-53443-21045-1 I0211 21:14:48.159246 21135 process.cpp:1010] Socket closed while receiving I0211 21:14:48.159283 21113 exec.cpp:131] Version: 0.18.0 I0211 21:14:48.159543 21124 exec.cpp:217] Executor::registered took 575493ns I0211 21:14:48.159593 21124 exec.cpp:292] Executor asked to run task '2' Starting executor Running task 2 I0211 21:14:48.160181 21124 exec.cpp:301] Executor::launchTask took 569794ns I0211 21:14:48.160450 21133 exec.cpp:217] Executor::registered took 454612ns I0211 21:14:48.160522 21133 exec.cpp:292] Executor asked to run task '1' Sending status update... I0211 21:14:48.160640 21137 exec.cpp:181] Executor started at: executor(1)@67.195.138.9:43514 with pid 21113 Sent status update I0211 21:14:48.160894 21052 slave.cpp:1431] Got registration for executor 'default' of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 Starting executor Running task 1 I0211 21:14:48.161001 21133 exec.cpp:301] Executor::launchTask took 466392ns I0211 21:14:48.161068 21052 slave.cpp:1552] Flushing queued task 0 for executor 'default' of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.161222 21144 process.cpp:1010] Socket closed while receiving I0211 21:14:48.161273 21137 exec.cpp:205] Executor registered on slave 2014-02-11-21:14:47-160088899-53443-21045-2 ISending status update... 
0211 21:14:48.161321 21144 process.cpp:1010] Socket closed while receiving Sent status update I0211 21:14:48.161535 21125 exec.cpp:524] Executor sending status update TASK_RUNNING (UUID: bd0018b7-0742-42bc-a0a0-1d90f87e7d3b) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.161744 21058 slave.cpp:1765] Handling status update TASK_RUNNING (UUID: bd0018b7-0742-42bc-a0a0-1d90f87e7d3b) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from executor(1)@67.195.138.9:60148 I0211 21:14:48.161859 21058 status_update_manager.cpp:314] Received status update TASK_RUNNING (UUID: bd0018b7-0742-42bc-a0a0-1d90f87e7d3b) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.161874 21058 status_update_manager.cpp:493] Creating StatusUpdate stream for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.161938 21058 status_update_manager.cpp:367] Forwarding status update TASK_RUNNING (UUID: bd0018b7-0742-42bc-a0a0-1d90f87e7d3b) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to master@67.195.138.9:53443 I0211 21:14:48.162057 21058 master.cpp:2026] Status update TASK_RUNNING (UUID: bd0018b7-0742-42bc-a0a0-1d90f87e7d3b) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from slave(2)@67.195.138.9:53443 I0211 21:14:48.162080 21058 slave.cpp:1884] Status update manager successfully handled status update TASK_RUNNING (UUID: bd0018b7-0742-42bc-a0a0-1d90f87e7d3b) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.162088 21058 slave.cpp:1890] Sending acknowledgement for status update TASK_RUNNING (UUID: bd0018b7-0742-42bc-a0a0-1d90f87e7d3b) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to executor(1)@67.195.138.9:60148 I0211 21:14:48.162555 21058 sched.cpp:616] Scheduler::statusUpdate took 351553ns I0211 21:14:48.162623 21058 status_update_manager.cpp:392] Received status update acknowledgement (UUID: bd0018b7-0742-42bc-a0a0-1d90f87e7d3b) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.162669 21058 slave.cpp:1371] Status update manager successfully handled status update acknowledgement (UUID: bd0018b7-0742-42bc-a0a0-1d90f87e7d3b) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.162766 21126 process.cpp:1010] Socket closed while receiving I0211 21:14:48.163368 21131 exec.cpp:524] Executor sending status update TASK_RUNNING (UUID: e2254a60-ebc8-4553-9ed8-e44cc4d84eb8) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.163434 21125 exec.cpp:524] Executor sending status update TASK_FINISHED (UUID: 1d57909c-8b68-45f9-9785-c4b6ad29e664) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.163486 21125 exec.cpp:338] Executor received status update acknowledgement bd0018b7-0742-42bc-a0a0-1d90f87e7d3b for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.163565 21058 slave.cpp:1765] Handling status update TASK_FINISHED (UUID: 1d57909c-8b68-45f9-9785-c4b6ad29e664) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from executor(1)@67.195.138.9:60148 I0211 21:14:48.163583 21058 slave.cpp:3214] Terminating task 2 I0211 21:14:48.163662 21058 slave.cpp:1765] Handling status update TASK_RUNNING (UUID: e2254a60-ebc8-4553-9ed8-e44cc4d84eb8) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from 
executor(1)@67.195.138.9:55901 I0211 21:14:48.163676 21137 exec.cpp:217] Executor::registered took 548316ns II0211 21:14:48.163739 21058 status_update_manager.cpp:314] Received status update TASK_FINISHED (UUID: 1d57909c-8b68-45f9-9785-c4b6ad29e664) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 0211 21:14:48.163740 21137 exec.cpp:292] Executor asked to run task '0' I0211 21:14:48.163756 21058 status_update_manager.cpp:367] Forwarding status update TASK_FINISHED (UUID: 1d57909c-8b68-45f9-9785-c4b6ad29e664) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to master@67.195.138.9:53443 I0211 21:14:48.163813 21058 status_update_manager.cpp:314] Received status update TASK_RUNNING (UUID: e2254a60-ebc8-4553-9ed8-e44cc4d84eb8) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.163825 21058 status_update_manager.cpp:493] Creating StatusUpdate stream for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.163868 21058 status_update_manager.cpp:367] Forwarding status update TASK_RUNNING (UUID: e2254a60-ebc8-4553-9ed8-e44cc4d84eb8) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to master@67.195.138.9:53443 I0211 21:14:48.163954 21058 master.cpp:2026] Status update TASK_FINISHED (UUID: 1d57909c-8b68-45f9-9785-c4b6ad29e664) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from slave(2)@67.195.138.9:53443 I0211 21:14:48.163998 21058 master.hpp:429] Removing task 2 with resources cpus(*):1; mem(*):32 on slave 2014-02-11-21:14:47-160088899-53443-21045-0 (vesta.apache.org) I0211 21:14:48.164083 21058 master.cpp:2026] Status update TASK_RUNNING (UUID: e2254a60-ebc8-4553-9ed8-e44cc4d84eb8) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from slave(1)@67.195.138.9:53443 I0211 21:14:48.164103 21058 slave.cpp:1884] Status update manager successfully handled status update TASK_FINISHED (UUID: 1d57909c-8b68-45f9-9785-c4b6ad29e664) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.164113 21058 slave.cpp:1890] Sending acknowledgement for status update TASK_FINISHED (UUID: 1d57909c-8b68-45f9-9785-c4b6ad29e664) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to executor(1)@67.195.138.9:60148 I0211 21:14:48.164181 21058 slave.cpp:1884] Status update manager successfully handled status update TASK_RUNNING (UUID: e2254a60-ebc8-4553-9ed8-e44cc4d84eb8) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 II0211 21:14:48.164193 21058 slave.cpp:1890] Sending acknowledgement for status update TASK_RUNNING (UUID: e2254a60-ebc8-4553-9ed8-e44cc4d84eb8) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to executor(1)@67.195.138.9:55901 0211 21:14:48.164191 21131 exec.cpp:524] Executor sending status update TASK_FINISHED (UUID: 37fd5f35-c3b3-4c16-b836-3cab90ed6874) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 Starting executor Running task 0 I0211 21:14:48.164299 21052 hierarchical_allocator_process.hpp:637] Recovered cpus(*):1; mem(*):32 (total allocatable: cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000]) on slave 2014-02-11-21:14:47-160088899-53443-21045-0 from framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.164361 21058 slave.cpp:1765] Handling status update TASK_FINISHED (UUID: 37fd5f35-c3b3-4c16-b836-3cab90ed6874) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 
from executor(1)@67.195.138.9:55901 I0211 21:14:48.164392 21058 slave.cpp:3214] Terminating task 1 I0211 21:14:48.164505 21052 status_update_manager.cpp:314] Received status update TASK_FINISHED (UUID: 37fd5f35-c3b3-4c16-b836-3cab90ed6874) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.164512 21057 sched.cpp:616] Scheduler::statusUpdate took 238156ns I0211 21:14:48.164559 21052 slave.cpp:1884] Status update manager successfully handled status update TASK_FINISHED (UUID: 37fd5f35-c3b3-4c16-b836-3cab90ed6874) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 II0211 21:14:48.164572 21052 slave.cpp:1890] Sending acknowledgement for status update TASK_FINISHED (UUID: 37fd5f35-c3b3-4c16-b836-3cab90ed6874) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to executor(1)@67.195.138.9:55901 0211 21:14:48.164571 21135 process.cpp:1010] Socket closed while receiving I0211 21:14:48.164600 21130 exec.cpp:338] Executor received status update acknowledgement e2254a60-ebc8-4553-9ed8-e44cc4d84eb8 for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.164635 21057 sched.cpp:616] Scheduler::statusUpdate took 76837ns Sending status update... I0211 21:14:48.164715 21135 process.cpp:1010] Socket closed while receiving I0211 21:14:48.164726 21057 status_update_manager.cpp:392] Received status update acknowledgement (UUID: e2254a60-ebc8-4553-9ed8-e44cc4d84eb8) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.164728 21130 exec.cpp:338] Executor received status update acknowledgement 37fd5f35-c3b3-4c16-b836-3cab90ed6874 for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.164749 21057 status_update_manager.cpp:367] Forwarding status update TASK_FINISHED (UUID: 37fd5f35-c3b3-4c16-b836-3cab90ed6874) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to master@67.195.138.9:53443 Sent status update I0211 21:14:48.164818 21057 slave.cpp:1371] Status update manager successfully handled status update acknowledgement (UUID: e2254a60-ebc8-4553-9ed8-e44cc4d84eb8) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.164842 21053 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 1d57909c-8b68-45f9-9785-c4b6ad29e664) for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.164829 21137 exec.cpp:301] Executor::launchTask took 1.068244ms I0211 21:14:48.164872 21053 status_update_manager.cpp:524] Cleaning up status update stream for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.164911 21126 process.cpp:1010] Socket closed while receiving I0211 21:14:48.164952 21122 exec.cpp:338] Executor received status update acknowledgement 1d57909c-8b68-45f9-9785-c4b6ad29e664 for task 2 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.164995 21126 process.cpp:1010] Socket closed while receiving I0211 21:14:48.165006 21122 exec.cpp:359] Executor received framework message I0211 21:14:48.165004 21058 master.cpp:2026] Status update TASK_FINISHED (UUID: 37fd5f35-c3b3-4c16-b836-3cab90ed6874) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from slave(1)@67.195.138.9:53443 I0211 21:14:48.165043 21057 slave.cpp:1371] Status update manager successfully handled status update acknowledgement (UUID: 1d57909c-8b68-45f9-9785-c4b6ad29e664) for task 2 of framework 
2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.165052 21122 exec.cpp:368] Executor::frameworkMessage took 34533ns I0211 21:14:48.165058 21057 slave.cpp:3237] Completing task 2 I0211 21:14:48.165057 21058 master.hpp:429] Removing task 1 with resources cpus(*):1; mem(*):32 on slave 2014-02-11-21:14:47-160088899-53443-21045-1 (vesta.apache.org) I0211 21:14:48.165175 21053 sched.cpp:616] Scheduler::statusUpdate took 162784ns I0211 21:14:48.165220 21058 slave.cpp:1943] Sending message for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to scheduler(1)@67.195.138.9:53443 I0211 21:14:48.165211 21055 hierarchical_allocator_process.hpp:637] Recovered cpus(*):1; mem(*):32 (total allocatable: cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000]) on slave 2014-02-11-21:14:47-160088899-53443-21045-1 from framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.165316 21057 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 37fd5f35-c3b3-4c16-b836-3cab90ed6874) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.165386 21057 status_update_manager.cpp:524] Cleaning up status update stream for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.165427 21135 process.cpp:1010] Socket closed while receiving I0211 21:14:48.165451 21055 sched.cpp:701] Scheduler::frameworkMessage took 170832ns I0211 21:14:48.165462 21053 slave.cpp:1371] Status update manager successfully handled status update acknowledgement (UUID: 37fd5f35-c3b3-4c16-b836-3cab90ed6874) for task 1 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.165468 21132 exec.cpp:359] Executor received framework message I0211 21:14:48.165475 21053 slave.cpp:3237] Completing task 1 I0211 21:14:48.165493 21132 exec.cpp:368] Executor::frameworkMessage took 11572ns I0211 21:14:48.165654 21057 slave.cpp:1943] Sending message for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to scheduler(1)@67.195.138.9:53443 I0211 21:14:48.165786 21057 sched.cpp:701] Scheduler::frameworkMessage took 96059ns I0211 21:14:48.165777 21137 exec.cpp:524] Executor sending status update TASK_RUNNING (UUID: f955480b-856b-4f79-8d92-63edea7ad97d) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.165974 21055 slave.cpp:1765] Handling status update TASK_RUNNING (UUID: f955480b-856b-4f79-8d92-63edea7ad97d) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from executor(1)@67.195.138.9:43514 I0211 21:14:48.166085 21053 status_update_manager.cpp:314] Received status update TASK_RUNNING (UUID: f955480b-856b-4f79-8d92-63edea7ad97d) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.166113 21053 status_update_manager.cpp:493] Creating StatusUpdate stream for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.166160 21053 status_update_manager.cpp:367] Forwarding status update TASK_RUNNING (UUID: f955480b-856b-4f79-8d92-63edea7ad97d) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to master@67.195.138.9:53443 I0211 21:14:48.166240 21052 master.cpp:2026] Status update TASK_RUNNING (UUID: f955480b-856b-4f79-8d92-63edea7ad97d) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from slave(3)@67.195.138.9:53443 I0211 21:14:48.166244 21055 slave.cpp:1884] Status update manager successfully handled status update TASK_RUNNING (UUID: 
f955480b-856b-4f79-8d92-63edea7ad97d) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.166278 21055 slave.cpp:1890] Sending acknowledgement for status update TASK_RUNNING (UUID: f955480b-856b-4f79-8d92-63edea7ad97d) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to executor(1)@67.195.138.9:43514 I0211 21:14:48.166385 21058 sched.cpp:616] Scheduler::statusUpdate took 88496ns I0211 21:14:48.166489 21144 process.cpp:1010] Socket closed while receiving I0211 21:14:48.166631 21052 status_update_manager.cpp:392] Received status update acknowledgement (UUID: f955480b-856b-4f79-8d92-63edea7ad97d) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.166653 21137 exec.cpp:524] Executor sending status update TASK_FINISHED (UUID: 9af4cdff-74ae-40ab-9788-0d5f8b7435ec) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.166679 21052 slave.cpp:1371] Status update manager successfully handled status update acknowledgement (UUID: f955480b-856b-4f79-8d92-63edea7ad97d) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.166699 21137 exec.cpp:338] Executor received status update acknowledgement f955480b-856b-4f79-8d92-63edea7ad97d for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.166836 21059 slave.cpp:1765] Handling status update TASK_FINISHED (UUID: 9af4cdff-74ae-40ab-9788-0d5f8b7435ec) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from executor(1)@67.195.138.9:43514 I0211 21:14:48.166858 21059 slave.cpp:3214] Terminating task 0 I0211 21:14:48.166960 21054 status_update_manager.cpp:314] Received status update TASK_FINISHED (UUID: 9af4cdff-74ae-40ab-9788-0d5f8b7435ec) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.166988 21054 status_update_manager.cpp:367] Forwarding status update TASK_FINISHED (UUID: 9af4cdff-74ae-40ab-9788-0d5f8b7435ec) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to master@67.195.138.9:53443 I0211 21:14:48.167079 21055 slave.cpp:1884] Status update manager successfully handled status update TASK_FINISHED (UUID: 9af4cdff-74ae-40ab-9788-0d5f8b7435ec) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.167095 21055 slave.cpp:1890] Sending acknowledgement for status update TASK_FINISHED (UUID: 9af4cdff-74ae-40ab-9788-0d5f8b7435ec) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to executor(1)@67.195.138.9:43514 I0211 21:14:48.167100 21059 master.cpp:2026] Status update TASK_FINISHED (UUID: 9af4cdff-74ae-40ab-9788-0d5f8b7435ec) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from slave(3)@67.195.138.9:53443 I0211 21:14:48.167147 21059 master.hpp:429] Removing task 0 with resources cpus(*):1; mem(*):32 on slave 2014-02-11-21:14:47-160088899-53443-21045-2 (vesta.apache.org) I0211 21:14:48.167260 21055 hierarchical_allocator_process.hpp:637] Recovered cpus(*):1; mem(*):32 (total allocatable: cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000]) on slave 2014-02-11-21:14:47-160088899-53443-21045-2 from framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.167284 21144 process.cpp:1010] Socket closed while receiving I0211 21:14:48.167284 21140 exec.cpp:338] Executor received status update acknowledgement 9af4cdff-74ae-40ab-9788-0d5f8b7435ec for task 0 of framework 
2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.167326 21054 sched.cpp:616] Scheduler::statusUpdate took 160237ns I0211 21:14:48.167469 21057 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 9af4cdff-74ae-40ab-9788-0d5f8b7435ec) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.167495 21057 status_update_manager.cpp:524] Cleaning up status update stream for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.167501 21144 process.cpp:1010] Socket closed while receiving I0211 21:14:48.167520 21141 exec.cpp:359] Executor received framework message I0211 21:14:48.167543 21054 slave.cpp:1371] Status update manager successfully handled status update acknowledgement (UUID: 9af4cdff-74ae-40ab-9788-0d5f8b7435ec) for task 0 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.167563 21054 slave.cpp:3237] Completing task 0 I0211 21:14:48.167563 21141 exec.cpp:368] Executor::frameworkMessage took 29844ns I0211 21:14:48.167691 21057 slave.cpp:1943] Sending message for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to scheduler(1)@67.195.138.9:53443 I0211 21:14:48.167773 21052 sched.cpp:701] Scheduler::frameworkMessage took 46462ns I0211 21:14:48.866621 21057 hierarchical_allocator_process.hpp:752] Offering cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-0 to framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.866730 21057 hierarchical_allocator_process.hpp:752] Offering cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-1 to framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.866799 21057 hierarchical_allocator_process.hpp:752] Offering cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-2 to framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.866981 21057 hierarchical_allocator_process.hpp:688] Performed allocation for 3 slaves in 433438ns I0211 21:14:48.867055 21059 master.hpp:439] Adding offer 2014-02-11-21:14:47-160088899-53443-21045-3 with resources cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-2 (vesta.apache.org) I0211 21:14:48.867164 21059 master.hpp:439] Adding offer 2014-02-11-21:14:47-160088899-53443-21045-4 with resources cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-1 (vesta.apache.org) I0211 21:14:48.867241 21059 master.hpp:439] Adding offer 2014-02-11-21:14:47-160088899-53443-21045-5 with resources cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-0 (vesta.apache.org) I0211 21:14:48.867285 21059 master.cpp:2278] Sending 3 offers to framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.869622 21053 sched.cpp:525] Scheduler::resourceOffers took 2.155683ms I0211 21:14:48.869803 21059 master.hpp:449] Removing offer 2014-02-11-21:14:47-160088899-53443-21045-3 with resources cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-2 (vesta.apache.org) I0211 21:14:48.869858 21059 master.cpp:1574] Processing reply for offers: [ 2014-02-11-21:14:47-160088899-53443-21045-3 ] on slave 
2014-02-11-21:14:47-160088899-53443-21045-2 (vesta.apache.org) for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.869946 21059 master.hpp:411] Adding task 3 with resources cpus(*):1; mem(*):32 on slave 2014-02-11-21:14:47-160088899-53443-21045-2 (vesta.apache.org) I0211 21:14:48.869969 21059 master.cpp:2447] Launching task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 with resources cpus(*):1; mem(*):32 on slave 2014-02-11-21:14:47-160088899-53443-21045-2 (vesta.apache.org) I0211 21:14:48.870033 21053 slave.cpp:736] Got assigned task 3 for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.870142 21059 master.hpp:449] Removing offer 2014-02-11-21:14:47-160088899-53443-21045-4 with resources cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-1 (vesta.apache.org) I0211 21:14:48.870169 21058 hierarchical_allocator_process.hpp:547] Framework 2014-02-11-21:14:47-160088899-53443-21045-0000 left cpus(*):7; mem(*):6929; disk(*):1.38501e+06; ports(*):[31000-32000] unused on slave 2014-02-11-21:14:47-160088899-53443-21045-2 I0211 21:14:48.870193 21059 master.cpp:1574] Processing reply for offers: [ 2014-02-11-21:14:47-160088899-53443-21045-4 ] on slave 2014-02-11-21:14:47-160088899-53443-21045-1 (vesta.apache.org) for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.870250 21059 master.hpp:411] Adding task 4 with resources cpus(*):1; mem(*):32 on slave 2014-02-11-21:14:47-160088899-53443-21045-1 (vesta.apache.org) I0211 21:14:48.870275 21059 master.cpp:2447] Launching task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 with resources cpus(*):1; mem(*):32 on slave 2014-02-11-21:14:47-160088899-53443-21045-1 (vesta.apache.org) I0211 21:14:48.870281 21058 hierarchical_allocator_process.hpp:590] Framework 2014-02-11-21:14:47-160088899-53443-21045-0000 filtered slave 2014-02-11-21:14:47-160088899-53443-21045-2 for 5secs I0211 21:14:48.870331 21058 slave.cpp:736] Got assigned task 4 for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.870414 21058 slave.cpp:845] Launching task 4 for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.870406 21059 master.hpp:449] Removing offer 2014-02-11-21:14:47-160088899-53443-21045-5 with resources cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] on slave 2014-02-11-21:14:47-160088899-53443-21045-0 (vesta.apache.org) I0211 21:14:48.870468 21058 slave.cpp:980] Sending task '4' to executor 'default' of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.870475 21059 master.cpp:1574] Processing reply for offers: [ 2014-02-11-21:14:47-160088899-53443-21045-5 ] on slave 2014-02-11-21:14:47-160088899-53443-21045-0 (vesta.apache.org) for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.870528 21059 hierarchical_allocator_process.hpp:547] Framework 2014-02-11-21:14:47-160088899-53443-21045-0000 left cpus(*):7; mem(*):6929; disk(*):1.38501e+06; ports(*):[31000-32000] unused on slave 2014-02-11-21:14:47-160088899-53443-21045-1 I0211 21:14:48.870601 21059 hierarchical_allocator_process.hpp:590] Framework 2014-02-11-21:14:47-160088899-53443-21045-0000 filtered slave 2014-02-11-21:14:47-160088899-53443-21045-1 for 5secs I0211 21:14:48.870632 21053 slave.cpp:845] Launching task 3 for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.870656 21059 hierarchical_allocator_process.hpp:547] 
Framework 2014-02-11-21:14:47-160088899-53443-21045-0000 left cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000] unused on slave 2014-02-11-21:14:47-160088899-53443-21045-0 I0211 21:14:48.870666 21053 slave.cpp:980] Sending task '3' to executor 'default' of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.870735 21059 hierarchical_allocator_process.hpp:590] Framework 2014-02-11-21:14:47-160088899-53443-21045-0000 filtered slave 2014-02-11-21:14:47-160088899-53443-21045-0 for 5secs I0211 21:14:48.870842 21135 process.cpp:1010] Socket closed while receiving I0211 21:14:48.870910 21133 exec.cpp:292] Executor asked to run task '4' I0211 21:14:48.870980 21144 process.cpp:1010] Socket closed while receiving I0211 21:14:48.871078 21136 exec.cpp:292] Executor asked to run task '3' Running task 4 I0211 21:14:48.871618 21133 exec.cpp:301] Executor::launchTask took 669868ns Sending status update... Running task 3 Sent status update Sending status update... Sent status update I0211 21:14:48.872700 21134 exec.cpp:524] Executor sending status update TASK_RUNNING (UUID: 1402086c-13c6-4892-a87a-603864039b45) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.872844 21136 exec.cpp:301] Executor::launchTask took 1.735607ms I0211 21:14:48.872951 21057 slave.cpp:1765] Handling status update TASK_RUNNING (UUID: 1402086c-13c6-4892-a87a-603864039b45) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from executor(1)@67.195.138.9:55901 I0211 21:14:48.873046 21055 status_update_manager.cpp:314] Received status update TASK_RUNNING (UUID: 1402086c-13c6-4892-a87a-603864039b45) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.873066 21055 status_update_manager.cpp:493] Creating StatusUpdate stream for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.873123 21055 status_update_manager.cpp:367] Forwarding status update TASK_RUNNING (UUID: 1402086c-13c6-4892-a87a-603864039b45) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to master@67.195.138.9:53443 I0211 21:14:48.873214 21057 master.cpp:2026] Status update TASK_RUNNING (UUID: 1402086c-13c6-4892-a87a-603864039b45) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from slave(1)@67.195.138.9:53443 I0211 21:14:48.873245 21058 slave.cpp:1884] Status update manager successfully handled status update TASK_RUNNING (UUID: 1402086c-13c6-4892-a87a-603864039b45) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.873268 21058 slave.cpp:1890] Sending acknowledgement for status update TASK_RUNNING (UUID: 1402086c-13c6-4892-a87a-603864039b45) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to executor(1)@67.195.138.9:55901 I0211 21:14:48.873344 21055 sched.cpp:616] Scheduler::statusUpdate took 109430ns I0211 21:14:48.873440 21055 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 1402086c-13c6-4892-a87a-603864039b45) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.873472 21135 process.cpp:1010] Socket closed while receiving I0211 21:14:48.873497 21057 slave.cpp:1371] Status update manager successfully handled status update acknowledgement (UUID: 1402086c-13c6-4892-a87a-603864039b45) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.874042 21134 exec.cpp:524] Executor sending status update TASK_FINISHED (UUID: 
9ef8e17e-8569-41fc-93f2-df09a42bf876) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.874084 21134 exec.cpp:338] Executor received status update acknowledgement 1402086c-13c6-4892-a87a-603864039b45 for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.874217 21055 slave.cpp:1765] Handling status update TASK_FINISHED (UUID: 9ef8e17e-8569-41fc-93f2-df09a42bf876) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from executor(1)@67.195.138.9:55901 I0211 21:14:48.874234 21055 slave.cpp:3214] Terminating task 4 I0211 21:14:48.874250 21136 exec.cpp:524] Executor sending status update TASK_RUNNING (UUID: 14454a53-c4e5-49bd-be22-cc119dbf206e) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.874305 21055 status_update_manager.cpp:314] Received status update TASK_FINISHED (UUID: 9ef8e17e-8569-41fc-93f2-df09a42bf876) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.874326 21055 status_update_manager.cpp:367] Forwarding status update TASK_FINISHED (UUID: 9ef8e17e-8569-41fc-93f2-df09a42bf876) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to master@67.195.138.9:53443 I0211 21:14:48.874400 21055 slave.cpp:1765] Handling status update TASK_RUNNING (UUID: 14454a53-c4e5-49bd-be22-cc119dbf206e) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from executor(1)@67.195.138.9:43514 I0211 21:14:48.874440 21052 master.cpp:2026] Status update TASK_FINISHED (UUID: 9ef8e17e-8569-41fc-93f2-df09a42bf876) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from slave(1)@67.195.138.9:53443 I0211 21:14:48.874461 21055 slave.cpp:1884] Status update manager successfully handled status update TASK_FINISHED (UUID: 9ef8e17e-8569-41fc-93f2-df09a42bf876) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.874471 21055 slave.cpp:1890] Sending acknowledgement for status update TASK_FINISHED (UUID: 9ef8e17e-8569-41fc-93f2-df09a42bf876) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to executor(1)@67.195.138.9:55901 I0211 21:14:48.874487 21052 master.hpp:429] Removing task 4 with resources cpus(*):1; mem(*):32 on slave 2014-02-11-21:14:47-160088899-53443-21045-1 (vesta.apache.org) I0211 21:14:48.874555 21052 status_update_manager.cpp:314] Received status update TASK_RUNNING (UUID: 14454a53-c4e5-49bd-be22-cc119dbf206e) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.874575 21052 status_update_manager.cpp:493] Creating StatusUpdate stream for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.874647 21052 status_update_manager.cpp:367] Forwarding status update TASK_RUNNING (UUID: 14454a53-c4e5-49bd-be22-cc119dbf206e) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to master@67.195.138.9:53443 I0211 21:14:48.874707 21135 process.cpp:1010] Socket closed while receiving I0211 21:14:48.874711 21055 sched.cpp:616] Scheduler::statusUpdate took 141827ns I0211 21:14:48.874744 21132 exec.cpp:338] Executor received status update acknowledgement 9ef8e17e-8569-41fc-93f2-df09a42bf876 for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.874634 21054 hierarchical_allocator_process.hpp:637] Recovered cpus(*):1; mem(*):32 (total allocatable: cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000]) on slave 
2014-02-11-21:14:47-160088899-53443-21045-1 from framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.874775 21054 slave.cpp:1884] Status update manager successfully handled status update TASK_RUNNING (UUID: 14454a53-c4e5-49bd-be22-cc119dbf206e) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.874785 21054 slave.cpp:1890] Sending acknowledgement for status update TASK_RUNNING (UUID: 14454a53-c4e5-49bd-be22-cc119dbf206e) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to executor(1)@67.195.138.9:43514 I0211 21:14:48.874802 21052 master.cpp:2026] Status update TASK_RUNNING (UUID: 14454a53-c4e5-49bd-be22-cc119dbf206e) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from slave(3)@67.195.138.9:53443 I0211 21:14:48.874904 21055 sched.cpp:616] Scheduler::statusUpdate took 76541ns I0211 21:14:48.874948 21144 process.cpp:1010] Socket closed while receiving I0211 21:14:48.874986 21055 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 14454a53-c4e5-49bd-be22-cc119dbf206e) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.875021 21058 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 9ef8e17e-8569-41fc-93f2-df09a42bf876) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.875051 21055 slave.cpp:1371] Status update manager successfully handled status update acknowledgement (UUID: 14454a53-c4e5-49bd-be22-cc119dbf206e) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.875064 21058 status_update_manager.cpp:524] Cleaning up status update stream for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.875071 21135 process.cpp:1010] Socket closed while receiving I0211 21:14:48.875093 21131 exec.cpp:359] Executor received framework message II0211 21:14:48.875123 21131 exec.cpp:368] Executor::frameworkMessage took 17599ns 0211 21:14:48.875120 21136 exec.cpp:524] Executor sending status update TASK_FINISHED (UUID: 292fb4b8-d187-497c-8d7f-b8e6f3bba219) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.875174 21136 exec.cpp:338] Executor received status update acknowledgement 14454a53-c4e5-49bd-be22-cc119dbf206e for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.875203 21059 slave.cpp:1371] Status update manager successfully handled status update acknowledgement (UUID: 9ef8e17e-8569-41fc-93f2-df09a42bf876) for task 4 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.875219 21059 slave.cpp:3237] Completing task 4 I0211 21:14:48.875272 21059 slave.cpp:1943] Sending message for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to scheduler(1)@67.195.138.9:53443 I0211 21:14:48.875335 21058 slave.cpp:1765] Handling status update TASK_FINISHED (UUID: 292fb4b8-d187-497c-8d7f-b8e6f3bba219) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from executor(1)@67.195.138.9:43514 I0211 21:14:48.875358 21058 slave.cpp:3214] Terminating task 3 I0211 21:14:48.875360 21059 sched.cpp:701] Scheduler::frameworkMessage took 53004ns I0211 21:14:48.875466 21059 status_update_manager.cpp:314] Received status update TASK_FINISHED (UUID: 292fb4b8-d187-497c-8d7f-b8e6f3bba219) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.875488 21059 status_update_manager.cpp:367] Forwarding status update 
TASK_FINISHED (UUID: 292fb4b8-d187-497c-8d7f-b8e6f3bba219) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to master@67.195.138.9:53443 I0211 21:14:48.875579 21057 slave.cpp:1884] Status update manager successfully handled status update TASK_FINISHED (UUID: 292fb4b8-d187-497c-8d7f-b8e6f3bba219) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.875582 21058 master.cpp:2026] Status update TASK_FINISHED (UUID: 292fb4b8-d187-497c-8d7f-b8e6f3bba219) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 from slave(3)@67.195.138.9:53443 I0211 21:14:48.875604 21057 slave.cpp:1890] Sending acknowledgement for status update TASK_FINISHED (UUID: 292fb4b8-d187-497c-8d7f-b8e6f3bba219) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to executor(1)@67.195.138.9:43514 I0211 21:14:48.875639 21058 master.hpp:429] Removing task 3 with resources cpus(*):1; mem(*):32 on slave 2014-02-11-21:14:47-160088899-53443-21045-2 (vesta.apache.org) I0211 21:14:48.875778 21055 sched.cpp:616] Scheduler::statusUpdate took 143427ns I0211 21:14:48.875794 21144 process.cpp:1010] Socket closed while receiving I0211 21:14:48.875833 21058 hierarchical_allocator_process.hpp:637] Recovered cpus(*):1; mem(*):32 (total allocatable: cpus(*):8; mem(*):6961; disk(*):1.38501e+06; ports(*):[31000-32000]) on slave 2014-02-11-21:14:47-160088899-53443-21045-2 from framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.875932 21138 exec.cpp:338] Executor received status update acknowledgement 292fb4b8-d187-497c-8d7f-b8e6f3bba219 for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.875988 21055 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 292fb4b8-d187-497c-8d7f-b8e6f3bba219) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.876013 21055 status_update_manager.cpp:524] Cleaning up status update stream for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.876006 21144 process.cpp:1010] Socket closed while receiving I0211 21:14:48.876032 21137 exec.cpp:359] Executor received framework message I0211 21:14:48.876083 21137 exec.cpp:368] Executor::frameworkMessage took 36509ns I0211 21:14:48.876106 21058 slave.cpp:1371] Status update manager successfully handled status update acknowledgement (UUID: 292fb4b8-d187-497c-8d7f-b8e6f3bba219) for task 3 of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.876121 21058 slave.cpp:3237] Completing task 3 I0211 21:14:48.876209 21055 slave.cpp:1943] Sending message for framework 2014-02-11-21:14:47-160088899-53443-21045-0000 to scheduler(1)@67.195.138.9:53443 I0211 21:14:48.876293 21055 sched.cpp:701] Scheduler::frameworkMessage took 59391ns I0211 21:14:48.876307 21055 sched.cpp:727] Stopping framework '2014-02-11-21:14:47-160088899-53443-21045-0000' I0211 21:14:48.876369 21052 master.cpp:1024] Asked to unregister framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.876387 21052 master.cpp:2682] Removing framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.876422 21055 hierarchical_allocator_process.hpp:408] Deactivated framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.876449 21055 slave.cpp:1142] Asked to shut down framework 2014-02-11-21:14:47-160088899-53443-21045-0000 by master@67.195.138.9:53443 I0211 21:14:48.876461 21055 slave.cpp:1167] Shutting down framework 
2014-02-11-21:14:47-160088899-53443-21045-0000 Enabling authentication for the framework Registered with framework ID 2014-02-11-21:14:47-160088899-53443-21045-0000 Got 3 resource offers Got resource offer 2014-02-11-21:14:47-160088899-53443-21045-0 Accepting offer on vesta.apache.org to start task 0 Got resource offer 2014-02-11-21:14:47-160088899-53443-21045-1 Accepting offer on vesta.apache.org to start task 1 Got resource offer 2014-02-11-21:14:47-160088899-53443-21045-2 Accepting offer on vesta.apache.org to start task 2 Task 2 is in state 1 Task 2 is in state 2 Task 1 is in state 1 Task 1 is in state 2 Received message: 'data with a \x00 byte' Received message: 'data with a \x00 byte' Task 0 is in state 1 Task 0 is in state 2 Received message: 'data with a \x00 byte' Got 3 resource offers Got resource offer 2014-02-11-21:14:47-160088899-53443-21045-3 Accepting offer on vesta.apache.org to start task 3 Got resource offer 2014-02-11-21:14:47-160088899-53443-21045-4 Accepting offer on vesta.apache.org to start task 4 Got resource offer 2014-02-11-21:14:47-160088899-53443-21045-5 Task 4 is in state 1 Task 4 is in state 2 Task 3 is in state 1 Received message: 'data with a \x00 byte' Task 3 is in state 2 All tasks done, waiting for final framework message Received message: 'data with a \x00 byte' All tasks done, and all messages received, exiting I0211 21:14:48.876477 21055 slave.cpp:2431] Shutting down executor 'default' of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.876525 21052 slave.cpp:1142] Asked to shut down framework 2014-02-11-21:14:47-160088899-53443-21045-0000 by master@67.195.138.9:53443 I0211 21:14:48.876545 21052 slave.cpp:1167] Shutting down framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.876555 21052 slave.cpp:2431] Shutting down executor 'default' of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.876582 21055 slave.cpp:1142] Asked to shut down framework 2014-02-11-21:14:47-160088899-53443-21045-0000 by master@67.195.138.9:53443 I0211 21:14:48.876597 21055 slave.cpp:1167] Shutting down framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.876606 21055 slave.cpp:2431] Shutting down executor 'default' of framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.876662 21053 hierarchical_allocator_process.hpp:363] Removed framework 2014-02-11-21:14:47-160088899-53443-21045-0000 I0211 21:14:48.876766 21133 exec.cpp:378] Executor asked to shutdown I0211 21:14:48.876785 21135 process.cpp:1010] Socket closed while receiving I0211 21:14:48.876814 21133 exec.cpp:393] Executor::shutdown took 7179ns I0211 21:14:48.876843 21133 exec.cpp:77] Scheduling shutdown of the executor I0211 21:14:48.876899 21144 process.cpp:1010] Socket closed while receiving I0211 21:14:48.876956 21138 exec.cpp:378] Executor asked to shutdown I0211 21:14:48.876988 21126 process.cpp:1010] Socket closed while receiving I0211 21:14:48.877019 21138 exec.cpp:393] Executor::shutdown took 22029ns I0211 21:14:48.877027 21136 exec.cpp:77] Scheduling shutdown of the executor I0211 21:14:48.877110 21122 exec.cpp:378] Executor asked to shutdown I0211 21:14:48.877168 21122 exec.cpp:393] Executor::shutdown took 15665ns I0211 21:14:48.877185 21123 exec.cpp:77] Scheduling shutdown of the executor I0211 21:14:48.881350 21045 master.cpp:587] Master terminating I0211 21:14:48.881434 21045 master.cpp:247] Shutting down master I0211 21:14:48.881440 21058 slave.cpp:1965] master@67.195.138.9:53443 exited W0211 
21:14:48.881453 21058 slave.cpp:1968] Master disconnected! Waiting for a new master to be elected I0211 21:14:48.881456 21045 master.cpp:290] Removing slave 2014-02-11-21:14:47-160088899-53443-21045-2 (vesta.apache.org) I0211 21:14:48.881464 21052 slave.cpp:1965] master@67.195.138.9:53443 exited W0211 21:14:48.881475 21052 slave.cpp:1968] Master disconnected! Waiting for a new master to be elected I0211 21:14:48.881438 21053 slave.cpp:1965] master@67.195.138.9:53443 exited W0211 21:14:48.881515 21053 slave.cpp:1968] Master disconnected! Waiting for a new master to be elected I0211 21:14:48.881549 21045 master.cpp:290] Removing slave 2014-02-11-21:14:47-160088899-53443-21045-1 (vesta.apache.org) I0211 21:14:48.881618 21045 master.cpp:290] Removing slave 2014-02-11-21:14:47-160088899-53443-21045-0 (vesta.apache.org) I0211 21:14:48.882072 21045 slave.cpp:394] Slave terminating I0211 21:14:48.882113 21045 slave.cpp:1142] Asked to shut down framework 2014-02-11-21:14:47-160088899-53443-21045-0000 by @0.0.0.0:0 W0211 21:14:48.882135 21045 slave.cpp:1163] Ignoring shutdown framework 2014-02-11-21:14:47-160088899-53443-21045-0000 because it is terminating I0211 21:14:49.300734 21126 process.cpp:1010] Socket closed while receiving I0211 21:14:49.300804 21121 exec.cpp:439] Ignoring exited event because the driver is aborted! II0211 21:14:49.300813 21135 process.cpp:1010] Socket closed while receiving 0211 21:14:49.300820 21144 process.cpp:1010] Socket closed while receiving II0211 21:14:49.300904 21138 exec.cpp:439] Ignoring exited event because the driver is aborted! 0211 21:14:49.300907 21133 exec.cpp:439] Ignoring exited event because the driver is aborted! tests/script.cpp:81: Failure Failed python_framework_test.sh terminated with signal 'Segmentation fault' [ FAILED ] ExamplesTest.PythonFramework (2484 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-998","02/12/2014 23:54:28",5,"Slave should wait until Containerizer::update() completes successfully ""Container resources are updated in several places in the slave and we don't check the update was successful or even wait until it completes.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1010","02/17/2014 21:24:16",3,"Python extension build is broken if gflags-dev is installed ""In my environment mesos build from master results in broken python api module {{_mesos.so}}: Unmangled version of symbol looks like this: During {{./configure}} step {{glog}} finds {{gflags}} development files and starts using them, thus *implicitly* adding dependency on {{libgflags.so}}. This breaks Python extensions module and perhaps can break other mesos subsystems when moved to hosts without {{gflags}} installed. 
This task is done when the ExamplesTest.PythonFramework test will pass on a system with gflags installed."""," nekto0n@ya-darkstar ~/workspace/mesos/src/python $ PYTHONPATH=build/lib.linux-x86_64-2.7/ python -c """"import _mesos"""" Traceback (most recent call last): File """""""", line 1, in ImportError: /home/nekto0n/workspace/mesos/src/python/build/lib.linux-x86_64-2.7/_mesos.so: undefined symbol: _ZN6google14FlagRegistererC1EPKcS2_S2_S2_PvS3_ google::FlagRegisterer::FlagRegisterer(char const*, char const*, char const*, char const*, void*, void*) ",0,0,1,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1013","02/18/2014 21:34:32",2,"ExamplesTest.JavaLog is flaky ""The {{ExamplesTest.JavaLog}} test framework is flaky, possibly related to a race condition between mutexes. Full logs attached."""," [ RUN ] ExamplesTest.JavaLog Using temporary directory '/tmp/ExamplesTest_JavaLog_WBWEb9' Feb 18, 2014 12:10:57 PM TestLog main INFO: Starting a local ZooKeeper server ... F0218 12:10:58.575036 17450 coordinator.cpp:394] Check failed: !missing Not expecting local replica to be missing position 3 after the writing is done *** Check failure stack trace: *** tests/script.cpp:81: Failure Failed java_log_test.sh terminated with signal 'Aborted' [ FAILED ] ExamplesTest.JavaLog (2166 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1081","03/11/2014 21:48:00",1,"Master should not deactivate authenticated framework/slave on new AuthenticateMessage unless new authentication succeeds. ""Master should not deactivate an authenticated framework/slave upon receiving a new AuthenticateMessage unless new authentication succeeds. As it stands now, a malicious user could spoof the pid of an authenticated framework/slave and send an AuthenticateMessage to knock a valid framework/slave off the authenticated list, forcing the valid framework/slave to re-authenticate and re-register. This could be used in a DoS attack. But how should we handle the scenario when the actual authenticated framework/slave sends an AuthenticateMessage that fails authentication?""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1119","03/18/2014 18:25:15",2,"Allocator should make an allocation decision per slave instead of per framework/role. ""Currently the Allocator::allocate() code loops through roles and frameworks (based on DRF sort) and allocates *all* slaves' resources to the first framework. This logic should be inverted: the allocator should instead go through each slave, allocate it to a role/framework, and update the DRF shares.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1127","03/20/2014 17:23:15",8,"Implement the protobufs for the scheduler API ""The default scheduler/executor interface and implementation in Mesos have a few drawbacks: (1) The interface is fairly high-level which makes it hard to do certain things, for example, handle events (callbacks) in batch. This can have a big impact on the performance of schedulers (for example, writing task updates that need to be persisted). (2) The implementation requires writing a lot of boilerplate JNI and native Python wrappers when adding additional API components. The plan is to provide a lower-level API that can easily be used to implement the higher-level API that is currently provided. 
This will also open the door to more easily building native-language Mesos libraries (i.e., not needing the C++ shim layer) and building new higher-level abstractions on top of the lower-level API.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1143","03/24/2014 16:13:43",2,"Add a TASK_ERROR task status. ""During task validation we drop tasks that have errors and send TASK_LOST status updates. In most circumstances a framework will want to relaunch a task that has gone lost, and in the event the task is actually malformed (thus invalid), this will result in an infinite loop of sending a task and having it go lost.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1148","03/26/2014 20:48:43",3,"Add support for rate limiting slave removal ""To safeguard against unforeseen bugs leading to widespread slave removal, it would be nice to allow for rate limiting of the decision to remove slaves and/or send TASK_LOST messages for tasks on those slaves. Ideally this would allow an operator to be notified soon enough to intervene before causing cluster impact.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1195","04/07/2014 19:24:53",3,"systemd.slice + cgroup enablement fails in multiple ways. ""When attempting to configure mesos to use systemd slices on a 'rawhide/f21' machine, it fails creating the isolator: I0407 12:39:28.035354 14916 containerizer.cpp:180] Using isolation: cgroups/cpu,cgroups/mem Failed to create a containerizer: Could not create isolator cgroups/cpu: Failed to create isolator: The cpu subsystem is co-mounted at /sys/fs/cgroup/cpu with other subsytems ------ details ------ /sys/fs/cgroup total 0 drwxr-xr-x. 12 root root 280 Mar 18 08:47 . drwxr-xr-x. 6 root root 0 Mar 18 08:47 .. drwxr-xr-x. 2 root root 0 Mar 18 08:47 blkio lrwxrwxrwx. 1 root root 11 Mar 18 08:47 cpu -> cpu,cpuacct lrwxrwxrwx. 1 root root 11 Mar 18 08:47 cpuacct -> cpu,cpuacct drwxr-xr-x. 2 root root 0 Mar 18 08:47 cpu,cpuacct drwxr-xr-x. 2 root root 0 Mar 18 08:47 cpuset drwxr-xr-x. 2 root root 0 Mar 18 08:47 devices drwxr-xr-x. 2 root root 0 Mar 18 08:47 freezer drwxr-xr-x. 2 root root 0 Mar 18 08:47 hugetlb drwxr-xr-x. 3 root root 0 Apr 3 11:26 memory drwxr-xr-x. 2 root root 0 Mar 18 08:47 net_cls drwxr-xr-x. 2 root root 0 Mar 18 08:47 perf_event drwxr-xr-x. 4 root root 0 Mar 18 08:47 systemd ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1219","04/17/2014 06:49:06",2,"Master should disallow frameworks that reconnect after failover timeout. ""When a scheduler reconnects after the failover timeout has been exceeded, the framework id is usually reused because the scheduler doesn't know that the timeout was exceeded, and it is actually handled as a new framework. The /framework/:framework_id route of the Web UI doesn't handle those cases very well because its key is reused. It only shows the terminated one. 
Would it make sense to ignore the provided framework id when a scheduler reconnects to a terminated framework and generate a new id to make sure it's unique?""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1303","05/05/2014 20:05:43",1,"ExamplesTest.{TestFramework, NoExecutorFramework} flaky ""I'm having trouble reproducing this but I did observe it once on my OSX system: when investigating a failed make check for https://reviews.apache.org/r/20971/ """," [==========] Running 2 tests from 1 test case. [----------] Global test environment set-up. [----------] 2 tests from ExamplesTest [ RUN ] ExamplesTest.TestFramework ../../src/tests/script.cpp:81: Failure Failed test_framework_test.sh terminated with signal 'Abort trap: 6' [ FAILED ] ExamplesTest.TestFramework (953 ms) [ RUN ] ExamplesTest.NoExecutorFramework [ OK ] ExamplesTest.NoExecutorFramework (10162 ms) [----------] 2 tests from ExamplesTest (11115 ms total) [----------] Global test environment tear-down [==========] 2 tests from 1 test case ran. (11121 ms total) [ PASSED ] 1 test. [ FAILED ] 1 test, listed below: [ FAILED ] ExamplesTest.TestFramework [----------] 6 tests from ExamplesTest [ RUN ] ExamplesTest.TestFramework [ OK ] ExamplesTest.TestFramework (8643 ms) [ RUN ] ExamplesTest.NoExecutorFramework tests/script.cpp:81: Failure Failed no_executor_framework_test.sh terminated with signal 'Aborted' [ FAILED ] ExamplesTest.NoExecutorFramework (7220 ms) [ RUN ] ExamplesTest.JavaFramework [ OK ] ExamplesTest.JavaFramework (11181 ms) [ RUN ] ExamplesTest.JavaException [ OK ] ExamplesTest.JavaException (5624 ms) [ RUN ] ExamplesTest.JavaLog [ OK ] ExamplesTest.JavaLog (6472 ms) [ RUN ] ExamplesTest.PythonFramework [ OK ] ExamplesTest.PythonFramework (14467 ms) [----------] 6 tests from ExamplesTest (53607 ms total) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1332","05/09/2014 00:40:36",3,"Improve Master and Slave metric names ""As we move the metrics to a new endpoint, we should consider revisiting the names of some of the current metrics to make them clearer. It may also be worth considering changing some existing counter-style metrics to gauges. ""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1347","05/12/2014 20:14:42",2,"GarbageCollectorIntegrationTest.DiskUsage is flaky. 
""From Jenkins: https://builds.apache.org/job/Mesos-Ubuntu-distcheck/79/consoleFull """," [ RUN ] GarbageCollectorIntegrationTest.DiskUsage Using temporary directory '/tmp/GarbageCollectorIntegrationTest_DiskUsage_pU3Ym7' I0507 03:27:38.775058 5758 leveldb.cpp:174] Opened db in 44.343989ms I0507 03:27:38.787498 5758 leveldb.cpp:181] Compacted db in 12.411065ms I0507 03:27:38.787533 5758 leveldb.cpp:196] Created db iterator in 4008ns I0507 03:27:38.787545 5758 leveldb.cpp:202] Seeked to beginning of db in 598ns I0507 03:27:38.787552 5758 leveldb.cpp:271] Iterated through 0 keys in the db in 173ns I0507 03:27:38.787564 5758 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0507 03:27:38.787858 5777 recover.cpp:425] Starting replica recovery I0507 03:27:38.788352 5793 master.cpp:267] Master 20140507-032738-453759884-58462-5758 (hemera.apache.org) started on 140.211.11.27:58462 I0507 03:27:38.788377 5793 master.cpp:304] Master only allowing authenticated frameworks to register I0507 03:27:38.788383 5793 master.cpp:309] Master only allowing authenticated slaves to register I0507 03:27:38.788389 5793 credentials.hpp:35] Loading credentials for authentication I0507 03:27:38.789064 5779 recover.cpp:451] Replica is in EMPTY status W0507 03:27:38.789115 5793 credentials.hpp:48] Failed to stat credentials file 'file:///tmp/GarbageCollectorIntegrationTest_DiskUsage_pU3Ym7/credentials': No such file or directory I0507 03:27:38.789489 5779 master.cpp:104] No whitelist given. Advertising offers for all slaves I0507 03:27:38.789531 5778 hierarchical_allocator_process.hpp:301] Initializing hierarchical allocator process with master : master@140.211.11.27:58462 I0507 03:27:38.791007 5788 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I0507 03:27:38.791177 5780 master.cpp:921] The newly elected leader is master@140.211.11.27:58462 with id 20140507-032738-453759884-58462-5758 I0507 03:27:38.791198 5780 master.cpp:931] Elected as the leading master! 
I0507 03:27:38.791205 5780 master.cpp:752] Recovering from registrar I0507 03:27:38.791251 5796 recover.cpp:188] Received a recover response from a replica in EMPTY status I0507 03:27:38.791323 5797 registrar.cpp:313] Recovering registrar I0507 03:27:38.792137 5795 recover.cpp:542] Updating replica status to STARTING I0507 03:27:38.807531 5781 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 15.124092ms I0507 03:27:38.807559 5781 replica.cpp:320] Persisted replica status to STARTING I0507 03:27:38.807621 5781 recover.cpp:451] Replica is in STARTING status I0507 03:27:38.809319 5799 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I0507 03:27:38.809983 5795 recover.cpp:188] Received a recover response from a replica in STARTING status I0507 03:27:38.811204 5778 recover.cpp:542] Updating replica status to VOTING I0507 03:27:38.827595 5795 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 16.011355ms I0507 03:27:38.827627 5795 replica.cpp:320] Persisted replica status to VOTING I0507 03:27:38.827683 5795 recover.cpp:556] Successfully joined the Paxos group I0507 03:27:38.827775 5795 recover.cpp:440] Recover process terminated I0507 03:27:38.828966 5780 log.cpp:656] Attempting to start the writer I0507 03:27:38.831114 5782 replica.cpp:474] Replica received implicit promise request with proposal 1 I0507 03:27:38.847708 5782 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 16.573137ms I0507 03:27:38.847739 5782 replica.cpp:342] Persisted promised to 1 I0507 03:27:38.848141 5797 coordinator.cpp:230] Coordinator attemping to fill missing position I0507 03:27:38.849684 5790 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I0507 03:27:38.863777 5790 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 14.076775ms I0507 03:27:38.863801 5790 replica.cpp:676] Persisted action at 0 I0507 03:27:38.864915 5798 replica.cpp:508] Replica received write request for position 0 I0507 03:27:38.864949 5798 leveldb.cpp:436] Reading position from leveldb took 11807ns I0507 03:27:38.879945 5798 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 14.978446ms I0507 03:27:38.879976 5798 replica.cpp:676] Persisted action at 0 I0507 03:27:38.880491 5797 replica.cpp:655] Replica received learned notice for position 0 I0507 03:27:38.895969 5797 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 15.459949ms I0507 03:27:38.895992 5797 replica.cpp:676] Persisted action at 0 I0507 03:27:38.896003 5797 replica.cpp:661] Replica learned NOP action at position 0 I0507 03:27:38.896411 5783 log.cpp:672] Writer started with ending position 0 I0507 03:27:38.898058 5798 leveldb.cpp:436] Reading position from leveldb took 11910ns I0507 03:27:38.899749 5777 registrar.cpp:346] Successfully fetched the registry (0B) I0507 03:27:38.899766 5777 registrar.cpp:422] Attempting to update the 'registry' I0507 03:27:38.901458 5791 log.cpp:680] Attempting to append 137 bytes to the log I0507 03:27:38.901666 5780 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0507 03:27:38.902773 5783 replica.cpp:508] Replica received write request for position 1 I0507 03:27:38.916127 5783 leveldb.cpp:341] Persisting action (156 bytes) to leveldb took 13.225715ms I0507 03:27:38.916152 5783 replica.cpp:676] Persisted action at 1 I0507 03:27:38.916534 5790 replica.cpp:655] Replica received learned notice for position 1 I0507 03:27:38.928203 5790 leveldb.cpp:341] Persisting action (158 bytes) 
to leveldb took 11.652434ms I0507 03:27:38.928225 5790 replica.cpp:676] Persisted action at 1 I0507 03:27:38.928236 5790 replica.cpp:661] Replica learned APPEND action at position 1 I0507 03:27:38.928546 5790 registrar.cpp:479] Successfully updated 'registry' I0507 03:27:38.928642 5790 registrar.cpp:372] Successfully recovered registrar I0507 03:27:38.929044 5783 master.cpp:779] Recovered 0 slaves from the Registry (99B) ; allowing 10mins for slaves to re-register I0507 03:27:38.929502 5799 log.cpp:699] Attempting to truncate the log to 1 I0507 03:27:38.929888 5797 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0507 03:27:38.930161 5781 replica.cpp:508] Replica received write request for position 2 I0507 03:27:38.932977 5789 slave.cpp:140] Slave started on 56)@140.211.11.27:58462 I0507 03:27:38.932991 5789 credentials.hpp:35] Loading credentials for authentication W0507 03:27:38.933567 5789 credentials.hpp:48] Failed to stat credentials file 'file:///tmp/GarbageCollectorIntegrationTest_DiskUsage_A9Pxks/credential': No such file or directory I0507 03:27:38.933585 5789 slave.cpp:230] Slave using credential for: test-principal I0507 03:27:38.933765 5789 slave.cpp:243] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0507 03:27:38.933854 5789 slave.cpp:271] Slave hostname: hemera.apache.org I0507 03:27:38.933863 5789 slave.cpp:272] Slave checkpoint: false I0507 03:27:38.934239 5778 state.cpp:33] Recovering state from '/tmp/GarbageCollectorIntegrationTest_DiskUsage_A9Pxks/meta' I0507 03:27:38.934960 5792 status_update_manager.cpp:193] Recovering status update manager I0507 03:27:38.935123 5779 slave.cpp:2945] Finished recovery I0507 03:27:38.936998 5779 slave.cpp:526] New master detected at master@140.211.11.27:58462 I0507 03:27:38.937021 5779 slave.cpp:586] Authenticating with master master@140.211.11.27:58462 I0507 03:27:38.937077 5798 status_update_manager.cpp:167] New master detected at master@140.211.11.27:58462 I0507 03:27:38.937306 5779 slave.cpp:559] Detecting new master I0507 03:27:38.937335 5800 authenticatee.hpp:128] Creating new client SASL connection I0507 03:27:38.938030 5778 master.cpp:2798] Authenticating slave(56)@140.211.11.27:58462 I0507 03:27:38.938742 5783 authenticator.hpp:148] Creating new server SASL connection I0507 03:27:38.939312 5786 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0507 03:27:38.939340 5786 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0507 03:27:38.939390 5786 authenticator.hpp:254] Received SASL authentication start I0507 03:27:38.939553 5786 authenticator.hpp:342] Authentication requires more steps I0507 03:27:38.939592 5786 authenticatee.hpp:265] Received SASL authentication step I0507 03:27:38.939715 5786 authenticator.hpp:282] Received SASL authentication step I0507 03:27:38.939803 5786 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'hemera.apache.org' server FQDN: 'hemera.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0507 03:27:38.939821 5786 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0507 03:27:38.939831 5786 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0507 03:27:38.939841 5786 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'hemera.apache.org' server FQDN: 'hemera.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false 
SASL_AUXPROP_AUTHZID: true I0507 03:27:38.939851 5786 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0507 03:27:38.939857 5786 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0507 03:27:38.939870 5786 authenticator.hpp:334] Authentication success I0507 03:27:38.939937 5786 authenticatee.hpp:305] Authentication success I0507 03:27:38.940016 5778 master.cpp:2838] Successfully authenticated slave(56)@140.211.11.27:58462 I0507 03:27:38.940449 5799 slave.cpp:643] Successfully authenticated with master master@140.211.11.27:58462 I0507 03:27:38.940513 5799 slave.cpp:872] Will retry registration in 5.176207635secs if necessary I0507 03:27:38.940625 5794 master.cpp:2134] Registering slave at slave(56)@140.211.11.27:58462 (hemera.apache.org) with id 20140507-032738-453759884-58462-5758-0 I0507 03:27:38.940800 5796 registrar.cpp:422] Attempting to update the 'registry' I0507 03:27:38.940850 5781 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 10.659152ms I0507 03:27:38.940871 5781 replica.cpp:676] Persisted action at 2 I0507 03:27:38.941843 5788 replica.cpp:655] Replica received learned notice for position 2 I0507 03:27:38.953193 5788 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 11.291343ms I0507 03:27:38.953258 5788 leveldb.cpp:399] Deleting ~1 keys from leveldb took 33725ns I0507 03:27:38.953274 5788 replica.cpp:676] Persisted action at 2 I0507 03:27:38.953282 5788 replica.cpp:661] Replica learned TRUNCATE action at position 2 I0507 03:27:38.953541 5797 log.cpp:680] Attempting to append 330 bytes to the log I0507 03:27:38.953614 5797 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I0507 03:27:38.954731 5789 replica.cpp:508] Replica received write request for position 3 I0507 03:27:38.965240 5789 leveldb.cpp:341] Persisting action (349 bytes) to leveldb took 10.489719ms I0507 03:27:38.965261 5789 replica.cpp:676] Persisted action at 3 I0507 03:27:38.966253 5780 replica.cpp:655] Replica received learned notice for position 3 I0507 03:27:38.977375 5780 leveldb.cpp:341] Persisting action (351 bytes) to leveldb took 11.098798ms I0507 03:27:38.977408 5780 replica.cpp:676] Persisted action at 3 I0507 03:27:38.977421 5780 replica.cpp:661] Replica learned APPEND action at position 3 I0507 03:27:38.977859 5792 registrar.cpp:479] Successfully updated 'registry' I0507 03:27:38.977926 5780 log.cpp:699] Attempting to truncate the log to 3 I0507 03:27:38.978060 5792 master.cpp:2174] Registered slave 20140507-032738-453759884-58462-5758-0 at slave(56)@140.211.11.27:58462 (hemera.apache.org) I0507 03:27:38.978112 5792 master.cpp:3283] Adding slave 20140507-032738-453759884-58462-5758-0 at slave(56)@140.211.11.27:58462 (hemera.apache.org) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0507 03:27:38.978134 5784 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I0507 03:27:38.978508 5785 slave.cpp:676] Registered with master master@140.211.11.27:58462; given slave ID 20140507-032738-453759884-58462-5758-0 I0507 03:27:38.978631 5786 hierarchical_allocator_process.hpp:444] Added slave 20140507-032738-453759884-58462-5758-0 (hemera.apache.org) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] available) I0507 03:27:38.978677 5786 hierarchical_allocator_process.hpp:707] Performed allocation for slave 
20140507-032738-453759884-58462-5758-0 in 5421ns I0507 03:27:38.979872 5796 replica.cpp:508] Replica received write request for position 4 I0507 03:27:38.982084 5758 sched.cpp:121] Version: 0.19.0 I0507 03:27:38.982213 5789 sched.cpp:217] New master detected at master@140.211.11.27:58462 I0507 03:27:38.982228 5789 sched.cpp:268] Authenticating with master master@140.211.11.27:58462 I0507 03:27:38.982347 5788 authenticatee.hpp:128] Creating new client SASL connection I0507 03:27:38.982676 5788 master.cpp:2798] Authenticating scheduler(59)@140.211.11.27:58462 I0507 03:27:38.983100 5788 authenticator.hpp:148] Creating new server SASL connection I0507 03:27:38.983294 5788 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0507 03:27:38.983312 5788 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0507 03:27:38.983360 5788 authenticator.hpp:254] Received SASL authentication start I0507 03:27:38.983505 5788 authenticator.hpp:342] Authentication requires more steps I0507 03:27:38.984220 5782 authenticatee.hpp:265] Received SASL authentication step I0507 03:27:38.984275 5782 authenticator.hpp:282] Received SASL authentication step I0507 03:27:38.984315 5782 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'hemera.apache.org' server FQDN: 'hemera.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0507 03:27:38.984347 5782 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0507 03:27:38.984359 5782 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0507 03:27:38.984370 5782 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'hemera.apache.org' server FQDN: 'hemera.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0507 03:27:38.984377 5782 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0507 03:27:38.984383 5782 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0507 03:27:38.984397 5782 authenticator.hpp:334] Authentication success I0507 03:27:38.984429 5782 authenticatee.hpp:305] Authentication success I0507 03:27:38.984469 5795 master.cpp:2838] Successfully authenticated scheduler(59)@140.211.11.27:58462 I0507 03:27:38.985110 5782 sched.cpp:342] Successfully authenticated with master master@140.211.11.27:58462 I0507 03:27:38.985133 5782 sched.cpp:461] Sending registration request to master@140.211.11.27:58462 I0507 03:27:38.985326 5795 master.cpp:980] Received registration request from scheduler(59)@140.211.11.27:58462 I0507 03:27:38.985357 5795 master.cpp:998] Registering framework 20140507-032738-453759884-58462-5758-0000 at scheduler(59)@140.211.11.27:58462 I0507 03:27:38.985424 5795 sched.cpp:392] Framework registered with 20140507-032738-453759884-58462-5758-0000 I0507 03:27:38.985471 5792 hierarchical_allocator_process.hpp:331] Added framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:38.985610 5795 sched.cpp:406] Scheduler::registered took 36702ns I0507 03:27:38.985646 5792 hierarchical_allocator_process.hpp:751] Offering cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140507-032738-453759884-58462-5758-0 to framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:38.985954 5792 hierarchical_allocator_process.hpp:687] Performed allocation for 1 slaves in 330895ns I0507 03:27:38.986001 5789 
master.hpp:612] Adding offer 20140507-032738-453759884-58462-5758-0 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140507-032738-453759884-58462-5758-0 (hemera.apache.org) I0507 03:27:38.986090 5789 master.cpp:2747] Sending 1 offers to framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:38.986548 5792 sched.cpp:529] Scheduler::resourceOffers took 162873ns I0507 03:27:38.986721 5792 master.hpp:622] Removing offer 20140507-032738-453759884-58462-5758-0 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140507-032738-453759884-58462-5758-0 (hemera.apache.org) I0507 03:27:38.986781 5792 master.cpp:1812] Processing reply for offers: [ 20140507-032738-453759884-58462-5758-0 ] on slave 20140507-032738-453759884-58462-5758-0 at slave(56)@140.211.11.27:58462 (hemera.apache.org) for framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:38.986843 5792 master.hpp:584] Adding task 0 with resources cpus(*):2; mem(*):1024 on slave 20140507-032738-453759884-58462-5758-0 (hemera.apache.org) I0507 03:27:38.986876 5792 master.cpp:2922] Launching task 0 of framework 20140507-032738-453759884-58462-5758-0000 with resources cpus(*):2; mem(*):1024 on slave 20140507-032738-453759884-58462-5758-0 at slave(56)@140.211.11.27:58462 (hemera.apache.org) I0507 03:27:38.986981 5795 slave.cpp:906] Got assigned task 0 for framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:38.987180 5795 slave.cpp:1016] Launching task 0 for framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:38.987203 5787 hierarchical_allocator_process.hpp:546] Framework 20140507-032738-453759884-58462-5758-0000 left disk(*):1024; ports(*):[31000-32000] unused on slave 20140507-032738-453759884-58462-5758-0 I0507 03:27:38.987287 5787 hierarchical_allocator_process.hpp:589] Framework 20140507-032738-453759884-58462-5758-0000 filtered slave 20140507-032738-453759884-58462-5758-0 for 5secs I0507 03:27:38.991395 5795 exec.cpp:131] Version: 0.19.0 I0507 03:27:38.991497 5779 exec.cpp:181] Executor started at: executor(27)@140.211.11.27:58462 with pid 5758 I0507 03:27:38.991510 5795 slave.cpp:1126] Queuing task '0' for executor default of framework '20140507-032738-453759884-58462-5758-0000 I0507 03:27:38.991566 5795 slave.cpp:487] Successfully attached file '/tmp/GarbageCollectorIntegrationTest_DiskUsage_A9Pxks/slaves/20140507-032738-453759884-58462-5758-0/frameworks/20140507-032738-453759884-58462-5758-0000/executors/default/runs/de776bec-2822-4bbc-befc-eec40eb5f674' I0507 03:27:38.991595 5795 slave.cpp:2283] Monitoring executor 'default' of framework '20140507-032738-453759884-58462-5758-0000' in container 'de776bec-2822-4bbc-befc-eec40eb5f674' I0507 03:27:38.991778 5795 slave.cpp:1599] Got registration for executor 'default' of framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:38.991874 5795 slave.cpp:1718] Flushing queued task 0 for executor 'default' of framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:38.991935 5780 exec.cpp:205] Executor registered on slave 20140507-032738-453759884-58462-5758-0 I0507 03:27:38.993419 5796 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 13.489998ms I0507 03:27:38.993449 5796 replica.cpp:676] Persisted action at 4 I0507 03:27:38.994510 5777 replica.cpp:655] Replica received learned notice for position 4 I0507 03:27:38.994753 5780 exec.cpp:217] Executor::registered took 14516ns I0507 03:27:38.994818 5780 exec.cpp:292] Executor asked to run task '0' I0507 
03:27:38.994849 5780 exec.cpp:301] Executor::launchTask took 18872ns I0507 03:27:38.996703 5780 exec.cpp:524] Executor sending status update TASK_RUNNING (UUID: be7346ad-e198-4b38-9252-421ff759fdee) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:38.996793 5780 slave.cpp:1954] Handling status update TASK_RUNNING (UUID: be7346ad-e198-4b38-9252-421ff759fdee) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 from executor(27)@140.211.11.27:58462 I0507 03:27:38.996888 5780 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: be7346ad-e198-4b38-9252-421ff759fdee) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:38.996920 5780 status_update_manager.cpp:499] Creating StatusUpdate stream for task 0 of framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:38.996968 5780 status_update_manager.cpp:373] Forwarding status update TASK_RUNNING (UUID: be7346ad-e198-4b38-9252-421ff759fdee) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 to master@140.211.11.27:58462 I0507 03:27:38.997189 5790 master.cpp:2450] Status update TASK_RUNNING (UUID: be7346ad-e198-4b38-9252-421ff759fdee) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 from slave 20140507-032738-453759884-58462-5758-0 at slave(56)@140.211.11.27:58462 (hemera.apache.org) I0507 03:27:38.997268 5780 slave.cpp:2071] Status update manager successfully handled status update TASK_RUNNING (UUID: be7346ad-e198-4b38-9252-421ff759fdee) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:38.997321 5797 sched.cpp:620] Scheduler::statusUpdate took 77906ns I0507 03:27:38.997336 5780 slave.cpp:2077] Sending acknowledgement for status update TASK_RUNNING (UUID: be7346ad-e198-4b38-9252-421ff759fdee) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 to executor(27)@140.211.11.27:58462 I0507 03:27:38.998700 5797 slave.cpp:2341] Executor 'default' of framework 20140507-032738-453759884-58462-5758-0000 has exited with status 0 I0507 03:27:38.998814 5793 exec.cpp:338] Executor received status update acknowledgement be7346ad-e198-4b38-9252-421ff759fdee for task 0 of framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:39.000041 5797 slave.cpp:1954] Handling status update TASK_LOST (UUID: 4c8e572c-3fa7-43f3-aaf8-f82e77a70c1b) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 from @0.0.0.0:0 I0507 03:27:39.000063 5797 slave.cpp:3446] Terminating task 0 I0507 03:27:39.000190 5797 status_update_manager.cpp:320] Received status update TASK_LOST (UUID: 4c8e572c-3fa7-43f3-aaf8-f82e77a70c1b) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:39.000229 5779 master.cpp:2523] Executor default of framework 20140507-032738-453759884-58462-5758-0000 on slave 20140507-032738-453759884-58462-5758-0 at slave(56)@140.211.11.27:58462 (hemera.apache.org) has exited with status 0 I0507 03:27:39.000341 5797 status_update_manager.cpp:398] Received status update acknowledgement (UUID: be7346ad-e198-4b38-9252-421ff759fdee) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:39.000385 5797 status_update_manager.cpp:373] Forwarding status update TASK_LOST (UUID: 4c8e572c-3fa7-43f3-aaf8-f82e77a70c1b) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 to master@140.211.11.27:58462 I0507 03:27:39.000516 5791 slave.cpp:2071] Status update manager successfully handled status update TASK_LOST (UUID: 
4c8e572c-3fa7-43f3-aaf8-f82e77a70c1b) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:39.000686 5791 slave.cpp:1539] Status update manager successfully handled status update acknowledgement (UUID: be7346ad-e198-4b38-9252-421ff759fdee) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:39.000759 5795 master.cpp:2450] Status update TASK_LOST (UUID: 4c8e572c-3fa7-43f3-aaf8-f82e77a70c1b) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 from slave 20140507-032738-453759884-58462-5758-0 at slave(56)@140.211.11.27:58462 (hemera.apache.org) I0507 03:27:39.000841 5784 sched.cpp:620] Scheduler::statusUpdate took 11418ns I0507 03:27:39.000849 5795 master.hpp:602] Removing task 0 with resources cpus(*):2; mem(*):1024 on slave 20140507-032738-453759884-58462-5758-0 (hemera.apache.org) I0507 03:27:39.001313 5799 hierarchical_allocator_process.hpp:636] Recovered cpus(*):2; mem(*):1024 (total allocatable: disk(*):1024; ports(*):[31000-32000]; cpus(*):2; mem(*):1024) on slave 20140507-032738-453759884-58462-5758-0 from framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:39.002792 5778 status_update_manager.cpp:398] Received status update acknowledgement (UUID: 4c8e572c-3fa7-43f3-aaf8-f82e77a70c1b) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:39.002831 5778 status_update_manager.cpp:530] Cleaning up status update stream for task 0 of framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:39.002903 5778 slave.cpp:1539] Status update manager successfully handled status update acknowledgement (UUID: 4c8e572c-3fa7-43f3-aaf8-f82e77a70c1b) for task 0 of framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:39.002976 5778 slave.cpp:3470] Completing task 0 I0507 03:27:39.002991 5778 slave.cpp:2480] Cleaning up executor 'default' of framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:39.006098 5778 slave.cpp:2555] Cleaning up framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:39.006105 5800 gc.cpp:56] Scheduling '/tmp/GarbageCollectorIntegrationTest_DiskUsage_A9Pxks/slaves/20140507-032738-453759884-58462-5758-0/frameworks/20140507-032738-453759884-58462-5758-0000/executors/default/runs/de776bec-2822-4bbc-befc-eec40eb5f674' for gc 1.00000000231788weeks in the future I0507 03:27:39.006146 5800 gc.cpp:56] Scheduling '/tmp/GarbageCollectorIntegrationTest_DiskUsage_A9Pxks/slaves/20140507-032738-453759884-58462-5758-0/frameworks/20140507-032738-453759884-58462-5758-0000/executors/default' for gc 1.00000000231788weeks in the future I0507 03:27:39.006211 5786 status_update_manager.cpp:282] Closing status update streams for framework 20140507-032738-453759884-58462-5758-0000 I0507 03:27:39.006299 5786 gc.cpp:56] Scheduling '/tmp/GarbageCollectorIntegrationTest_DiskUsage_A9Pxks/slaves/20140507-032738-453759884-58462-5758-0/frameworks/20140507-032738-453759884-58462-5758-0000' for gc 1.00000000231788weeks in the future I0507 03:27:39.010058 5777 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 15.533184ms I0507 03:27:39.010144 5777 leveldb.cpp:399] Deleting ~2 keys from leveldb took 64787ns I0507 03:27:39.010154 5777 replica.cpp:676] Persisted action at 4 I0507 03:27:39.010160 5777 replica.cpp:661] Replica learned TRUNCATE action at position 4 I0507 03:27:39.029413 5789 slave.cpp:2801] Current usage 90.00%. 
Max allowed age: 0ns ../../src/tests/gc_tests.cpp:658: Failure Value of: os::exists(executorDir) Actual: true Expected: false ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1358","05/13/2014 21:58:48",1,"Show when the leading master was elected in the webui ""This would be nice to have during debugging.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1365","05/13/2014 23:57:15",1,"SlaveRecoveryTest/0.MultipleFrameworks is flaky ""--gtest_repeat=-1 --gtest_shuffle --gtest_break_on_failure """," [ RUN ] SlaveRecoveryTest/0.MultipleFrameworks WARNING: Logging before InitGoogleLogging() is written to STDERR I0513 15:42:05.931761 4320 exec.cpp:131] Version: 0.19.0 I0513 15:42:05.936698 4340 exec.cpp:205] Executor registered on slave 20140513-154204-16842879-51872-13062-0 Registered executor on artoo Starting task 51991f97-f5fd-4905-ad0f-02668083af7c Forked command at 4367 sh -c 'sleep 1000' WARNING: Logging before InitGoogleLogging() is written to STDERR I0513 15:42:06.915061 4408 exec.cpp:131] Version: 0.19.0 I0513 15:42:06.931149 4435 exec.cpp:205] Executor registered on slave 20140513-154204-16842879-51872-13062-0 Registered executor on artoo Starting task eaf5d8d6-3a6c-4ee1-84c1-fae20fb1df83 sh -c 'sleep 1000' Forked command at 4439 I0513 15:42:06.998332 4340 exec.cpp:251] Received reconnect request from slave 20140513-154204-16842879-51872-13062-0 I0513 15:42:06.998414 4436 exec.cpp:251] Received reconnect request from slave 20140513-154204-16842879-51872-13062-0 I0513 15:42:07.006350 4437 exec.cpp:228] Executor re-registered on slave 20140513-154204-16842879-51872-13062-0 Re-registered executor on artoo I0513 15:42:07.027039 4337 exec.cpp:378] Executor asked to shutdown Shutting down Sending SIGTERM to process tree at pid 4367 Killing the following process trees: [ -+- 4367 sh -c sleep 1000 \--- 4368 sleep 1000 ] ../../src/tests/slave_recovery_tests.cpp:2807: Failure Value of: status1.get().state() Actual: TASK_FAILED Expected: TASK_KILLED Program received signal SIGSEGV, Segmentation fault. testing::UnitTest::AddTestPartResult (this=0x154dac0 , result_type=testing::TestPartResult::kFatalFailure, file_name=0xeb6b6c """"../../src/tests/slave_recovery_tests.cpp"""", line_number=2807, message=..., os_stack_trace=...) at gmock-1.6.0/gtest/src/gtest.cc:3795 3795 *static_cast(NULL) = 1; (gdb) bt #0 testing::UnitTest::AddTestPartResult (this=0x154dac0 , result_type=testing::TestPartResult::kFatalFailure, file_name=0xeb6b6c """"../../src/tests/slave_recovery_tests.cpp"""", line_number=2807, message=..., os_stack_trace=...) at gmock-1.6.0/gtest/src/gtest.cc:3795 #1 0x0000000000df98b9 in testing::internal::AssertHelper::operator= (this=0x7fffffffb860, message=...) 
at gmock-1.6.0/gtest/src/gtest.cc:356 #2 0x0000000000cdfa57 in SlaveRecoveryTest_MultipleFrameworks_Test::TestBody (this=0x1954db0) at ../../src/tests/slave_recovery_tests.cpp:2807 #3 0x0000000000e22583 in testing::internal::HandleSehExceptionsInMethodIfSupported (object=0x1954db0, method=&virtual testing::Test::TestBody(), location=0xed0af0 """"the test body"""") at gmock-1.6.0/gtest/src/gtest.cc:2090 #4 0x0000000000e12467 in testing::internal::HandleExceptionsInMethodIfSupported (object=0x1954db0, method=&virtual testing::Test::TestBody(), location=0xed0af0 """"the test body"""") at gmock-1.6.0/gtest/src/gtest.cc:2126 #5 0x0000000000e010d5 in testing::Test::Run (this=0x1954db0) at gmock-1.6.0/gtest/src/gtest.cc:2161 #6 0x0000000000e01ceb in testing::TestInfo::Run (this=0x158cf80) at gmock-1.6.0/gtest/src/gtest.cc:2338 #7 0x0000000000e02387 in testing::TestCase::Run (this=0x158a880) at gmock-1.6.0/gtest/src/gtest.cc:2445 #8 0x0000000000e079ed in testing::internal::UnitTestImpl::RunAllTests (this=0x1558b40) at gmock-1.6.0/gtest/src/gtest.cc:4237 #9 0x0000000000e1ec83 in testing::internal::HandleSehExceptionsInMethodIfSupported (object=0x1558b40, method=(bool (testing::internal::UnitTestImpl::*)(testing::internal::UnitTestImpl * const)) 0xe07700 , location=0xed1219 """"auxiliary test code (environments or event listeners)"""") at gmock-1.6.0/gtest/src/gtest.cc:2090 #10 0x0000000000e14217 in testing::internal::HandleExceptionsInMethodIfSupported (object=0x1558b40, method=(bool (testing::internal::UnitTestImpl::*)(testing::internal::UnitTestImpl * const)) 0xe07700 , location=0xed1219 """"auxiliary test code (environments or event listeners)"""") at gmock-1.6.0/gtest/src/gtest.cc:2126 #11 0x0000000000e076d7 in testing::UnitTest::Run (this=0x154dac0 ) at gmock-1.6.0/gtest/src/gtest.cc:3872 #12 0x0000000000b99887 in main (argc=1, argv=0x7fffffffd9f8) at ../../src/tests/main.cpp:107 (gdb) frame 2 #2 0x0000000000cdfa57 in SlaveRecoveryTest_MultipleFrameworks_Test::TestBody (this=0x1954db0) at ../../src/tests/slave_recovery_tests.cpp:2807 2807 ASSERT_EQ(TASK_KILLED, status1.get().state()); (gdb) p status1 $1 = {data = {::Data, 2>> = {_M_ptr = 0x1963140, _M_refcount = {_M_pi = 0x198a620}}, }} (gdb) p status1.get() $2 = (const mesos::TaskStatus &) @0x7fffdc5bf5f0: { = { = {_vptr$MessageLite = 0x7ffff74bc940 }, }, static kTaskIdFieldNumber = 1, static kStateFieldNumber = 2, static kMessageFieldNumber = 4, static kDataFieldNumber = 3, static kSlaveIdFieldNumber = 5, static kTimestampFieldNumber = 6, _unknown_fields_ = {fields_ = 0x0}, task_id_ = 0x7fffdc5ce9a0, message_ = 0x7fffdc5f5880, data_ = 0x154b4b0 , slave_id_ = 0x7fffdc59c4f0, timestamp_ = 1429688582.046252, state_ = 3, _cached_size_ = 0, _has_bits_ = {55}, static default_instance_ = 0x0} (gdb) p status1.get().state() $3 = mesos::TASK_FAILED (gdb) list 2802 // Kill task 1. 2803 driver1.killTask(task1.task_id()); 2804 2805 // Wait for TASK_KILLED update. 2806 AWAIT_READY(status1); 2807 ASSERT_EQ(TASK_KILLED, status1.get().state()); 2808 2809 // Kill task 2. 
2810 driver2.killTask(task2.task_id()); 2811 ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1371","05/14/2014 21:59:14",1,"Expose libprocess queue length from scheduler driver to metrics endpoint ""We expose the master's event queue length and we should do the same for the scheduler driver.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1373","05/15/2014 00:29:38",3,"Keep track of the principals for authenticated pids in Master. ""Need to add a 'principal' field to FrameworkInfo and verify if the Framework has the claimed principal during registration.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1424","05/27/2014 23:31:20",1,"Mesos tests should not rely on echo ""Triggered by MESOS-1413 I would like to propose changing our tests to not rely on {{echo}} but to use {{printf}} instead. This seems to be useful as {{echo}} is introducing an extra linefeed after the supplied string whereas {{printf}} does not. The {{-n}} switch preventing that extra linefeed is unfortunately not portable - it is not supported by the builtin {{echo}} of the BSD / OSX {{/bin/sh}}. ""","",0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1425","05/28/2014 02:01:15",1,"LogZooKeeperTest.WriteRead test is flaky """""," [ RUN ] LogZooKeeperTest.WriteRead I0527 23:23:48.286031 1352 zookeeper_test_server.cpp:158] Started ZooKeeperTestServer on port 39446 I0527 23:23:48.293916 1352 log_tests.cpp:1945] Using temporary directory '/tmp/LogZooKeeperTest_WriteRead_Vyty8g' I0527 23:23:48.296430 1352 leveldb.cpp:176] Opened db in 2.459713ms I0527 23:23:48.296740 1352 leveldb.cpp:183] Compacted db in 286843ns I0527 23:23:48.296761 1352 leveldb.cpp:198] Created db iterator in 3083ns I0527 23:23:48.296772 1352 leveldb.cpp:204] Seeked to beginning of db in 4541ns I0527 23:23:48.296777 1352 leveldb.cpp:273] Iterated through 0 keys in the db in 87ns I0527 23:23:48.296788 1352 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0527 23:23:48.297499 1383 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 505340ns I0527 23:23:48.297513 1383 replica.cpp:320] Persisted replica status to VOTING I0527 23:23:48.299492 1352 leveldb.cpp:176] Opened db in 1.73582ms I0527 23:23:48.299773 1352 leveldb.cpp:183] Compacted db in 263937ns I0527 23:23:48.299793 1352 leveldb.cpp:198] Created db iterator in 7494ns I0527 23:23:48.299806 1352 leveldb.cpp:204] Seeked to beginning of db in 235ns I0527 23:23:48.299813 1352 leveldb.cpp:273] Iterated through 0 keys in the db in 93ns I0527 23:23:48.299821 1352 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0527 23:23:48.300503 1380 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 492309ns I0527 23:23:48.300516 1380 replica.cpp:320] Persisted replica status to VOTING I0527 23:23:48.302500 1352 leveldb.cpp:176] Opened db in 1.793829ms I0527 23:23:48.303642 1352 leveldb.cpp:183] Compacted db in 1.123929ms I0527 23:23:48.303669 1352 leveldb.cpp:198] Created db iterator in 5865ns I0527 23:23:48.303689 1352 leveldb.cpp:204] Seeked to beginning of db in 8811ns I0527 23:23:48.303705 1352 leveldb.cpp:273] Iterated through 1 keys in the db in 9545ns I0527 23:23:48.303715 1352 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned 2014-05-27 
23:23:48,303:1352(0x2b1173a29700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-05-27 23:23:48,303:1352(0x2b1173a29700):ZOO_INFO@log_env@716: Client environment:host.name=minerva 2014-05-27 23:23:48,303:1352(0x2b1173a29700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-05-27 23:23:48,303:1352(0x2b1173a29700):ZOO_INFO@log_env@724: Client environment:os.arch=3.2.0-57-generic 2014-05-27 23:23:48,303:1352(0x2b1173a29700):ZOO_INFO@log_env@725: Client environment:os.version=#87-Ubuntu SMP Tue Nov 12 21:35:10 UTC 2013 2014-05-27 23:23:48,303:1352(0x2b1173e2b700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-05-27 23:23:48,304:1352(0x2b1173e2b700):ZOO_INFO@log_env@716: Client environment:host.name=minerva 2014-05-27 23:23:48,304:1352(0x2b1173e2b700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-05-27 23:23:48,304:1352(0x2b1173e2b700):ZOO_INFO@log_env@724: Client environment:os.arch=3.2.0-57-generic 2014-05-27 23:23:48,304:1352(0x2b1173e2b700):ZOO_INFO@log_env@725: Client environment:os.version=#87-Ubuntu SMP Tue Nov 12 21:35:10 UTC 2013 2014-05-27 23:23:48,304:1352(0x2b1173a29700):ZOO_INFO@log_env@733: Client environment:user.name=(null) I0527 23:23:48.303988 1380 log.cpp:238] Attempting to join replica to ZooKeeper group 2014-05-27 23:23:48,304:1352(0x2b1173e2b700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2014-05-27 23:23:48,304:1352(0x2b1173a29700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins I0527 23:23:48.304198 1385 recover.cpp:425] Starting replica recovery 2014-05-27 23:23:48,304:1352(0x2b1173e2b700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins 2014-05-27 23:23:48,304:1352(0x2b1173a29700):ZOO_INFO@log_env@753: Client environment:user.dir=/tmp/LogZooKeeperTest_WriteRead_Vyty8g 2014-05-27 23:23:48,304:1352(0x2b1173a29700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:39446 sessionTimeout=5000 watcher=0x2b11708e98d0 sessionId=0 sessionPasswd= context=0x2b118002f4e0 flags=0 2014-05-27 23:23:48,304:1352(0x2b1173e2b700):ZOO_INFO@log_env@753: Client environment:user.dir=/tmp/LogZooKeeperTest_WriteRead_Vyty8g I0527 23:23:48.304352 1385 recover.cpp:451] Replica is in VOTING status 2014-05-27 23:23:48,304:1352(0x2b1173e2b700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:39446 sessionTimeout=5000 watcher=0x2b11708e98d0 sessionId=0 sessionPasswd= context=0x2b1198015ca0 flags=0 I0527 23:23:48.304417 1385 recover.cpp:440] Recover process terminated 2014-05-27 23:23:48,304:1352(0x2b12897b8700):ZOO_INFO@check_events@1703: initiated connection to server [127.0.0.1:39446] 2014-05-27 23:23:48,304:1352(0x2b12891b5700):ZOO_INFO@check_events@1703: initiated connection to server [127.0.0.1:39446] I0527 23:23:48.311262 1352 leveldb.cpp:176] Opened db in 7.261703ms 2014-05-27 23:23:48,311:1352(0x2b12897b8700):ZOO_INFO@check_events@1750: session establishment complete on server [127.0.0.1:39446], sessionId=0x1463fff34bd0000, negotiated timeout=6000 I0527 23:23:48.312379 1381 group.cpp:310] Group process ((614)@67.195.138.8:35151) connected to ZooKeeper I0527 23:23:48.312407 1381 group.cpp:784] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0) I0527 23:23:48.312417 1381 group.cpp:382] Trying to create path '/log' in ZooKeeper I0527 23:23:48.312422 1352 leveldb.cpp:183] Compacted db in 1.119843ms I0527 23:23:48.312505 1352 leveldb.cpp:198] Created db iterator in 3901ns 
I0527 23:23:48.312526 1352 leveldb.cpp:204] Seeked to beginning of db in 7398ns I0527 23:23:48.312541 1352 leveldb.cpp:273] Iterated through 1 keys in the db in 6345ns I0527 23:23:48.312553 1352 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned 2014-05-27 23:23:48,312:1352(0x2b1173627700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-05-27 23:23:48,312:1352(0x2b1173627700):ZOO_INFO@log_env@716: Client environment:host.name=minerva 2014-05-27 23:23:48,312:1352(0x2b1173627700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-05-27 23:23:48,312:1352(0x2b1173627700):ZOO_INFO@log_env@724: Client environment:os.arch=3.2.0-57-generic 2014-05-27 23:23:48,312:1352(0x2b1173627700):ZOO_INFO@log_env@725: Client environment:os.version=#87-Ubuntu SMP Tue Nov 12 21:35:10 UTC 2013 2014-05-27 23:23:48,312:1352(0x2b1173627700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2014-05-27 23:23:48,312:1352(0x2b12891b5700):ZOO_INFO@check_events@1750: session establishment complete on server [127.0.0.1:39446], sessionId=0x1463fff34bd0001, negotiated timeout=6000 2014-05-27 23:23:48,313:1352(0x2b1173627700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins 2014-05-27 23:23:48,313:1352(0x2b1173627700):ZOO_INFO@log_env@753: Client environment:user.dir=/tmp/LogZooKeeperTest_WriteRead_Vyty8g 2014-05-27 23:23:48,313:1352(0x2b1173627700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:39446 sessionTimeout=5000 watcher=0x2b11708e98d0 sessionId=0 sessionPasswd= context=0x2b119001fd20 flags=0 I0527 23:23:48.313247 1380 group.cpp:310] Group process ((616)@67.195.138.8:35151) connected to ZooKeeper I0527 23:23:48.313266 1380 group.cpp:784] Syncing group operations: queue size (joins, cancels, datas) = (1, 0, 0) I0527 23:23:48.313273 1380 group.cpp:382] Trying to create path '/log' in ZooKeeper 2014-05-27 23:23:48,313:1352(0x2b12889b0700):ZOO_INFO@check_events@1703: initiated connection to server [127.0.0.1:39446] 2014-05-27 23:23:48,313:1352(0x2b1173828700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-05-27 23:23:48,313:1352(0x2b1173828700):ZOO_INFO@log_env@716: Client environment:host.name=minerva 2014-05-27 23:23:48,313:1352(0x2b1173828700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-05-27 23:23:48,313:1352(0x2b1173828700):ZOO_INFO@log_env@724: Client environment:os.arch=3.2.0-57-generic 2014-05-27 23:23:48,313:1352(0x2b1173828700):ZOO_INFO@log_env@725: Client environment:os.version=#87-Ubuntu SMP Tue Nov 12 21:35:10 UTC 2013 I0527 23:23:48.313436 1387 log.cpp:238] Attempting to join replica to ZooKeeper group 2014-05-27 23:23:48,313:1352(0x2b1173828700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2014-05-27 23:23:48,313:1352(0x2b1173828700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins 2014-05-27 23:23:48,313:1352(0x2b1173828700):ZOO_INFO@log_env@753: Client environment:user.dir=/tmp/LogZooKeeperTest_WriteRead_Vyty8g 2014-05-27 23:23:48,313:1352(0x2b1173828700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:39446 sessionTimeout=5000 watcher=0x2b11708e98d0 sessionId=0 sessionPasswd= context=0x2b1190011ea0 flags=0 I0527 23:23:48.313601 1387 recover.cpp:425] Starting replica recovery I0527 23:23:48.313721 1382 recover.cpp:451] Replica is in VOTING status I0527 23:23:48.313794 1382 recover.cpp:440] Recover process terminated 2014-05-27 
23:23:48,313:1352(0x2b1288bb1700):ZOO_INFO@check_events@1703: initiated connection to server [127.0.0.1:39446] I0527 23:23:48.313973 1383 log.cpp:656] Attempting to start the writer 2014-05-27 23:23:48,315:1352(0x2b12889b0700):ZOO_INFO@check_events@1750: session establishment complete on server [127.0.0.1:39446], sessionId=0x1463fff34bd0002, negotiated timeout=6000 I0527 23:23:48.315682 1387 group.cpp:310] Group process ((619)@67.195.138.8:35151) connected to ZooKeeper 2014-05-27 23:23:48,315:1352(0x2b1288bb1700):ZOO_INFO@check_events@1750: session establishment complete on server [127.0.0.1:39446], sessionId=0x1463fff34bd0003, negotiated timeout=6000 I0527 23:23:48.315709 1387 group.cpp:784] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0) I0527 23:23:48.315738 1387 group.cpp:382] Trying to create path '/log' in ZooKeeper I0527 23:23:48.315964 1386 group.cpp:310] Group process ((621)@67.195.138.8:35151) connected to ZooKeeper I0527 23:23:48.315981 1386 group.cpp:784] Syncing group operations: queue size (joins, cancels, datas) = (1, 0, 0) I0527 23:23:48.315989 1386 group.cpp:382] Trying to create path '/log' in ZooKeeper I0527 23:23:48.317881 1385 network.hpp:423] ZooKeeper group memberships changed I0527 23:23:48.317937 1381 group.cpp:655] Trying to get '/log/0000000000' in ZooKeeper I0527 23:23:48.318205 1382 network.hpp:423] ZooKeeper group memberships changed I0527 23:23:48.318317 1383 group.cpp:655] Trying to get '/log/0000000000' in ZooKeeper I0527 23:23:48.319154 1382 network.hpp:461] ZooKeeper group PIDs: { log-replica(22)@67.195.138.8:35151 } I0527 23:23:48.319541 1386 network.hpp:461] ZooKeeper group PIDs: { log-replica(22)@67.195.138.8:35151 } I0527 23:23:48.319851 1381 replica.cpp:474] Replica received implicit promise request with proposal 1 I0527 23:23:48.319905 1387 replica.cpp:474] Replica received implicit promise request with proposal 1 I0527 23:23:48.319907 1384 network.hpp:423] ZooKeeper group memberships changed I0527 23:23:48.320091 1385 group.cpp:655] Trying to get '/log/0000000000' in ZooKeeper I0527 23:23:48.320384 1383 network.hpp:423] ZooKeeper group memberships changed I0527 23:23:48.320441 1381 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 568396ns I0527 23:23:48.320456 1384 group.cpp:655] Trying to get '/log/0000000000' in ZooKeeper I0527 23:23:48.320461 1381 replica.cpp:342] Persisted promised to 1 I0527 23:23:48.320446 1387 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 516015ns I0527 23:23:48.320497 1387 replica.cpp:342] Persisted promised to 1 I0527 23:23:48.320814 1383 coordinator.cpp:230] Coordinator attemping to fill missing position I0527 23:23:48.321050 1384 group.cpp:655] Trying to get '/log/0000000001' in ZooKeeper I0527 23:23:48.321063 1385 group.cpp:655] Trying to get '/log/0000000001' in ZooKeeper I0527 23:23:48.321341 1387 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I0527 23:23:48.321375 1381 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I0527 23:23:48.321506 1387 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 89us I0527 23:23:48.321530 1387 replica.cpp:676] Persisted action at 0 I0527 23:23:48.321584 1381 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 122910ns I0527 23:23:48.321602 1381 replica.cpp:676] Persisted action at 0 I0527 23:23:48.321775 1383 network.hpp:461] ZooKeeper group PIDs: { log-replica(22)@67.195.138.8:35151, log-replica(23)@67.195.138.8:35151 } I0527 
23:23:48.321961 1381 replica.cpp:508] Replica received write request for position 0 I0527 23:23:48.321984 1381 leveldb.cpp:438] Reading position from leveldb took 7813ns I0527 23:23:48.322064 1380 network.hpp:461] ZooKeeper group PIDs: { log-replica(22)@67.195.138.8:35151, log-replica(23)@67.195.138.8:35151 } I0527 23:23:48.322073 1381 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 78683ns I0527 23:23:48.322077 1383 replica.cpp:508] Replica received write request for position 0 I0527 23:23:48.322084 1381 replica.cpp:676] Persisted action at 0 I0527 23:23:48.322111 1383 leveldb.cpp:438] Reading position from leveldb took 17416ns I0527 23:23:48.322330 1383 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 157199ns I0527 23:23:48.322345 1383 replica.cpp:676] Persisted action at 0 I0527 23:23:48.322522 1386 replica.cpp:655] Replica received learned notice for position 0 I0527 23:23:48.322523 1382 replica.cpp:655] Replica received learned notice for position 0 I0527 23:23:48.322638 1386 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 86907ns I0527 23:23:48.322661 1386 replica.cpp:676] Persisted action at 0 I0527 23:23:48.322670 1386 replica.cpp:661] Replica learned NOP action at position 0 I0527 23:23:48.322682 1382 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 85031ns I0527 23:23:48.322693 1382 replica.cpp:676] Persisted action at 0 I0527 23:23:48.322700 1382 replica.cpp:661] Replica learned NOP action at position 0 I0527 23:23:48.322790 1380 log.cpp:672] Writer started with ending position 0 I0527 23:23:48.322898 1380 log.cpp:680] Attempting to append 11 bytes to the log I0527 23:23:48.322978 1383 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0527 23:23:48.323122 1380 replica.cpp:508] Replica received write request for position 1 I0527 23:23:48.323158 1381 replica.cpp:508] Replica received write request for position 1 I0527 23:23:48.323202 1380 leveldb.cpp:343] Persisting action (27 bytes) to leveldb took 66527ns I0527 23:23:48.323215 1380 replica.cpp:676] Persisted action at 1 I0527 23:23:48.323238 1381 leveldb.cpp:343] Persisting action (27 bytes) to leveldb took 67074ns I0527 23:23:48.323252 1381 replica.cpp:676] Persisted action at 1 I0527 23:23:48.323354 1380 replica.cpp:655] Replica received learned notice for position 1 I0527 23:23:48.323362 1382 replica.cpp:655] Replica received learned notice for position 1 I0527 23:23:48.323443 1380 leveldb.cpp:343] Persisting action (29 bytes) to leveldb took 77398ns I0527 23:23:48.323461 1380 replica.cpp:676] Persisted action at 1 I0527 23:23:48.323463 1382 leveldb.cpp:343] Persisting action (29 bytes) to leveldb took 90567ns I0527 23:23:48.323467 1380 replica.cpp:661] Replica learned APPEND action at position 1 I0527 23:23:48.323477 1382 replica.cpp:676] Persisted action at 1 I0527 23:23:48.323484 1382 replica.cpp:661] Replica learned APPEND action at position 1 I0527 23:23:48.323729 1380 leveldb.cpp:438] Reading position from leveldb took 7224ns 2014-05-27 23:23:48,324:1352(0x2b1173c2a700):ZOO_INFO@zookeeper_close@2505: Closing zookeeper sessionId=0x1463fff34bd0003 to [127.0.0.1:39446] 2014-05-27 23:23:48,324:1352(0x2b117301ff80):ZOO_INFO@zookeeper_close@2505: Closing zookeeper sessionId=0x1463fff34bd0002 to [127.0.0.1:39446] I0527 23:23:48.326591 1386 network.hpp:423] ZooKeeper group memberships changed I0527 23:23:48.326690 1382 group.cpp:655] Trying to get '/log/0000000000' in ZooKeeper I0527 23:23:48.327450 1384 network.hpp:461] ZooKeeper group PIDs: 
{ log-replica(22)@67.195.138.8:35151 } 2014-05-27 23:23:48,446:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:23:51,782:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:23:55,118:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client I0527 23:23:57.002908 1381 network.hpp:423] ZooKeeper group memberships changed I0527 23:23:57.003042 1381 network.hpp:461] ZooKeeper group PIDs: { } 2014-05-27 23:23:58,455:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:01,791:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:05,127:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:08,464:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:11,800:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:15,136:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:18,473:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:21,809:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:25,146:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:28,482:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:31,818:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:35,155:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:38,491:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:41,827:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:45,164:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket 
[127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:48,500:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:51,834:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:55,171:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:24:58,507:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:01,844:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:05,180:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:08,516:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:11,853:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:15,186:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:18,523:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:21,859:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:25,195:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:28,530:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:31,866:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:35,203:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:38,539:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:41,875:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:45,212:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the 
client 2014-05-27 23:25:48,548:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:51,885:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:55,221:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:25:58,557:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:01,894:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:05,230:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:08,567:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:11,903:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:15,239:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:18,576:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:21,912:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:25,248:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:28,585:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:31,921:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:35,257:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:38,594:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:41,930:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:45,267:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:48,603:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket 
[127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:51,939:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:55,276:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:26:58,612:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:27:01,948:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:27:05,285:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:27:08,621:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:27:11,958:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:27:15,294:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:27:18,630:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:27:21,967:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:27:25,303:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:27:28,639:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:27:31,976:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:27:35,312:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:27:38,649:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:27:41,985:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:27:45,321:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-05-27 23:27:48,658:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the 
client 2014-05-27 23:27:51,994:1352(0x2b12bc401700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:51020] zk retcode=-4, errno=111(Connection refused): server refused to accept the client ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1452","06/03/2014 02:44:51",3,"Improve Master::removeOffer to avoid further resource accounting bugs. ""Per comments on this review: https://reviews.apache.org/r/21750/ We've had numerous bugs around resource accounting in the master due to the trickiness of removing offers in the Master code. There are a few ways to improve this: 1. Add multiple offer methods to differentiate semantics: 2. Add an enum to removeOffer to differentiate removal semantics: """," useOffer(offerId); rescindOffer(offerId); discardOffer(offerId); removeOffer(offerId, USE); removeOffer(offerId, RESCIND); removeOffer(offerId, DISCARD); ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1459","06/06/2014 18:34:37",1,"Build failure: Ubuntu 13.10/clang due to missing virtual destructor ""In file included from launcher/main.cpp:19: In file included from ./launcher/launcher.hpp:24: In file included from ../3rdparty/libprocess/include/process/future.hpp:23: ../3rdparty/libprocess/include/process/owned.hpp:188:5: error: delete called on 'mesos::internal::launcher::Operation' that is abstract but has non-virtual destructor [-Werror,-Wdelete-non-virtual-dtor] delete t; ^ /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/bits/shared_ptr_base.h:456:8: note: in instantiation of member function 'process::Owned::Data::~Data' requested here delete __p; ^ /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/bits/shared_ptr_base.h:768:24: note: in instantiation of function template specialization 'std::__shared_count<2>::__shared_count::Data *>' requested here : _M_ptr(__p), _M_refcount(__p) ^ /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/bits/shared_ptr_base.h:919:4: note: in instantiation of function template specialization 'std::__shared_ptr::Data, 2>::__shared_ptr::Data>' requested here __shared_ptr(__p).swap(*this); ^ ../3rdparty/libprocess/include/process/owned.hpp:68:10: note: in instantiation of function template specialization 'std::__shared_ptr::Data, 2>::reset::Data>' requested here data.reset(new Data(t)); ^ ./launcher/launcher.hpp:101:7: note: in instantiation of member function 'process::Owned::Owned' requested here add(process::Owned(new T())); ^ launcher/main.cpp:26:3: note: in instantiation of function template specialization 'mesos::internal::launcher::add' requested here launcher::add(); ^ 1 error generated.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1466","06/10/2014 01:32:36",8,"Race between executor exited event and launch task can cause overcommit of resources ""The following sequence of events can cause an overcommit --> Launch task is called for a task whose executor is already running --> Executor's resources are not accounted for on the master --> Executor exits and the event is enqueued behind launch tasks on the master --> Master sends the task to the slave which needs to commit for resources for task and the (new) executor. 
--> Master processes the executor exited event and re-offers the executor's resources causing an overcommit of resources.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1469","06/10/2014 18:35:42",2,"No output from review bot on timeout ""When the mesos review build times out, likely due to a long-running failing test, we have no output to debug. We should find a way to stream the output from the build instead of waiting for the build to finish.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1471","06/11/2014 04:48:02",5,"Document replicated log design/internals ""The replicated log could benefit from some documentation. In particular, how does it work? What do operators need to know? Possibly there is some overlap with our future maintenance documentation in MESOS-1470. I believe [~jieyu] has some unpublished work that could be leveraged here!""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1472","06/11/2014 23:29:09",1,"Improve child exit if slave dies during executor launch in MC ""When restarting many slaves there's a reasonable chance that a slave will be restarted between the fork and exec stages of launching an executor in the MesosContainerizer. The forked child correctly detects this however rather than abort it should safely log and then exit non-zero cleanly.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1518","06/19/2014 22:27:29",2,"Update Rate Limiting Design doc to reflect the latest changes ""- Usage - Design - Implementation Notes""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1527","06/21/2014 01:05:00",3,"Choose containerizer at runtime ""Currently you have to choose the containerizer at mesos-slave start time via the --isolation option. I'd like to be able to specify the containerizer in the request to launch the job. 
This could be specified by a new """"Provider"""" field in the ContainerInfo proto buf.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1545","06/26/2014 01:42:48",1,"SlaveRecoveryTest/0.MultipleFrameworks is flaky """""," [ RUN ] SlaveRecoveryTest/0.MultipleFrameworks Using temporary directory '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_6dJqxr' I0626 00:04:39.557339 5450 leveldb.cpp:176] Opened db in 179.857593ms I0626 00:04:39.565433 5450 leveldb.cpp:183] Compacted db in 8.071041ms I0626 00:04:39.565457 5450 leveldb.cpp:198] Created db iterator in 4065ns I0626 00:04:39.565466 5450 leveldb.cpp:204] Seeked to beginning of db in 596ns I0626 00:04:39.565474 5450 leveldb.cpp:273] Iterated through 0 keys in the db in 396ns I0626 00:04:39.565490 5450 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0626 00:04:39.565827 5476 recover.cpp:425] Starting replica recovery I0626 00:04:39.566033 5474 recover.cpp:451] Replica is in EMPTY status I0626 00:04:39.566504 5474 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I0626 00:04:39.566686 5477 recover.cpp:188] Received a recover response from a replica in EMPTY status I0626 00:04:39.566905 5472 recover.cpp:542] Updating replica status to STARTING I0626 00:04:39.568307 5471 master.cpp:288] Master 20140626-000439-1032504131-55423-5450 (juno.apache.org) started on 67.195.138.61:55423 I0626 00:04:39.568332 5471 master.cpp:325] Master only allowing authenticated frameworks to register I0626 00:04:39.568339 5471 master.cpp:330] Master only allowing authenticated slaves to register I0626 00:04:39.568348 5471 credentials.hpp:35] Loading credentials for authentication from '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_6dJqxr/credentials' I0626 00:04:39.568461 5471 master.cpp:356] Authorization enabled I0626 00:04:39.568739 5478 master.cpp:122] No whitelist given. Advertising offers for all slaves I0626 00:04:39.568814 5475 hierarchical_allocator_process.hpp:301] Initializing hierarchical allocator process with master : master@67.195.138.61:55423 I0626 00:04:39.569206 5478 master.cpp:1122] The newly elected leader is master@67.195.138.61:55423 with id 20140626-000439-1032504131-55423-5450 I0626 00:04:39.569223 5478 master.cpp:1135] Elected as the leading master! 
I0626 00:04:39.569231 5478 master.cpp:953] Recovering from registrar I0626 00:04:39.569286 5475 registrar.cpp:313] Recovering registrar I0626 00:04:39.600639 5477 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 33.682136ms I0626 00:04:39.600661 5477 replica.cpp:320] Persisted replica status to STARTING I0626 00:04:39.600790 5476 recover.cpp:451] Replica is in STARTING status I0626 00:04:39.601184 5474 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I0626 00:04:39.601274 5477 recover.cpp:188] Received a recover response from a replica in STARTING status I0626 00:04:39.601465 5471 recover.cpp:542] Updating replica status to VOTING I0626 00:04:39.610605 5471 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 9.076262ms I0626 00:04:39.610638 5471 replica.cpp:320] Persisted replica status to VOTING I0626 00:04:39.610683 5471 recover.cpp:556] Successfully joined the Paxos group I0626 00:04:39.610780 5471 recover.cpp:440] Recover process terminated I0626 00:04:39.610946 5474 log.cpp:656] Attempting to start the writer I0626 00:04:39.611486 5475 replica.cpp:474] Replica received implicit promise request with proposal 1 I0626 00:04:39.618924 5475 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 7.418789ms I0626 00:04:39.618942 5475 replica.cpp:342] Persisted promised to 1 I0626 00:04:39.619220 5476 coordinator.cpp:230] Coordinator attemping to fill missing position I0626 00:04:39.619763 5476 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I0626 00:04:39.627267 5476 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 7.485492ms I0626 00:04:39.627295 5476 replica.cpp:676] Persisted action at 0 I0626 00:04:39.627822 5473 replica.cpp:508] Replica received write request for position 0 I0626 00:04:39.627861 5473 leveldb.cpp:438] Reading position from leveldb took 17132ns I0626 00:04:39.635592 5473 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 7.714322ms I0626 00:04:39.635612 5473 replica.cpp:676] Persisted action at 0 I0626 00:04:39.635797 5473 replica.cpp:655] Replica received learned notice for position 0 I0626 00:04:39.643941 5473 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 8.129347ms I0626 00:04:39.643960 5473 replica.cpp:676] Persisted action at 0 I0626 00:04:39.643970 5473 replica.cpp:661] Replica learned NOP action at position 0 I0626 00:04:39.644207 5473 log.cpp:672] Writer started with ending position 0 I0626 00:04:39.644625 5471 leveldb.cpp:438] Reading position from leveldb took 9128ns I0626 00:04:39.646010 5476 registrar.cpp:346] Successfully fetched the registry (0B) I0626 00:04:39.646044 5476 registrar.cpp:422] Attempting to update the 'registry' I0626 00:04:39.647274 5471 log.cpp:680] Attempting to append 136 bytes to the log I0626 00:04:39.647337 5471 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0626 00:04:39.647687 5476 replica.cpp:508] Replica received write request for position 1 I0626 00:04:39.655206 5476 leveldb.cpp:343] Persisting action (155 bytes) to leveldb took 7.499736ms I0626 00:04:39.655225 5476 replica.cpp:676] Persisted action at 1 I0626 00:04:39.655467 5476 replica.cpp:655] Replica received learned notice for position 1 I0626 00:04:39.663534 5476 leveldb.cpp:343] Persisting action (157 bytes) to leveldb took 8.054929ms I0626 00:04:39.663554 5476 replica.cpp:676] Persisted action at 1 I0626 00:04:39.663563 5476 replica.cpp:661] Replica learned APPEND action at position 1 I0626 
00:04:39.663890 5478 registrar.cpp:479] Successfully updated 'registry' I0626 00:04:39.663947 5478 registrar.cpp:372] Successfully recovered registrar I0626 00:04:39.663969 5476 log.cpp:699] Attempting to truncate the log to 1 I0626 00:04:39.664044 5478 master.cpp:980] Recovered 0 slaves from the Registry (98B) ; allowing 10mins for slaves to re-register I0626 00:04:39.664057 5476 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0626 00:04:39.664341 5476 replica.cpp:508] Replica received write request for position 2 I0626 00:04:39.664681 5450 containerizer.cpp:124] Using isolation: posix/cpu,posix/mem I0626 00:04:39.666721 5471 slave.cpp:168] Slave started on 173)@67.195.138.61:55423 I0626 00:04:39.666741 5471 credentials.hpp:35] Loading credentials for authentication from '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/credential' I0626 00:04:39.666806 5471 slave.cpp:268] Slave using credential for: test-principal I0626 00:04:39.666936 5471 slave.cpp:281] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0626 00:04:39.667000 5471 slave.cpp:326] Slave hostname: juno.apache.org I0626 00:04:39.667009 5471 slave.cpp:327] Slave checkpoint: true I0626 00:04:39.667572 5478 state.cpp:33] Recovering state from '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta' I0626 00:04:39.667703 5475 status_update_manager.cpp:193] Recovering status update manager I0626 00:04:39.667840 5475 containerizer.cpp:287] Recovering containerizer I0626 00:04:39.668478 5471 slave.cpp:3128] Finished recovery I0626 00:04:39.668712 5471 slave.cpp:601] New master detected at master@67.195.138.61:55423 I0626 00:04:39.668738 5471 slave.cpp:677] Authenticating with master master@67.195.138.61:55423 I0626 00:04:39.668802 5471 slave.cpp:650] Detecting new master I0626 00:04:39.668861 5471 status_update_manager.cpp:167] New master detected at master@67.195.138.61:55423 I0626 00:04:39.668916 5471 authenticatee.hpp:128] Creating new client SASL connection I0626 00:04:39.669087 5471 master.cpp:3499] Authenticating slave(173)@67.195.138.61:55423 I0626 00:04:39.669203 5471 authenticator.hpp:156] Creating new server SASL connection I0626 00:04:39.669340 5471 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0626 00:04:39.669359 5471 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0626 00:04:39.669386 5471 authenticator.hpp:262] Received SASL authentication start I0626 00:04:39.669414 5471 authenticator.hpp:384] Authentication requires more steps I0626 00:04:39.669457 5471 authenticatee.hpp:265] Received SASL authentication step I0626 00:04:39.669514 5471 authenticator.hpp:290] Received SASL authentication step I0626 00:04:39.669534 5471 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'juno.apache.org' server FQDN: 'juno.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0626 00:04:39.669543 5471 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0626 00:04:39.669567 5471 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0626 00:04:39.669580 5471 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'juno.apache.org' server FQDN: 'juno.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0626 00:04:39.669589 5471 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true 
I0626 00:04:39.669594 5471 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0626 00:04:39.669606 5471 authenticator.hpp:376] Authentication success I0626 00:04:39.669641 5471 authenticatee.hpp:305] Authentication success I0626 00:04:39.669669 5471 master.cpp:3539] Successfully authenticated principal 'test-principal' at slave(173)@67.195.138.61:55423 I0626 00:04:39.669761 5450 sched.cpp:139] Version: 0.20.0 I0626 00:04:39.669764 5478 slave.cpp:734] Successfully authenticated with master master@67.195.138.61:55423 I0626 00:04:39.669826 5478 slave.cpp:972] Will retry registration in 3.190666ms if necessary I0626 00:04:39.669950 5471 master.cpp:2781] Registering slave at slave(173)@67.195.138.61:55423 (juno.apache.org) with id 20140626-000439-1032504131-55423-5450-0 I0626 00:04:39.669960 5475 sched.cpp:235] New master detected at master@67.195.138.61:55423 I0626 00:04:39.669977 5475 sched.cpp:285] Authenticating with master master@67.195.138.61:55423 I0626 00:04:39.670073 5471 registrar.cpp:422] Attempting to update the 'registry' I0626 00:04:39.670114 5475 authenticatee.hpp:128] Creating new client SASL connection I0626 00:04:39.670263 5475 master.cpp:3499] Authenticating scheduler-e66c50d2-2790-4d20-bc77-a57af0e1780b@67.195.138.61:55423 I0626 00:04:39.670361 5474 authenticator.hpp:156] Creating new server SASL connection I0626 00:04:39.670506 5475 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0626 00:04:39.670526 5475 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0626 00:04:39.670559 5475 authenticator.hpp:262] Received SASL authentication start I0626 00:04:39.670590 5475 authenticator.hpp:384] Authentication requires more steps I0626 00:04:39.670619 5475 authenticatee.hpp:265] Received SASL authentication step I0626 00:04:39.670650 5475 authenticator.hpp:290] Received SASL authentication step I0626 00:04:39.670670 5475 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'juno.apache.org' server FQDN: 'juno.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0626 00:04:39.670677 5475 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0626 00:04:39.670687 5475 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0626 00:04:39.670697 5475 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'juno.apache.org' server FQDN: 'juno.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0626 00:04:39.670706 5475 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0626 00:04:39.670712 5475 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0626 00:04:39.670723 5475 authenticator.hpp:376] Authentication success I0626 00:04:39.670749 5475 authenticatee.hpp:305] Authentication success I0626 00:04:39.670773 5475 master.cpp:3539] Successfully authenticated principal 'test-principal' at scheduler-e66c50d2-2790-4d20-bc77-a57af0e1780b@67.195.138.61:55423 I0626 00:04:39.670845 5475 sched.cpp:359] Successfully authenticated with master master@67.195.138.61:55423 I0626 00:04:39.670858 5475 sched.cpp:478] Sending registration request to master@67.195.138.61:55423 I0626 00:04:39.670899 5475 master.cpp:1241] Received registration request from 
scheduler-e66c50d2-2790-4d20-bc77-a57af0e1780b@67.195.138.61:55423 I0626 00:04:39.670922 5475 master.cpp:1201] Authorizing framework principal 'test-principal' to receive offers for role '*' I0626 00:04:39.671052 5475 master.cpp:1300] Registering framework 20140626-000439-1032504131-55423-5450-0000 at scheduler-e66c50d2-2790-4d20-bc77-a57af0e1780b@67.195.138.61:55423 I0626 00:04:39.671159 5474 sched.cpp:409] Framework registered with 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:39.671185 5474 sched.cpp:423] Scheduler::registered took 10223ns I0626 00:04:39.671226 5474 hierarchical_allocator_process.hpp:331] Added framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:39.671241 5474 hierarchical_allocator_process.hpp:724] No resources available to allocate! I0626 00:04:39.671247 5474 hierarchical_allocator_process.hpp:686] Performed allocation for 0 slaves in 8574ns I0626 00:04:39.671879 5476 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 7.48781ms I0626 00:04:39.671900 5476 replica.cpp:676] Persisted action at 2 I0626 00:04:39.672164 5471 replica.cpp:655] Replica received learned notice for position 2 I0626 00:04:39.674092 5472 slave.cpp:972] Will retry registration in 25.467893ms if necessary I0626 00:04:39.674108 5476 master.cpp:2769] Ignoring register slave message from slave(173)@67.195.138.61:55423 (juno.apache.org) as admission is already in progress I0626 00:04:39.680193 5471 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 8.01285ms I0626 00:04:39.680223 5471 leveldb.cpp:401] Deleting ~1 keys from leveldb took 11393ns I0626 00:04:39.680234 5471 replica.cpp:676] Persisted action at 2 I0626 00:04:39.680245 5471 replica.cpp:661] Replica learned TRUNCATE action at position 2 I0626 00:04:39.680585 5472 log.cpp:680] Attempting to append 326 bytes to the log I0626 00:04:39.680670 5477 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I0626 00:04:39.680953 5474 replica.cpp:508] Replica received write request for position 3 I0626 00:04:39.688521 5474 leveldb.cpp:343] Persisting action (345 bytes) to leveldb took 7.548316ms I0626 00:04:39.688542 5474 replica.cpp:676] Persisted action at 3 I0626 00:04:39.688750 5474 replica.cpp:655] Replica received learned notice for position 3 I0626 00:04:39.696851 5474 leveldb.cpp:343] Persisting action (347 bytes) to leveldb took 8.088289ms I0626 00:04:39.696869 5474 replica.cpp:676] Persisted action at 3 I0626 00:04:39.696878 5474 replica.cpp:661] Replica learned APPEND action at position 3 I0626 00:04:39.697268 5474 registrar.cpp:479] Successfully updated 'registry' I0626 00:04:39.697350 5474 log.cpp:699] Attempting to truncate the log to 3 I0626 00:04:39.697412 5474 master.cpp:2821] Registered slave 20140626-000439-1032504131-55423-5450-0 at slave(173)@67.195.138.61:55423 (juno.apache.org) I0626 00:04:39.697423 5474 master.cpp:3967] Adding slave 20140626-000439-1032504131-55423-5450-0 at slave(173)@67.195.138.61:55423 (juno.apache.org) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0626 00:04:39.697535 5474 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I0626 00:04:39.697618 5474 slave.cpp:768] Registered with master master@67.195.138.61:55423; given slave ID 20140626-000439-1032504131-55423-5450-0 I0626 00:04:39.697754 5474 slave.cpp:781] Checkpointing SlaveInfo to '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/slave.info' I0626 00:04:39.697762 5471 
hierarchical_allocator_process.hpp:444] Added slave 20140626-000439-1032504131-55423-5450-0 (juno.apache.org) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] available) I0626 00:04:39.697845 5471 hierarchical_allocator_process.hpp:750] Offering cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140626-000439-1032504131-55423-5450-0 to framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:39.697854 5474 slave.cpp:2325] Received ping from slave-observer(142)@67.195.138.61:55423 I0626 00:04:39.698040 5471 hierarchical_allocator_process.hpp:706] Performed allocation for slave 20140626-000439-1032504131-55423-5450-0 in 231333ns I0626 00:04:39.698051 5474 replica.cpp:508] Replica received write request for position 4 I0626 00:04:39.698118 5471 master.hpp:794] Adding offer 20140626-000439-1032504131-55423-5450-0 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140626-000439-1032504131-55423-5450-0 (juno.apache.org) I0626 00:04:39.698170 5471 master.cpp:3446] Sending 1 offers to framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:39.698318 5471 sched.cpp:546] Scheduler::resourceOffers took 24371ns I0626 00:04:39.699718 5477 master.hpp:804] Removing offer 20140626-000439-1032504131-55423-5450-0 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140626-000439-1032504131-55423-5450-0 (juno.apache.org) I0626 00:04:39.699787 5477 master.cpp:2125] Processing reply for offers: [ 20140626-000439-1032504131-55423-5450-0 ] on slave 20140626-000439-1032504131-55423-5450-0 at slave(173)@67.195.138.61:55423 (juno.apache.org) for framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:39.699812 5477 master.cpp:2211] Authorizing framework principal 'test-principal' to launch task 897522cc-4ec5-4904-aed0-00b6b8c41028 as user 'jenkins' I0626 00:04:39.700160 5477 master.hpp:766] Adding task 897522cc-4ec5-4904-aed0-00b6b8c41028 with resources cpus(*):1; mem(*):512 on slave 20140626-000439-1032504131-55423-5450-0 (juno.apache.org) I0626 00:04:39.700188 5477 master.cpp:2277] Launching task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 with resources cpus(*):1; mem(*):512 on slave 20140626-000439-1032504131-55423-5450-0 at slave(173)@67.195.138.61:55423 (juno.apache.org) I0626 00:04:39.700392 5471 slave.cpp:1003] Got assigned task 897522cc-4ec5-4904-aed0-00b6b8c41028 for framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:39.700479 5477 hierarchical_allocator_process.hpp:546] Framework 20140626-000439-1032504131-55423-5450-0000 left cpus(*):1; mem(*):512; disk(*):1024; ports(*):[31000-32000] unused on slave 20140626-000439-1032504131-55423-5450-0 I0626 00:04:39.700505 5471 slave.cpp:3400] Checkpointing FrameworkInfo to '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0000/framework.info' I0626 00:04:39.700597 5477 hierarchical_allocator_process.hpp:588] Framework 20140626-000439-1032504131-55423-5450-0000 filtered slave 20140626-000439-1032504131-55423-5450-0 for 5secs I0626 00:04:39.700686 5471 slave.cpp:3407] Checkpointing framework pid 'scheduler-e66c50d2-2790-4d20-bc77-a57af0e1780b@67.195.138.61:55423' to 
'/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0000/framework.pid' I0626 00:04:39.700960 5471 slave.cpp:1113] Launching task 897522cc-4ec5-4904-aed0-00b6b8c41028 for framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:39.702287 5471 slave.cpp:3722] Checkpointing ExecutorInfo to '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0000/executors/897522cc-4ec5-4904-aed0-00b6b8c41028/executor.info' I0626 00:04:39.702738 5471 slave.cpp:3837] Checkpointing TaskInfo to '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0000/executors/897522cc-4ec5-4904-aed0-00b6b8c41028/runs/9ad3a5ac-3587-47df-96c2-df76ea09328c/tasks/897522cc-4ec5-4904-aed0-00b6b8c41028/task.info' I0626 00:04:39.702744 5476 containerizer.cpp:427] Starting container '9ad3a5ac-3587-47df-96c2-df76ea09328c' for executor '897522cc-4ec5-4904-aed0-00b6b8c41028' of framework '20140626-000439-1032504131-55423-5450-0000' I0626 00:04:39.702987 5471 slave.cpp:1223] Queuing task '897522cc-4ec5-4904-aed0-00b6b8c41028' for executor 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework '20140626-000439-1032504131-55423-5450-0000 I0626 00:04:39.703039 5471 slave.cpp:562] Successfully attached file '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0000/executors/897522cc-4ec5-4904-aed0-00b6b8c41028/runs/9ad3a5ac-3587-47df-96c2-df76ea09328c' I0626 00:04:39.704654 5477 launcher.cpp:137] Forked child with pid '7596' for container '9ad3a5ac-3587-47df-96c2-df76ea09328c' I0626 00:04:39.704891 5477 containerizer.cpp:705] Checkpointing executor's forked pid 7596 to '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0000/executors/897522cc-4ec5-4904-aed0-00b6b8c41028/runs/9ad3a5ac-3587-47df-96c2-df76ea09328c/pids/forked.pid' I0626 00:04:39.705301 5474 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 7.183865ms I0626 00:04:39.705343 5474 replica.cpp:676] Persisted action at 4 I0626 00:04:39.705912 5476 containerizer.cpp:537] Fetching URIs for container '9ad3a5ac-3587-47df-96c2-df76ea09328c' using command '/home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Disable-Java-Disable-Python-Disable-Webui/build/src/mesos-fetcher' I0626 00:04:39.706073 5471 replica.cpp:655] Replica received learned notice for position 4 I0626 00:04:39.713664 5471 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 6.238172ms I0626 00:04:39.713762 5471 leveldb.cpp:401] Deleting ~2 keys from leveldb took 42244ns I0626 00:04:39.713788 5471 replica.cpp:676] Persisted action at 4 I0626 00:04:39.713810 5471 replica.cpp:661] Replica learned TRUNCATE action at position 4 I0626 00:04:40.378677 5475 slave.cpp:2470] Monitoring executor '897522cc-4ec5-4904-aed0-00b6b8c41028' of framework '20140626-000439-1032504131-55423-5450-0000' in container '9ad3a5ac-3587-47df-96c2-df76ea09328c' WARNING: Logging before InitGoogleLogging() is written to STDERR I0626 00:04:40.413177 7631 process.cpp:1671] libprocess is initialized on 67.195.138.61:40619 for 8 cpus I0626 00:04:40.414454 7631 exec.cpp:131] Version: 0.20.0 I0626 00:04:40.415856 7649 
exec.cpp:181] Executor started at: executor(1)@67.195.138.61:40619 with pid 7631 I0626 00:04:40.416453 5471 slave.cpp:1734] Got registration for executor '897522cc-4ec5-4904-aed0-00b6b8c41028' of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:40.416527 5471 slave.cpp:1819] Checkpointing executor pid 'executor(1)@67.195.138.61:40619' to '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0000/executors/897522cc-4ec5-4904-aed0-00b6b8c41028/runs/9ad3a5ac-3587-47df-96c2-df76ea09328c/pids/libprocess.pid' I0626 00:04:40.416998 5471 slave.cpp:1853] Flushing queued task 897522cc-4ec5-4904-aed0-00b6b8c41028 for executor '897522cc-4ec5-4904-aed0-00b6b8c41028' of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:40.417186 5479 process.cpp:1098] Socket closed while receiving I0626 00:04:40.417322 7648 exec.cpp:205] Executor registered on slave 20140626-000439-1032504131-55423-5450-0 I0626 00:04:40.417368 7653 process.cpp:1037] Socket closed while receiving I0626 00:04:40.418385 7648 exec.cpp:217] Executor::registered took 115121ns Registered executor on juno.apache.org I0626 00:04:40.418544 7648 exec.cpp:292] Executor asked to run task '897522cc-4ec5-4904-aed0-00b6b8c41028' Starting task 897522cc-4ec5-4904-aed0-00b6b8c41028 I0626 00:04:40.418609 7648 exec.cpp:301] Executor::launchTask took 35936ns Forked command at 7654 sh -c 'sleep 1000' I0626 00:04:40.420611 7650 exec.cpp:524] Executor sending status update TASK_RUNNING (UUID: 6d952e6d-b7d7-4f40-9f44-f7c3f81757af) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:40.420953 5473 slave.cpp:2088] Handling status update TASK_RUNNING (UUID: 6d952e6d-b7d7-4f40-9f44-f7c3f81757af) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 from executor(1)@67.195.138.61:40619 I0626 00:04:40.421188 5474 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 6d952e6d-b7d7-4f40-9f44-f7c3f81757af) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:40.421206 5474 status_update_manager.cpp:499] Creating StatusUpdate stream for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:40.421469 5474 status_update_manager.hpp:342] Checkpointing UPDATE for status update TASK_RUNNING (UUID: 6d952e6d-b7d7-4f40-9f44-f7c3f81757af) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:40.525890 5474 status_update_manager.cpp:373] Forwarding status update TASK_RUNNING (UUID: 6d952e6d-b7d7-4f40-9f44-f7c3f81757af) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 to master@67.195.138.61:55423 I0626 00:04:40.526053 5474 master.cpp:3107] Status update TASK_RUNNING (UUID: 6d952e6d-b7d7-4f40-9f44-f7c3f81757af) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 from slave 20140626-000439-1032504131-55423-5450-0 at slave(173)@67.195.138.61:55423 (juno.apache.org) I0626 00:04:40.526087 5474 slave.cpp:2246] Status update manager successfully handled status update TASK_RUNNING (UUID: 6d952e6d-b7d7-4f40-9f44-f7c3f81757af) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:40.526100 5474 
slave.cpp:2252] Sending acknowledgement for status update TASK_RUNNING (UUID: 6d952e6d-b7d7-4f40-9f44-f7c3f81757af) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 to executor(1)@67.195.138.61:40619 I0626 00:04:40.526252 5474 sched.cpp:637] Scheduler::statusUpdate took 17393ns I0626 00:04:40.526294 5474 master.cpp:2631] Forwarding status update acknowledgement 6d952e6d-b7d7-4f40-9f44-f7c3f81757af for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 to slave 20140626-000439-1032504131-55423-5450-0 at slave(173)@67.195.138.61:55423 (juno.apache.org) I0626 00:04:40.526371 5474 status_update_manager.cpp:398] Received status update acknowledgement (UUID: 6d952e6d-b7d7-4f40-9f44-f7c3f81757af) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:40.526384 5474 status_update_manager.hpp:342] Checkpointing ACK for status update TASK_RUNNING (UUID: 6d952e6d-b7d7-4f40-9f44-f7c3f81757af) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:40.526468 5479 process.cpp:1098] Socket closed while receiving I0626 00:04:40.526574 7651 exec.cpp:338] Executor received status update acknowledgement 6d952e6d-b7d7-4f40-9f44-f7c3f81757af for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:40.526679 7653 process.cpp:1037] Socket closed while receiving I0626 00:04:40.569715 5473 hierarchical_allocator_process.hpp:833] Filtered cpus(*):1; mem(*):512; disk(*):1024; ports(*):[31000-32000] on slave 20140626-000439-1032504131-55423-5450-0 for framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:40.569749 5473 hierarchical_allocator_process.hpp:686] Performed allocation for 1 slaves in 105698ns I0626 00:04:40.576212 5477 slave.cpp:1674] Status update manager successfully handled status update acknowledgement (UUID: 6d952e6d-b7d7-4f40-9f44-f7c3f81757af) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:40.578642 5450 sched.cpp:139] Version: 0.20.0 I0626 00:04:40.578886 5475 sched.cpp:235] New master detected at master@67.195.138.61:55423 I0626 00:04:40.578902 5475 sched.cpp:285] Authenticating with master master@67.195.138.61:55423 I0626 00:04:40.579040 5475 authenticatee.hpp:128] Creating new client SASL connection I0626 00:04:40.579202 5475 master.cpp:3499] Authenticating scheduler-bb54dd52-95dc-4ed9-b69c-7a65f1661180@67.195.138.61:55423 I0626 00:04:40.579313 5475 authenticator.hpp:156] Creating new server SASL connection I0626 00:04:40.579414 5475 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0626 00:04:40.579430 5475 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0626 00:04:40.579457 5475 authenticator.hpp:262] Received SASL authentication start I0626 00:04:40.579488 5475 authenticator.hpp:384] Authentication requires more steps I0626 00:04:40.579514 5475 authenticatee.hpp:265] Received SASL authentication step I0626 00:04:40.579551 5475 authenticator.hpp:290] Received SASL authentication step I0626 00:04:40.579573 5475 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'juno.apache.org' server FQDN: 'juno.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0626 00:04:40.579586 5475 auxprop.cpp:153] Looking up auxiliary 
property '*userPassword' I0626 00:04:40.579601 5475 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0626 00:04:40.579612 5475 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'juno.apache.org' server FQDN: 'juno.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0626 00:04:40.579619 5475 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0626 00:04:40.579624 5475 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0626 00:04:40.579638 5475 authenticator.hpp:376] Authentication success I0626 00:04:40.579664 5475 authenticatee.hpp:305] Authentication success I0626 00:04:40.579687 5475 master.cpp:3539] Successfully authenticated principal 'test-principal' at scheduler-bb54dd52-95dc-4ed9-b69c-7a65f1661180@67.195.138.61:55423 I0626 00:04:40.579768 5475 sched.cpp:359] Successfully authenticated with master master@67.195.138.61:55423 I0626 00:04:40.579781 5475 sched.cpp:478] Sending registration request to master@67.195.138.61:55423 I0626 00:04:40.579825 5475 master.cpp:1241] Received registration request from scheduler-bb54dd52-95dc-4ed9-b69c-7a65f1661180@67.195.138.61:55423 I0626 00:04:40.579845 5475 master.cpp:1201] Authorizing framework principal 'test-principal' to receive offers for role '*' I0626 00:04:40.579984 5475 master.cpp:1300] Registering framework 20140626-000439-1032504131-55423-5450-0001 at scheduler-bb54dd52-95dc-4ed9-b69c-7a65f1661180@67.195.138.61:55423 I0626 00:04:40.580056 5475 sched.cpp:409] Framework registered with 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:40.580075 5475 sched.cpp:423] Scheduler::registered took 8994ns I0626 00:04:40.580117 5475 hierarchical_allocator_process.hpp:331] Added framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:40.580173 5475 hierarchical_allocator_process.hpp:750] Offering cpus(*):1; mem(*):512; disk(*):1024; ports(*):[31000-32000] on slave 20140626-000439-1032504131-55423-5450-0 to framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:40.580366 5475 hierarchical_allocator_process.hpp:833] Filtered on slave 20140626-000439-1032504131-55423-5450-0 for framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:40.580378 5475 hierarchical_allocator_process.hpp:686] Performed allocation for 1 slaves in 251520ns I0626 00:04:40.580454 5475 master.hpp:794] Adding offer 20140626-000439-1032504131-55423-5450-1 with resources cpus(*):1; mem(*):512; disk(*):1024; ports(*):[31000-32000] on slave 20140626-000439-1032504131-55423-5450-0 (juno.apache.org) I0626 00:04:40.580509 5475 master.cpp:3446] Sending 1 offers to framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:40.580796 5476 sched.cpp:546] Scheduler::resourceOffers took 36436ns I0626 00:04:40.582280 5476 master.hpp:804] Removing offer 20140626-000439-1032504131-55423-5450-1 with resources cpus(*):1; mem(*):512; disk(*):1024; ports(*):[31000-32000] on slave 20140626-000439-1032504131-55423-5450-0 (juno.apache.org) I0626 00:04:40.582362 5476 master.cpp:2125] Processing reply for offers: [ 20140626-000439-1032504131-55423-5450-1 ] on slave 20140626-000439-1032504131-55423-5450-0 at slave(173)@67.195.138.61:55423 (juno.apache.org) for framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:40.582402 5476 master.cpp:2211] Authorizing framework principal 'test-principal' to launch task b1f40647-a2ff-475d-a56b-d2a5db9c1229 as user 
'jenkins' I0626 00:04:40.582823 5475 master.hpp:766] Adding task b1f40647-a2ff-475d-a56b-d2a5db9c1229 with resources cpus(*):1; mem(*):512; disk(*):1024; ports(*):[31000-32000] on slave 20140626-000439-1032504131-55423-5450-0 (juno.apache.org) I0626 00:04:40.582892 5475 master.cpp:2277] Launching task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 with resources cpus(*):1; mem(*):512; disk(*):1024; ports(*):[31000-32000] on slave 20140626-000439-1032504131-55423-5450-0 at slave(173)@67.195.138.61:55423 (juno.apache.org) I0626 00:04:40.583001 5474 slave.cpp:1003] Got assigned task b1f40647-a2ff-475d-a56b-d2a5db9c1229 for framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:40.583097 5474 slave.cpp:3400] Checkpointing FrameworkInfo to '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0001/framework.info' I0626 00:04:40.583204 5474 slave.cpp:3407] Checkpointing framework pid 'scheduler-bb54dd52-95dc-4ed9-b69c-7a65f1661180@67.195.138.61:55423' to '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0001/framework.pid' I0626 00:04:40.583442 5474 slave.cpp:1113] Launching task b1f40647-a2ff-475d-a56b-d2a5db9c1229 for framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:40.584455 5474 slave.cpp:3722] Checkpointing ExecutorInfo to '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0001/executors/b1f40647-a2ff-475d-a56b-d2a5db9c1229/executor.info' I0626 00:04:40.584846 5474 slave.cpp:3837] Checkpointing TaskInfo to '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0001/executors/b1f40647-a2ff-475d-a56b-d2a5db9c1229/runs/44b9f0a1-fcf4-4b33-b6dc-2d886304e8b3/tasks/b1f40647-a2ff-475d-a56b-d2a5db9c1229/task.info' I0626 00:04:40.584866 5476 containerizer.cpp:427] Starting container '44b9f0a1-fcf4-4b33-b6dc-2d886304e8b3' for executor 'b1f40647-a2ff-475d-a56b-d2a5db9c1229' of framework '20140626-000439-1032504131-55423-5450-0001' I0626 00:04:40.584976 5474 slave.cpp:1223] Queuing task 'b1f40647-a2ff-475d-a56b-d2a5db9c1229' for executor b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework '20140626-000439-1032504131-55423-5450-0001 I0626 00:04:40.585026 5474 slave.cpp:562] Successfully attached file '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0001/executors/b1f40647-a2ff-475d-a56b-d2a5db9c1229/runs/44b9f0a1-fcf4-4b33-b6dc-2d886304e8b3' I0626 00:04:40.586937 5476 launcher.cpp:137] Forked child with pid '7656' for container '44b9f0a1-fcf4-4b33-b6dc-2d886304e8b3' I0626 00:04:40.587131 5476 containerizer.cpp:705] Checkpointing executor's forked pid 7656 to '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0001/executors/b1f40647-a2ff-475d-a56b-d2a5db9c1229/runs/44b9f0a1-fcf4-4b33-b6dc-2d886304e8b3/pids/forked.pid' I0626 00:04:40.587872 5477 containerizer.cpp:537] Fetching URIs for container '44b9f0a1-fcf4-4b33-b6dc-2d886304e8b3' using command 
'/home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Disable-Java-Disable-Python-Disable-Webui/build/src/mesos-fetcher' I0626 00:04:41.384660 5472 slave.cpp:2470] Monitoring executor 'b1f40647-a2ff-475d-a56b-d2a5db9c1229' of framework '20140626-000439-1032504131-55423-5450-0001' in container '44b9f0a1-fcf4-4b33-b6dc-2d886304e8b3' WARNING: Logging before InitGoogleLogging() is written to STDERR I0626 00:04:41.417649 7691 process.cpp:1671] libprocess is initialized on 67.195.138.61:40524 for 8 cpus I0626 00:04:41.418674 7691 exec.cpp:131] Version: 0.20.0 I0626 00:04:41.420272 7712 exec.cpp:181] Executor started at: executor(1)@67.195.138.61:40524 with pid 7691 I0626 00:04:41.420771 5477 slave.cpp:1734] Got registration for executor 'b1f40647-a2ff-475d-a56b-d2a5db9c1229' of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.420871 5477 slave.cpp:1819] Checkpointing executor pid 'executor(1)@67.195.138.61:40524' to '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0001/executors/b1f40647-a2ff-475d-a56b-d2a5db9c1229/runs/44b9f0a1-fcf4-4b33-b6dc-2d886304e8b3/pids/libprocess.pid' I0626 00:04:41.421335 5477 slave.cpp:1853] Flushing queued task b1f40647-a2ff-475d-a56b-d2a5db9c1229 for executor 'b1f40647-a2ff-475d-a56b-d2a5db9c1229' of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.421401 5479 process.cpp:1098] Socket closed while receiving I0626 00:04:41.421506 5479 process.cpp:1098] Socket closed while receiving I0626 00:04:41.421622 7709 exec.cpp:205] Executor registered on slave 20140626-000439-1032504131-55423-5450-0 I0626 00:04:41.421701 7713 process.cpp:1037] Socket closed while receiving I0626 00:04:41.421891 7713 process.cpp:1037] Socket closed while receiving I0626 00:04:41.422695 7709 exec.cpp:217] Executor::registered took 116729ns Registered executor on juno.apache.org I0626 00:04:41.422817 7709 exec.cpp:292] Executor asked to run task 'b1f40647-a2ff-475d-a56b-d2a5db9c1229' Starting task b1f40647-a2ff-475d-a56b-d2a5db9c1229 I0626 00:04:41.422878 7709 exec.cpp:301] Executor::launchTask took 44617ns Forked command at 7714 sh -c 'sleep 1000' I0626 00:04:41.424744 7710 exec.cpp:524] Executor sending status update TASK_RUNNING (UUID: 7994ad88-77f5-45a5-91bf-b1f4957fba87) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.425102 5473 slave.cpp:2088] Handling status update TASK_RUNNING (UUID: 7994ad88-77f5-45a5-91bf-b1f4957fba87) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 from executor(1)@67.195.138.61:40524 I0626 00:04:41.425271 5472 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 7994ad88-77f5-45a5-91bf-b1f4957fba87) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.425309 5472 status_update_manager.cpp:499] Creating StatusUpdate stream for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.425585 5472 status_update_manager.hpp:342] Checkpointing UPDATE for status update TASK_RUNNING (UUID: 7994ad88-77f5-45a5-91bf-b1f4957fba87) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.517669 5472 status_update_manager.cpp:373] Forwarding status update TASK_RUNNING (UUID: 
7994ad88-77f5-45a5-91bf-b1f4957fba87) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 to master@67.195.138.61:55423 I0626 00:04:41.517848 5474 slave.cpp:2246] Status update manager successfully handled status update TASK_RUNNING (UUID: 7994ad88-77f5-45a5-91bf-b1f4957fba87) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.517870 5474 slave.cpp:2252] Sending acknowledgement for status update TASK_RUNNING (UUID: 7994ad88-77f5-45a5-91bf-b1f4957fba87) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 to executor(1)@67.195.138.61:40524 I0626 00:04:41.517985 5471 master.cpp:3107] Status update TASK_RUNNING (UUID: 7994ad88-77f5-45a5-91bf-b1f4957fba87) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 from slave 20140626-000439-1032504131-55423-5450-0 at slave(173)@67.195.138.61:55423 (juno.apache.org) I0626 00:04:41.518061 5473 sched.cpp:637] Scheduler::statusUpdate took 30727ns I0626 00:04:41.518087 5479 process.cpp:1098] Socket closed while receiving I0626 00:04:41.518188 5473 master.cpp:2631] Forwarding status update acknowledgement 7994ad88-77f5-45a5-91bf-b1f4957fba87 for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 to slave 20140626-000439-1032504131-55423-5450-0 at slave(173)@67.195.138.61:55423 (juno.apache.org) I0626 00:04:41.518209 7705 exec.cpp:338] Executor received status update acknowledgement 7994ad88-77f5-45a5-91bf-b1f4957fba87 for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.518237 7713 process.cpp:1037] Socket closed while receiving I0626 00:04:41.518332 5477 status_update_manager.cpp:398] Received status update acknowledgement (UUID: 7994ad88-77f5-45a5-91bf-b1f4957fba87) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.518358 5477 status_update_manager.hpp:342] Checkpointing ACK for status update TASK_RUNNING (UUID: 7994ad88-77f5-45a5-91bf-b1f4957fba87) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.565961 5477 slave.cpp:1674] Status update manager successfully handled status update acknowledgement (UUID: 7994ad88-77f5-45a5-91bf-b1f4957fba87) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.566172 5450 slave.cpp:486] Slave terminating I0626 00:04:41.566315 5476 master.cpp:760] Slave 20140626-000439-1032504131-55423-5450-0 at slave(173)@67.195.138.61:55423 (juno.apache.org) disconnected I0626 00:04:41.566337 5476 master.cpp:1602] Disconnecting slave 20140626-000439-1032504131-55423-5450-0 I0626 00:04:41.566411 5473 hierarchical_allocator_process.hpp:483] Slave 20140626-000439-1032504131-55423-5450-0 disconnected I0626 00:04:41.567461 5450 containerizer.cpp:124] Using isolation: posix/cpu,posix/mem I0626 00:04:41.569854 5477 slave.cpp:168] Slave started on 174)@67.195.138.61:55423 I0626 00:04:41.569874 5477 credentials.hpp:35] Loading credentials for authentication from '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/credential' I0626 00:04:41.569941 5477 slave.cpp:268] Slave using credential for: test-principal I0626 00:04:41.570065 5477 slave.cpp:281] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0626 
00:04:41.570139 5477 slave.cpp:326] Slave hostname: juno.apache.org I0626 00:04:41.570148 5477 slave.cpp:327] Slave checkpoint: true I0626 00:04:41.570361 5478 hierarchical_allocator_process.hpp:833] Filtered on slave 20140626-000439-1032504131-55423-5450-0 for framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:41.570382 5478 hierarchical_allocator_process.hpp:686] Performed allocation for 1 slaves in 97062ns I0626 00:04:41.570710 5476 state.cpp:33] Recovering state from '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta' I0626 00:04:41.572727 5475 slave.cpp:3196] Recovering framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.572752 5475 slave.cpp:3572] Recovering executor 'b1f40647-a2ff-475d-a56b-d2a5db9c1229' of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.572877 5475 slave.cpp:3196] Recovering framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:41.572904 5475 slave.cpp:3572] Recovering executor '897522cc-4ec5-4904-aed0-00b6b8c41028' of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:41.573421 5478 status_update_manager.cpp:193] Recovering status update manager I0626 00:04:41.573436 5478 status_update_manager.cpp:201] Recovering executor 'b1f40647-a2ff-475d-a56b-d2a5db9c1229' of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.573470 5478 status_update_manager.cpp:499] Creating StatusUpdate stream for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.573627 5478 status_update_manager.hpp:306] Replaying status update stream for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 I0626 00:04:41.573662 5478 status_update_manager.cpp:201] Recovering executor '897522cc-4ec5-4904-aed0-00b6b8c41028' of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:41.573689 5478 status_update_manager.cpp:499] Creating StatusUpdate stream for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:41.573804 5478 status_update_manager.hpp:306] Replaying status update stream for task 897522cc-4ec5-4904-aed0-00b6b8c41028 I0626 00:04:41.573848 5475 slave.cpp:562] Successfully attached file '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0001/executors/b1f40647-a2ff-475d-a56b-d2a5db9c1229/runs/44b9f0a1-fcf4-4b33-b6dc-2d886304e8b3' I0626 00:04:41.573881 5475 slave.cpp:562] Successfully attached file '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0000/executors/897522cc-4ec5-4904-aed0-00b6b8c41028/runs/9ad3a5ac-3587-47df-96c2-df76ea09328c' I0626 00:04:41.574369 5477 containerizer.cpp:287] Recovering containerizer I0626 00:04:41.574404 5477 containerizer.cpp:329] Recovering container '44b9f0a1-fcf4-4b33-b6dc-2d886304e8b3' for executor 'b1f40647-a2ff-475d-a56b-d2a5db9c1229' of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.574440 5477 containerizer.cpp:329] Recovering container '9ad3a5ac-3587-47df-96c2-df76ea09328c' for executor '897522cc-4ec5-4904-aed0-00b6b8c41028' of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:41.575889 5476 slave.cpp:3069] Sending reconnect request to executor 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 at executor(1)@67.195.138.61:40619 I0626 00:04:41.576014 5476 slave.cpp:3069] 
Sending reconnect request to executor b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 at executor(1)@67.195.138.61:40524 I0626 00:04:41.576128 5479 process.cpp:1098] Socket closed while receiving I0626 00:04:41.576170 7645 exec.cpp:251] Received reconnect request from slave 20140626-000439-1032504131-55423-5450-0 I0626 00:04:41.576202 7653 process.cpp:1037] Socket closed while receiving I0626 00:04:41.576230 5479 process.cpp:1098] Socket closed while receiving I0626 00:04:41.576308 7705 exec.cpp:251] Received reconnect request from slave 20140626-000439-1032504131-55423-5450-0 I0626 00:04:41.576328 7713 process.cpp:1037] Socket closed while receiving I0626 00:04:41.576519 5472 slave.cpp:1913] Re-registering executor 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:41.576658 5479 process.cpp:1098] Socket closed while receiving I0626 00:04:41.576730 7653 process.cpp:1037] Socket closed while receiving I0626 00:04:41.576750 7650 exec.cpp:228] Executor re-registered on slave 20140626-000439-1032504131-55423-5450-0 IRe-registered executor on juno.apache.org 0626 00:04:41.577729 7650 exec.cpp:240] Executor::reregistered took 50146ns I0626 00:04:41.590677 5476 hierarchical_allocator_process.hpp:833] Filtered on slave 20140626-000439-1032504131-55423-5450-0 for framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:41.590695 5475 slave.cpp:2037] Cleaning up un-reregistered executors I0626 00:04:41.590701 5476 hierarchical_allocator_process.hpp:686] Performed allocation for 1 slaves in 56695ns I0626 00:04:41.590706 5475 slave.cpp:2055] Killing un-reregistered executor 'b1f40647-a2ff-475d-a56b-d2a5db9c1229' of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:41.590744 5475 slave.cpp:3128] Finished recovery I0626 00:04:41.590900 5474 containerizer.cpp:903] Destroying container '44b9f0a1-fcf4-4b33-b6dc-2d886304e8b3' I0626 00:04:41.592074 5472 slave.cpp:601] New master detected at master@67.195.138.61:55423 I0626 00:04:41.592099 5472 slave.cpp:677] Authenticating with master master@67.195.138.61:55423 I0626 00:04:41.592154 5472 slave.cpp:650] Detecting new master I0626 00:04:41.592196 5472 status_update_manager.cpp:167] New master detected at master@67.195.138.61:55423 W0626 00:04:41.592607 5477 slave.cpp:1906] Shutting down executor 'b1f40647-a2ff-475d-a56b-d2a5db9c1229' of framework 20140626-000439-1032504131-55423-5450-0001 because the slave is not in recovery mode I0626 00:04:41.592816 5479 process.cpp:1098] Socket closed while receiving I0626 00:04:41.592881 7711 exec.cpp:378] Executor asked to shutdown I0626 00:04:41.592921 7713 process.cpp:1037] Socket closed while receiving I0626 00:04:41.592954 7705 exec.cpp:77] Scheduling shutdown of the executor IShutting down 0626 00:04:41.592994 7711 exec.cpp:393] Executor::shutdown took 49357ns Sending SIGTERM to process tree at pid 7714 I0626 00:04:41.594029 5471 authenticatee.hpp:128] Creating new client SASL connection I0626 00:04:41.594419 5472 master.cpp:3499] Authenticating slave(174)@67.195.138.61:55423 I0626 00:04:41.594646 5476 authenticator.hpp:156] Creating new server SASL connection I0626 00:04:41.594898 5476 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0626 00:04:41.594923 5476 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0626 00:04:41.594960 5476 authenticator.hpp:262] Received SASL authentication start I0626 00:04:41.595002 5476 authenticator.hpp:384] 
Authentication requires more steps I0626 00:04:41.595039 5476 authenticatee.hpp:265] Received SASL authentication step I0626 00:04:41.595095 5476 authenticator.hpp:290] Received SASL authentication step I0626 00:04:41.595115 5476 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'juno.apache.org' server FQDN: 'juno.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0626 00:04:41.595124 5476 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0626 00:04:41.595141 5476 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0626 00:04:41.595155 5476 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'juno.apache.org' server FQDN: 'juno.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0626 00:04:41.595162 5476 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0626 00:04:41.595168 5476 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0626 00:04:41.595181 5476 authenticator.hpp:376] Authentication success I0626 00:04:41.595219 5476 authenticatee.hpp:305] Authentication success I0626 00:04:41.595252 5476 master.cpp:3539] Successfully authenticated principal 'test-principal' at slave(174)@67.195.138.61:55423 I0626 00:04:41.595978 5471 slave.cpp:734] Successfully authenticated with master master@67.195.138.61:55423 I0626 00:04:41.596087 5471 slave.cpp:972] Will retry registration in 5.904051ms if necessary W0626 00:04:41.596179 5476 master.cpp:2896] Slave at slave(174)@67.195.138.61:55423 (juno.apache.org) is being allowed to re-register with an already in use id (20140626-000439-1032504131-55423-5450-0) I0626 00:04:41.596371 5476 slave.cpp:818] Re-registered with master master@67.195.138.61:55423 I0626 00:04:41.596407 5476 slave.cpp:1584] Updating framework 20140626-000439-1032504131-55423-5450-0000 pid to scheduler-e66c50d2-2790-4d20-bc77-a57af0e1780b@67.195.138.61:55423 I0626 00:04:41.596454 5476 slave.cpp:1592] Checkpointing framework pid 'scheduler-e66c50d2-2790-4d20-bc77-a57af0e1780b@67.195.138.61:55423' to '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0000/framework.pid' I0626 00:04:41.596570 5476 slave.cpp:1584] Updating framework 20140626-000439-1032504131-55423-5450-0001 pid to scheduler-bb54dd52-95dc-4ed9-b69c-7a65f1661180@67.195.138.61:55423 I0626 00:04:41.596608 5476 slave.cpp:1592] Checkpointing framework pid 'scheduler-bb54dd52-95dc-4ed9-b69c-7a65f1661180@67.195.138.61:55423' to '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0001/framework.pid' I0626 00:04:41.596710 5476 hierarchical_allocator_process.hpp:497] Slave 20140626-000439-1032504131-55423-5450-0 reconnected I0626 00:04:41.597498 5476 master.cpp:2461] Asked to kill task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:41.597523 5476 master.cpp:2562] Telling slave 20140626-000439-1032504131-55423-5450-0 at slave(174)@67.195.138.61:55423 (juno.apache.org) to kill task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:41.597580 5476 slave.cpp:1279] Asked to kill task 897522cc-4ec5-4904-aed0-00b6b8c41028 of 
framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:41.597724 5479 process.cpp:1098] Socket closed while receiving I0626 00:04:41.597790 7645 exec.cpp:312] Executor asked to kill task '897522cc-4ec5-4904-aed0-00b6b8c41028' I0626 00:04:41.597796 7653 process.cpp:1037] Socket closed while receiving I0626 00:04:41.597843 7645 exec.cpp:321] Executor::killTask took 26639ns Shutting down Sending SIGTERM to process tree at pid 7654 Killing the following process trees: [ -+- 7654 sh -c sleep 1000 \--- 7655 sleep 1000 ] I0626 00:04:41.656000 5479 process.cpp:1037] Socket closed while receiving Command terminated with signal Terminated (pid: 7654) I0626 00:04:42.421964 7649 exec.cpp:524] Executor sending status update TASK_KILLED (UUID: 3bd1b60e-8496-4254-8188-c160b6a7e498) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.422332 5477 slave.cpp:2088] Handling status update TASK_KILLED (UUID: 3bd1b60e-8496-4254-8188-c160b6a7e498) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 from executor(1)@67.195.138.61:40619 I0626 00:04:42.422384 5477 slave.cpp:3770] Terminating task 897522cc-4ec5-4904-aed0-00b6b8c41028 I0626 00:04:42.422912 5472 status_update_manager.cpp:320] Received status update TASK_KILLED (UUID: 3bd1b60e-8496-4254-8188-c160b6a7e498) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.422946 5472 status_update_manager.hpp:342] Checkpointing UPDATE for status update TASK_KILLED (UUID: 3bd1b60e-8496-4254-8188-c160b6a7e498) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.558498 5472 status_update_manager.cpp:373] Forwarding status update TASK_KILLED (UUID: 3bd1b60e-8496-4254-8188-c160b6a7e498) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 to master@67.195.138.61:55423 I0626 00:04:42.558712 5477 slave.cpp:2246] Status update manager successfully handled status update TASK_KILLED (UUID: 3bd1b60e-8496-4254-8188-c160b6a7e498) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.558743 5477 slave.cpp:2252] Sending acknowledgement for status update TASK_KILLED (UUID: 3bd1b60e-8496-4254-8188-c160b6a7e498) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 to executor(1)@67.195.138.61:40619 I0626 00:04:42.558749 5476 master.cpp:3107] Status update TASK_KILLED (UUID: 3bd1b60e-8496-4254-8188-c160b6a7e498) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 from slave 20140626-000439-1032504131-55423-5450-0 at slave(174)@67.195.138.61:55423 (juno.apache.org) I0626 00:04:42.558820 5476 master.hpp:784] Removing task 897522cc-4ec5-4904-aed0-00b6b8c41028 with resources cpus(*):1; mem(*):512 on slave 20140626-000439-1032504131-55423-5450-0 (juno.apache.org) I0626 00:04:42.558917 5478 sched.cpp:637] Scheduler::statusUpdate took 40786ns I0626 00:04:42.559017 5479 process.cpp:1098] Socket closed while receiving I0626 00:04:42.559092 7653 process.cpp:1037] Socket closed while receiving I0626 00:04:42.559100 7650 exec.cpp:338] Executor received status update acknowledgement 3bd1b60e-8496-4254-8188-c160b6a7e498 for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.559386 
5471 master.cpp:2631] Forwarding status update acknowledgement 3bd1b60e-8496-4254-8188-c160b6a7e498 for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 to slave 20140626-000439-1032504131-55423-5450-0 at slave(174)@67.195.138.61:55423 (juno.apache.org) I0626 00:04:42.559453 5474 hierarchical_allocator_process.hpp:635] Recovered cpus(*):1; mem(*):512 (total allocatable: cpus(*):1; mem(*):512) on slave 20140626-000439-1032504131-55423-5450-0 from framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.559494 5471 master.cpp:2461] Asked to kill task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.559516 5471 master.cpp:2562] Telling slave 20140626-000439-1032504131-55423-5450-0 at slave(174)@67.195.138.61:55423 (juno.apache.org) to kill task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.559541 5474 status_update_manager.cpp:398] Received status update acknowledgement (UUID: 3bd1b60e-8496-4254-8188-c160b6a7e498) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.559577 5474 status_update_manager.hpp:342] Checkpointing ACK for status update TASK_KILLED (UUID: 3bd1b60e-8496-4254-8188-c160b6a7e498) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.559608 5472 slave.cpp:1279] Asked to kill task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 W0626 00:04:42.559625 5472 slave.cpp:1364] Ignoring kill task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 because the executor 'b1f40647-a2ff-475d-a56b-d2a5db9c1229' is terminating/terminated I0626 00:04:42.569269 5476 master.cpp:122] No whitelist given. 
Advertising offers for all slaves I0626 00:04:42.591553 5478 containerizer.cpp:1019] Executor for container '44b9f0a1-fcf4-4b33-b6dc-2d886304e8b3' has exited I0626 00:04:42.591665 5477 hierarchical_allocator_process.hpp:833] Filtered cpus(*):1; mem(*):512 on slave 20140626-000439-1032504131-55423-5450-0 for framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.591794 5477 hierarchical_allocator_process.hpp:750] Offering cpus(*):1; mem(*):512 on slave 20140626-000439-1032504131-55423-5450-0 to framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.591970 5477 hierarchical_allocator_process.hpp:686] Performed allocation for 1 slaves in 352174ns I0626 00:04:42.592067 5471 master.hpp:794] Adding offer 20140626-000439-1032504131-55423-5450-2 with resources cpus(*):1; mem(*):512 on slave 20140626-000439-1032504131-55423-5450-0 (juno.apache.org) I0626 00:04:42.592103 5471 master.cpp:3446] Sending 1 offers to framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.592118 5473 slave.cpp:2528] Executor 'b1f40647-a2ff-475d-a56b-d2a5db9c1229' of framework 20140626-000439-1032504131-55423-5450-0001 terminated with signal Killed E0626 00:04:42.592233 5477 slave.cpp:2796] Failed to unmonitor container for executor b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001: Not monitored I0626 00:04:42.592279 5472 sched.cpp:546] Scheduler::resourceOffers took 32048ns I0626 00:04:42.592439 5472 master.hpp:804] Removing offer 20140626-000439-1032504131-55423-5450-2 with resources cpus(*):1; mem(*):512 on slave 20140626-000439-1032504131-55423-5450-0 (juno.apache.org) I0626 00:04:42.592495 5472 master.cpp:2125] Processing reply for offers: [ 20140626-000439-1032504131-55423-5450-2 ] on slave 20140626-000439-1032504131-55423-5450-0 at slave(174)@67.195.138.61:55423 (juno.apache.org) for framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.592707 5475 hierarchical_allocator_process.hpp:546] Framework 20140626-000439-1032504131-55423-5450-0001 left cpus(*):1; mem(*):512 unused on slave 20140626-000439-1032504131-55423-5450-0 I0626 00:04:42.592865 5475 hierarchical_allocator_process.hpp:588] Framework 20140626-000439-1032504131-55423-5450-0001 filtered slave 20140626-000439-1032504131-55423-5450-0 for 5secs I0626 00:04:42.593211 5473 slave.cpp:2088] Handling status update TASK_FAILED (UUID: 69181ee5-c620-4d1a-b5d2-d7cd03a0bc7e) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 from @0.0.0.0:0 I0626 00:04:42.593237 5473 slave.cpp:3770] Terminating task b1f40647-a2ff-475d-a56b-d2a5db9c1229 W0626 00:04:42.593387 5472 containerizer.cpp:809] Ignoring update for unknown container: 44b9f0a1-fcf4-4b33-b6dc-2d886304e8b3 I0626 00:04:42.600702 5474 status_update_manager.cpp:530] Cleaning up status update stream for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.600874 5473 slave.cpp:1674] Status update manager successfully handled status update acknowledgement (UUID: 3bd1b60e-8496-4254-8188-c160b6a7e498) for task 897522cc-4ec5-4904-aed0-00b6b8c41028 of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.600895 5473 slave.cpp:3812] Completing task 897522cc-4ec5-4904-aed0-00b6b8c41028 I0626 00:04:42.600913 5474 status_update_manager.cpp:320] Received status update TASK_FAILED (UUID: 69181ee5-c620-4d1a-b5d2-d7cd03a0bc7e) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 
20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.600939 5474 status_update_manager.hpp:342] Checkpointing UPDATE for status update TASK_FAILED (UUID: 69181ee5-c620-4d1a-b5d2-d7cd03a0bc7e) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.634199 5474 status_update_manager.cpp:373] Forwarding status update TASK_FAILED (UUID: 69181ee5-c620-4d1a-b5d2-d7cd03a0bc7e) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 to master@67.195.138.61:55423 I0626 00:04:42.634354 5475 master.cpp:3107] Status update TASK_FAILED (UUID: 69181ee5-c620-4d1a-b5d2-d7cd03a0bc7e) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 from slave 20140626-000439-1032504131-55423-5450-0 at slave(174)@67.195.138.61:55423 (juno.apache.org) I0626 00:04:42.634373 5477 slave.cpp:2246] Status update manager successfully handled status update TASK_FAILED (UUID: 69181ee5-c620-4d1a-b5d2-d7cd03a0bc7e) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.634428 5473 sched.cpp:637] Scheduler::statusUpdate took 22610ns I0626 00:04:42.634520 5475 master.hpp:784] Removing task b1f40647-a2ff-475d-a56b-d2a5db9c1229 with resources cpus(*):1; mem(*):512; disk(*):1024; ports(*):[31000-32000] on slave 20140626-000439-1032504131-55423-5450-0 (juno.apache.org) ../../src/tests/slave_recovery_tests.cpp:2930: Failure Value of: status2.get().state() Actual: TASK_FAILED Expected: TASK_KILLED I0626 00:04:42.634699 5475 master.cpp:2631] Forwarding status update acknowledgement 69181ee5-c620-4d1a-b5d2-d7cd03a0bc7e for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 to slave 20140626-000439-1032504131-55423-5450-0 at slave(174)@67.195.138.61:55423 (juno.apache.org) I0626 00:04:42.634778 5472 hierarchical_allocator_process.hpp:635] Recovered cpus(*):1; mem(*):512; disk(*):1024; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) on slave 20140626-000439-1032504131-55423-5450-0 from framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.634804 5475 status_update_manager.cpp:398] Received status update acknowledgement (UUID: 69181ee5-c620-4d1a-b5d2-d7cd03a0bc7e) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.634836 5475 status_update_manager.hpp:342] Checkpointing ACK for status update TASK_FAILED (UUID: 69181ee5-c620-4d1a-b5d2-d7cd03a0bc7e) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.634843 5472 master.cpp:710] Framework 20140626-000439-1032504131-55423-5450-0001 disconnected I0626 00:04:42.634857 5472 master.cpp:1577] Deactivating framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.634881 5472 master.cpp:732] Giving framework 20140626-000439-1032504131-55423-5450-0001 0ns to failover I0626 00:04:42.635025 5472 hierarchical_allocator_process.hpp:407] Deactivated framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.635056 5472 master.cpp:3362] Framework failover timeout, removing framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.635066 5472 master.cpp:3821] Removing framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.635154 5472 master.cpp:710] Framework 20140626-000439-1032504131-55423-5450-0000 
disconnected I0626 00:04:42.635167 5472 master.cpp:1577] Deactivating framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.635226 5472 master.cpp:732] Giving framework 20140626-000439-1032504131-55423-5450-0000 0ns to failover I0626 00:04:42.635254 5478 hierarchical_allocator_process.hpp:362] Removed framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.635267 5476 slave.cpp:1407] Asked to shut down framework 20140626-000439-1032504131-55423-5450-0001 by master@67.195.138.61:55423 I0626 00:04:42.635285 5476 slave.cpp:1432] Shutting down framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.635301 5476 slave.cpp:2662] Cleaning up executor 'b1f40647-a2ff-475d-a56b-d2a5db9c1229' of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.635308 5478 hierarchical_allocator_process.hpp:407] Deactivated framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.635340 5478 master.cpp:3362] Framework failover timeout, removing framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.635349 5478 master.cpp:3821] Removing framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.635469 5478 hierarchical_allocator_process.hpp:362] Removed framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.635601 5450 master.cpp:619] Master terminating I0626 00:04:42.635840 5472 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0001/executors/b1f40647-a2ff-475d-a56b-d2a5db9c1229/runs/44b9f0a1-fcf4-4b33-b6dc-2d886304e8b3' for gc 6.99999264157333days in the future I0626 00:04:42.635916 5476 slave.cpp:2737] Cleaning up framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.635916 5472 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0001/executors/b1f40647-a2ff-475d-a56b-d2a5db9c1229' for gc 6.99999264090074days in the future I0626 00:04:42.635960 5472 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0001/executors/b1f40647-a2ff-475d-a56b-d2a5db9c1229/runs/44b9f0a1-fcf4-4b33-b6dc-2d886304e8b3' for gc 6.99999264048593days in the future I0626 00:04:42.636015 5472 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0001/executors/b1f40647-a2ff-475d-a56b-d2a5db9c1229' for gc 6.99999264009185days in the future I0626 00:04:42.636034 5476 slave.cpp:1407] Asked to shut down framework 20140626-000439-1032504131-55423-5450-0000 by master@67.195.138.61:55423 I0626 00:04:42.636049 5476 slave.cpp:1432] Shutting down framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.636061 5476 slave.cpp:2808] Shutting down executor '897522cc-4ec5-4904-aed0-00b6b8c41028' of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:42.636064 5472 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0001' for gc 6.99999263944593days in the future I0626 00:04:42.636107 5472 gc.cpp:56] Scheduling 
'/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0001' for gc 6.9999926390963days in the future I0626 00:04:42.636207 5476 slave.cpp:2332] master@67.195.138.61:55423 exited W0626 00:04:42.636220 5476 slave.cpp:2335] Master disconnected! Waiting for a new master to be elected I0626 00:04:42.636307 5479 process.cpp:1098] Socket closed while receiving I0626 00:04:42.636379 7653 process.cpp:1037] Socket closed while receiving I0626 00:04:42.636382 7648 exec.cpp:378] Executor asked to shutdown I0626 00:04:42.636535 7648 exec.cpp:393] Executor::shutdown took 6684ns I0626 00:04:42.636545 7649 exec.cpp:77] Scheduling shutdown of the executor I0626 00:04:42.637948 5472 containerizer.cpp:903] Destroying container '9ad3a5ac-3587-47df-96c2-df76ea09328c' I0626 00:04:42.672613 5479 process.cpp:1037] Socket closed while receiving I0626 00:04:42.692251 5475 status_update_manager.cpp:530] Cleaning up status update stream for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.692435 5475 status_update_manager.cpp:282] Closing status update streams for framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:42.692450 5471 slave.cpp:1674] Status update manager successfully handled status update acknowledgement (UUID: 69181ee5-c620-4d1a-b5d2-d7cd03a0bc7e) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of framework 20140626-000439-1032504131-55423-5450-0001 E0626 00:04:42.692477 5471 slave.cpp:1685] Status update acknowledgement (UUID: 69181ee5-c620-4d1a-b5d2-d7cd03a0bc7e) for task b1f40647-a2ff-475d-a56b-d2a5db9c1229 of unknown framework 20140626-000439-1032504131-55423-5450-0001 I0626 00:04:43.592118 5473 containerizer.cpp:1019] Executor for container '9ad3a5ac-3587-47df-96c2-df76ea09328c' has exited I0626 00:04:43.592550 5475 slave.cpp:2528] Executor '897522cc-4ec5-4904-aed0-00b6b8c41028' of framework 20140626-000439-1032504131-55423-5450-0000 terminated with signal Killed I0626 00:04:43.592599 5475 slave.cpp:2662] Cleaning up executor '897522cc-4ec5-4904-aed0-00b6b8c41028' of framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:43.592901 5475 slave.cpp:2737] Cleaning up framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:43.592900 5472 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0000/executors/897522cc-4ec5-4904-aed0-00b6b8c41028/runs/9ad3a5ac-3587-47df-96c2-df76ea09328c' for gc 6.99999313928296days in the future I0626 00:04:43.592991 5472 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0000/executors/897522cc-4ec5-4904-aed0-00b6b8c41028' for gc 6.99999313866963days in the future I0626 00:04:43.592985 5471 status_update_manager.cpp:282] Closing status update streams for framework 20140626-000439-1032504131-55423-5450-0000 I0626 00:04:43.593022 5475 slave.cpp:486] Slave terminating I0626 00:04:43.593040 5472 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0000/executors/897522cc-4ec5-4904-aed0-00b6b8c41028/runs/9ad3a5ac-3587-47df-96c2-df76ea09328c' for gc 6.99999313827556days in the future I0626 00:04:43.593086 5472 gc.cpp:56] Scheduling 
'/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0000/executors/897522cc-4ec5-4904-aed0-00b6b8c41028' for gc 6.99999313791704days in the future I0626 00:04:43.593125 5472 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0000' for gc 6.99999313702518days in the future I0626 00:04:43.593166 5472 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_MultipleFrameworks_G6ObtK/meta/slaves/20140626-000439-1032504131-55423-5450-0/frameworks/20140626-000439-1032504131-55423-5450-0000' for gc 6.99999313664296days in the future [ FAILED ] SlaveRecoveryTest/0.MultipleFrameworks, where TypeParam = mesos::internal::slave::MesosContainerizer (4218 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1559","07/02/2014 22:33:49",5,"Allow jenkins build machine to dump stack traces of all threads when timeout ""Many of the time, when jenkins build times out, we know that some test freezes at some place. However, most of the time, it's very hard to reproduce the deadlock on dev machines. I would be cool if we can dump the stack traces of all threads when jenkins build times out. Some command like the following: """," echo thread apply all bt > tmp; gdb attach `pgrep lt-mesos-tests` < tmp ",0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1567","07/08/2014 04:04:29",1,"Add logging of the user uid when receiving SIGTERM. ""We currently do not log the user id when receiving a SIGTERM, this makes debugging a bit difficult. It's easy to get this information through sigaction.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1586","07/14/2014 18:26:47",3,"Isolate system directories, e.g., per-container /tmp ""Ideally, tasks should not write outside their sandbox (executor work directory) but pragmatically they may need to write to /tmp, /var/tmp, or some other directory. 1) We should include any such files in disk usage and quota. 2) We should make these """"shared"""" directories private, i.e., each container has their own. 3) We should make the lifetime of any such files the same as the executor work directory.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1587","07/14/2014 18:28:24",5,"Report disk usage from MesosContainerizer ""We should report disk usage for the executor work directory from MesosContainerizer and include in the ResourceStatistics protobuf.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1592","07/14/2014 22:56:04",5,"Design inverse resource offer support ""An """"inverse"""" resource offer means that Mesos is requesting resources back from the framework, possibly within some time interval. This can be leveraged initially to provide more automated cluster maintenance, by offering schedulers the opportunity to move tasks to compensate for planned maintenance. Operators can set a time limit on how long to wait for schedulers to relocate tasks before the tasks are forcibly terminated. 
Inverse resource offers have many other potential uses, as it opens the opportunity for the allocator to attempt to move tasks in the cluster through the co-operation of the framework, possibly providing better over-subscription, fairness, etc.""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1594","07/15/2014 02:01:28",1,"SlaveRecoveryTest/0.ReconcileKillTask is flaky ""Observed this on Jenkins. """," [ RUN ] SlaveRecoveryTest/0.ReconcileKillTask Using temporary directory '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_3zJ6DG' I0714 15:08:43.915114 27216 leveldb.cpp:176] Opened db in 474.695188ms I0714 15:08:43.933645 27216 leveldb.cpp:183] Compacted db in 18.068942ms I0714 15:08:43.934129 27216 leveldb.cpp:198] Created db iterator in 7860ns I0714 15:08:43.934439 27216 leveldb.cpp:204] Seeked to beginning of db in 2560ns I0714 15:08:43.934779 27216 leveldb.cpp:273] Iterated through 0 keys in the db in 1400ns I0714 15:08:43.935098 27216 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0714 15:08:43.936027 27238 recover.cpp:425] Starting replica recovery I0714 15:08:43.936225 27238 recover.cpp:451] Replica is in EMPTY status I0714 15:08:43.936867 27238 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I0714 15:08:43.937049 27238 recover.cpp:188] Received a recover response from a replica in EMPTY status I0714 15:08:43.937232 27238 recover.cpp:542] Updating replica status to STARTING I0714 15:08:43.945600 27235 master.cpp:288] Master 20140714-150843-16842879-55850-27216 (quantal) started on 127.0.1.1:55850 I0714 15:08:43.945643 27235 master.cpp:325] Master only allowing authenticated frameworks to register I0714 15:08:43.945651 27235 master.cpp:330] Master only allowing authenticated slaves to register I0714 15:08:43.945658 27235 credentials.hpp:36] Loading credentials for authentication from '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_3zJ6DG/credentials' I0714 15:08:43.945808 27235 master.cpp:359] Authorization enabled I0714 15:08:43.946369 27235 hierarchical_allocator_process.hpp:301] Initializing hierarchical allocator process with master : master@127.0.1.1:55850 I0714 15:08:43.946419 27235 master.cpp:122] No whitelist given. Advertising offers for all slaves I0714 15:08:43.946614 27235 master.cpp:1128] The newly elected leader is master@127.0.1.1:55850 with id 20140714-150843-16842879-55850-27216 I0714 15:08:43.946630 27235 master.cpp:1141] Elected as the leading master! 
I0714 15:08:43.946637 27235 master.cpp:959] Recovering from registrar I0714 15:08:43.946707 27235 registrar.cpp:313] Recovering registrar I0714 15:08:43.957895 27238 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 20.529301ms I0714 15:08:43.957978 27238 replica.cpp:320] Persisted replica status to STARTING I0714 15:08:43.958142 27238 recover.cpp:451] Replica is in STARTING status I0714 15:08:43.958664 27238 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I0714 15:08:43.958762 27238 recover.cpp:188] Received a recover response from a replica in STARTING status I0714 15:08:43.958945 27238 recover.cpp:542] Updating replica status to VOTING I0714 15:08:43.975685 27238 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 16.646136ms I0714 15:08:43.976367 27238 replica.cpp:320] Persisted replica status to VOTING I0714 15:08:43.976824 27241 recover.cpp:556] Successfully joined the Paxos group I0714 15:08:43.977072 27242 recover.cpp:440] Recover process terminated I0714 15:08:43.980590 27236 log.cpp:656] Attempting to start the writer I0714 15:08:43.981385 27236 replica.cpp:474] Replica received implicit promise request with proposal 1 I0714 15:08:43.999141 27236 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 17.705787ms I0714 15:08:43.999222 27236 replica.cpp:342] Persisted promised to 1 I0714 15:08:44.004451 27240 coordinator.cpp:230] Coordinator attemping to fill missing position I0714 15:08:44.004914 27240 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I0714 15:08:44.021456 27240 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 16.499775ms I0714 15:08:44.021533 27240 replica.cpp:676] Persisted action at 0 I0714 15:08:44.022006 27240 replica.cpp:508] Replica received write request for position 0 I0714 15:08:44.022043 27240 leveldb.cpp:438] Reading position from leveldb took 21376ns I0714 15:08:44.035969 27240 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 13.885907ms I0714 15:08:44.036365 27240 replica.cpp:676] Persisted action at 0 I0714 15:08:44.040156 27238 replica.cpp:655] Replica received learned notice for position 0 I0714 15:08:44.058082 27238 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 17.860707ms I0714 15:08:44.058161 27238 replica.cpp:676] Persisted action at 0 I0714 15:08:44.058176 27238 replica.cpp:661] Replica learned NOP action at position 0 I0714 15:08:44.058526 27238 log.cpp:672] Writer started with ending position 0 I0714 15:08:44.058872 27238 leveldb.cpp:438] Reading position from leveldb took 25660ns I0714 15:08:44.060556 27238 registrar.cpp:346] Successfully fetched the registry (0B) I0714 15:08:44.060845 27238 registrar.cpp:422] Attempting to update the 'registry' I0714 15:08:44.062304 27238 log.cpp:680] Attempting to append 120 bytes to the log I0714 15:08:44.062866 27236 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0714 15:08:44.063154 27236 replica.cpp:508] Replica received write request for position 1 I0714 15:08:44.082813 27236 leveldb.cpp:343] Persisting action (137 bytes) to leveldb took 19.61683ms I0714 15:08:44.082890 27236 replica.cpp:676] Persisted action at 1 I0714 15:08:44.083256 27236 replica.cpp:655] Replica received learned notice for position 1 I0714 15:08:44.097398 27236 leveldb.cpp:343] Persisting action (139 bytes) to leveldb took 14.104796ms I0714 15:08:44.097475 27236 replica.cpp:676] Persisted action at 1 I0714 15:08:44.097488 27236 replica.cpp:661] 
Replica learned APPEND action at position 1 I0714 15:08:44.098569 27236 registrar.cpp:479] Successfully updated 'registry' I0714 15:08:44.098906 27240 log.cpp:699] Attempting to truncate the log to 1 I0714 15:08:44.099608 27240 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0714 15:08:44.100005 27240 replica.cpp:508] Replica received write request for position 2 I0714 15:08:44.100566 27236 registrar.cpp:372] Successfully recovered registrar I0714 15:08:44.101227 27239 master.cpp:986] Recovered 0 slaves from the Registry (84B) ; allowing 10mins for slaves to re-register I0714 15:08:44.118376 27240 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 18.329495ms I0714 15:08:44.118455 27240 replica.cpp:676] Persisted action at 2 I0714 15:08:44.122258 27242 replica.cpp:655] Replica received learned notice for position 2 I0714 15:08:44.137336 27242 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 15.023553ms I0714 15:08:44.137460 27242 leveldb.cpp:401] Deleting ~1 keys from leveldb took 55049ns I0714 15:08:44.137480 27242 replica.cpp:676] Persisted action at 2 I0714 15:08:44.137492 27242 replica.cpp:661] Replica learned TRUNCATE action at position 2 I0714 15:08:44.143729 27216 containerizer.cpp:124] Using isolation: posix/cpu,posix/mem I0714 15:08:44.145934 27242 slave.cpp:168] Slave started on 43)@127.0.1.1:55850 I0714 15:08:44.145953 27242 credentials.hpp:84] Loading credential for authentication from '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/credential' I0714 15:08:44.146040 27242 slave.cpp:266] Slave using credential for: test-principal I0714 15:08:44.146136 27242 slave.cpp:279] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0714 15:08:44.146198 27242 slave.cpp:324] Slave hostname: quantal I0714 15:08:44.146209 27242 slave.cpp:325] Slave checkpoint: true I0714 15:08:44.146708 27242 state.cpp:33] Recovering state from '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta' I0714 15:08:44.146824 27242 status_update_manager.cpp:193] Recovering status update manager I0714 15:08:44.146901 27242 containerizer.cpp:287] Recovering containerizer I0714 15:08:44.147228 27242 slave.cpp:3126] Finished recovery I0714 15:08:44.147531 27242 slave.cpp:599] New master detected at master@127.0.1.1:55850 I0714 15:08:44.147562 27242 slave.cpp:675] Authenticating with master master@127.0.1.1:55850 I0714 15:08:44.147614 27242 slave.cpp:648] Detecting new master I0714 15:08:44.147652 27242 status_update_manager.cpp:167] New master detected at master@127.0.1.1:55850 I0714 15:08:44.147691 27242 authenticatee.hpp:128] Creating new client SASL connection I0714 15:08:44.148533 27235 master.cpp:3507] Authenticating slave(43)@127.0.1.1:55850 I0714 15:08:44.148666 27235 authenticator.hpp:156] Creating new server SASL connection I0714 15:08:44.149054 27242 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0714 15:08:44.149447 27242 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0714 15:08:44.149917 27236 authenticator.hpp:262] Received SASL authentication start I0714 15:08:44.149974 27236 authenticator.hpp:384] Authentication requires more steps I0714 15:08:44.150208 27242 authenticatee.hpp:265] Received SASL authentication step I0714 15:08:44.150720 27239 authenticator.hpp:290] Received SASL authentication step I0714 15:08:44.150749 27239 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'quantal' server FQDN: 'quantal' 
SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0714 15:08:44.150758 27239 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0714 15:08:44.150771 27239 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0714 15:08:44.150781 27239 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'quantal' server FQDN: 'quantal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0714 15:08:44.150787 27239 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0714 15:08:44.150792 27239 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0714 15:08:44.150804 27239 authenticator.hpp:376] Authentication success I0714 15:08:44.150848 27239 master.cpp:3547] Successfully authenticated principal 'test-principal' at slave(43)@127.0.1.1:55850 I0714 15:08:44.157696 27242 authenticatee.hpp:305] Authentication success I0714 15:08:44.158855 27242 slave.cpp:732] Successfully authenticated with master master@127.0.1.1:55850 I0714 15:08:44.158936 27242 slave.cpp:970] Will retry registration in 10.352612ms if necessary I0714 15:08:44.161813 27216 sched.cpp:139] Version: 0.20.0 I0714 15:08:44.162608 27236 sched.cpp:235] New master detected at master@127.0.1.1:55850 I0714 15:08:44.162637 27236 sched.cpp:285] Authenticating with master master@127.0.1.1:55850 I0714 15:08:44.162747 27236 authenticatee.hpp:128] Creating new client SASL connection I0714 15:08:44.163506 27239 master.cpp:2789] Registering slave at slave(43)@127.0.1.1:55850 (quantal) with id 20140714-150843-16842879-55850-27216-0 I0714 15:08:44.164086 27238 registrar.cpp:422] Attempting to update the 'registry' I0714 15:08:44.165694 27238 log.cpp:680] Attempting to append 295 bytes to the log I0714 15:08:44.166231 27240 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I0714 15:08:44.166517 27240 replica.cpp:508] Replica received write request for position 3 I0714 15:08:44.167199 27239 master.cpp:3507] Authenticating scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850 I0714 15:08:44.167867 27241 authenticator.hpp:156] Creating new server SASL connection I0714 15:08:44.168058 27241 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0714 15:08:44.168081 27241 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0714 15:08:44.168107 27241 authenticator.hpp:262] Received SASL authentication start I0714 15:08:44.168149 27241 authenticator.hpp:384] Authentication requires more steps I0714 15:08:44.168176 27241 authenticatee.hpp:265] Received SASL authentication step I0714 15:08:44.168215 27241 authenticator.hpp:290] Received SASL authentication step I0714 15:08:44.168233 27241 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'quantal' server FQDN: 'quantal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0714 15:08:44.168793 27241 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0714 15:08:44.168820 27241 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0714 15:08:44.168834 27241 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'quantal' server FQDN: 'quantal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0714 15:08:44.168840 27241 
auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0714 15:08:44.168845 27241 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0714 15:08:44.168858 27241 authenticator.hpp:376] Authentication success I0714 15:08:44.168895 27241 authenticatee.hpp:305] Authentication success I0714 15:08:44.168970 27241 sched.cpp:359] Successfully authenticated with master master@127.0.1.1:55850 I0714 15:08:44.168987 27241 sched.cpp:478] Sending registration request to master@127.0.1.1:55850 I0714 15:08:44.169426 27239 master.cpp:1239] Queuing up registration request from scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850 because authentication is still in progress I0714 15:08:44.169958 27239 master.cpp:3547] Successfully authenticated principal 'test-principal' at scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850 I0714 15:08:44.170440 27241 slave.cpp:970] Will retry registration in 8.76707ms if necessary I0714 15:08:44.175359 27239 master.cpp:2777] Ignoring register slave message from slave(43)@127.0.1.1:55850 (quantal) as admission is already in progress I0714 15:08:44.175916 27239 master.cpp:1247] Received registration request from scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850 I0714 15:08:44.176298 27239 master.cpp:1207] Authorizing framework principal 'test-principal' to receive offers for role '*' I0714 15:08:44.176858 27239 master.cpp:1306] Registering framework 20140714-150843-16842879-55850-27216-0000 at scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850 I0714 15:08:44.177408 27236 sched.cpp:409] Framework registered with 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.177443 27236 sched.cpp:423] Scheduler::registered took 12527ns I0714 15:08:44.177727 27241 hierarchical_allocator_process.hpp:331] Added framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.177747 27241 hierarchical_allocator_process.hpp:724] No resources available to allocate! 
I0714 15:08:44.177753 27241 hierarchical_allocator_process.hpp:686] Performed allocation for 0 slaves in 8120ns I0714 15:08:44.179908 27241 slave.cpp:970] Will retry registration in 66.781028ms if necessary I0714 15:08:44.180007 27241 master.cpp:2777] Ignoring register slave message from slave(43)@127.0.1.1:55850 (quantal) as admission is already in progress I0714 15:08:44.183082 27240 leveldb.cpp:343] Persisting action (314 bytes) to leveldb took 16.533189ms I0714 15:08:44.183125 27240 replica.cpp:676] Persisted action at 3 I0714 15:08:44.183465 27240 replica.cpp:655] Replica received learned notice for position 3 I0714 15:08:44.203276 27240 leveldb.cpp:343] Persisting action (316 bytes) to leveldb took 19.768951ms I0714 15:08:44.203376 27240 replica.cpp:676] Persisted action at 3 I0714 15:08:44.203392 27240 replica.cpp:661] Replica learned APPEND action at position 3 I0714 15:08:44.204033 27240 registrar.cpp:479] Successfully updated 'registry' I0714 15:08:44.204138 27240 log.cpp:699] Attempting to truncate the log to 3 I0714 15:08:44.204221 27240 master.cpp:2829] Registered slave 20140714-150843-16842879-55850-27216-0 at slave(43)@127.0.1.1:55850 (quantal) I0714 15:08:44.204241 27240 master.cpp:3975] Adding slave 20140714-150843-16842879-55850-27216-0 at slave(43)@127.0.1.1:55850 (quantal) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0714 15:08:44.204387 27240 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I0714 15:08:44.204489 27240 slave.cpp:766] Registered with master master@127.0.1.1:55850; given slave ID 20140714-150843-16842879-55850-27216-0 I0714 15:08:44.204745 27240 slave.cpp:779] Checkpointing SlaveInfo to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/slave.info' I0714 15:08:44.204954 27240 hierarchical_allocator_process.hpp:444] Added slave 20140714-150843-16842879-55850-27216-0 (quantal) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] available) I0714 15:08:44.205023 27240 hierarchical_allocator_process.hpp:750] Offering cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140714-150843-16842879-55850-27216-0 to framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.205122 27240 hierarchical_allocator_process.hpp:706] Performed allocation for slave 20140714-150843-16842879-55850-27216-0 in 131192ns I0714 15:08:44.205189 27240 slave.cpp:2323] Received ping from slave-observer(32)@127.0.1.1:55850 I0714 15:08:44.205258 27240 master.hpp:801] Adding offer 20140714-150843-16842879-55850-27216-0 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140714-150843-16842879-55850-27216-0 (quantal) I0714 15:08:44.205303 27240 master.cpp:3454] Sending 1 offers to framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.205469 27240 sched.cpp:546] Scheduler::resourceOffers took 23591ns I0714 15:08:44.206351 27241 replica.cpp:508] Replica received write request for position 4 I0714 15:08:44.208353 27237 master.hpp:811] Removing offer 20140714-150843-16842879-55850-27216-0 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140714-150843-16842879-55850-27216-0 (quantal) I0714 15:08:44.208436 27237 master.cpp:2133] Processing reply for offers: [ 20140714-150843-16842879-55850-27216-0 ] on slave 20140714-150843-16842879-55850-27216-0 at slave(43)@127.0.1.1:55850 (quantal) for framework 
20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.208472 27237 master.cpp:2219] Authorizing framework principal 'test-principal' to launch task 4a6783aa-8d07-46e3-8399-2a5d047f0021 as user 'jenkins' I0714 15:08:44.208909 27237 master.hpp:773] Adding task 4a6783aa-8d07-46e3-8399-2a5d047f0021 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140714-150843-16842879-55850-27216-0 (quantal) I0714 15:08:44.208947 27237 master.cpp:2285] Launching task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140714-150843-16842879-55850-27216-0 at slave(43)@127.0.1.1:55850 (quantal) I0714 15:08:44.209090 27237 slave.cpp:1001] Got assigned task 4a6783aa-8d07-46e3-8399-2a5d047f0021 for framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.209190 27237 slave.cpp:3398] Checkpointing FrameworkInfo to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/framework.info' I0714 15:08:44.209413 27237 slave.cpp:3405] Checkpointing framework pid 'scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850' to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/framework.pid' I0714 15:08:44.209710 27237 slave.cpp:1111] Launching task 4a6783aa-8d07-46e3-8399-2a5d047f0021 for framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.210978 27237 slave.cpp:3720] Checkpointing ExecutorInfo to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/executor.info' I0714 15:08:44.211520 27237 slave.cpp:3835] Checkpointing TaskInfo to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/runs/19c466f8-bb5a-4842-a152-f585ff88762a/tasks/4a6783aa-8d07-46e3-8399-2a5d047f0021/task.info' I0714 15:08:44.211714 27237 slave.cpp:1221] Queuing task '4a6783aa-8d07-46e3-8399-2a5d047f0021' for executor 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework '20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.211937 27236 containerizer.cpp:427] Starting container '19c466f8-bb5a-4842-a152-f585ff88762a' for executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework '20140714-150843-16842879-55850-27216-0000' I0714 15:08:44.212242 27236 slave.cpp:560] Successfully attached file '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/runs/19c466f8-bb5a-4842-a152-f585ff88762a' I0714 15:08:44.216187 27236 launcher.cpp:137] Forked child with pid '28451' for container '19c466f8-bb5a-4842-a152-f585ff88762a' I0714 15:08:44.217281 27236 containerizer.cpp:705] Checkpointing executor's forked pid 28451 to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/runs/19c466f8-bb5a-4842-a152-f585ff88762a/pids/forked.pid' I0714 15:08:44.219408 27236 containerizer.cpp:537] Fetching URIs for 
container '19c466f8-bb5a-4842-a152-f585ff88762a' using command '/var/jenkins/workspace/mesos-ubuntu-12.10-gcc/src/mesos-fetcher' I0714 15:08:44.223963 27241 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 17.554461ms I0714 15:08:44.224501 27241 replica.cpp:676] Persisted action at 4 I0714 15:08:44.225051 27241 replica.cpp:655] Replica received learned notice for position 4 I0714 15:08:44.242923 27241 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 17.806547ms I0714 15:08:44.243057 27241 leveldb.cpp:401] Deleting ~2 keys from leveldb took 57154ns I0714 15:08:44.243078 27241 replica.cpp:676] Persisted action at 4 I0714 15:08:44.243096 27241 replica.cpp:661] Replica learned TRUNCATE action at position 4 I0714 15:08:44.401140 27241 slave.cpp:2468] Monitoring executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework '20140714-150843-16842879-55850-27216-0000' in container '19c466f8-bb5a-4842-a152-f585ff88762a' WARNING: Logging before InitGoogleLogging() is written to STDERR I0714 15:08:44.434221 28486 process.cpp:1671] libprocess is initialized on 127.0.1.1:34669 for 8 cpus I0714 15:08:44.436146 28486 exec.cpp:131] Version: 0.20.0 I0714 15:08:44.438555 28500 exec.cpp:181] Executor started at: executor(1)@127.0.1.1:34669 with pid 28486 I0714 15:08:44.440846 27241 slave.cpp:1732] Got registration for executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.440917 27241 slave.cpp:1817] Checkpointing executor pid 'executor(1)@127.0.1.1:34669' to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/runs/19c466f8-bb5a-4842-a152-f585ff88762a/pids/libprocess.pid' I0714 15:08:44.442373 27243 process.cpp:1098] Socket closed while receiving I0714 15:08:44.442790 27241 slave.cpp:1851] Flushing queued task 4a6783aa-8d07-46e3-8399-2a5d047f0021 for executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.443192 27243 process.cpp:1098] Socket closed while receiving I0714 15:08:44.443994 28508 process.cpp:1037] Socket closed while receiving I0714 15:08:44.444144 28508 process.cpp:1037] Socket closed while receiving I0714 15:08:44.444741 28500 exec.cpp:205] Executor registered on slave 20140714-150843-16842879-55850-27216-0 Registered executor on quantal I0714 15:08:44.446338 28500 exec.cpp:217] Executor::registered took 534236ns I0714 15:08:44.446715 28500 exec.cpp:292] Executor asked to run task '4a6783aa-8d07-46e3-8399-2a5d047f0021' Starting task 4a6783aa-8d07-46e3-8399-2a5d047f0021 I0714 15:08:44.447548 28500 exec.cpp:301] Executor::launchTask took 584306ns sh -c 'sleep 1000' Forked command at 28509 I0714 15:08:44.451202 28506 exec.cpp:524] Executor sending status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.452327 27239 slave.cpp:2086] Handling status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 from executor(1)@127.0.1.1:34669 I0714 15:08:44.452503 27239 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 
20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.452520 27239 status_update_manager.cpp:499] Creating StatusUpdate stream for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.452775 27239 status_update_manager.hpp:342] Checkpointing UPDATE for status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.472384 27239 status_update_manager.cpp:373] Forwarding status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 to master@127.0.1.1:55850 I0714 15:08:44.472764 27237 master.cpp:3115] Status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 from slave 20140714-150843-16842879-55850-27216-0 at slave(43)@127.0.1.1:55850 (quantal) I0714 15:08:44.472854 27237 sched.cpp:637] Scheduler::statusUpdate took 17656ns I0714 15:08:44.472920 27237 master.cpp:2639] Forwarding status update acknowledgement 323fc20a-b5b8-475d-8752-b1f853797f55 for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 to slave 20140714-150843-16842879-55850-27216-0 at slave(43)@127.0.1.1:55850 (quantal) I0714 15:08:44.473122 27239 status_update_manager.cpp:398] Received status update acknowledgement (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.473146 27239 status_update_manager.hpp:342] Checkpointing ACK for status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.473244 27237 slave.cpp:2244] Status update manager successfully handled status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.473258 27237 slave.cpp:2250] Sending acknowledgement for status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 to executor(1)@127.0.1.1:34669 I0714 15:08:44.473567 27243 process.cpp:1098] Socket closed while receiving I0714 15:08:44.474095 28508 process.cpp:1037] Socket closed while receiving I0714 15:08:44.474676 28502 exec.cpp:338] Executor received status update acknowledgement 323fc20a-b5b8-475d-8752-b1f853797f55 for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.491111 27239 slave.cpp:1672] Status update manager successfully handled status update acknowledgement (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.491761 27216 slave.cpp:484] Slave terminating I0714 15:08:44.492559 27216 containerizer.cpp:124] Using isolation: posix/cpu,posix/mem I0714 15:08:44.494635 27237 master.cpp:766] Slave 20140714-150843-16842879-55850-27216-0 at slave(43)@127.0.1.1:55850 (quantal) disconnected I0714 15:08:44.494663 27237 master.cpp:1608] Disconnecting slave 20140714-150843-16842879-55850-27216-0 I0714 15:08:44.495120 27237 
slave.cpp:168] Slave started on 44)@127.0.1.1:55850 I0714 15:08:44.495133 27237 credentials.hpp:84] Loading credential for authentication from '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/credential' I0714 15:08:44.495226 27237 slave.cpp:266] Slave using credential for: test-principal I0714 15:08:44.495322 27237 slave.cpp:279] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0714 15:08:44.495407 27237 slave.cpp:324] Slave hostname: quantal I0714 15:08:44.495419 27237 slave.cpp:325] Slave checkpoint: true I0714 15:08:44.495939 27242 master.cpp:2469] Asked to kill task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.496207 27238 state.cpp:33] Recovering state from '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta' I0714 15:08:44.498291 27240 slave.cpp:3194] Recovering framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.498325 27240 slave.cpp:3570] Recovering executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.498940 27240 status_update_manager.cpp:193] Recovering status update manager I0714 15:08:44.498956 27240 status_update_manager.cpp:201] Recovering executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.498975 27240 status_update_manager.cpp:499] Creating StatusUpdate stream for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.499092 27240 status_update_manager.hpp:306] Replaying status update stream for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 I0714 15:08:44.499241 27240 slave.cpp:560] Successfully attached file '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/runs/19c466f8-bb5a-4842-a152-f585ff88762a' I0714 15:08:44.499433 27240 containerizer.cpp:287] Recovering containerizer I0714 15:08:44.499457 27240 containerizer.cpp:329] Recovering container '19c466f8-bb5a-4842-a152-f585ff88762a' for executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.495811 27237 hierarchical_allocator_process.hpp:483] Slave 20140714-150843-16842879-55850-27216-0 disconnected I0714 15:08:44.501255 27240 slave.cpp:3067] Sending reconnect request to executor 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 at executor(1)@127.0.1.1:34669 I0714 15:08:44.502030 28501 exec.cpp:251] Received reconnect request from slave 20140714-150843-16842879-55850-27216-0 I0714 15:08:44.502627 27243 process.cpp:1098] Socket closed while receiving I0714 15:08:44.502681 28508 process.cpp:1037] Socket closed while receiving I0714 15:08:44.503211 27240 slave.cpp:1911] Re-registering executor 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:44.504238 28501 exec.cpp:228] Executor re-registered on slave 20140714-150843-16842879-55850-27216-0 I0714 15:08:44.505033 28501 exec.cpp:240] Executor::reregistered took 45053ns Re-registered executor on quantal I0714 15:08:44.505507 27243 process.cpp:1098] Socket closed while receiving I0714 15:08:44.505558 28508 process.cpp:1037] Socket closed while receiving I0714 15:08:44.948043 27241 hierarchical_allocator_process.hpp:686] Performed allocation for 1 slaves in 124255ns I0714 
15:08:45.948671 27237 hierarchical_allocator_process.hpp:686] Performed allocation for 1 slaves in 61521ns I0714 15:08:46.503978 27238 slave.cpp:2035] Cleaning up un-reregistered executors I0714 15:08:46.504050 27238 slave.cpp:3126] Finished recovery I0714 15:08:46.504590 27238 slave.cpp:599] New master detected at master@127.0.1.1:55850 I0714 15:08:46.504639 27238 slave.cpp:675] Authenticating with master master@127.0.1.1:55850 I0714 15:08:46.504729 27238 slave.cpp:648] Detecting new master I0714 15:08:46.504772 27238 status_update_manager.cpp:167] New master detected at master@127.0.1.1:55850 I0714 15:08:46.504863 27238 authenticatee.hpp:128] Creating new client SASL connection I0714 15:08:46.505091 27238 master.cpp:3507] Authenticating slave(44)@127.0.1.1:55850 I0714 15:08:46.505239 27238 authenticator.hpp:156] Creating new server SASL connection I0714 15:08:46.505363 27238 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0714 15:08:46.505393 27238 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0714 15:08:46.505420 27238 authenticator.hpp:262] Received SASL authentication start I0714 15:08:46.505476 27238 authenticator.hpp:384] Authentication requires more steps I0714 15:08:46.505506 27238 authenticatee.hpp:265] Received SASL authentication step I0714 15:08:46.505542 27238 authenticator.hpp:290] Received SASL authentication step I0714 15:08:46.505558 27238 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'quantal' server FQDN: 'quantal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0714 15:08:46.505566 27238 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0714 15:08:46.505584 27238 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0714 15:08:46.505595 27238 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'quantal' server FQDN: 'quantal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0714 15:08:46.505601 27238 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0714 15:08:46.505606 27238 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0714 15:08:46.505616 27238 authenticator.hpp:376] Authentication success I0714 15:08:46.505646 27238 authenticatee.hpp:305] Authentication success I0714 15:08:46.505671 27238 master.cpp:3547] Successfully authenticated principal 'test-principal' at slave(44)@127.0.1.1:55850 I0714 15:08:46.505769 27238 slave.cpp:732] Successfully authenticated with master master@127.0.1.1:55850 I0714 15:08:46.505873 27238 slave.cpp:970] Will retry registration in 17.903094ms if necessary W0714 15:08:46.505991 27238 master.cpp:2904] Slave at slave(44)@127.0.1.1:55850 (quantal) is being allowed to re-register with an already in use id (20140714-150843-16842879-55850-27216-0) W0714 15:08:46.506063 27238 master.cpp:3679] Slave 20140714-150843-16842879-55850-27216-0 at slave(44)@127.0.1.1:55850 (quantal) has non-terminal task 4a6783aa-8d07-46e3-8399-2a5d047f0021 that is supposed to be killed. Killing it now! 
I0714 15:08:46.506150 27238 slave.cpp:816] Re-registered with master master@127.0.1.1:55850 I0714 15:08:46.506186 27238 slave.cpp:1277] Asked to kill task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:46.507275 27241 hierarchical_allocator_process.hpp:497] Slave 20140714-150843-16842879-55850-27216-0 reconnected I0714 15:08:46.508061 28504 exec.cpp:312] Executor asked to kill task '4a6783aa-8d07-46e3-8399-2a5d047f0021' I0714 15:08:46.508117 28504 exec.cpp:321] Executor::killTask took 24954ns Shutting down Sending SIGTERM to process tree at pid 28509 I0714 15:08:46.512238 27243 process.cpp:1098] Socket closed while receiving I0714 15:08:46.512508 27238 slave.cpp:1582] Updating framework 20140714-150843-16842879-55850-27216-0000 pid to scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850 I0714 15:08:46.512846 27238 slave.cpp:1590] Checkpointing framework pid 'scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850' to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/framework.pid' I0714 15:08:46.513419 28508 process.cpp:1037] Socket closed while receiving Killing the following process trees: [ -+- 28509 sh -c sleep 1000 \--- 28510 sleep 1000 ] Command terminated with signal Terminated (pid: 28509) I0714 15:08:46.940232 28506 exec.cpp:524] Executor sending status update TASK_KILLED (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:46.940918 27240 slave.cpp:2086] Handling status update TASK_KILLED (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 from executor(1)@127.0.1.1:34669 I0714 15:08:46.940979 27240 slave.cpp:3768] Terminating task 4a6783aa-8d07-46e3-8399-2a5d047f0021 I0714 15:08:46.941603 27240 status_update_manager.cpp:320] Received status update TASK_KILLED (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:46.941644 27240 status_update_manager.hpp:342] Checkpointing UPDATE for status update TASK_KILLED (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:46.949417 27236 hierarchical_allocator_process.hpp:686] Performed allocation for 1 slaves in 63926ns I0714 15:08:46.965200 27240 status_update_manager.cpp:373] Forwarding status update TASK_KILLED (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 to master@127.0.1.1:55850 I0714 15:08:46.965625 27239 master.cpp:3115] Status update TASK_KILLED (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 from slave 20140714-150843-16842879-55850-27216-0 at slave(44)@127.0.1.1:55850 (quantal) I0714 15:08:46.965724 27239 master.hpp:791] Removing task 4a6783aa-8d07-46e3-8399-2a5d047f0021 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140714-150843-16842879-55850-27216-0 (quantal) I0714 15:08:46.965903 27239 sched.cpp:637] Scheduler::statusUpdate took 39326ns I0714 15:08:46.966022 27239 hierarchical_allocator_process.hpp:635] Recovered 
cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) on slave 20140714-150843-16842879-55850-27216-0 from framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:46.966120 27239 master.cpp:2639] Forwarding status update acknowledgement e3a5f8fd-eefc-42c6-94a7-086c93c01968 for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 to slave 20140714-150843-16842879-55850-27216-0 at slave(44)@127.0.1.1:55850 (quantal) I0714 15:08:46.966501 27241 slave.cpp:2244] Status update manager successfully handled status update TASK_KILLED (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:46.966519 27241 slave.cpp:2250] Sending acknowledgement for status update TASK_KILLED (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 to executor(1)@127.0.1.1:34669 I0714 15:08:46.966754 27240 status_update_manager.cpp:398] Received status update acknowledgement (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:46.966785 27240 status_update_manager.hpp:342] Checkpointing ACK for status update TASK_KILLED (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:46.967386 28500 exec.cpp:338] Executor received status update acknowledgement e3a5f8fd-eefc-42c6-94a7-086c93c01968 for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:46.967562 27243 process.cpp:1098] Socket closed while receiving I0714 15:08:46.968147 28508 process.cpp:1037] Socket closed while receiving I0714 15:08:46.984608 27240 status_update_manager.cpp:530] Cleaning up status update stream for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:46.985239 27236 slave.cpp:1672] Status update manager successfully handled status update acknowledgement (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:46.985280 27236 slave.cpp:3810] Completing task 4a6783aa-8d07-46e3-8399-2a5d047f0021 I0714 15:08:47.940703 27243 process.cpp:1037] Socket closed while receiving I0714 15:08:47.940984 27238 containerizer.cpp:1019] Executor for container '19c466f8-bb5a-4842-a152-f585ff88762a' has exited I0714 15:08:47.941007 27238 containerizer.cpp:903] Destroying container '19c466f8-bb5a-4842-a152-f585ff88762a' I0714 15:08:47.950192 27241 hierarchical_allocator_process.hpp:750] Offering cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140714-150843-16842879-55850-27216-0 to framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:47.950405 27241 hierarchical_allocator_process.hpp:686] Performed allocation for 1 slaves in 320604ns I0714 15:08:47.950518 27241 master.hpp:801] Adding offer 20140714-150843-16842879-55850-27216-1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140714-150843-16842879-55850-27216-0 (quantal) I0714 15:08:47.950572 27241 master.cpp:3454] Sending 1 offers to framework 20140714-150843-16842879-55850-27216-0000 I0714 
15:08:47.950774 27241 sched.cpp:546] Scheduler::resourceOffers took 37944ns I0714 15:08:47.951179 27216 master.cpp:625] Master terminating I0714 15:08:47.951263 27216 master.hpp:811] Removing offer 20140714-150843-16842879-55850-27216-1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140714-150843-16842879-55850-27216-0 (quantal) I0714 15:08:47.953447 27242 sched.cpp:747] Stopping framework '20140714-150843-16842879-55850-27216-0000' I0714 15:08:47.953547 27242 slave.cpp:2330] master@127.0.1.1:55850 exited W0714 15:08:47.953567 27242 slave.cpp:2333] Master disconnected! Waiting for a new master to be elected I0714 15:08:47.964512 27238 slave.cpp:2526] Executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework 20140714-150843-16842879-55850-27216-0000 exited with status 0 I0714 15:08:47.968690 27238 slave.cpp:2660] Cleaning up executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:47.969348 27236 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/runs/19c466f8-bb5a-4842-a152-f585ff88762a' for gc 6.99998878298667days in the future I0714 15:08:47.969751 27241 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021' for gc 6.99998877682963days in the future I0714 15:08:47.970082 27239 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/runs/19c466f8-bb5a-4842-a152-f585ff88762a' for gc 6.99998877336889days in the future I0714 15:08:47.970379 27242 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021' for gc 6.99998876968889days in the future I0714 15:08:47.970587 27238 slave.cpp:2735] Cleaning up framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:47.970960 27237 status_update_manager.cpp:282] Closing status update streams for framework 20140714-150843-16842879-55850-27216-0000 I0714 15:08:47.971225 27236 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000' for gc 6.99998875966519days in the future I0714 15:08:47.971549 27241 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000' for gc 6.99998875612148days in the future W0714 15:08:47.975971 27235 containerizer.cpp:893] Ignoring destroy of unknown container: 19c466f8-bb5a-4842-a152-f585ff88762a ./tests/cluster.hpp:530: Failure (wait).failure(): Unknown container: 19c466f8-bb5a-4842-a152-f585ff88762a # # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x00000000005a0299, pid=27216, tid=47907931709760 # # JRE version: OpenJDK Runtime Environment (7.0_55-b14) (build 1.7.0_55-b14) # Java VM: OpenJDK 64-Bit Server VM (24.51-b03 mixed mode linux-amd64 compressed oops) # Problematic frame: # C 
[lt-mesos-tests+0x1a0299] mlock@@GLIBC_2.2.5+0x1a0299 # # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try """"ulimit -c unlimited"""" before starting Java again # # An error report file with more information is saved as: # /var/jenkins/workspace/mesos-ubuntu-12.10-gcc/src/hs_err_pid27216.log # # If you would like to submit a bug report, please include # instructions on how to reproduce the bug and visit: # http://icedtea.classpath.org/bugzilla # The crash happened outside the Java Virtual Machine in native code. # See problematic frame for where to report the bug. # make[3]: *** [check-local] Aborted make[3]: Leaving directory `/var/jenkins/workspace/mesos-ubuntu-12.10-gcc/src' make[2]: *** [check-am] Error 2 make[2]: Leaving directory `/var/jenkins/workspace/mesos-ubuntu-12.10-gcc/src' make[1]: *** [check] Error 2 make[1]: Leaving directory `/var/jenkins/workspace/mesos-ubuntu-12.10-gcc/src' make: *** [check-recursive] Error 1 Build step 'Execute shell' marked build as failure erreicht: 1854109 Sending e-mails to: kernel-test@twitter.com apache-mesos@twitter.com Finished: FAILURE Help us localize this page Page generated: Jul 14, 2014 5:57:17 PMREST ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1624","07/22/2014 05:54:25",1,"Apache Jenkins build fails due to -lsnappy is set when building leveldb ""The failed build: https://builds.apache.org/job/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/2261/consoleFull {noformat:title=the log where -lsnappy is used when compiling leveldb} gzip -d -c ../../3rdparty/leveldb.tar.gz | tar xf - test ! -e ../../3rdparty/leveldb.patch || patch -d leveldb -p1 <../../3rdparty/leveldb.patch touch leveldb-stamp cd leveldb && \ make CC=""""gcc"""" CXX=""""g++"""" OPT=""""-g -g2 -O2 -Wno-unused-local-typedefs -std=c++11 -fPIC"""" make[5]: Entering directory `/home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/build/mesos-0.20.0/_build/3rdparty/leveldb' g++ -pthread -lsnappy -shared -Wl,-soname -Wl,/home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/build/mesos-0.20.0/_build/3rdparty/leveldb/libleveldb.so.1 -I. -I./include -fno-builtin-memcmp -pthread -DOS_LINUX -DLEVELDB_PLATFORM_POSIX -DSNAPPY -g -g2 -O2 -Wno-unused-local-typedefs -std=c++11 -fPIC -fPIC db/builder.cc db/c.cc db/db_impl.cc db/db_iter.cc db/dbformat.cc db/filename.cc db/log_reader.cc db/log_writer.cc db/memtable.cc db/repair.cc db/table_cache.cc db/version_edit.cc db/version_set.cc db/write_batch.cc table/block.cc table/block_builder.cc table/filter_block.cc table/format.cc table/iterator.cc table/merger.cc table/table.cc table/table_builder.cc table/two_level_iterator.cc util/arena.cc util/bloom.cc util/cache.cc util/coding.cc util/comparator.cc util/crc32c.cc util/env.cc util/env_posix.cc util/filter_policy.cc util/hash.cc util/histogram.cc util/logging.cc util/options.cc util/status.cc port/port_posix.cc -o libleveldb.so.1.4 ln -fs libleveldb.so.1.4 libleveldb.so ln -fs libleveldb.so.1.4 libleveldb.so.1 g++ -I. -I./include -fno-builtin-memcmp -pthread -DOS_LINUX -DLEVELDB_PLATFORM_POSIX -DSNAPPY -g -g2 -O2 -Wno-unused-local-typedefs -std=c++11 -fPIC -c db/builder.cc -o db/builder.o """," gzip -d -c ../../3rdparty/leveldb.tar.gz | tar xf - test ! 
-e ../../3rdparty/leveldb.patch || patch -d leveldb -p1 <../../3rdparty/leveldb.patch touch leveldb-stamp cd leveldb && \ make CC=""""gcc"""" CXX=""""g++"""" OPT=""""-g -g2 -O2 -Wno-unused-local-typedefs -std=c++11 -fPIC"""" make[5]: Entering directory `/home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/build/mesos-0.20.0/_build/3rdparty/leveldb' g++ -pthread -lsnappy -shared -Wl,-soname -Wl,/home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/build/mesos-0.20.0/_build/3rdparty/leveldb/libleveldb.so.1 -I. -I./include -fno-builtin-memcmp -pthread -DOS_LINUX -DLEVELDB_PLATFORM_POSIX -DSNAPPY -g -g2 -O2 -Wno-unused-local-typedefs -std=c++11 -fPIC -fPIC db/builder.cc db/c.cc db/db_impl.cc db/db_iter.cc db/dbformat.cc db/filename.cc db/log_reader.cc db/log_writer.cc db/memtable.cc db/repair.cc db/table_cache.cc db/version_edit.cc db/version_set.cc db/write_batch.cc table/block.cc table/block_builder.cc table/filter_block.cc table/format.cc table/iterator.cc table/merger.cc table/table.cc table/table_builder.cc table/two_level_iterator.cc util/arena.cc util/bloom.cc util/cache.cc util/coding.cc util/comparator.cc util/crc32c.cc util/env.cc util/env_posix.cc util/filter_policy.cc util/hash.cc util/histogram.cc util/logging.cc util/options.cc util/status.cc port/port_posix.cc -o libleveldb.so.1.4 ln -fs libleveldb.so.1.4 libleveldb.so ln -fs libleveldb.so.1.4 libleveldb.so.1 g++ -I. -I./include -fno-builtin-memcmp -pthread -DOS_LINUX -DLEVELDB_PLATFORM_POSIX -DSNAPPY -g -g2 -O2 -Wno-unused-local-typedefs -std=c++11 -fPIC -c db/builder.cc -o db/builder.o /bin/bash ../libtool --tag=CXX --mode=link g++ -pthread -g -g2 -O2 -Wno-unused-local-typedefs -std=c++11 -o mesos-local local/mesos_local-main.o libmesos.la -lsasl2 -lcurl -lz -lrt libtool: link: g++ -pthread -g -g2 -O2 -Wno-unused-local-typedefs -std=c++11 -o .libs/mesos-local local/mesos_local-main.o ./.libs/libmesos.so -lsasl2 /usr/lib/x86_64-linux-gnu/libcurl.so -lz -lrt -pthread -Wl,-rpath -Wl,/home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/build/mesos-0.20.0/_inst/lib ./.libs/libmesos.so: undefined reference to `snappy::RawCompress(char const*, unsigned long, char*, unsigned long*)' ./.libs/libmesos.so: undefined reference to `snappy::RawUncompress(char const*, unsigned long, char*)' ./.libs/libmesos.so: undefined reference to `snappy::GetUncompressedLength(char const*, unsigned long, unsigned long*)' ./.libs/libmesos.so: undefined reference to `snappy::MaxCompressedLength(unsigned long)' ",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1668","08/04/2014 23:04:18",2,"Handle a temporary one-way master --> slave socket closure. ""In MESOS-1529, we realized that it's possible for a slave to remain disconnected in the master if the following occurs: → Master and Slave connected operating normally. → Temporary one-way network failure, master→slave link breaks. → Master marks slave as disconnected. → Network restored and health checking continues normally, slave is not removed as a result. Slave does not attempt to re-register since it is receiving pings once again. → Slave remains disconnected according to the master, and the slave does not try to re-register. Bad! We were originally thinking of using a failover timeout in the master to remove these slaves that don't re-register. 
However, it can be dangerous when ZooKeeper issues are preventing the slave from re-registering with the master; we do not want to remove a ton of slaves in this situation. Rather, when the slave is health checking correctly but does not re-register within a timeout, we could send a registration request from the master to the slave, telling the slave that it must re-register. This message could also be used when receiving status updates (or other messages) from slaves that are disconnected in the master.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1676","08/06/2014 19:21:32",1,"ZooKeeperMasterContenderDetectorTest.MasterDetectorTimedoutSession is flaky ""{noformat:title=} [ RUN ] ZooKeeperMasterContenderDetectorTest.MasterDetectorTimedoutSession I0806 01:18:37.648684 17458 zookeeper_test_server.cpp:158] Started ZooKeeperTestServer on port 42069 2014-08-06 01:18:37,650:17458(0x2b4679ca5700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-08-06 01:18:37,650:17458(0x2b4679ca5700):ZOO_INFO@log_env@716: Client environment:host.name=lucid 2014-08-06 01:18:37,650:17458(0x2b4679ca5700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-08-06 01:18:37,650:17458(0x2b4679ca5700):ZOO_INFO@log_env@724: Client environment:os.arch=2.6.32-64-generic 2014-08-06 01:18:37,650:17458(0x2b4679ca5700):ZOO_INFO@log_env@725: Client environment:os.version=#128-Ubuntu SMP Tue Jul 15 08:32:40 UTC 2014 2014-08-06 01:18:37,651:17458(0x2b4679ca5700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2014-08-06 01:18:37,651:17458(0x2b4679ca5700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins 2014-08-06 01:18:37,651:17458(0x2b4679ca5700):ZOO_INFO@log_env@753: Client environment:user.dir=/var/jenkins/workspace/mesos-ubuntu-10.04-gcc/src 2014-08-06 01:18:37,651:17458(0x2b4679ca5700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:42069 sessionTimeout=5000 watcher=0x2b467450bc00 sessionId=0 sessionPasswd= context=0x1682db0 flags=0 2014-08-06 01:18:37,656:17458(0x2b468638b700):ZOO_INFO@check_events@1703: initiated connection to server [127.0.0.1:42069] 2014-08-06 01:18:37,669:17458(0x2b468638b700):ZOO_INFO@check_events@1750: session establishment complete on server [127.0.0.1:42069], sessionId=0x147aa6601cf0000, negotiated timeout=6000 I0806 01:18:37.671725 17486 group.cpp:313] Group process (group(37)@127.0.1.1:55561) connected to ZooKeeper I0806 01:18:37.671758 17486 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0) I0806 01:18:37.671771 17486 group.cpp:385] Trying to create path '/mesos' in ZooKeeper 2014-08-06 01:18:39,101:17458(0x2b4687394700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:36197] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:42,441:17458(0x2b4687394700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:36197] zk retcode=-4, errno=111(Connection refused): server refused to accept the client I0806 01:18:42.656673 17481 contender.cpp:131] Joining the ZK group I0806 01:18:42.662484 17484 contender.cpp:247] New candidate (id='0') has entered the contest for leadership I0806 01:18:42.663754 17481 detector.cpp:138] Detected a new leader: (id='0') I0806 01:18:42.663884 17481 group.cpp:658] Trying to get '/mesos/info_0000000000' in ZooKeeper I0806 01:18:42.664788 17483 detector.cpp:426] A new leading master (UPID=@128.150.152.0:10000) is 
detected 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@716: Client environment:host.name=lucid 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@724: Client environment:os.arch=2.6.32-64-generic 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@725: Client environment:os.version=#128-Ubuntu SMP Tue Jul 15 08:32:40 UTC 2014 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@753: Client environment:user.dir=/var/jenkins/workspace/mesos-ubuntu-10.04-gcc/src 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:42069 sessionTimeout=5000 watcher=0x2b467450bc00 sessionId=0 sessionPasswd= context=0x15c00f0 flags=0 2014-08-06 01:18:42,668:17458(0x2b4686d91700):ZOO_INFO@check_events@1703: initiated connection to server [127.0.0.1:42069] 2014-08-06 01:18:42,672:17458(0x2b4686d91700):ZOO_INFO@check_events@1750: session establishment complete on server [127.0.0.1:42069], sessionId=0x147aa6601cf0001, negotiated timeout=6000 I0806 01:18:42.673542 17485 group.cpp:313] Group process (group(38)@127.0.1.1:55561) connected to ZooKeeper I0806 01:18:42.673570 17485 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0) I0806 01:18:42.673580 17485 group.cpp:385] Trying to create path '/mesos' in ZooKeeper 2014-08-06 01:18:46,796:17458(0x2b468638b700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 2131ms 2014-08-06 01:18:46,796:17458(0x2b468638b700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:42069] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:42069 timed out (exceeded timeout by 131ms) 2014-08-06 01:18:46,796:17458(0x2b468638b700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 2131ms 2014-08-06 01:18:46,796:17458(0x2b4686d91700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 2115ms 2014-08-06 01:18:46,796:17458(0x2b4686d91700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:42069] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:42069 timed out (exceeded timeout by 115ms) 2014-08-06 01:18:46,796:17458(0x2b4686d91700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 2115ms 2014-08-06 01:18:46,799:17458(0x2b4687394700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 1025ms 2014-08-06 01:18:46,800:17458(0x2b4687394700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:36197] zk retcode=-4, errno=111(Connection refused): server refused to accept the client I0806 01:18:46.806895 17486 group.cpp:418] Lost connection to ZooKeeper, attempting to reconnect ... I0806 01:18:46.807857 17479 group.cpp:418] Lost connection to ZooKeeper, attempting to reconnect ... 
I0806 01:18:47.669064 17482 contender.cpp:131] Joining the ZK group 2014-08-06 01:18:47,669:17458(0x2b4686d91700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 2989ms 2014-08-06 01:18:47,669:17458(0x2b4686d91700):ZOO_INFO@check_events@1703: initiated connection to server [127.0.0.1:42069] 2014-08-06 01:18:47,671:17458(0x2b4686d91700):ZOO_INFO@check_events@1750: session establishment complete on server [127.0.0.1:42069], sessionId=0x147aa6601cf0001, negotiated timeout=6000 I0806 01:18:47.682868 17485 contender.cpp:247] New candidate (id='1') has entered the contest for leadership I0806 01:18:47.683404 17482 group.cpp:313] Group process (group(38)@127.0.1.1:55561) reconnected to ZooKeeper I0806 01:18:47.683445 17482 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0) I0806 01:18:47.685998 17482 detector.cpp:138] Detected a new leader: (id='0') I0806 01:18:47.686142 17482 group.cpp:658] Trying to get '/mesos/info_0000000000' in ZooKeeper I0806 01:18:47.687289 17479 detector.cpp:426] A new leading master (UPID=@128.150.152.0:10000) is detected 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@716: Client environment:host.name=lucid 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@724: Client environment:os.arch=2.6.32-64-generic 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@725: Client environment:os.version=#128-Ubuntu SMP Tue Jul 15 08:32:40 UTC 2014 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@753: Client environment:user.dir=/var/jenkins/workspace/mesos-ubuntu-10.04-gcc/src 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:42069 sessionTimeout=5000 watcher=0x2b467450bc00 sessionId=0 sessionPasswd= context=0x2b467c0421c0 flags=0 2014-08-06 01:18:47,699:17458(0x2b4687de6700):ZOO_INFO@check_events@1703: initiated connection to server [127.0.0.1:42069] 2014-08-06 01:18:47,712:17458(0x2b4687de6700):ZOO_INFO@check_events@1750: session establishment complete on server [127.0.0.1:42069], sessionId=0x147aa6601cf0002, negotiated timeout=6000 I0806 01:18:47.712846 17479 group.cpp:313] Group process (group(39)@127.0.1.1:55561) connected to ZooKeeper I0806 01:18:47.712873 17479 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0) I0806 01:18:47.712882 17479 group.cpp:385] Trying to create path '/mesos' in ZooKeeper I0806 01:18:47.714648 17479 detector.cpp:138] Detected a new leader: (id='0') I0806 01:18:47.714759 17479 group.cpp:658] Trying to get '/mesos/info_0000000000' in ZooKeeper I0806 01:18:47.716130 17479 detector.cpp:426] A new leading master (UPID=@128.150.152.0:10000) is detected 2014-08-06 01:18:47,718:17458(0x2b4686d91700):ZOO_ERROR@handle_socket_error_msg@1721: Socket [127.0.0.1:42069] zk retcode=-4, errno=112(Host is down): failed while receiving a server response I0806 01:18:47.718889 17479 group.cpp:418] Lost connection to ZooKeeper, attempting to reconnect ... 
2014-08-06 01:18:47,720:17458(0x2b4687de6700):ZOO_ERROR@handle_socket_error_msg@1721: Socket [127.0.0.1:42069] zk retcode=-4, errno=112(Host is down): failed while receiving a server response I0806 01:18:47.720788 17484 group.cpp:418] Lost connection to ZooKeeper, attempting to reconnect ... I0806 01:18:47.724663 17458 zookeeper_test_server.cpp:122] Shutdown ZooKeeperTestServer on port 42069 2014-08-06 01:18:48,798:17458(0x2b468638b700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 4133ms 2014-08-06 01:18:48,798:17458(0x2b468638b700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:49,720:17458(0x2b4686d91700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 33ms 2014-08-06 01:18:49,721:17458(0x2b4686d91700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:49,722:17458(0x2b4687de6700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:50,136:17458(0x2b4687394700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:36197] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:50,800:17458(0x2b468638b700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:51,723:17458(0x2b4686d91700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:51,723:17458(0x2b4687de6700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:52,801:17458(0x2b468638b700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client W0806 01:18:52.842553 17481 group.cpp:456] Timed out waiting to reconnect to ZooKeeper. 
Forcing ZooKeeper session (sessionId=147aa6601cf0000) expiration I0806 01:18:52.842911 17481 group.cpp:472] ZooKeeper session expired I0806 01:18:52.843468 17485 detector.cpp:126] The current leader (id=0) is lost I0806 01:18:52.843483 17485 detector.cpp:138] Detected a new leader: None I0806 01:18:52.843618 17485 contender.cpp:196] Membership cancelled: 0 2014-08-06 01:18:52,843:17458(0x2b4679aa4700):ZOO_INFO@zookeeper_close@2522: Freeing zookeeper resources for sessionId=0x147aa6601cf0000 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@716: Client environment:host.name=lucid 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@724: Client environment:os.arch=2.6.32-64-generic 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@725: Client environment:os.version=#128-Ubuntu SMP Tue Jul 15 08:32:40 UTC 2014 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@753: Client environment:user.dir=/var/jenkins/workspace/mesos-ubuntu-10.04-gcc/src 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:42069 sessionTimeout=5000 watcher=0x2b467450bc00 sessionId=0 sessionPasswd= context=0x1349ad0 flags=0 2014-08-06 01:18:52,844:17458(0x2b468698f700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:53,473:17458(0x2b4687394700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:36197] zk retcode=-4, errno=111(Connection refused): server refused to accept the client W0806 01:18:53.720684 17480 group.cpp:456] Timed out waiting to reconnect to ZooKeeper. 
Forcing ZooKeeper session (sessionId=147aa6601cf0001) expiration I0806 01:18:53.721132 17480 group.cpp:472] ZooKeeper session expired I0806 01:18:53.721516 17479 detector.cpp:126] The current leader (id=0) is lost I0806 01:18:53.721534 17479 detector.cpp:138] Detected a new leader: None I0806 01:18:53.721696 17479 contender.cpp:196] Membership cancelled: 1 2014-08-06 01:18:53,721:17458(0x2b46798a3700):ZOO_INFO@zookeeper_close@2522: Freeing zookeeper resources for sessionId=0x147aa6601cf0001 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@716: Client environment:host.name=lucid 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@724: Client environment:os.arch=2.6.32-64-generic 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@725: Client environment:os.version=#128-Ubuntu SMP Tue Jul 15 08:32:40 UTC 2014 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@753: Client environment:user.dir=/var/jenkins/workspace/mesos-ubuntu-10.04-gcc/src 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:42069 sessionTimeout=5000 watcher=0x2b467450bc00 sessionId=0 sessionPasswd= context=0x16a0550 flags=0 2014-08-06 01:18:53,723:17458(0x2b4686f92700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:53,726:17458(0x2b4687de6700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client W0806 01:18:53.730258 17479 group.cpp:456] Timed out waiting to reconnect to ZooKeeper. 
Forcing ZooKeeper session (sessionId=147aa6601cf0002) expiration I0806 01:18:53.730736 17479 group.cpp:472] ZooKeeper session expired I0806 01:18:53.731081 17481 detector.cpp:126] The current leader (id=0) is lost I0806 01:18:53.731132 17481 detector.cpp:138] Detected a new leader: None 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@zookeeper_close@2522: Freeing zookeeper resources for sessionId=0x147aa6601cf0002 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@log_env@716: Client environment:host.name=lucid 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@log_env@724: Client environment:os.arch=2.6.32-64-generic 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@log_env@725: Client environment:os.version=#128-Ubuntu SMP Tue Jul 15 08:32:40 UTC 2014 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins 2014-08-06 01:18:53,732:17458(0x2b46796a2700):ZOO_INFO@log_env@753: Client environment:user.dir=/var/jenkins/workspace/mesos-ubuntu-10.04-gcc/src 2014-08-06 01:18:53,732:17458(0x2b46796a2700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:42069 sessionTimeout=5000 watcher=0x2b467450bc00 sessionId=0 sessionPasswd= context=0x2b467c035f30 flags=0 2014-08-06 01:18:53,733:17458(0x2b4687be5700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:54,512:17458(0x2b468698f700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:55,393:17458(0x2b4686f92700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:55,403:17458(0x2b4687be5700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:56,301:17458(0x2b468698f700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 122ms 2014-08-06 01:18:56,302:17458(0x2b468698f700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:56,809:17458(0x2b4687394700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:36197] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:57,939:17458(0x2b4686f92700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 879ms 2014-08-06 01:18:57,940:17458(0x2b4686f92700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:57,940:17458(0x2b4687be5700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 870ms 2014-08-06 01:18:57,940:17458(0x2b4687be5700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client tests/master_contender_detector_tests.cpp:574: Failure Failed to wait 10secs for 
leaderReconnecting 2014-08-06 01:18:57,941:17458(0x2b46794a0120):ZOO_INFO@zookeeper_close@2522: Freeing zookeeper resources for sessionId=0 I0806 01:18:57.949972 17458 contender.cpp:186] Now cancelling the membership: 1 2014-08-06 01:18:57,950:17458(0x2b46794a0120):ZOO_INFO@zookeeper_close@2522: Freeing zookeeper resources for sessionId=0 I0806 01:18:57.950731 17458 contender.cpp:186] Now cancelling the membership: 0 2014-08-06 01:18:57,951:17458(0x2b46794a0120):ZOO_INFO@zookeeper_close@2522: Freeing zookeeper resources for sessionId=0 ../3rdparty/libprocess/include/process/gmock.hpp:298: Failure Actual function call count doesn't match EXPECT_CALL(filter->mock, filter(testing::A()))... Expected args: dispatch matcher (1, 16-byte object <50-20 4A-00 00-00 00-00 00-00 00-00 00-00 00-00>) Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] ZooKeeperMasterContenderDetectorTest.MasterDetectorTimedoutSession (20308 ms) {noformat}"""," [ RUN ] ZooKeeperMasterContenderDetectorTest.MasterDetectorTimedoutSession I0806 01:18:37.648684 17458 zookeeper_test_server.cpp:158] Started ZooKeeperTestServer on port 42069 2014-08-06 01:18:37,650:17458(0x2b4679ca5700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-08-06 01:18:37,650:17458(0x2b4679ca5700):ZOO_INFO@log_env@716: Client environment:host.name=lucid 2014-08-06 01:18:37,650:17458(0x2b4679ca5700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-08-06 01:18:37,650:17458(0x2b4679ca5700):ZOO_INFO@log_env@724: Client environment:os.arch=2.6.32-64-generic 2014-08-06 01:18:37,650:17458(0x2b4679ca5700):ZOO_INFO@log_env@725: Client environment:os.version=#128-Ubuntu SMP Tue Jul 15 08:32:40 UTC 2014 2014-08-06 01:18:37,651:17458(0x2b4679ca5700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2014-08-06 01:18:37,651:17458(0x2b4679ca5700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins 2014-08-06 01:18:37,651:17458(0x2b4679ca5700):ZOO_INFO@log_env@753: Client environment:user.dir=/var/jenkins/workspace/mesos-ubuntu-10.04-gcc/src 2014-08-06 01:18:37,651:17458(0x2b4679ca5700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:42069 sessionTimeout=5000 watcher=0x2b467450bc00 sessionId=0 sessionPasswd= context=0x1682db0 flags=0 2014-08-06 01:18:37,656:17458(0x2b468638b700):ZOO_INFO@check_events@1703: initiated connection to server [127.0.0.1:42069] 2014-08-06 01:18:37,669:17458(0x2b468638b700):ZOO_INFO@check_events@1750: session establishment complete on server [127.0.0.1:42069], sessionId=0x147aa6601cf0000, negotiated timeout=6000 I0806 01:18:37.671725 17486 group.cpp:313] Group process (group(37)@127.0.1.1:55561) connected to ZooKeeper I0806 01:18:37.671758 17486 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0) I0806 01:18:37.671771 17486 group.cpp:385] Trying to create path '/mesos' in ZooKeeper 2014-08-06 01:18:39,101:17458(0x2b4687394700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:36197] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:42,441:17458(0x2b4687394700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:36197] zk retcode=-4, errno=111(Connection refused): server refused to accept the client I0806 01:18:42.656673 17481 contender.cpp:131] Joining the ZK group I0806 01:18:42.662484 17484 contender.cpp:247] New candidate (id='0') has entered the contest for leadership I0806 01:18:42.663754 17481 
detector.cpp:138] Detected a new leader: (id='0') I0806 01:18:42.663884 17481 group.cpp:658] Trying to get '/mesos/info_0000000000' in ZooKeeper I0806 01:18:42.664788 17483 detector.cpp:426] A new leading master (UPID=@128.150.152.0:10000) is detected 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@716: Client environment:host.name=lucid 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@724: Client environment:os.arch=2.6.32-64-generic 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@725: Client environment:os.version=#128-Ubuntu SMP Tue Jul 15 08:32:40 UTC 2014 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@log_env@753: Client environment:user.dir=/var/jenkins/workspace/mesos-ubuntu-10.04-gcc/src 2014-08-06 01:18:42,666:17458(0x2b4679ea6700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:42069 sessionTimeout=5000 watcher=0x2b467450bc00 sessionId=0 sessionPasswd= context=0x15c00f0 flags=0 2014-08-06 01:18:42,668:17458(0x2b4686d91700):ZOO_INFO@check_events@1703: initiated connection to server [127.0.0.1:42069] 2014-08-06 01:18:42,672:17458(0x2b4686d91700):ZOO_INFO@check_events@1750: session establishment complete on server [127.0.0.1:42069], sessionId=0x147aa6601cf0001, negotiated timeout=6000 I0806 01:18:42.673542 17485 group.cpp:313] Group process (group(38)@127.0.1.1:55561) connected to ZooKeeper I0806 01:18:42.673570 17485 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0) I0806 01:18:42.673580 17485 group.cpp:385] Trying to create path '/mesos' in ZooKeeper 2014-08-06 01:18:46,796:17458(0x2b468638b700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 2131ms 2014-08-06 01:18:46,796:17458(0x2b468638b700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:42069] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:42069 timed out (exceeded timeout by 131ms) 2014-08-06 01:18:46,796:17458(0x2b468638b700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 2131ms 2014-08-06 01:18:46,796:17458(0x2b4686d91700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 2115ms 2014-08-06 01:18:46,796:17458(0x2b4686d91700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:42069] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:42069 timed out (exceeded timeout by 115ms) 2014-08-06 01:18:46,796:17458(0x2b4686d91700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 2115ms 2014-08-06 01:18:46,799:17458(0x2b4687394700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 1025ms 2014-08-06 01:18:46,800:17458(0x2b4687394700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:36197] zk retcode=-4, errno=111(Connection refused): server refused to accept the client I0806 01:18:46.806895 17486 group.cpp:418] Lost connection to ZooKeeper, attempting to reconnect ... I0806 01:18:46.807857 17479 group.cpp:418] Lost connection to ZooKeeper, attempting to reconnect ... 
I0806 01:18:47.669064 17482 contender.cpp:131] Joining the ZK group 2014-08-06 01:18:47,669:17458(0x2b4686d91700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 2989ms 2014-08-06 01:18:47,669:17458(0x2b4686d91700):ZOO_INFO@check_events@1703: initiated connection to server [127.0.0.1:42069] 2014-08-06 01:18:47,671:17458(0x2b4686d91700):ZOO_INFO@check_events@1750: session establishment complete on server [127.0.0.1:42069], sessionId=0x147aa6601cf0001, negotiated timeout=6000 I0806 01:18:47.682868 17485 contender.cpp:247] New candidate (id='1') has entered the contest for leadership I0806 01:18:47.683404 17482 group.cpp:313] Group process (group(38)@127.0.1.1:55561) reconnected to ZooKeeper I0806 01:18:47.683445 17482 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0) I0806 01:18:47.685998 17482 detector.cpp:138] Detected a new leader: (id='0') I0806 01:18:47.686142 17482 group.cpp:658] Trying to get '/mesos/info_0000000000' in ZooKeeper I0806 01:18:47.687289 17479 detector.cpp:426] A new leading master (UPID=@128.150.152.0:10000) is detected 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@716: Client environment:host.name=lucid 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@724: Client environment:os.arch=2.6.32-64-generic 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@725: Client environment:os.version=#128-Ubuntu SMP Tue Jul 15 08:32:40 UTC 2014 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@log_env@753: Client environment:user.dir=/var/jenkins/workspace/mesos-ubuntu-10.04-gcc/src 2014-08-06 01:18:47,687:17458(0x2b467a2a8700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:42069 sessionTimeout=5000 watcher=0x2b467450bc00 sessionId=0 sessionPasswd= context=0x2b467c0421c0 flags=0 2014-08-06 01:18:47,699:17458(0x2b4687de6700):ZOO_INFO@check_events@1703: initiated connection to server [127.0.0.1:42069] 2014-08-06 01:18:47,712:17458(0x2b4687de6700):ZOO_INFO@check_events@1750: session establishment complete on server [127.0.0.1:42069], sessionId=0x147aa6601cf0002, negotiated timeout=6000 I0806 01:18:47.712846 17479 group.cpp:313] Group process (group(39)@127.0.1.1:55561) connected to ZooKeeper I0806 01:18:47.712873 17479 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0) I0806 01:18:47.712882 17479 group.cpp:385] Trying to create path '/mesos' in ZooKeeper I0806 01:18:47.714648 17479 detector.cpp:138] Detected a new leader: (id='0') I0806 01:18:47.714759 17479 group.cpp:658] Trying to get '/mesos/info_0000000000' in ZooKeeper I0806 01:18:47.716130 17479 detector.cpp:426] A new leading master (UPID=@128.150.152.0:10000) is detected 2014-08-06 01:18:47,718:17458(0x2b4686d91700):ZOO_ERROR@handle_socket_error_msg@1721: Socket [127.0.0.1:42069] zk retcode=-4, errno=112(Host is down): failed while receiving a server response I0806 01:18:47.718889 17479 group.cpp:418] Lost connection to ZooKeeper, attempting to reconnect ... 
2014-08-06 01:18:47,720:17458(0x2b4687de6700):ZOO_ERROR@handle_socket_error_msg@1721: Socket [127.0.0.1:42069] zk retcode=-4, errno=112(Host is down): failed while receiving a server response I0806 01:18:47.720788 17484 group.cpp:418] Lost connection to ZooKeeper, attempting to reconnect ... I0806 01:18:47.724663 17458 zookeeper_test_server.cpp:122] Shutdown ZooKeeperTestServer on port 42069 2014-08-06 01:18:48,798:17458(0x2b468638b700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 4133ms 2014-08-06 01:18:48,798:17458(0x2b468638b700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:49,720:17458(0x2b4686d91700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 33ms 2014-08-06 01:18:49,721:17458(0x2b4686d91700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:49,722:17458(0x2b4687de6700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:50,136:17458(0x2b4687394700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:36197] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:50,800:17458(0x2b468638b700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:51,723:17458(0x2b4686d91700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:51,723:17458(0x2b4687de6700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:52,801:17458(0x2b468638b700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client W0806 01:18:52.842553 17481 group.cpp:456] Timed out waiting to reconnect to ZooKeeper. 
Forcing ZooKeeper session (sessionId=147aa6601cf0000) expiration I0806 01:18:52.842911 17481 group.cpp:472] ZooKeeper session expired I0806 01:18:52.843468 17485 detector.cpp:126] The current leader (id=0) is lost I0806 01:18:52.843483 17485 detector.cpp:138] Detected a new leader: None I0806 01:18:52.843618 17485 contender.cpp:196] Membership cancelled: 0 2014-08-06 01:18:52,843:17458(0x2b4679aa4700):ZOO_INFO@zookeeper_close@2522: Freeing zookeeper resources for sessionId=0x147aa6601cf0000 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@716: Client environment:host.name=lucid 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@724: Client environment:os.arch=2.6.32-64-generic 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@725: Client environment:os.version=#128-Ubuntu SMP Tue Jul 15 08:32:40 UTC 2014 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@log_env@753: Client environment:user.dir=/var/jenkins/workspace/mesos-ubuntu-10.04-gcc/src 2014-08-06 01:18:52,844:17458(0x2b4679aa4700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:42069 sessionTimeout=5000 watcher=0x2b467450bc00 sessionId=0 sessionPasswd= context=0x1349ad0 flags=0 2014-08-06 01:18:52,844:17458(0x2b468698f700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:53,473:17458(0x2b4687394700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:36197] zk retcode=-4, errno=111(Connection refused): server refused to accept the client W0806 01:18:53.720684 17480 group.cpp:456] Timed out waiting to reconnect to ZooKeeper. 
Forcing ZooKeeper session (sessionId=147aa6601cf0001) expiration I0806 01:18:53.721132 17480 group.cpp:472] ZooKeeper session expired I0806 01:18:53.721516 17479 detector.cpp:126] The current leader (id=0) is lost I0806 01:18:53.721534 17479 detector.cpp:138] Detected a new leader: None I0806 01:18:53.721696 17479 contender.cpp:196] Membership cancelled: 1 2014-08-06 01:18:53,721:17458(0x2b46798a3700):ZOO_INFO@zookeeper_close@2522: Freeing zookeeper resources for sessionId=0x147aa6601cf0001 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@716: Client environment:host.name=lucid 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@724: Client environment:os.arch=2.6.32-64-generic 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@725: Client environment:os.version=#128-Ubuntu SMP Tue Jul 15 08:32:40 UTC 2014 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@log_env@753: Client environment:user.dir=/var/jenkins/workspace/mesos-ubuntu-10.04-gcc/src 2014-08-06 01:18:53,722:17458(0x2b46798a3700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:42069 sessionTimeout=5000 watcher=0x2b467450bc00 sessionId=0 sessionPasswd= context=0x16a0550 flags=0 2014-08-06 01:18:53,723:17458(0x2b4686f92700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:53,726:17458(0x2b4687de6700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client W0806 01:18:53.730258 17479 group.cpp:456] Timed out waiting to reconnect to ZooKeeper. 
Forcing ZooKeeper session (sessionId=147aa6601cf0002) expiration I0806 01:18:53.730736 17479 group.cpp:472] ZooKeeper session expired I0806 01:18:53.731081 17481 detector.cpp:126] The current leader (id=0) is lost I0806 01:18:53.731132 17481 detector.cpp:138] Detected a new leader: None 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@zookeeper_close@2522: Freeing zookeeper resources for sessionId=0x147aa6601cf0002 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@log_env@716: Client environment:host.name=lucid 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@log_env@724: Client environment:os.arch=2.6.32-64-generic 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@log_env@725: Client environment:os.version=#128-Ubuntu SMP Tue Jul 15 08:32:40 UTC 2014 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2014-08-06 01:18:53,731:17458(0x2b46796a2700):ZOO_INFO@log_env@741: Client environment:user.home=/home/jenkins 2014-08-06 01:18:53,732:17458(0x2b46796a2700):ZOO_INFO@log_env@753: Client environment:user.dir=/var/jenkins/workspace/mesos-ubuntu-10.04-gcc/src 2014-08-06 01:18:53,732:17458(0x2b46796a2700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:42069 sessionTimeout=5000 watcher=0x2b467450bc00 sessionId=0 sessionPasswd= context=0x2b467c035f30 flags=0 2014-08-06 01:18:53,733:17458(0x2b4687be5700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:54,512:17458(0x2b468698f700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:55,393:17458(0x2b4686f92700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:55,403:17458(0x2b4687be5700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:56,301:17458(0x2b468698f700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 122ms 2014-08-06 01:18:56,302:17458(0x2b468698f700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:56,809:17458(0x2b4687394700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:36197] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:57,939:17458(0x2b4686f92700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 879ms 2014-08-06 01:18:57,940:17458(0x2b4686f92700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2014-08-06 01:18:57,940:17458(0x2b4687be5700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 870ms 2014-08-06 01:18:57,940:17458(0x2b4687be5700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:42069] zk retcode=-4, errno=111(Connection refused): server refused to accept the client tests/master_contender_detector_tests.cpp:574: Failure Failed to wait 10secs for 
leaderReconnecting 2014-08-06 01:18:57,941:17458(0x2b46794a0120):ZOO_INFO@zookeeper_close@2522: Freeing zookeeper resources for sessionId=0 I0806 01:18:57.949972 17458 contender.cpp:186] Now cancelling the membership: 1 2014-08-06 01:18:57,950:17458(0x2b46794a0120):ZOO_INFO@zookeeper_close@2522: Freeing zookeeper resources for sessionId=0 I0806 01:18:57.950731 17458 contender.cpp:186] Now cancelling the membership: 0 2014-08-06 01:18:57,951:17458(0x2b46794a0120):ZOO_INFO@zookeeper_close@2522: Freeing zookeeper resources for sessionId=0 ../3rdparty/libprocess/include/process/gmock.hpp:298: Failure Actual function call count doesn't match EXPECT_CALL(filter->mock, filter(testing::A()))... Expected args: dispatch matcher (1, 16-byte object <50-20 4A-00 00-00 00-00 00-00 00-00 00-00 00-00>) Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] ZooKeeperMasterContenderDetectorTest.MasterDetectorTimedoutSession (20308 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1683","08/07/2014 19:59:27",2,"Create user doc for framework rate limiting feature ""Create a Markdown doc under /docs""","",0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1695","08/12/2014 19:01:28",1,"The stats.json endpoint on the slave exposes ""registered"" as a string. ""The slave is currently exposing a string value for the """"registered"""" statistic, this should be a number: Should be a pretty straightforward fix, looks like this first originated back in 2013: """," slave:5051/stats.json { """"recovery_errors"""": 0, """"registered"""": """"1"""", """"slave/executors_registering"""": 0, ... } commit b8291304e1523eb67ea8dc5f195cdb0d8e7d8348 Author: Vinod Kone Date: Wed Jul 3 12:37:36 2013 -0700 Added a """"registered"""" key/value pair to slave's stats.json. Review: https://reviews.apache.org/r/12256 diff --git a/src/slave/http.cpp b/src/slave/http.cpp index dc2955f..dd51516 100644 --- a/src/slave/http.cpp +++ b/src/slave/http.cpp @@ -281,6 +281,8 @@ Future Slave::Http::stats(const Request& request) object.values[""""lost_tasks""""] = slave.stats.tasks[TASK_LOST]; object.values[""""valid_status_updates""""] = slave.stats.validStatusUpdates; object.values[""""invalid_status_updates""""] = slave.stats.invalidStatusUpdates; + object.values[""""registered""""] = slave.master ? """"1"""" : """"0""""; + return OK(object, request.query.get(""""jsonp"""")); } ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1696","08/12/2014 20:01:49",3,"Improve reconciliation between master and slave. ""As we update the Master to keep tasks in memory until they are both terminal and acknowledged (MESOS-1410), the lifetime of tasks in Mesos will look as follows: In the current form of reconciliation, the slave sends to the master all tasks that are not both terminal and acknowledged. At any point in the above lifecycle, the slave's re-registration message can reach the master. Note the following properties: *(1)* The master may have a non-terminal task, not present in the slave's re-registration message. *(2)* The master may have a non-terminal task, present in the slave's re-registration message but in a different state. *(3)* The slave's re-registration message may contain a terminal unacknowledged task unknown to the master. 
In the current master / slave [reconciliation|https://github.com/apache/mesos/blob/0.19.1/src/master/master.cpp#L3146] code, the master assumes that case (1) is because a launch task message was dropped, and it sends TASK_LOST. We've seen above that (1) can happen even when the task reaches the slave correctly, so this can lead to inconsistency! After chatting with [~vinodkone], we're considering updating the reconciliation to occur as follows: → Slave sends all tasks that are not both terminal and acknowledged, during re-registration. This is the same as before. → If the master sees tasks that are missing in the slave, the master sends the tasks that need to be reconciled to the slave for the tasks. This can be piggy-backed on the re-registration message. → The slave will send TASK_LOST if the task is not known to it. Preferably in a retried manner, unless we update socket closure on the slave to force a re-registration."""," Master Slave {} {} {Tn} {} // Master receives Task T, non-terminal. Forwards to slave. {Tn} {Tn} // Slave receives Task T, non-terminal. {Tn} {Tt} // Task becomes terminal on slave. Update forwarded. {Tt} {Tt} // Master receives update, forwards to framework. {} {Tt} // Master receives ack, forwards to slave. {} {} // Slave receives ack. ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1698","08/13/2014 18:35:17",2,"make check segfaults ""Observed this on Apache CI: https://builds.apache.org/job/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/2331/consoleFull It looks like the segfault happens before any tests are run. So I suspect somewhere in the setup phase of the tests. """," mv -f .deps/tests-time_tests.Tpo .deps/tests-time_tests.Po /bin/bash ./libtool --tag=CXX --mode=link g++ -g -g2 -O2 -Wno-unused-local-typedefs -std=c++11 -o tests tests-decoder_tests.o tests-encoder_tests.o tests-http_tests.o tests-io_tests.o tests-main.o tests-mutex_tests.o tests-metrics_tests.o tests-owned_tests.o tests-process_tests.o tests-queue_tests.o tests-reap_tests.o tests-sequence_tests.o tests-shared_tests.o tests-statistics_tests.o tests-subprocess_tests.o tests-system_tests.o tests-timeseries_tests.o tests-time_tests.o 3rdparty/libgmock.la libprocess.la 3rdparty/glog-0.3.3/libglog.la 3rdparty/libry_http_parser.la 3rdparty/libev-4.15/libev.la -lz -lrt libtool: link: g++ -g -g2 -O2 -Wno-unused-local-typedefs -std=c++11 -o tests tests-decoder_tests.o tests-encoder_tests.o tests-http_tests.o tests-io_tests.o tests-main.o tests-mutex_tests.o tests-metrics_tests.o tests-owned_tests.o tests-process_tests.o tests-queue_tests.o tests-reap_tests.o tests-sequence_tests.o tests-shared_tests.o tests-statistics_tests.o tests-subprocess_tests.o tests-system_tests.o tests-timeseries_tests.o tests-time_tests.o 3rdparty/.libs/libgmock.a ./.libs/libprocess.a /home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/build/3rdparty/libprocess/3rdparty/glog-0.3.3/.libs/libglog.a /home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/build/3rdparty/libprocess/3rdparty/libev-4.15/.libs/libev.a 3rdparty/glog-0.3.3/.libs/libglog.a -lpthread 3rdparty/.libs/libry_http_parser.a 3rdparty/libev-4.15/.libs/libev.a -lm -lz -lrt make[5]: Leaving directory `/home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/build/3rdparty/libprocess' make check-local make[5]: Entering directory 
`/home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/build/3rdparty/libprocess' ./tests Note: Google Test filter = [==========] Running 0 tests from 0 test cases. [==========] 0 tests from 0 test cases ran. (0 ms total) [ PASSED ] 0 tests. YOU HAVE 3 DISABLED TESTS make[5]: *** [check-local] Segmentation fault make[5]: Leaving directory `/home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/build/3rdparty/libprocess' make[4]: *** [check-am] Error 2 make[4]: Leaving directory `/home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/build/3rdparty/libprocess' make[3]: *** [check-recursive] Error 1 make[3]: Leaving directory `/home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/build/3rdparty/libprocess' make[2]: *** [check-recursive] Error 1 make[2]: Leaving directory `/home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/build/3rdparty' make[1]: *** [check] Error 2 make[1]: Leaving directory `/home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Set-JAVA_HOME/build/3rdparty' make: *** [check-recursive] Error 1 Build step 'Execute shell' marked build as failure Sending e-mails to: dev@mesos.apache.org benjamin.hindman@gmail.com dhamon@twitter.com yujie.jay@gmail.com Finished: FAILURE ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1705","08/15/2014 16:15:16",2,"SubprocessTest.Status sometimes flakes out ""It's a pretty rare event, but happened more then once. [ RUN ] SubprocessTest.Status *** Aborted at 1408023909 (unix time) try """"date -d @1408023909"""" if you are using GNU date *** PC: @ 0x35700094b1 (unknown) *** SIGTERM (@0x3e8000041d8) received by PID 16872 (TID 0x7fa9ea426780) from PID 16856; stack trace: *** @ 0x3570435cb0 (unknown) @ 0x35700094b1 (unknown) @ 0x3570009d9f (unknown) @ 0x357000e726 (unknown) @ 0x3570015185 (unknown) @ 0x5ead42 process::childMain() @ 0x5ece8d std::_Function_handler<>::_M_invoke() @ 0x5eac9c process::defaultClone() @ 0x5ebbd4 process::subprocess() @ 0x55a229 process::subprocess() @ 0x55a846 process::subprocess() @ 0x54224c SubprocessTest_Status_Test::TestBody() @ 0x7fa9ea460323 (unknown) @ 0x7fa9ea455b67 (unknown) @ 0x7fa9ea455c0e (unknown) @ 0x7fa9ea455d15 (unknown) @ 0x7fa9ea4593a8 (unknown) @ 0x7fa9ea459647 (unknown) @ 0x422466 main @ 0x3570421d65 (unknown) @ 0x4260bd (unknown) [ OK ] SubprocessTest.Status (153 ms)""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1712","08/18/2014 20:43:00",2,"Automate disallowing of commits mixing mesos/libprocess/stout ""For various reasons, we don't want to mix mesos/libprocess/stout changes into a single commit. Typically, it is up to the reviewee/reviewer to catch this. It wold be nice to automate this via the pre-commit hook .""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1715","08/19/2014 00:13:38",3,"The slave does not send pending tasks during re-registration. ""In what looks like an oversight, the pending tasks and executors in the slave (Framework::pending) are not sent in the re-registration message. 
For tasks, this can lead to spurious TASK_LOST notifications being generated by the master when it falsely thinks the tasks are not present on the slave.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1720","08/19/2014 02:44:35",8,"Slave should send exited executor message when the executor is never launched. ""When the slave sends TASK_LOST before launching an executor for a task, the slave does not send an exited executor message to the master. Since the master receives no exited executor message, it still thinks the executor's resources are consumed on the slave. One possible fix for this would be to send the exited executor message to the master in these cases.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1728","08/21/2014 17:37:02",1,"Libprocess: report bind parameters on failure ""When you attempt to start slave or master and there's another one already running there, it is nice to report what are the actual parameters to {{bind}} call that failed.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1748","08/29/2014 18:53:52",1,"MasterZooKeeperTest.LostZooKeeperCluster is flaky ""{noformat:title=} tests/master_tests.cpp:1795: Failure Failed to wait 10secs for slaveRegisteredMessage {noformat} Should have placed the FUTURE_MESSAGE that attempts to capture this messages before the slave starts..."""," tests/master_tests.cpp:1795: Failure Failed to wait 10secs for slaveRegisteredMessage ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1758","09/04/2014 00:08:43",2,"Freezer failure leads to lost task during container destruction. ""In the past we've seen numerous issues around the freezer. Lately, on the 2.6.44 kernel, we've seen issues where we're unable to freeze the cgroup: (1) An oom occurs. (2) No indication of oom in the kernel logs. (3) The slave is unable to freeze the cgroup. (4) The task is marked as lost. We should consider avoiding the freezer entirely in favor of a kill(2) loop. We don't have to wait for pid namespaces to remove the freezer dependency. 
At the very least, when the freezer fails, we should proceed with a kill(2) loop to ensure that we destroy the cgroup."""," I0903 16:46:24.956040 25469 mem.cpp:575] Memory limit exceeded: Requested: 15488MB Maximum Used: 15488MB MEMORY STATISTICS: cache 7958691840 rss 8281653248 mapped_file 9474048 pgpgin 4487861 pgpgout 522933 pgfault 2533780 pgmajfault 11 inactive_anon 0 active_anon 8281653248 inactive_file 7631708160 active_file 326852608 unevictable 0 hierarchical_memory_limit 16240345088 total_cache 7958691840 total_rss 8281653248 total_mapped_file 9474048 total_pgpgin 4487861 total_pgpgout 522933 total_pgfault 2533780 total_pgmajfault 11 total_inactive_anon 0 total_active_anon 8281653248 total_inactive_file 7631728640 total_active_file 326852608 total_unevictable 0 I0903 16:46:24.956848 25469 containerizer.cpp:1041] Container bbb9732a-d600-4c1b-b326-846338c608c3 has reached its limit for resource mem(*):1.62403e+10 and will be terminated I0903 16:46:24.957427 25469 containerizer.cpp:909] Destroying container 'bbb9732a-d600-4c1b-b326-846338c608c3' I0903 16:46:24.958664 25481 cgroups.cpp:2192] Freezing cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 I0903 16:46:34.959529 25488 cgroups.cpp:2209] Thawing cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 I0903 16:46:34.962070 25482 cgroups.cpp:1404] Successfullly thawed cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 after 1.710848ms I0903 16:46:34.962658 25479 cgroups.cpp:2192] Freezing cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 I0903 16:46:44.963349 25488 cgroups.cpp:2209] Thawing cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 I0903 16:46:44.965631 25472 cgroups.cpp:1404] Successfullly thawed cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 after 1.588224ms I0903 16:46:44.966356 25472 cgroups.cpp:2192] Freezing cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 I0903 16:46:54.967254 25488 cgroups.cpp:2209] Thawing cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 I0903 16:46:56.008447 25475 cgroups.cpp:1404] Successfullly thawed cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 after 2.15296ms I0903 16:46:56.009071 25466 cgroups.cpp:2192] Freezing cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 I0903 16:47:06.010329 25488 cgroups.cpp:2209] Thawing cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 I0903 16:47:06.012538 25467 cgroups.cpp:1404] Successfullly thawed cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 after 1.643008ms I0903 16:47:06.013216 25467 cgroups.cpp:2192] Freezing cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 I0903 16:47:12.516348 25480 slave.cpp:3030] Current usage 9.57%. 
Max allowed age: 5.630238827780799days I0903 16:47:16.015192 25488 cgroups.cpp:2209] Thawing cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 I0903 16:47:16.017043 25486 cgroups.cpp:1404] Successfullly thawed cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 after 1.511168ms I0903 16:47:16.017555 25480 cgroups.cpp:2192] Freezing cgroup /sys/fs/cgroup/freezer/mesos/bbb9732a-d600-4c1b-b326-846338c608c3 I0903 16:47:19.862746 25483 http.cpp:245] HTTP request for '/slave(1)/stats.json' E0903 16:47:24.960055 25472 slave.cpp:2557] Termination of executor 'E' of framework '201104070004-0000002563-0000' failed: Failed to destroy container: discarded future I0903 16:47:24.962054 25472 slave.cpp:2087] Handling status update TASK_LOST (UUID: c0c1633b-7221-40dc-90a2-660ef639f747) for task T of framework 201104070004-0000002563-0000 from @0.0.0.0:0 I0903 16:47:24.963470 25469 mem.cpp:293] Updated 'memory.soft_limit_in_bytes' to 128MB for container bbb9732a-d600-4c1b-b326-846338c608c3 I0903 16:47:24.963541 25471 cpushare.cpp:338] Updated 'cpu.shares' to 256 (cpus 0.25) for container bbb9732a-d600-4c1b-b326-846338c608c3 I0903 16:47:24.964756 25471 cpushare.cpp:359] Updated 'cpu.cfs_period_us' to 100ms and 'cpu.cfs_quota_us' to 25ms (cpus 0.25) for container bbb9732a-d600-4c1b-b326-846338c608c3 I0903 16:47:43.406610 25476 status_update_manager.cpp:320] Received status update TASK_LOST (UUID: c0c1633b-7221-40dc-90a2-660ef639f747) for task T of framework 201104070004-0000002563-0000 I0903 16:47:43.406991 25476 status_update_manager.hpp:342] Checkpointing UPDATE for status update TASK_LOST (UUID: c0c1633b-7221-40dc-90a2-660ef639f747) for task T of framework 201104070004-0000002563-0000 I0903 16:47:43.410475 25476 status_update_manager.cpp:373] Forwarding status update TASK_LOST (UUID: c0c1633b-7221-40dc-90a2-660ef639f747) for task T of framework 201104070004-0000002563-0000 to master@:5050 I0903 16:47:43.439923 25480 status_update_manager.cpp:398] Received status update acknowledgement (UUID: c0c1633b-7221-40dc-90a2-660ef639f747) for task T of framework 201104070004-0000002563-0000 I0903 16:47:43.440115 25480 status_update_manager.hpp:342] Checkpointing ACK for status update TASK_LOST (UUID: c0c1633b-7221-40dc-90a2-660ef639f747) for task T of framework 201104070004-0000002563-0000 I0903 16:47:43.443595 25480 slave.cpp:2709] Cleaning up executor 'E' of framework 201104070004-0000002563-0000 ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1760","09/04/2014 00:12:42",1,"MasterAuthorizationTest.FrameworkRemovedBeforeReregistration is flaky ""Observed this on Apache CI: https://builds.apache.org/job/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Disable-Java-Disable-Python-Disable-Webui/2355/changes """," [ RUN] MasterAuthorizationTest.FrameworkRemovedBeforeReregistration Using temporary directory '/tmp/MasterAuthorizationTest_FrameworkRemovedBeforeReregistration_0tw16Z' I0903 22:04:33.520237 25565 leveldb.cpp:176] Opened db in 49.073821ms I0903 22:04:33.538331 25565 leveldb.cpp:183] Compacted db in 18.065051ms I0903 22:04:33.538363 25565 leveldb.cpp:198] Created db iterator in 4826ns I0903 22:04:33.538377 25565 leveldb.cpp:204] Seeked to beginning of db in 682ns I0903 22:04:33.538385 25565 leveldb.cpp:273] Iterated through 0 keys in the db in 312ns I0903 22:04:33.538399 25565 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0903 22:04:33.538624 25593 recover.cpp:425] Starting 
replica recovery I0903 22:04:33.538707 25598 recover.cpp:451] Replica is in EMPTY status I0903 22:04:33.540909 25590 master.cpp:286] Master 20140903-220433-453759884-44122-25565 (hemera.apache.org) started on 140.211.11.27:44122 I0903 22:04:33.540932 25590 master.cpp:332] Master only allowing authenticated frameworks to register I0903 22:04:33.540936 25590 master.cpp:337] Master only allowing authenticated slaves to register I0903 22:04:33.540941 25590 credentials.hpp:36] Loading credentials for authentication from '/tmp/MasterAuthorizationTest_FrameworkRemovedBeforeReregistration_0tw16Z/credentials' I0903 22:04:33.541337 25590 master.cpp:366] Authorization enabled I0903 22:04:33.541508 25597 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I0903 22:04:33.542343 25582 hierarchical_allocator_process.hpp:299] Initializing hierarchical allocator process with master : master@140.211.11.27:44122 I0903 22:04:33.542445 25592 master.cpp:120] No whitelist given. Advertising offers for all slaves I0903 22:04:33.543175 25602 recover.cpp:188] Received a recover response from a replica in EMPTY status I0903 22:04:33.543637 25587 recover.cpp:542] Updating replica status to STARTING I0903 22:04:33.544256 25579 master.cpp:1205] The newly elected leader is master@140.211.11.27:44122 with id 20140903-220433-453759884-44122-25565 I0903 22:04:33.544275 25579 master.cpp:1218] Elected as the leading master! I0903 22:04:33.544282 25579 master.cpp:1036] Recovering from registrar I0903 22:04:33.544401 25579 registrar.cpp:313] Recovering registrar I0903 22:04:33.558487 25593 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 14.678563ms I0903 22:04:33.558531 25593 replica.cpp:320] Persisted replica status to STARTING I0903 22:04:33.558653 25593 recover.cpp:451] Replica is in STARTING status I0903 22:04:33.559867 25588 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I0903 22:04:33.560057 25602 recover.cpp:188] Received a recover response from a replica in STARTING status I0903 22:04:33.561280 25584 recover.cpp:542] Updating replica status to VOTING I0903 22:04:33.576900 25581 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 14.712427ms I0903 22:04:33.576942 25581 replica.cpp:320] Persisted replica status to VOTING I0903 22:04:33.577018 25581 recover.cpp:556] Successfully joined the Paxos group I0903 22:04:33.577108 25581 recover.cpp:440] Recover process terminated I0903 22:04:33.577401 25581 log.cpp:656] Attempting to start the writer I0903 22:04:33.578559 25589 replica.cpp:474] Replica received implicit promise request with proposal 1 I0903 22:04:33.594611 25589 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 16.029152ms I0903 22:04:33.594640 25589 replica.cpp:342] Persisted promised to 1 I0903 22:04:33.595391 25584 coordinator.cpp:230] Coordinator attemping to fill missing position I0903 22:04:33.597512 25588 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I0903 22:04:33.613037 25588 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 15.502568ms I0903 22:04:33.613065 25588 replica.cpp:676] Persisted action at 0 I0903 22:04:33.615435 25585 replica.cpp:508] Replica received write request for position 0 I0903 22:04:33.615463 25585 leveldb.cpp:438] Reading position from leveldb took 10743ns I0903 22:04:33.630801 25585 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 15.320225ms I0903 22:04:33.630852 25585 replica.cpp:676] Persisted action at 0 I0903 
22:04:33.631126 25585 replica.cpp:655] Replica received learned notice for position 0 I0903 22:04:33.647801 25585 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 16.652951ms I0903 22:04:33.647830 25585 replica.cpp:676] Persisted action at 0 I0903 22:04:33.647842 25585 replica.cpp:661] Replica learned NOP action at position 0 I0903 22:04:33.648548 25583 log.cpp:672] Writer started with ending position 0 I0903 22:04:33.649235 25583 leveldb.cpp:438] Reading position from leveldb took 25209ns I0903 22:04:33.650897 25591 registrar.cpp:346] Successfully fetched the registry (0B) I0903 22:04:33.650930 25591 registrar.cpp:422] Attempting to update the 'registry' I0903 22:04:33.652861 25601 log.cpp:680] Attempting to append 138 bytes to the log I0903 22:04:33.653097 25586 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0903 22:04:33.655225 25590 replica.cpp:508] Replica received write request for position 1 I0903 22:04:33.669618 25590 leveldb.cpp:343] Persisting action (157 bytes) to leveldb took 14.337486ms I0903 22:04:33.669663 25590 replica.cpp:676] Persisted action at 1 I0903 22:04:33.670045 25584 replica.cpp:655] Replica received learned notice for position 1 I0903 22:04:34.414243 25584 leveldb.cpp:343] Persisting action (159 bytes) to leveldb took 15.401247ms I0903 22:04:34.414300 25584 replica.cpp:676] Persisted action at 1 I0903 22:04:34.414316 25584 replica.cpp:661] Replica learned APPEND action at position 1 I0903 22:04:34.414937 25589 registrar.cpp:479] Successfully updated 'registry' I0903 22:04:34.415069 25585 log.cpp:699] Attempting to truncate the log to 1 I0903 22:04:34.415194 25589 registrar.cpp:372] Successfully recovered registrar I0903 22:04:34.415284 25589 master.cpp:1063] Recovered 0 slaves from the Registry (100B) ; allowing 10mins for slaves to re-register I0903 22:04:34.415362 25587 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0903 22:04:34.418926 25597 replica.cpp:508] Replica received write request for position 2 I0903 22:04:34.434321 25597 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 15.368147ms I0903 22:04:34.434352 25597 replica.cpp:676] Persisted action at 2 I0903 22:04:34.435022 25582 replica.cpp:655] Replica received learned notice for position 2 I0903 22:04:34.450331 25582 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 15.284486ms I0903 22:04:34.450387 25582 leveldb.cpp:401] Deleting ~1 keys from leveldb took 25774ns I0903 22:04:34.450402 25582 replica.cpp:676] Persisted action at 2 I0903 22:04:34.450412 25582 replica.cpp:661] Replica learned TRUNCATE action at position 2 I0903 22:04:34.460691 25565 sched.cpp:137] Version: 0.21.0 I0903 22:04:34.460927 25582 sched.cpp:233] New master detected at master@140.211.11.27:44122 I0903 22:04:34.460948 25582 sched.cpp:283] Authenticating with master master@140.211.11.27:44122 I0903 22:04:34.461359 25582 authenticatee.hpp:128] Creating new client SASL connection I0903 22:04:34.461647 25582 master.cpp:3637] Authenticating scheduler-04e0b571-7e0c-4ef3-bb14-c6bbfd8ac9a4@140.211.11.27:44122 I0903 22:04:34.461801 25598 authenticator.hpp:156] Creating new server SASL connection I0903 22:04:34.462172 25598 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0903 22:04:34.462185 25598 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0903 22:04:34.462257 25598 authenticator.hpp:262] Received SASL authentication start I0903 22:04:34.462323 25598 authenticator.hpp:384] 
Authentication requires more steps I0903 22:04:34.462345 25598 authenticatee.hpp:265] Received SASL authentication step I0903 22:04:34.462417 25598 authenticator.hpp:290] Received SASL authentication step I0903 22:04:34.462522 25598 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'hemera.apache.org' server FQDN: 'hemera.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0903 22:04:34.462529 25598 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0903 22:04:34.462538 25598 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0903 22:04:34.462543 25598 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'hemera.apache.org' server FQDN: 'hemera.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0903 22:04:34.462548 25598 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0903 22:04:34.462550 25598 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0903 22:04:34.462558 25598 authenticator.hpp:376] Authentication success I0903 22:04:34.462635 25598 master.cpp:3677] Successfully authenticated principal 'test-principal' at scheduler-04e0b571-7e0c-4ef3-bb14-c6bbfd8ac9a4@140.211.11.27:44122 I0903 22:04:34.462687 25590 authenticatee.hpp:305] Authentication success I0903 22:04:34.463219 25588 sched.cpp:357] Successfully authenticated with master master@140.211.11.27:44122 I0903 22:04:34.463243 25588 sched.cpp:476] Sending registration request to master@140.211.11.27:44122 I0903 22:04:34.463307 25588 master.cpp:1324] Received registration request from scheduler-04e0b571-7e0c-4ef3-bb14-c6bbfd8ac9a4@140.211.11.27:44122 I0903 22:04:34.463330 25588 master.cpp:1284] Authorizing framework principal 'test-principal' to receive offers for role '*' I0903 22:04:34.463412 25588 master.cpp:1383] Registering framework 20140903-220433-453759884-44122-25565-0000 at scheduler-04e0b571-7e0c-4ef3-bb14-c6bbfd8ac9a4@140.211.11.27:44122 I0903 22:04:34.463577 25598 sched.cpp:407] Framework registered with 20140903-220433-453759884-44122-25565-0000 I0903 22:04:34.463728 25587 hierarchical_allocator_process.hpp:329] Added framework 20140903-220433-453759884-44122-25565-0000 I0903 22:04:34.463739 25587 hierarchical_allocator_process.hpp:697] No resources available to allocate! 
I0903 22:04:34.463743 25587 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 5016ns I0903 22:04:34.463755 25598 sched.cpp:421] Scheduler::registered took 165035ns I0903 22:04:34.465558 25583 sched.cpp:227] Scheduler::disconnected took 6254ns I0903 22:04:34.465566 25583 sched.cpp:233] New master detected at master@140.211.11.27:44122 I0903 22:04:34.465575 25583 sched.cpp:283] Authenticating with master master@140.211.11.27:44122 I0903 22:04:34.465642 25583 authenticatee.hpp:128] Creating new client SASL connection I0903 22:04:34.465790 25583 master.cpp:1680] Deactivating framework 20140903-220433-453759884-44122-25565-0000 I0903 22:04:34.465850 25583 master.cpp:3637] Authenticating scheduler-04e0b571-7e0c-4ef3-bb14-c6bbfd8ac9a4@140.211.11.27:44122 I0903 22:04:34.465879 25601 hierarchical_allocator_process.hpp:405] Deactivated framework 20140903-220433-453759884-44122-25565-0000 I0903 22:04:34.466047 25600 authenticator.hpp:156] Creating new server SASL connection I0903 22:04:34.466315 25600 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0903 22:04:34.466326 25600 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0903 22:04:34.466346 25600 authenticator.hpp:262] Received SASL authentication start I0903 22:04:34.466418 25600 authenticator.hpp:384] Authentication requires more steps I0903 22:04:34.466436 25600 authenticatee.hpp:265] Received SASL authentication step I0903 22:04:34.466475 25600 authenticator.hpp:290] Received SASL authentication step I0903 22:04:34.466486 25600 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'hemera.apache.org' server FQDN: 'hemera.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0903 22:04:34.466491 25600 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0903 22:04:34.466496 25600 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0903 22:04:34.466502 25600 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'hemera.apache.org' server FQDN: 'hemera.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0903 22:04:34.466506 25600 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0903 22:04:34.466509 25600 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0903 22:04:34.466516 25600 authenticator.hpp:376] Authentication success I0903 22:04:34.466596 25588 master.cpp:3677] Successfully authenticated principal 'test-principal' at scheduler-04e0b571-7e0c-4ef3-bb14-c6bbfd8ac9a4@140.211.11.27:44122 I0903 22:04:34.466629 25597 authenticatee.hpp:305] Authentication success I0903 22:04:34.467062 25594 sched.cpp:357] Successfully authenticated with master master@140.211.11.27:44122 I0903 22:04:34.467077 25594 sched.cpp:476] Sending registration request to master@140.211.11.27:44122 I0903 22:04:34.467190 25588 master.cpp:1448] Received re-registration request from framework 20140903-220433-453759884-44122-25565-0000 at scheduler-04e0b571-7e0c-4ef3-bb14-c6bbfd8ac9a4@140.211.11.27:44122 I0903 22:04:36.368134 25588 master.cpp:1284] Authorizing framework principal 'test-principal' to receive offers for role '*' I0903 22:04:34.542999 25594 hierarchical_allocator_process.hpp:697] No resources available to allocate! 
I0903 22:04:35.463639 25582 sched.cpp:476] Sending registration request to master@140.211.11.27:44122 I0903 22:04:36.368185 25594 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 1.825177748secs I0903 22:04:36.368302 25588 master.cpp:1448] Received re-registration request from framework 20140903-220433-453759884-44122-25565-0000 at scheduler-04e0b571-7e0c-4ef3-bb14-c6bbfd8ac9a4@140.211.11.27:44122 I0903 22:04:36.368330 25588 master.cpp:1284] Authorizing framework principal 'test-principal' to receive offers for role '*' I0903 22:04:36.368388 25582 sched.cpp:476] Sending registration request to master@140.211.11.27:44122 : Failure Mock function called more times than expected - returning default value. Function call: authorize(@0x2ba11964c1b0 40-byte object ) The mock function has no default action set, and its return type has no default value set. *** Aborted at 1409781876 (unix time) try """"date -d @1409781876"""" if you are using GNU date *** I0903 22:04:36.368913 25598 sched.cpp:745] Stopping framework '20140903-220433-453759884-44122-25565-0000' PC: @ 0x2ba117a990d5 (unknown) *** SIGABRT (@0x3ea000063dd) received by PID 25565 (TID 0x2ba11964d700) from PID 25565; stack trace: *** @ 0x2ba117854cb0 (unknown) @ 0x2ba117a990d5 (unknown) @ 0x2ba117a9c83b (unknown) @ 0x9cba9d testing::internal::GoogleTestFailureReporter::ReportFailure() @ 0x790091 testing::internal::FunctionMockerBase<>::PerformDefaultAction() @ 0x790166 testing::internal::FunctionMockerBase<>::UntypedPerformDefaultAction() @ 0x9c3daa testing::internal::UntypedFunctionMockerBase::UntypedInvokeWith() @ 0x787279 mesos::internal::tests::MockAuthorizer::authorize() @ 0x2ba1157c133d mesos::internal::master::Master::validate() @ 0x2ba1157c2b7a mesos::internal::master::Master::reregisterFramework() @ 0x2ba1157e0038 ProtobufProcess<>::handler2<>() @ 0x2ba1157dde89 std::tr1::_Function_handler<>::_M_invoke() @ 0x2ba1157b15f7 mesos::internal::master::Master::_visit() @ 0x2ba1157bfa3e mesos::internal::master::Master::visit() @ 0x2ba115caf5e7 process::ProcessManager::resume() @ 0x2ba115cb027c process::schedule() @ 0x2ba11784ce9a start_thread @ 0x2ba117b5731d (unknown) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1765","09/05/2014 17:54:39",5,"Use PID namespace to avoid freezing cgroup ""There is some known kernel issue when we freeze the whole cgroup upon OOM. Mesos probably can just use PID namespace so that we will only need to kill the """"init"""" of the pid namespace, instead of freezing all the processes and killing them one by one. 
But I am not quite sure if this would break the existing code.""","",0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1766","09/05/2014 18:24:31",2,"MasterAuthorizationTest.DuplicateRegistration test is flaky """""," [ RUN ] MasterAuthorizationTest.DuplicateRegistration Using temporary directory '/tmp/MasterAuthorizationTest_DuplicateRegistration_pVJg7m' I0905 15:53:16.398993 25769 leveldb.cpp:176] Opened db in 2.601036ms I0905 15:53:16.399566 25769 leveldb.cpp:183] Compacted db in 546216ns I0905 15:53:16.399590 25769 leveldb.cpp:198] Created db iterator in 2787ns I0905 15:53:16.399605 25769 leveldb.cpp:204] Seeked to beginning of db in 500ns I0905 15:53:16.399617 25769 leveldb.cpp:273] Iterated through 0 keys in the db in 185ns I0905 15:53:16.399633 25769 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0905 15:53:16.399817 25786 recover.cpp:425] Starting replica recovery I0905 15:53:16.399952 25793 recover.cpp:451] Replica is in EMPTY status I0905 15:53:16.400683 25795 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I0905 15:53:16.400795 25787 recover.cpp:188] Received a recover response from a replica in EMPTY status I0905 15:53:16.401005 25783 recover.cpp:542] Updating replica status to STARTING I0905 15:53:16.401470 25786 master.cpp:286] Master 20140905-155316-3125920579-49188-25769 (penates.apache.org) started on 67.195.81.186:49188 I0905 15:53:16.401521 25786 master.cpp:332] Master only allowing authenticated frameworks to register I0905 15:53:16.401533 25786 master.cpp:337] Master only allowing authenticated slaves to register I0905 15:53:16.401543 25786 credentials.hpp:36] Loading credentials for authentication from '/tmp/MasterAuthorizationTest_DuplicateRegistration_pVJg7m/credentials' I0905 15:53:16.401558 25793 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 474683ns I0905 15:53:16.401582 25793 replica.cpp:320] Persisted replica status to STARTING I0905 15:53:16.401667 25793 recover.cpp:451] Replica is in STARTING status I0905 15:53:16.401669 25786 master.cpp:366] Authorization enabled I0905 15:53:16.401898 25795 master.cpp:120] No whitelist given. Advertising offers for all slaves I0905 15:53:16.401936 25796 hierarchical_allocator_process.hpp:299] Initializing hierarchical allocator process with master : master@67.195.81.186:49188 I0905 15:53:16.402160 25784 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I0905 15:53:16.402333 25790 master.cpp:1205] The newly elected leader is master@67.195.81.186:49188 with id 20140905-155316-3125920579-49188-25769 I0905 15:53:16.402359 25790 master.cpp:1218] Elected as the leading master! 
I0905 15:53:16.402371 25790 master.cpp:1036] Recovering from registrar I0905 15:53:16.402472 25798 registrar.cpp:313] Recovering registrar I0905 15:53:16.402529 25791 recover.cpp:188] Received a recover response from a replica in STARTING status I0905 15:53:16.402782 25788 recover.cpp:542] Updating replica status to VOTING I0905 15:53:16.403002 25795 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 116403ns I0905 15:53:16.403020 25795 replica.cpp:320] Persisted replica status to VOTING I0905 15:53:16.403081 25791 recover.cpp:556] Successfully joined the Paxos group I0905 15:53:16.403197 25791 recover.cpp:440] Recover process terminated I0905 15:53:16.403388 25796 log.cpp:656] Attempting to start the writer I0905 15:53:16.403993 25784 replica.cpp:474] Replica received implicit promise request with proposal 1 I0905 15:53:16.404147 25784 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 132156ns I0905 15:53:16.404167 25784 replica.cpp:342] Persisted promised to 1 I0905 15:53:16.404542 25795 coordinator.cpp:230] Coordinator attemping to fill missing position I0905 15:53:16.405498 25787 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I0905 15:53:16.405868 25787 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 347231ns I0905 15:53:16.405886 25787 replica.cpp:676] Persisted action at 0 I0905 15:53:16.406553 25788 replica.cpp:508] Replica received write request for position 0 I0905 15:53:16.406582 25788 leveldb.cpp:438] Reading position from leveldb took 11402ns I0905 15:53:16.529067 25788 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 535803ns I0905 15:53:16.529088 25788 replica.cpp:676] Persisted action at 0 I0905 15:53:16.529355 25784 replica.cpp:655] Replica received learned notice for position 0 I0905 15:53:16.529784 25784 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 406036ns I0905 15:53:16.529806 25784 replica.cpp:676] Persisted action at 0 I0905 15:53:16.529817 25784 replica.cpp:661] Replica learned NOP action at position 0 I0905 15:53:16.530108 25783 log.cpp:672] Writer started with ending position 0 I0905 15:53:16.530597 25792 leveldb.cpp:438] Reading position from leveldb took 14594ns I0905 15:53:16.532060 25787 registrar.cpp:346] Successfully fetched the registry (0B) I0905 15:53:16.532091 25787 registrar.cpp:422] Attempting to update the 'registry' I0905 15:53:16.533537 25785 log.cpp:680] Attempting to append 140 bytes to the log I0905 15:53:16.533596 25785 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0905 15:53:16.533998 25798 replica.cpp:508] Replica received write request for position 1 I0905 15:53:16.534397 25798 leveldb.cpp:343] Persisting action (159 bytes) to leveldb took 372452ns I0905 15:53:16.534416 25798 replica.cpp:676] Persisted action at 1 I0905 15:53:16.534808 25793 replica.cpp:655] Replica received learned notice for position 1 I0905 15:53:16.534996 25793 leveldb.cpp:343] Persisting action (161 bytes) to leveldb took 164609ns I0905 15:53:16.535014 25793 replica.cpp:676] Persisted action at 1 I0905 15:53:16.535025 25793 replica.cpp:661] Replica learned APPEND action at position 1 I0905 15:53:16.535368 25784 registrar.cpp:479] Successfully updated 'registry' I0905 15:53:16.535419 25784 registrar.cpp:372] Successfully recovered registrar I0905 15:53:16.535452 25785 log.cpp:699] Attempting to truncate the log to 1 I0905 15:53:16.535555 25791 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 
I0905 15:53:16.535553 25792 master.cpp:1063] Recovered 0 slaves from the Registry (102B) ; allowing 10mins for slaves to re-register I0905 15:53:16.536038 25784 replica.cpp:508] Replica received write request for position 2 I0905 15:53:16.536166 25784 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 101619ns I0905 15:53:16.536185 25784 replica.cpp:676] Persisted action at 2 I0905 15:53:16.536497 25791 replica.cpp:655] Replica received learned notice for position 2 I0905 15:53:16.536633 25791 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 109281ns I0905 15:53:16.536664 25791 leveldb.cpp:401] Deleting ~1 keys from leveldb took 14164ns I0905 15:53:16.536677 25791 replica.cpp:676] Persisted action at 2 I0905 15:53:16.536689 25791 replica.cpp:661] Replica learned TRUNCATE action at position 2 I0905 15:53:16.548408 25769 sched.cpp:137] Version: 0.21.0 I0905 15:53:16.548627 25792 sched.cpp:233] New master detected at master@67.195.81.186:49188 I0905 15:53:16.548653 25792 sched.cpp:283] Authenticating with master master@67.195.81.186:49188 I0905 15:53:16.548857 25797 authenticatee.hpp:128] Creating new client SASL connection I0905 15:53:16.548950 25797 master.cpp:3637] Authenticating scheduler-33430370-6af5-4c7b-bbd8-f6a43269ecf5@67.195.81.186:49188 I0905 15:53:16.549041 25797 authenticator.hpp:156] Creating new server SASL connection I0905 15:53:16.549120 25797 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0905 15:53:16.549141 25797 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0905 15:53:16.549180 25797 authenticator.hpp:262] Received SASL authentication start I0905 15:53:16.549229 25797 authenticator.hpp:384] Authentication requires more steps I0905 15:53:16.549268 25797 authenticatee.hpp:265] Received SASL authentication step I0905 15:53:16.549351 25787 authenticator.hpp:290] Received SASL authentication step I0905 15:53:16.549378 25787 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'penates.apache.org' server FQDN: 'penates.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0905 15:53:16.549391 25787 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0905 15:53:16.549403 25787 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0905 15:53:16.549415 25787 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'penates.apache.org' server FQDN: 'penates.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0905 15:53:16.549424 25787 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0905 15:53:16.549432 25787 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0905 15:53:16.549448 25787 authenticator.hpp:376] Authentication success I0905 15:53:16.549489 25787 authenticatee.hpp:305] Authentication success I0905 15:53:16.549525 25787 master.cpp:3677] Successfully authenticated principal 'test-principal' at scheduler-33430370-6af5-4c7b-bbd8-f6a43269ecf5@67.195.81.186:49188 I0905 15:53:16.549669 25783 sched.cpp:357] Successfully authenticated with master master@67.195.81.186:49188 I0905 15:53:16.549690 25783 sched.cpp:476] Sending registration request to master@67.195.81.186:49188 I0905 15:53:16.549751 25787 master.cpp:1324] Received registration request from 
scheduler-33430370-6af5-4c7b-bbd8-f6a43269ecf5@67.195.81.186:49188 I0905 15:53:16.549782 25787 master.cpp:1284] Authorizing framework principal 'test-principal' to receive offers for role '*' I0905 15:53:16.551250 25791 sched.cpp:233] New master detected at master@67.195.81.186:49188 I0905 15:53:16.551273 25791 sched.cpp:283] Authenticating with master master@67.195.81.186:49188 I0905 15:53:16.551357 25788 authenticatee.hpp:128] Creating new client SASL connection I0905 15:53:16.551456 25791 master.cpp:3637] Authenticating scheduler-33430370-6af5-4c7b-bbd8-f6a43269ecf5@67.195.81.186:49188 I0905 15:53:16.551553 25788 authenticator.hpp:156] Creating new server SASL connection I0905 15:53:16.551673 25786 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0905 15:53:16.551697 25786 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0905 15:53:16.551755 25792 authenticator.hpp:262] Received SASL authentication start I0905 15:53:16.551808 25792 authenticator.hpp:384] Authentication requires more steps I0905 15:53:16.551856 25792 authenticatee.hpp:265] Received SASL authentication step I0905 15:53:16.551920 25786 authenticator.hpp:290] Received SASL authentication step I0905 15:53:16.551949 25786 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'penates.apache.org' server FQDN: 'penates.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0905 15:53:16.551966 25786 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0905 15:53:16.551985 25786 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0905 15:53:16.551997 25786 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'penates.apache.org' server FQDN: 'penates.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0905 15:53:16.552006 25786 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0905 15:53:16.552014 25786 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0905 15:53:16.552031 25786 authenticator.hpp:376] Authentication success I0905 15:53:16.552081 25792 authenticatee.hpp:305] Authentication success I0905 15:53:16.552100 25786 master.cpp:3677] Successfully authenticated principal 'test-principal' at scheduler-33430370-6af5-4c7b-bbd8-f6a43269ecf5@67.195.81.186:49188 I0905 15:53:16.552249 25792 sched.cpp:357] Successfully authenticated with master master@67.195.81.186:49188 I0905 15:53:17.402861 25793 hierarchical_allocator_process.hpp:697] No resources available to allocate! 
I0905 15:53:18.874348 25792 sched.cpp:476] Sending registration request to master@67.195.81.186:49188 I0905 15:53:18.874364 25793 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 1.471501003secs I0905 15:53:18.874420 25792 sched.cpp:476] Sending registration request to master@67.195.81.186:49188 I0905 15:53:18.874451 25793 master.cpp:1324] Received registration request from scheduler-33430370-6af5-4c7b-bbd8-f6a43269ecf5@67.195.81.186:49188 I0905 15:53:18.874480 25793 master.cpp:1284] Authorizing framework principal 'test-principal' to receive offers for role '*' I0905 15:53:18.874565 25793 master.cpp:1324] Received registration request from scheduler-33430370-6af5-4c7b-bbd8-f6a43269ecf5@67.195.81.186:49188 I0905 15:53:18.874588 25793 master.cpp:1284] Authorizing framework principal 'test-principal' to receive offers for role '*' : Failure Mock function called more times than expected - returning default value. Function call: authorize(@0x2b9ed7fe9350 40-byte object <90-BA B4-D4 9E-2B 00-00 00-00 00-00 00-00 00-00 A0-FA 06-F4 9E-2B 00-00 80-17 09-F4 9E-2B 00-00 00-00 00-00 03-00 00-00>) The mock function has no default action set, and its return type has no default value set. *** Aborted at 1409932398 (unix time) try """"date -d @1409932398"""" if you are using GNU date *** PC: @ 0x2b9ed6233f79 (unknown) *** SIGABRT (@0x95c000064a9) received by PID 25769 (TID 0x2b9ed7fea700) from PID 25769; stack trace: *** @ 0x2b9ed5fef340 (unknown) @ 0x2b9ed6233f79 (unknown) @ 0x2b9ed6237388 (unknown) @ 0x93a5ec testing::internal::GoogleTestFailureReporter::ReportFailure() @ 0x7296c5 testing::internal::FunctionMockerBase<>::UntypedPerformDefaultAction() @ 0x933094 testing::internal::UntypedFunctionMockerBase::UntypedInvokeWith() @ 0x71fbde mesos::internal::tests::MockAuthorizer::authorize() @ 0x2b9ed4038caf mesos::internal::master::Master::validate() @ 0x2b9ed4039763 mesos::internal::master::Master::registerFramework() @ 0x2b9ed40a0c0f ProtobufProcess<>::handler1<>() @ 0x2b9ed4050c57 std::_Function_handler<>::_M_invoke() @ 0x2b9ed407d202 ProtobufProcess<>::visit() @ 0x2b9ed402af1a mesos::internal::master::Master::_visit() @ 0x2b9ed4037eb8 mesos::internal::master::Master::visit() @ 0x2b9ed44cb792 process::ProcessManager::resume() @ 0x2b9ed44cba9c process::schedule() @ 0x2b9ed5fe7182 start_thread @ 0x2b9ed62f830d (unknown) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1778","09/09/2014 16:19:34",3,"Provide an option to validate flag value in stout/flags. ""Currently we can provide the default value for a flag, but cannot check if the flag is set to a reasonable value and, e.g., issue a warning. 
Passing an optional lambda checker to {{FlagBase::add()}} can be a possible solution.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1782","09/10/2014 00:15:48",1,"AllocatorTest/0.FrameworkExited is flaky ""{noformat:title=} [ RUN ] AllocatorTest/0.FrameworkExited Using temporary directory '/tmp/AllocatorTest_0_FrameworkExited_B6WZng' I0909 08:02:35.116555 18112 leveldb.cpp:176] Opened db in 31.64686ms I0909 08:02:35.126065 18112 leveldb.cpp:183] Compacted db in 9.449823ms I0909 08:02:35.126118 18112 leveldb.cpp:198] Created db iterator in 5858ns I0909 08:02:35.126137 18112 leveldb.cpp:204] Seeked to beginning of db in 1136ns I0909 08:02:35.126150 18112 leveldb.cpp:273] Iterated through 0 keys in the db in 560ns I0909 08:02:35.126178 18112 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0909 08:02:35.126502 18133 recover.cpp:425] Starting replica recovery I0909 08:02:35.126601 18133 recover.cpp:451] Replica is in EMPTY status I0909 08:02:35.127012 18133 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I0909 08:02:35.127094 18133 recover.cpp:188] Received a recover response from a replica in EMPTY status I0909 08:02:35.127223 18133 recover.cpp:542] Updating replica status to STARTING I0909 08:02:35.226631 18133 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 99.308134ms I0909 08:02:35.226690 18133 replica.cpp:320] Persisted replica status to STARTING I0909 08:02:35.226812 18131 recover.cpp:451] Replica is in STARTING status I0909 08:02:35.227246 18131 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I0909 08:02:35.227308 18131 recover.cpp:188] Received a recover response from a replica in STARTING status I0909 08:02:35.227409 18131 recover.cpp:542] Updating replica status to VOTING I0909 08:02:35.228540 18129 master.cpp:286] Master 20140909-080235-16842879-44005-18112 (precise) started on 127.0.1.1:44005 I0909 08:02:35.228593 18129 master.cpp:332] Master only allowing authenticated frameworks to register I0909 08:02:35.228607 18129 master.cpp:337] Master only allowing authenticated slaves to register I0909 08:02:35.228620 18129 credentials.hpp:36] Loading credentials for authentication from '/tmp/AllocatorTest_0_FrameworkExited_B6WZng/credentials' I0909 08:02:35.228754 18129 master.cpp:366] Authorization enabled I0909 08:02:35.229560 18129 master.cpp:120] No whitelist given. Advertising offers for all slaves I0909 08:02:35.229933 18129 hierarchical_allocator_process.hpp:299] Initializing hierarchical allocator process with master : master@127.0.1.1:44005 I0909 08:02:35.230057 18127 master.cpp:1212] The newly elected leader is master@127.0.1.1:44005 with id 20140909-080235-16842879-44005-18112 I0909 08:02:35.230129 18127 master.cpp:1225] Elected as the leading master! 
I0909 08:02:35.230144 18127 master.cpp:1043] Recovering from registrar I0909 08:02:35.230257 18127 registrar.cpp:313] Recovering registrar I0909 08:02:35.232461 18131 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 4.999384ms I0909 08:02:35.232489 18131 replica.cpp:320] Persisted replica status to VOTING I0909 08:02:35.232544 18131 recover.cpp:556] Successfully joined the Paxos group I0909 08:02:35.232611 18131 recover.cpp:440] Recover process terminated I0909 08:02:35.232727 18131 log.cpp:656] Attempting to start the writer I0909 08:02:35.233012 18131 replica.cpp:474] Replica received implicit promise request with proposal 1 I0909 08:02:35.238785 18131 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 5.749504ms I0909 08:02:35.238818 18131 replica.cpp:342] Persisted promised to 1 I0909 08:02:35.244056 18131 coordinator.cpp:230] Coordinator attemping to fill missing position I0909 08:02:35.244580 18131 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I0909 08:02:35.250143 18131 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 5.382351ms I0909 08:02:35.250319 18131 replica.cpp:676] Persisted action at 0 I0909 08:02:35.250901 18131 replica.cpp:508] Replica received write request for position 0 I0909 08:02:35.251137 18131 leveldb.cpp:438] Reading position from leveldb took 18689ns I0909 08:02:35.256597 18131 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 5.274169ms I0909 08:02:35.256764 18131 replica.cpp:676] Persisted action at 0 I0909 08:02:35.263712 18126 replica.cpp:655] Replica received learned notice for position 0 I0909 08:02:35.269613 18126 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 5.417225ms I0909 08:02:35.351641 18126 replica.cpp:676] Persisted action at 0 I0909 08:02:35.351655 18126 replica.cpp:661] Replica learned NOP action at position 0 I0909 08:02:35.351889 18126 log.cpp:672] Writer started with ending position 0 I0909 08:02:35.352165 18126 leveldb.cpp:438] Reading position from leveldb took 25215ns I0909 08:02:35.353163 18126 registrar.cpp:346] Successfully fetched the registry (0B) I0909 08:02:35.353185 18126 registrar.cpp:422] Attempting to update the 'registry' I0909 08:02:35.354152 18126 log.cpp:680] Attempting to append 120 bytes to the log I0909 08:02:35.354195 18126 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0909 08:02:35.354416 18126 replica.cpp:508] Replica received write request for position 1 I0909 08:02:35.351579 18127 hierarchical_allocator_process.hpp:697] No resources available to allocate! 
I0909 08:02:35.354558 18127 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 2.984795ms I0909 08:02:35.360254 18126 leveldb.cpp:343] Persisting action (137 bytes) to leveldb took 5.811986ms I0909 08:02:35.360285 18126 replica.cpp:676] Persisted action at 1 I0909 08:02:35.364126 18132 replica.cpp:655] Replica received learned notice for position 1 I0909 08:02:35.369856 18132 leveldb.cpp:343] Persisting action (139 bytes) to leveldb took 5.702756ms I0909 08:02:35.369899 18132 replica.cpp:676] Persisted action at 1 I0909 08:02:35.369910 18132 replica.cpp:661] Replica learned APPEND action at position 1 I0909 08:02:35.370209 18132 registrar.cpp:479] Successfully updated 'registry' I0909 08:02:35.370311 18132 registrar.cpp:372] Successfully recovered registrar I0909 08:02:35.370477 18132 log.cpp:699] Attempting to truncate the log to 1 I0909 08:02:35.370553 18132 master.cpp:1070] Recovered 0 slaves from the Registry (84B) ; allowing 10mins for slaves to re-register I0909 08:02:35.370594 18132 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0909 08:02:35.371201 18127 replica.cpp:508] Replica received write request for position 2 I0909 08:02:35.376760 18127 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 5.264501ms I0909 08:02:35.377105 18127 replica.cpp:676] Persisted action at 2 I0909 08:02:35.377770 18127 replica.cpp:655] Replica received learned notice for position 2 I0909 08:02:35.383363 18127 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 5.272769ms I0909 08:02:35.383818 18127 leveldb.cpp:401] Deleting ~1 keys from leveldb took 28148ns I0909 08:02:35.384137 18127 replica.cpp:676] Persisted action at 2 I0909 08:02:35.384399 18127 replica.cpp:661] Replica learned TRUNCATE action at position 2 I0909 08:02:35.396512 18127 slave.cpp:167] Slave started on 64)@127.0.1.1:44005 I0909 08:02:35.654770 18131 hierarchical_allocator_process.hpp:697] No resources available to allocate! 
I0909 08:02:35.654847 18131 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 104933ns I0909 08:02:35.654974 18127 credentials.hpp:84] Loading credential for authentication from '/tmp/AllocatorTest_0_FrameworkExited_xV9Mk4/credential' I0909 08:02:35.655097 18127 slave.cpp:274] Slave using credential for: test-principal I0909 08:02:35.655203 18127 slave.cpp:287] Slave resources: cpus(*):3; mem(*):1024; disk(*):25116; ports(*):[31000-32000] I0909 08:02:35.655274 18127 slave.cpp:315] Slave hostname: precise I0909 08:02:35.655285 18127 slave.cpp:316] Slave checkpoint: false I0909 08:02:35.655804 18127 state.cpp:33] Recovering state from '/tmp/AllocatorTest_0_FrameworkExited_xV9Mk4/meta' I0909 08:02:35.655913 18127 status_update_manager.cpp:193] Recovering status update manager I0909 08:02:35.656005 18127 slave.cpp:3202] Finished recovery I0909 08:02:35.656251 18127 slave.cpp:598] New master detected at master@127.0.1.1:44005 I0909 08:02:35.656285 18127 slave.cpp:672] Authenticating with master master@127.0.1.1:44005 I0909 08:02:35.656325 18127 slave.cpp:645] Detecting new master I0909 08:02:35.656358 18127 status_update_manager.cpp:167] New master detected at master@127.0.1.1:44005 I0909 08:02:35.656389 18127 authenticatee.hpp:128] Creating new client SASL connection I0909 08:02:35.656563 18127 master.cpp:3653] Authenticating slave(64)@127.0.1.1:44005 I0909 08:02:35.656651 18127 authenticator.hpp:156] Creating new server SASL connection I0909 08:02:35.656770 18127 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0909 08:02:35.656796 18127 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0909 08:02:35.656822 18127 authenticator.hpp:262] Received SASL authentication start I0909 08:02:35.656858 18127 authenticator.hpp:384] Authentication requires more steps I0909 08:02:35.656883 18127 authenticatee.hpp:265] Received SASL authentication step I0909 08:02:35.656924 18127 authenticator.hpp:290] Received SASL authentication step I0909 08:02:35.656960 18127 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'precise' server FQDN: 'precise' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0909 08:02:35.656971 18127 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0909 08:02:35.656982 18127 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0909 08:02:35.656997 18127 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'precise' server FQDN: 'precise' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0909 08:02:35.657004 18127 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0909 08:02:35.657008 18127 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0909 08:02:35.657019 18127 authenticator.hpp:376] Authentication success I0909 08:02:35.657047 18127 authenticatee.hpp:305] Authentication success I0909 08:02:35.657073 18127 master.cpp:3693] Successfully authenticated principal 'test-principal' at slave(64)@127.0.1.1:44005 I0909 08:02:35.657145 18127 slave.cpp:729] Successfully authenticated with master master@127.0.1.1:44005 I0909 08:02:35.657183 18127 slave.cpp:980] Will retry registration in 19.238717ms if necessary I0909 08:02:35.657276 18128 master.cpp:2843] Registering slave at slave(64)@127.0.1.1:44005 (precise) with id 
20140909-080235-16842879-44005-18112-0 I0909 08:02:35.657389 18128 registrar.cpp:422] Attempting to update the 'registry' I0909 08:02:35.658382 18130 log.cpp:680] Attempting to append 295 bytes to the log I0909 08:02:35.658432 18130 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I0909 08:02:35.658635 18130 replica.cpp:508] Replica received write request for position 3 I0909 08:02:35.660959 18112 sched.cpp:137] Version: 0.21.0 I0909 08:02:35.661093 18126 sched.cpp:233] New master detected at master@127.0.1.1:44005 I0909 08:02:35.661111 18126 sched.cpp:283] Authenticating with master master@127.0.1.1:44005 I0909 08:02:35.661175 18126 authenticatee.hpp:128] Creating new client SASL connection I0909 08:02:35.661306 18126 master.cpp:3653] Authenticating scheduler-fd929918-7057-4fef-923a-ed9d6fd355be@127.0.1.1:44005 I0909 08:02:35.661376 18126 authenticator.hpp:156] Creating new server SASL connection I0909 08:02:35.661466 18126 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0909 08:02:35.661483 18126 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0909 08:02:35.661504 18126 authenticator.hpp:262] Received SASL authentication start I0909 08:02:35.661530 18126 authenticator.hpp:384] Authentication requires more steps I0909 08:02:35.661552 18126 authenticatee.hpp:265] Received SASL authentication step I0909 08:02:35.661579 18126 authenticator.hpp:290] Received SASL authentication step I0909 08:02:35.661592 18126 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'precise' server FQDN: 'precise' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0909 08:02:35.661598 18126 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0909 08:02:35.661607 18126 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0909 08:02:35.661613 18126 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'precise' server FQDN: 'precise' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0909 08:02:35.661619 18126 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0909 08:02:35.661623 18126 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0909 08:02:35.661633 18126 authenticator.hpp:376] Authentication success I0909 08:02:35.661653 18126 authenticatee.hpp:305] Authentication success I0909 08:02:35.661672 18126 master.cpp:3693] Successfully authenticated principal 'test-principal' at scheduler-fd929918-7057-4fef-923a-ed9d6fd355be@127.0.1.1:44005 I0909 08:02:35.661730 18126 sched.cpp:357] Successfully authenticated with master master@127.0.1.1:44005 I0909 08:02:35.661741 18126 sched.cpp:476] Sending registration request to master@127.0.1.1:44005 I0909 08:02:35.661782 18126 master.cpp:1331] Received registration request from scheduler-fd929918-7057-4fef-923a-ed9d6fd355be@127.0.1.1:44005 I0909 08:02:35.661798 18126 master.cpp:1291] Authorizing framework principal 'test-principal' to receive offers for role '*' I0909 08:02:35.661917 18126 master.cpp:1390] Registering framework 20140909-080235-16842879-44005-18112-0000 at scheduler-fd929918-7057-4fef-923a-ed9d6fd355be@127.0.1.1:44005 I0909 08:02:35.662017 18126 sched.cpp:407] Framework registered with 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.662039 18126 sched.cpp:421] Scheduler::registered took 
9070ns I0909 08:02:35.662119 18126 hierarchical_allocator_process.hpp:329] Added framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.662130 18126 hierarchical_allocator_process.hpp:697] No resources available to allocate! I0909 08:02:35.662135 18126 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 5558ns I0909 08:02:35.672230 18130 leveldb.cpp:343] Persisting action (314 bytes) to leveldb took 13.567526ms I0909 08:02:35.672268 18130 replica.cpp:676] Persisted action at 3 I0909 08:02:35.672483 18130 replica.cpp:655] Replica received learned notice for position 3 I0909 08:02:35.677322 18132 slave.cpp:980] Will retry registration in 14.890338ms if necessary I0909 08:02:35.677399 18132 master.cpp:2831] Ignoring register slave message from slave(64)@127.0.1.1:44005 (precise) as admission is already in progress I0909 08:02:35.680881 18130 leveldb.cpp:343] Persisting action (316 bytes) to leveldb took 8.376798ms I0909 08:02:35.680908 18130 replica.cpp:676] Persisted action at 3 I0909 08:02:35.680917 18130 replica.cpp:661] Replica learned APPEND action at position 3 I0909 08:02:35.681252 18130 registrar.cpp:479] Successfully updated 'registry' I0909 08:02:35.681330 18130 log.cpp:699] Attempting to truncate the log to 3 I0909 08:02:35.681385 18130 master.cpp:2883] Registered slave 20140909-080235-16842879-44005-18112-0 at slave(64)@127.0.1.1:44005 (precise) I0909 08:02:35.681399 18130 master.cpp:4126] Adding slave 20140909-080235-16842879-44005-18112-0 at slave(64)@127.0.1.1:44005 (precise) with cpus(*):3; mem(*):1024; disk(*):25116; ports(*):[31000-32000] I0909 08:02:35.681504 18130 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I0909 08:02:35.681570 18130 slave.cpp:763] Registered with master master@127.0.1.1:44005; given slave ID 20140909-080235-16842879-44005-18112-0 I0909 08:02:35.681689 18130 slave.cpp:2329] Received ping from slave-observer(50)@127.0.1.1:44005 I0909 08:02:35.681753 18130 hierarchical_allocator_process.hpp:442] Added slave 20140909-080235-16842879-44005-18112-0 (precise) with cpus(*):3; mem(*):1024; disk(*):25116; ports(*):[31000-32000] (and cpus(*):3; mem(*):1024; disk(*):25116; ports(*):[31000-32000] available) I0909 08:02:35.681808 18130 hierarchical_allocator_process.hpp:734] Offering cpus(*):3; mem(*):1024; disk(*):25116; ports(*):[31000-32000] on slave 20140909-080235-16842879-44005-18112-0 to framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.681892 18130 hierarchical_allocator_process.hpp:679] Performed allocation for slave 20140909-080235-16842879-44005-18112-0 in 109580ns I0909 08:02:35.681968 18130 master.hpp:861] Adding offer 20140909-080235-16842879-44005-18112-0 with resources cpus(*):3; mem(*):1024; disk(*):25116; ports(*):[31000-32000] on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.682014 18130 master.cpp:3600] Sending 1 offers to framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.682443 18130 sched.cpp:544] Scheduler::resourceOffers took 254258ns I0909 08:02:35.682633 18130 master.hpp:871] Removing offer 20140909-080235-16842879-44005-18112-0 with resources cpus(*):3; mem(*):1024; disk(*):25116; ports(*):[31000-32000] on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.682684 18130 master.cpp:2201] Processing reply for offers: [ 20140909-080235-16842879-44005-18112-0 ] on slave 20140909-080235-16842879-44005-18112-0 at slave(64)@127.0.1.1:44005 (precise) for framework 
20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.682708 18130 master.cpp:2284] Authorizing framework principal 'test-principal' to launch task 0 as user 'jenkins' I0909 08:02:35.682971 18130 replica.cpp:508] Replica received write request for position 4 I0909 08:02:35.683132 18132 master.hpp:833] Adding task 0 with resources cpus(*):2; mem(*):512 on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.683159 18132 master.cpp:2350] Launching task 0 of framework 20140909-080235-16842879-44005-18112-0000 with resources cpus(*):2; mem(*):512 on slave 20140909-080235-16842879-44005-18112-0 at slave(64)@127.0.1.1:44005 (precise) I0909 08:02:35.683363 18132 slave.cpp:1011] Got assigned task 0 for framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.683580 18132 slave.cpp:1121] Launching task 0 for framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.684833 18133 hierarchical_allocator_process.hpp:563] Recovered cpus(*):1; mem(*):512; disk(*):25116; ports(*):[31000-32000] (total allocatable: cpus(*):1; mem(*):512; disk(*):25116; ports(*):[31000-32000]) on slave 20140909-080235-16842879-44005-18112-0 from framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.684864 18133 hierarchical_allocator_process.hpp:599] Framework 20140909-080235-16842879-44005-18112-0000 filtered slave 20140909-080235-16842879-44005-18112-0 for 5secs I0909 08:02:35.686401 18132 exec.cpp:132] Version: 0.21.0 I0909 08:02:35.686848 18128 exec.cpp:182] Executor started at: executor(8)@127.0.1.1:44005 with pid 18112 I0909 08:02:35.687095 18132 slave.cpp:1231] Queuing task '0' for executor executor-1 of framework '20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.687302 18132 slave.cpp:552] Successfully attached file '/tmp/AllocatorTest_0_FrameworkExited_xV9Mk4/slaves/20140909-080235-16842879-44005-18112-0/frameworks/20140909-080235-16842879-44005-18112-0000/executors/executor-1/runs/c4458e43-94ee-4b5e-bd74-5d39a09deff6' I0909 08:02:35.687568 18132 slave.cpp:1741] Got registration for executor 'executor-1' of framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.687893 18127 exec.cpp:206] Executor registered on slave 20140909-080235-16842879-44005-18112-0 I0909 08:02:35.688789 18127 exec.cpp:218] Executor::registered took 15015ns I0909 08:02:35.688977 18132 slave.cpp:1859] Flushing queued task 0 for executor 'executor-1' of framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.689260 18133 exec.cpp:293] Executor asked to run task '0' I0909 08:02:35.689441 18133 exec.cpp:302] Executor::launchTask took 24599ns I0909 08:02:35.691651 18112 sched.cpp:137] Version: 0.21.0 I0909 08:02:35.691946 18131 sched.cpp:233] New master detected at master@127.0.1.1:44005 I0909 08:02:35.692126 18131 sched.cpp:283] Authenticating with master master@127.0.1.1:44005 I0909 08:02:35.692399 18131 authenticatee.hpp:128] Creating new client SASL connection I0909 08:02:35.692791 18131 master.cpp:3653] Authenticating scheduler-6e711fc6-aad6-48bd-9ce7-2316b45c5482@127.0.1.1:44005 I0909 08:02:35.693068 18131 authenticator.hpp:156] Creating new server SASL connection I0909 08:02:35.693351 18131 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0909 08:02:35.693532 18131 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0909 08:02:35.693739 18131 authenticator.hpp:262] Received SASL authentication start I0909 08:02:35.693979 18131 authenticator.hpp:384] Authentication requires more steps I0909 08:02:35.694202 18131 
authenticatee.hpp:265] Received SASL authentication step I0909 08:02:35.694449 18131 authenticator.hpp:290] Received SASL authentication step I0909 08:02:35.694633 18131 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'precise' server FQDN: 'precise' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0909 08:02:35.694792 18131 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0909 08:02:35.694980 18131 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0909 08:02:35.695158 18131 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'precise' server FQDN: 'precise' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0909 08:02:35.695369 18131 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0909 08:02:35.695724 18131 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0909 08:02:35.695907 18131 authenticator.hpp:376] Authentication success I0909 08:02:35.696117 18128 authenticatee.hpp:305] Authentication success I0909 08:02:35.698509 18130 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 15.520863ms I0909 08:02:35.698698 18130 replica.cpp:676] Persisted action at 4 I0909 08:02:35.698940 18128 sched.cpp:357] Successfully authenticated with master master@127.0.1.1:44005 I0909 08:02:35.699095 18126 master.cpp:3693] Successfully authenticated principal 'test-principal' at scheduler-6e711fc6-aad6-48bd-9ce7-2316b45c5482@127.0.1.1:44005 I0909 08:02:35.699354 18130 replica.cpp:655] Replica received learned notice for position 4 I0909 08:02:35.699973 18128 sched.cpp:476] Sending registration request to master@127.0.1.1:44005 I0909 08:02:35.700265 18128 master.cpp:1331] Received registration request from scheduler-6e711fc6-aad6-48bd-9ce7-2316b45c5482@127.0.1.1:44005 I0909 08:02:35.700515 18128 master.cpp:1291] Authorizing framework principal 'test-principal' to receive offers for role '*' I0909 08:02:35.700809 18128 master.cpp:1390] Registering framework 20140909-080235-16842879-44005-18112-0001 at scheduler-6e711fc6-aad6-48bd-9ce7-2316b45c5482@127.0.1.1:44005 I0909 08:02:35.701037 18133 sched.cpp:407] Framework registered with 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.701211 18133 sched.cpp:421] Scheduler::registered took 11991ns I0909 08:02:35.701488 18131 hierarchical_allocator_process.hpp:329] Added framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.701728 18131 hierarchical_allocator_process.hpp:734] Offering cpus(*):1; mem(*):512; disk(*):25116; ports(*):[31000-32000] on slave 20140909-080235-16842879-44005-18112-0 to framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.701992 18131 hierarchical_allocator_process.hpp:659] Performed allocation for 1 slaves in 297969ns I0909 08:02:35.702229 18128 master.hpp:861] Adding offer 20140909-080235-16842879-44005-18112-1 with resources cpus(*):1; mem(*):512; disk(*):25116; ports(*):[31000-32000] on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.702481 18128 master.cpp:3600] Sending 1 offers to framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.702901 18129 sched.cpp:544] Scheduler::resourceOffers took 127949ns I0909 08:02:35.703305 18128 master.hpp:871] Removing offer 20140909-080235-16842879-44005-18112-1 with resources cpus(*):1; mem(*):512; disk(*):25116; 
ports(*):[31000-32000] on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.703629 18128 master.cpp:2201] Processing reply for offers: [ 20140909-080235-16842879-44005-18112-1 ] on slave 20140909-080235-16842879-44005-18112-0 at slave(64)@127.0.1.1:44005 (precise) for framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.703908 18128 master.cpp:2284] Authorizing framework principal 'test-principal' to launch task 0 as user 'jenkins' I0909 08:02:35.703789 18132 slave.cpp:2542] Monitoring executor 'executor-1' of framework '20140909-080235-16842879-44005-18112-0000' in container 'c4458e43-94ee-4b5e-bd74-5d39a09deff6' I0909 08:02:35.704763 18128 master.hpp:833] Adding task 0 with resources cpus(*):1; mem(*):256 on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.704951 18128 master.cpp:2350] Launching task 0 of framework 20140909-080235-16842879-44005-18112-0001 with resources cpus(*):1; mem(*):256 on slave 20140909-080235-16842879-44005-18112-0 at slave(64)@127.0.1.1:44005 (precise) I0909 08:02:35.705255 18129 slave.cpp:1011] Got assigned task 0 for framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.705582 18129 slave.cpp:1121] Launching task 0 for framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.707756 18129 exec.cpp:132] Version: 0.21.0 I0909 08:02:35.708035 18130 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 8.127072ms I0909 08:02:35.708281 18130 leveldb.cpp:401] Deleting ~2 keys from leveldb took 28817ns I0909 08:02:35.708459 18130 replica.cpp:676] Persisted action at 4 I0909 08:02:35.708632 18130 replica.cpp:661] Replica learned TRUNCATE action at position 4 I0909 08:02:35.708869 18133 exec.cpp:182] Executor started at: executor(9)@127.0.1.1:44005 with pid 18112 I0909 08:02:35.709120 18127 hierarchical_allocator_process.hpp:659] Performed allocation for 1 slaves in 35083ns I0909 08:02:35.709511 18129 slave.cpp:1231] Queuing task '0' for executor executor-2 of framework '20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.709707 18129 slave.cpp:552] Successfully attached file '/tmp/AllocatorTest_0_FrameworkExited_xV9Mk4/slaves/20140909-080235-16842879-44005-18112-0/frameworks/20140909-080235-16842879-44005-18112-0001/executors/executor-2/runs/7654870b-fd36-40b2-aac7-37b1bcfa821e' I0909 08:02:35.709913 18129 slave.cpp:1741] Got registration for executor 'executor-2' of framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.710188 18129 slave.cpp:1859] Flushing queued task 0 for executor 'executor-2' of framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.710516 18129 slave.cpp:2542] Monitoring executor 'executor-2' of framework '20140909-080235-16842879-44005-18112-0001' in container '7654870b-fd36-40b2-aac7-37b1bcfa821e' I0909 08:02:35.710321 18130 exec.cpp:206] Executor registered on slave 20140909-080235-16842879-44005-18112-0 I0909 08:02:35.711678 18130 exec.cpp:218] Executor::registered took 14355ns I0909 08:02:35.711987 18130 exec.cpp:293] Executor asked to run task '0' I0909 08:02:35.715551 18130 exec.cpp:302] Executor::launchTask took 3.40476ms I0909 08:02:35.716006 18131 sched.cpp:745] Stopping framework '20140909-080235-16842879-44005-18112-0000' I0909 08:02:35.716292 18128 master.cpp:1640] Asked to unregister framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.716490 18127 hierarchical_allocator_process.hpp:563] Recovered mem(*):256; disk(*):25116; ports(*):[31000-32000] (total allocatable: mem(*):256; disk(*):25116; 
ports(*):[31000-32000]) on slave 20140909-080235-16842879-44005-18112-0 from framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.716792 18127 hierarchical_allocator_process.hpp:599] Framework 20140909-080235-16842879-44005-18112-0001 filtered slave 20140909-080235-16842879-44005-18112-0 for 5secs I0909 08:02:35.717018 18128 master.cpp:3976] Removing framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.717269 18128 master.hpp:851] Removing task 0 with resources cpus(*):2; mem(*):512 on slave 20140909-080235-16842879-44005-18112-0 (precise) W0909 08:02:35.717607 18128 master.cpp:4419] Removing task 0 of framework 20140909-080235-16842879-44005-18112-0000 and slave 20140909-080235-16842879-44005-18112-0 in non-terminal state TASK_STAGING I0909 08:02:35.717470 18131 hierarchical_allocator_process.hpp:405] Deactivated framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.718065 18131 hierarchical_allocator_process.hpp:563] Recovered cpus(*):2; mem(*):512 (total allocatable: mem(*):768; disk(*):25116; ports(*):[31000-32000]; cpus(*):2) on slave 20140909-080235-16842879-44005-18112-0 from framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.717438 18132 slave.cpp:1414] Asked to shut down framework 20140909-080235-16842879-44005-18112-0000 by master@127.0.1.1:44005 I0909 08:02:35.718444 18132 slave.cpp:1439] Shutting down framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.718621 18132 slave.cpp:2882] Shutting down executor 'executor-1' of framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.718843 18133 exec.cpp:379] Executor asked to shutdown I0909 08:02:35.719022 18133 exec.cpp:394] Executor::shutdown took 13745ns I0909 08:02:35.722009 18128 hierarchical_allocator_process.hpp:360] Removed framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.830785 18131 hierarchical_allocator_process.hpp:734] Offering mem(*):768; disk(*):25116; ports(*):[31000-32000]; cpus(*):2 on slave 20140909-080235-16842879-44005-18112-0 to framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.830940 18131 hierarchical_allocator_process.hpp:659] Performed allocation for 1 slaves in 218030ns I0909 08:02:35.831056 18127 master.hpp:861] Adding offer 20140909-080235-16842879-44005-18112-2 with resources mem(*):768; disk(*):25116; ports(*):[31000-32000]; cpus(*):2 on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.831115 18127 master.cpp:3600] Sending 1 offers to framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.831248 18127 sched.cpp:544] Scheduler::resourceOffers took 18178ns I0909 08:02:35.831387 18112 master.cpp:650] Master terminating I0909 08:02:35.831441 18112 master.hpp:851] Removing task 0 with resources cpus(*):1; mem(*):256 on slave 20140909-080235-16842879-44005-18112-0 (precise) W0909 08:02:35.831488 18112 master.cpp:4419] Removing task 0 of framework 20140909-080235-16842879-44005-18112-0001 and slave 20140909-080235-16842879-44005-18112-0 in non-terminal state TASK_STAGING I0909 08:02:35.831573 18112 master.hpp:871] Removing offer 20140909-080235-16842879-44005-18112-2 with resources mem(*):768; disk(*):25116; ports(*):[31000-32000]; cpus(*):2 on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.832608 18112 slave.cpp:475] Slave terminating I0909 08:02:35.832630 18112 slave.cpp:1414] Asked to shut down framework 20140909-080235-16842879-44005-18112-0000 by @0.0.0.0:0 W0909 08:02:35.832643 18112 slave.cpp:1435] Ignoring shutdown framework 
20140909-080235-16842879-44005-18112-0000 because it is terminating I0909 08:02:35.832648 18112 slave.cpp:1414] Asked to shut down framework 20140909-080235-16842879-44005-18112-0001 by @0.0.0.0:0 I0909 08:02:35.832654 18112 slave.cpp:1439] Shutting down framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.832664 18112 slave.cpp:2882] Shutting down executor 'executor-2' of framework 20140909-080235-16842879-44005-18112-0001 tests/allocator_tests.cpp:1444: Failure Actual function call count doesn't match EXPECT_CALL(this->allocator, resourcesRecovered(_, _, _, _))... Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] AllocatorTest/0.FrameworkExited, where TypeParam = mesos::internal::master::allocator::HierarchicalAllocatorProcess (756 ms) {noformat}"""," [ RUN ] AllocatorTest/0.FrameworkExited Using temporary directory '/tmp/AllocatorTest_0_FrameworkExited_B6WZng' I0909 08:02:35.116555 18112 leveldb.cpp:176] Opened db in 31.64686ms I0909 08:02:35.126065 18112 leveldb.cpp:183] Compacted db in 9.449823ms I0909 08:02:35.126118 18112 leveldb.cpp:198] Created db iterator in 5858ns I0909 08:02:35.126137 18112 leveldb.cpp:204] Seeked to beginning of db in 1136ns I0909 08:02:35.126150 18112 leveldb.cpp:273] Iterated through 0 keys in the db in 560ns I0909 08:02:35.126178 18112 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0909 08:02:35.126502 18133 recover.cpp:425] Starting replica recovery I0909 08:02:35.126601 18133 recover.cpp:451] Replica is in EMPTY status I0909 08:02:35.127012 18133 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I0909 08:02:35.127094 18133 recover.cpp:188] Received a recover response from a replica in EMPTY status I0909 08:02:35.127223 18133 recover.cpp:542] Updating replica status to STARTING I0909 08:02:35.226631 18133 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 99.308134ms I0909 08:02:35.226690 18133 replica.cpp:320] Persisted replica status to STARTING I0909 08:02:35.226812 18131 recover.cpp:451] Replica is in STARTING status I0909 08:02:35.227246 18131 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I0909 08:02:35.227308 18131 recover.cpp:188] Received a recover response from a replica in STARTING status I0909 08:02:35.227409 18131 recover.cpp:542] Updating replica status to VOTING I0909 08:02:35.228540 18129 master.cpp:286] Master 20140909-080235-16842879-44005-18112 (precise) started on 127.0.1.1:44005 I0909 08:02:35.228593 18129 master.cpp:332] Master only allowing authenticated frameworks to register I0909 08:02:35.228607 18129 master.cpp:337] Master only allowing authenticated slaves to register I0909 08:02:35.228620 18129 credentials.hpp:36] Loading credentials for authentication from '/tmp/AllocatorTest_0_FrameworkExited_B6WZng/credentials' I0909 08:02:35.228754 18129 master.cpp:366] Authorization enabled I0909 08:02:35.229560 18129 master.cpp:120] No whitelist given. Advertising offers for all slaves I0909 08:02:35.229933 18129 hierarchical_allocator_process.hpp:299] Initializing hierarchical allocator process with master : master@127.0.1.1:44005 I0909 08:02:35.230057 18127 master.cpp:1212] The newly elected leader is master@127.0.1.1:44005 with id 20140909-080235-16842879-44005-18112 I0909 08:02:35.230129 18127 master.cpp:1225] Elected as the leading master! 
I0909 08:02:35.230144 18127 master.cpp:1043] Recovering from registrar I0909 08:02:35.230257 18127 registrar.cpp:313] Recovering registrar I0909 08:02:35.232461 18131 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 4.999384ms I0909 08:02:35.232489 18131 replica.cpp:320] Persisted replica status to VOTING I0909 08:02:35.232544 18131 recover.cpp:556] Successfully joined the Paxos group I0909 08:02:35.232611 18131 recover.cpp:440] Recover process terminated I0909 08:02:35.232727 18131 log.cpp:656] Attempting to start the writer I0909 08:02:35.233012 18131 replica.cpp:474] Replica received implicit promise request with proposal 1 I0909 08:02:35.238785 18131 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 5.749504ms I0909 08:02:35.238818 18131 replica.cpp:342] Persisted promised to 1 I0909 08:02:35.244056 18131 coordinator.cpp:230] Coordinator attemping to fill missing position I0909 08:02:35.244580 18131 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I0909 08:02:35.250143 18131 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 5.382351ms I0909 08:02:35.250319 18131 replica.cpp:676] Persisted action at 0 I0909 08:02:35.250901 18131 replica.cpp:508] Replica received write request for position 0 I0909 08:02:35.251137 18131 leveldb.cpp:438] Reading position from leveldb took 18689ns I0909 08:02:35.256597 18131 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 5.274169ms I0909 08:02:35.256764 18131 replica.cpp:676] Persisted action at 0 I0909 08:02:35.263712 18126 replica.cpp:655] Replica received learned notice for position 0 I0909 08:02:35.269613 18126 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 5.417225ms I0909 08:02:35.351641 18126 replica.cpp:676] Persisted action at 0 I0909 08:02:35.351655 18126 replica.cpp:661] Replica learned NOP action at position 0 I0909 08:02:35.351889 18126 log.cpp:672] Writer started with ending position 0 I0909 08:02:35.352165 18126 leveldb.cpp:438] Reading position from leveldb took 25215ns I0909 08:02:35.353163 18126 registrar.cpp:346] Successfully fetched the registry (0B) I0909 08:02:35.353185 18126 registrar.cpp:422] Attempting to update the 'registry' I0909 08:02:35.354152 18126 log.cpp:680] Attempting to append 120 bytes to the log I0909 08:02:35.354195 18126 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0909 08:02:35.354416 18126 replica.cpp:508] Replica received write request for position 1 I0909 08:02:35.351579 18127 hierarchical_allocator_process.hpp:697] No resources available to allocate! 
I0909 08:02:35.354558 18127 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 2.984795ms I0909 08:02:35.360254 18126 leveldb.cpp:343] Persisting action (137 bytes) to leveldb took 5.811986ms I0909 08:02:35.360285 18126 replica.cpp:676] Persisted action at 1 I0909 08:02:35.364126 18132 replica.cpp:655] Replica received learned notice for position 1 I0909 08:02:35.369856 18132 leveldb.cpp:343] Persisting action (139 bytes) to leveldb took 5.702756ms I0909 08:02:35.369899 18132 replica.cpp:676] Persisted action at 1 I0909 08:02:35.369910 18132 replica.cpp:661] Replica learned APPEND action at position 1 I0909 08:02:35.370209 18132 registrar.cpp:479] Successfully updated 'registry' I0909 08:02:35.370311 18132 registrar.cpp:372] Successfully recovered registrar I0909 08:02:35.370477 18132 log.cpp:699] Attempting to truncate the log to 1 I0909 08:02:35.370553 18132 master.cpp:1070] Recovered 0 slaves from the Registry (84B) ; allowing 10mins for slaves to re-register I0909 08:02:35.370594 18132 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0909 08:02:35.371201 18127 replica.cpp:508] Replica received write request for position 2 I0909 08:02:35.376760 18127 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 5.264501ms I0909 08:02:35.377105 18127 replica.cpp:676] Persisted action at 2 I0909 08:02:35.377770 18127 replica.cpp:655] Replica received learned notice for position 2 I0909 08:02:35.383363 18127 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 5.272769ms I0909 08:02:35.383818 18127 leveldb.cpp:401] Deleting ~1 keys from leveldb took 28148ns I0909 08:02:35.384137 18127 replica.cpp:676] Persisted action at 2 I0909 08:02:35.384399 18127 replica.cpp:661] Replica learned TRUNCATE action at position 2 I0909 08:02:35.396512 18127 slave.cpp:167] Slave started on 64)@127.0.1.1:44005 I0909 08:02:35.654770 18131 hierarchical_allocator_process.hpp:697] No resources available to allocate! 
I0909 08:02:35.654847 18131 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 104933ns I0909 08:02:35.654974 18127 credentials.hpp:84] Loading credential for authentication from '/tmp/AllocatorTest_0_FrameworkExited_xV9Mk4/credential' I0909 08:02:35.655097 18127 slave.cpp:274] Slave using credential for: test-principal I0909 08:02:35.655203 18127 slave.cpp:287] Slave resources: cpus(*):3; mem(*):1024; disk(*):25116; ports(*):[31000-32000] I0909 08:02:35.655274 18127 slave.cpp:315] Slave hostname: precise I0909 08:02:35.655285 18127 slave.cpp:316] Slave checkpoint: false I0909 08:02:35.655804 18127 state.cpp:33] Recovering state from '/tmp/AllocatorTest_0_FrameworkExited_xV9Mk4/meta' I0909 08:02:35.655913 18127 status_update_manager.cpp:193] Recovering status update manager I0909 08:02:35.656005 18127 slave.cpp:3202] Finished recovery I0909 08:02:35.656251 18127 slave.cpp:598] New master detected at master@127.0.1.1:44005 I0909 08:02:35.656285 18127 slave.cpp:672] Authenticating with master master@127.0.1.1:44005 I0909 08:02:35.656325 18127 slave.cpp:645] Detecting new master I0909 08:02:35.656358 18127 status_update_manager.cpp:167] New master detected at master@127.0.1.1:44005 I0909 08:02:35.656389 18127 authenticatee.hpp:128] Creating new client SASL connection I0909 08:02:35.656563 18127 master.cpp:3653] Authenticating slave(64)@127.0.1.1:44005 I0909 08:02:35.656651 18127 authenticator.hpp:156] Creating new server SASL connection I0909 08:02:35.656770 18127 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0909 08:02:35.656796 18127 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0909 08:02:35.656822 18127 authenticator.hpp:262] Received SASL authentication start I0909 08:02:35.656858 18127 authenticator.hpp:384] Authentication requires more steps I0909 08:02:35.656883 18127 authenticatee.hpp:265] Received SASL authentication step I0909 08:02:35.656924 18127 authenticator.hpp:290] Received SASL authentication step I0909 08:02:35.656960 18127 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'precise' server FQDN: 'precise' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0909 08:02:35.656971 18127 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0909 08:02:35.656982 18127 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0909 08:02:35.656997 18127 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'precise' server FQDN: 'precise' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0909 08:02:35.657004 18127 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0909 08:02:35.657008 18127 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0909 08:02:35.657019 18127 authenticator.hpp:376] Authentication success I0909 08:02:35.657047 18127 authenticatee.hpp:305] Authentication success I0909 08:02:35.657073 18127 master.cpp:3693] Successfully authenticated principal 'test-principal' at slave(64)@127.0.1.1:44005 I0909 08:02:35.657145 18127 slave.cpp:729] Successfully authenticated with master master@127.0.1.1:44005 I0909 08:02:35.657183 18127 slave.cpp:980] Will retry registration in 19.238717ms if necessary I0909 08:02:35.657276 18128 master.cpp:2843] Registering slave at slave(64)@127.0.1.1:44005 (precise) with id 
20140909-080235-16842879-44005-18112-0 I0909 08:02:35.657389 18128 registrar.cpp:422] Attempting to update the 'registry' I0909 08:02:35.658382 18130 log.cpp:680] Attempting to append 295 bytes to the log I0909 08:02:35.658432 18130 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I0909 08:02:35.658635 18130 replica.cpp:508] Replica received write request for position 3 I0909 08:02:35.660959 18112 sched.cpp:137] Version: 0.21.0 I0909 08:02:35.661093 18126 sched.cpp:233] New master detected at master@127.0.1.1:44005 I0909 08:02:35.661111 18126 sched.cpp:283] Authenticating with master master@127.0.1.1:44005 I0909 08:02:35.661175 18126 authenticatee.hpp:128] Creating new client SASL connection I0909 08:02:35.661306 18126 master.cpp:3653] Authenticating scheduler-fd929918-7057-4fef-923a-ed9d6fd355be@127.0.1.1:44005 I0909 08:02:35.661376 18126 authenticator.hpp:156] Creating new server SASL connection I0909 08:02:35.661466 18126 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0909 08:02:35.661483 18126 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0909 08:02:35.661504 18126 authenticator.hpp:262] Received SASL authentication start I0909 08:02:35.661530 18126 authenticator.hpp:384] Authentication requires more steps I0909 08:02:35.661552 18126 authenticatee.hpp:265] Received SASL authentication step I0909 08:02:35.661579 18126 authenticator.hpp:290] Received SASL authentication step I0909 08:02:35.661592 18126 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'precise' server FQDN: 'precise' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0909 08:02:35.661598 18126 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0909 08:02:35.661607 18126 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0909 08:02:35.661613 18126 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'precise' server FQDN: 'precise' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0909 08:02:35.661619 18126 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0909 08:02:35.661623 18126 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0909 08:02:35.661633 18126 authenticator.hpp:376] Authentication success I0909 08:02:35.661653 18126 authenticatee.hpp:305] Authentication success I0909 08:02:35.661672 18126 master.cpp:3693] Successfully authenticated principal 'test-principal' at scheduler-fd929918-7057-4fef-923a-ed9d6fd355be@127.0.1.1:44005 I0909 08:02:35.661730 18126 sched.cpp:357] Successfully authenticated with master master@127.0.1.1:44005 I0909 08:02:35.661741 18126 sched.cpp:476] Sending registration request to master@127.0.1.1:44005 I0909 08:02:35.661782 18126 master.cpp:1331] Received registration request from scheduler-fd929918-7057-4fef-923a-ed9d6fd355be@127.0.1.1:44005 I0909 08:02:35.661798 18126 master.cpp:1291] Authorizing framework principal 'test-principal' to receive offers for role '*' I0909 08:02:35.661917 18126 master.cpp:1390] Registering framework 20140909-080235-16842879-44005-18112-0000 at scheduler-fd929918-7057-4fef-923a-ed9d6fd355be@127.0.1.1:44005 I0909 08:02:35.662017 18126 sched.cpp:407] Framework registered with 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.662039 18126 sched.cpp:421] Scheduler::registered took 
9070ns I0909 08:02:35.662119 18126 hierarchical_allocator_process.hpp:329] Added framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.662130 18126 hierarchical_allocator_process.hpp:697] No resources available to allocate! I0909 08:02:35.662135 18126 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 5558ns I0909 08:02:35.672230 18130 leveldb.cpp:343] Persisting action (314 bytes) to leveldb took 13.567526ms I0909 08:02:35.672268 18130 replica.cpp:676] Persisted action at 3 I0909 08:02:35.672483 18130 replica.cpp:655] Replica received learned notice for position 3 I0909 08:02:35.677322 18132 slave.cpp:980] Will retry registration in 14.890338ms if necessary I0909 08:02:35.677399 18132 master.cpp:2831] Ignoring register slave message from slave(64)@127.0.1.1:44005 (precise) as admission is already in progress I0909 08:02:35.680881 18130 leveldb.cpp:343] Persisting action (316 bytes) to leveldb took 8.376798ms I0909 08:02:35.680908 18130 replica.cpp:676] Persisted action at 3 I0909 08:02:35.680917 18130 replica.cpp:661] Replica learned APPEND action at position 3 I0909 08:02:35.681252 18130 registrar.cpp:479] Successfully updated 'registry' I0909 08:02:35.681330 18130 log.cpp:699] Attempting to truncate the log to 3 I0909 08:02:35.681385 18130 master.cpp:2883] Registered slave 20140909-080235-16842879-44005-18112-0 at slave(64)@127.0.1.1:44005 (precise) I0909 08:02:35.681399 18130 master.cpp:4126] Adding slave 20140909-080235-16842879-44005-18112-0 at slave(64)@127.0.1.1:44005 (precise) with cpus(*):3; mem(*):1024; disk(*):25116; ports(*):[31000-32000] I0909 08:02:35.681504 18130 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I0909 08:02:35.681570 18130 slave.cpp:763] Registered with master master@127.0.1.1:44005; given slave ID 20140909-080235-16842879-44005-18112-0 I0909 08:02:35.681689 18130 slave.cpp:2329] Received ping from slave-observer(50)@127.0.1.1:44005 I0909 08:02:35.681753 18130 hierarchical_allocator_process.hpp:442] Added slave 20140909-080235-16842879-44005-18112-0 (precise) with cpus(*):3; mem(*):1024; disk(*):25116; ports(*):[31000-32000] (and cpus(*):3; mem(*):1024; disk(*):25116; ports(*):[31000-32000] available) I0909 08:02:35.681808 18130 hierarchical_allocator_process.hpp:734] Offering cpus(*):3; mem(*):1024; disk(*):25116; ports(*):[31000-32000] on slave 20140909-080235-16842879-44005-18112-0 to framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.681892 18130 hierarchical_allocator_process.hpp:679] Performed allocation for slave 20140909-080235-16842879-44005-18112-0 in 109580ns I0909 08:02:35.681968 18130 master.hpp:861] Adding offer 20140909-080235-16842879-44005-18112-0 with resources cpus(*):3; mem(*):1024; disk(*):25116; ports(*):[31000-32000] on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.682014 18130 master.cpp:3600] Sending 1 offers to framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.682443 18130 sched.cpp:544] Scheduler::resourceOffers took 254258ns I0909 08:02:35.682633 18130 master.hpp:871] Removing offer 20140909-080235-16842879-44005-18112-0 with resources cpus(*):3; mem(*):1024; disk(*):25116; ports(*):[31000-32000] on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.682684 18130 master.cpp:2201] Processing reply for offers: [ 20140909-080235-16842879-44005-18112-0 ] on slave 20140909-080235-16842879-44005-18112-0 at slave(64)@127.0.1.1:44005 (precise) for framework 
20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.682708 18130 master.cpp:2284] Authorizing framework principal 'test-principal' to launch task 0 as user 'jenkins' I0909 08:02:35.682971 18130 replica.cpp:508] Replica received write request for position 4 I0909 08:02:35.683132 18132 master.hpp:833] Adding task 0 with resources cpus(*):2; mem(*):512 on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.683159 18132 master.cpp:2350] Launching task 0 of framework 20140909-080235-16842879-44005-18112-0000 with resources cpus(*):2; mem(*):512 on slave 20140909-080235-16842879-44005-18112-0 at slave(64)@127.0.1.1:44005 (precise) I0909 08:02:35.683363 18132 slave.cpp:1011] Got assigned task 0 for framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.683580 18132 slave.cpp:1121] Launching task 0 for framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.684833 18133 hierarchical_allocator_process.hpp:563] Recovered cpus(*):1; mem(*):512; disk(*):25116; ports(*):[31000-32000] (total allocatable: cpus(*):1; mem(*):512; disk(*):25116; ports(*):[31000-32000]) on slave 20140909-080235-16842879-44005-18112-0 from framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.684864 18133 hierarchical_allocator_process.hpp:599] Framework 20140909-080235-16842879-44005-18112-0000 filtered slave 20140909-080235-16842879-44005-18112-0 for 5secs I0909 08:02:35.686401 18132 exec.cpp:132] Version: 0.21.0 I0909 08:02:35.686848 18128 exec.cpp:182] Executor started at: executor(8)@127.0.1.1:44005 with pid 18112 I0909 08:02:35.687095 18132 slave.cpp:1231] Queuing task '0' for executor executor-1 of framework '20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.687302 18132 slave.cpp:552] Successfully attached file '/tmp/AllocatorTest_0_FrameworkExited_xV9Mk4/slaves/20140909-080235-16842879-44005-18112-0/frameworks/20140909-080235-16842879-44005-18112-0000/executors/executor-1/runs/c4458e43-94ee-4b5e-bd74-5d39a09deff6' I0909 08:02:35.687568 18132 slave.cpp:1741] Got registration for executor 'executor-1' of framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.687893 18127 exec.cpp:206] Executor registered on slave 20140909-080235-16842879-44005-18112-0 I0909 08:02:35.688789 18127 exec.cpp:218] Executor::registered took 15015ns I0909 08:02:35.688977 18132 slave.cpp:1859] Flushing queued task 0 for executor 'executor-1' of framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.689260 18133 exec.cpp:293] Executor asked to run task '0' I0909 08:02:35.689441 18133 exec.cpp:302] Executor::launchTask took 24599ns I0909 08:02:35.691651 18112 sched.cpp:137] Version: 0.21.0 I0909 08:02:35.691946 18131 sched.cpp:233] New master detected at master@127.0.1.1:44005 I0909 08:02:35.692126 18131 sched.cpp:283] Authenticating with master master@127.0.1.1:44005 I0909 08:02:35.692399 18131 authenticatee.hpp:128] Creating new client SASL connection I0909 08:02:35.692791 18131 master.cpp:3653] Authenticating scheduler-6e711fc6-aad6-48bd-9ce7-2316b45c5482@127.0.1.1:44005 I0909 08:02:35.693068 18131 authenticator.hpp:156] Creating new server SASL connection I0909 08:02:35.693351 18131 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0909 08:02:35.693532 18131 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0909 08:02:35.693739 18131 authenticator.hpp:262] Received SASL authentication start I0909 08:02:35.693979 18131 authenticator.hpp:384] Authentication requires more steps I0909 08:02:35.694202 18131 
authenticatee.hpp:265] Received SASL authentication step I0909 08:02:35.694449 18131 authenticator.hpp:290] Received SASL authentication step I0909 08:02:35.694633 18131 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'precise' server FQDN: 'precise' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0909 08:02:35.694792 18131 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0909 08:02:35.694980 18131 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0909 08:02:35.695158 18131 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'precise' server FQDN: 'precise' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0909 08:02:35.695369 18131 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0909 08:02:35.695724 18131 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0909 08:02:35.695907 18131 authenticator.hpp:376] Authentication success I0909 08:02:35.696117 18128 authenticatee.hpp:305] Authentication success I0909 08:02:35.698509 18130 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 15.520863ms I0909 08:02:35.698698 18130 replica.cpp:676] Persisted action at 4 I0909 08:02:35.698940 18128 sched.cpp:357] Successfully authenticated with master master@127.0.1.1:44005 I0909 08:02:35.699095 18126 master.cpp:3693] Successfully authenticated principal 'test-principal' at scheduler-6e711fc6-aad6-48bd-9ce7-2316b45c5482@127.0.1.1:44005 I0909 08:02:35.699354 18130 replica.cpp:655] Replica received learned notice for position 4 I0909 08:02:35.699973 18128 sched.cpp:476] Sending registration request to master@127.0.1.1:44005 I0909 08:02:35.700265 18128 master.cpp:1331] Received registration request from scheduler-6e711fc6-aad6-48bd-9ce7-2316b45c5482@127.0.1.1:44005 I0909 08:02:35.700515 18128 master.cpp:1291] Authorizing framework principal 'test-principal' to receive offers for role '*' I0909 08:02:35.700809 18128 master.cpp:1390] Registering framework 20140909-080235-16842879-44005-18112-0001 at scheduler-6e711fc6-aad6-48bd-9ce7-2316b45c5482@127.0.1.1:44005 I0909 08:02:35.701037 18133 sched.cpp:407] Framework registered with 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.701211 18133 sched.cpp:421] Scheduler::registered took 11991ns I0909 08:02:35.701488 18131 hierarchical_allocator_process.hpp:329] Added framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.701728 18131 hierarchical_allocator_process.hpp:734] Offering cpus(*):1; mem(*):512; disk(*):25116; ports(*):[31000-32000] on slave 20140909-080235-16842879-44005-18112-0 to framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.701992 18131 hierarchical_allocator_process.hpp:659] Performed allocation for 1 slaves in 297969ns I0909 08:02:35.702229 18128 master.hpp:861] Adding offer 20140909-080235-16842879-44005-18112-1 with resources cpus(*):1; mem(*):512; disk(*):25116; ports(*):[31000-32000] on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.702481 18128 master.cpp:3600] Sending 1 offers to framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.702901 18129 sched.cpp:544] Scheduler::resourceOffers took 127949ns I0909 08:02:35.703305 18128 master.hpp:871] Removing offer 20140909-080235-16842879-44005-18112-1 with resources cpus(*):1; mem(*):512; disk(*):25116; 
ports(*):[31000-32000] on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.703629 18128 master.cpp:2201] Processing reply for offers: [ 20140909-080235-16842879-44005-18112-1 ] on slave 20140909-080235-16842879-44005-18112-0 at slave(64)@127.0.1.1:44005 (precise) for framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.703908 18128 master.cpp:2284] Authorizing framework principal 'test-principal' to launch task 0 as user 'jenkins' I0909 08:02:35.703789 18132 slave.cpp:2542] Monitoring executor 'executor-1' of framework '20140909-080235-16842879-44005-18112-0000' in container 'c4458e43-94ee-4b5e-bd74-5d39a09deff6' I0909 08:02:35.704763 18128 master.hpp:833] Adding task 0 with resources cpus(*):1; mem(*):256 on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.704951 18128 master.cpp:2350] Launching task 0 of framework 20140909-080235-16842879-44005-18112-0001 with resources cpus(*):1; mem(*):256 on slave 20140909-080235-16842879-44005-18112-0 at slave(64)@127.0.1.1:44005 (precise) I0909 08:02:35.705255 18129 slave.cpp:1011] Got assigned task 0 for framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.705582 18129 slave.cpp:1121] Launching task 0 for framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.707756 18129 exec.cpp:132] Version: 0.21.0 I0909 08:02:35.708035 18130 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 8.127072ms I0909 08:02:35.708281 18130 leveldb.cpp:401] Deleting ~2 keys from leveldb took 28817ns I0909 08:02:35.708459 18130 replica.cpp:676] Persisted action at 4 I0909 08:02:35.708632 18130 replica.cpp:661] Replica learned TRUNCATE action at position 4 I0909 08:02:35.708869 18133 exec.cpp:182] Executor started at: executor(9)@127.0.1.1:44005 with pid 18112 I0909 08:02:35.709120 18127 hierarchical_allocator_process.hpp:659] Performed allocation for 1 slaves in 35083ns I0909 08:02:35.709511 18129 slave.cpp:1231] Queuing task '0' for executor executor-2 of framework '20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.709707 18129 slave.cpp:552] Successfully attached file '/tmp/AllocatorTest_0_FrameworkExited_xV9Mk4/slaves/20140909-080235-16842879-44005-18112-0/frameworks/20140909-080235-16842879-44005-18112-0001/executors/executor-2/runs/7654870b-fd36-40b2-aac7-37b1bcfa821e' I0909 08:02:35.709913 18129 slave.cpp:1741] Got registration for executor 'executor-2' of framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.710188 18129 slave.cpp:1859] Flushing queued task 0 for executor 'executor-2' of framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.710516 18129 slave.cpp:2542] Monitoring executor 'executor-2' of framework '20140909-080235-16842879-44005-18112-0001' in container '7654870b-fd36-40b2-aac7-37b1bcfa821e' I0909 08:02:35.710321 18130 exec.cpp:206] Executor registered on slave 20140909-080235-16842879-44005-18112-0 I0909 08:02:35.711678 18130 exec.cpp:218] Executor::registered took 14355ns I0909 08:02:35.711987 18130 exec.cpp:293] Executor asked to run task '0' I0909 08:02:35.715551 18130 exec.cpp:302] Executor::launchTask took 3.40476ms I0909 08:02:35.716006 18131 sched.cpp:745] Stopping framework '20140909-080235-16842879-44005-18112-0000' I0909 08:02:35.716292 18128 master.cpp:1640] Asked to unregister framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.716490 18127 hierarchical_allocator_process.hpp:563] Recovered mem(*):256; disk(*):25116; ports(*):[31000-32000] (total allocatable: mem(*):256; disk(*):25116; 
ports(*):[31000-32000]) on slave 20140909-080235-16842879-44005-18112-0 from framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.716792 18127 hierarchical_allocator_process.hpp:599] Framework 20140909-080235-16842879-44005-18112-0001 filtered slave 20140909-080235-16842879-44005-18112-0 for 5secs I0909 08:02:35.717018 18128 master.cpp:3976] Removing framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.717269 18128 master.hpp:851] Removing task 0 with resources cpus(*):2; mem(*):512 on slave 20140909-080235-16842879-44005-18112-0 (precise) W0909 08:02:35.717607 18128 master.cpp:4419] Removing task 0 of framework 20140909-080235-16842879-44005-18112-0000 and slave 20140909-080235-16842879-44005-18112-0 in non-terminal state TASK_STAGING I0909 08:02:35.717470 18131 hierarchical_allocator_process.hpp:405] Deactivated framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.718065 18131 hierarchical_allocator_process.hpp:563] Recovered cpus(*):2; mem(*):512 (total allocatable: mem(*):768; disk(*):25116; ports(*):[31000-32000]; cpus(*):2) on slave 20140909-080235-16842879-44005-18112-0 from framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.717438 18132 slave.cpp:1414] Asked to shut down framework 20140909-080235-16842879-44005-18112-0000 by master@127.0.1.1:44005 I0909 08:02:35.718444 18132 slave.cpp:1439] Shutting down framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.718621 18132 slave.cpp:2882] Shutting down executor 'executor-1' of framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.718843 18133 exec.cpp:379] Executor asked to shutdown I0909 08:02:35.719022 18133 exec.cpp:394] Executor::shutdown took 13745ns I0909 08:02:35.722009 18128 hierarchical_allocator_process.hpp:360] Removed framework 20140909-080235-16842879-44005-18112-0000 I0909 08:02:35.830785 18131 hierarchical_allocator_process.hpp:734] Offering mem(*):768; disk(*):25116; ports(*):[31000-32000]; cpus(*):2 on slave 20140909-080235-16842879-44005-18112-0 to framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.830940 18131 hierarchical_allocator_process.hpp:659] Performed allocation for 1 slaves in 218030ns I0909 08:02:35.831056 18127 master.hpp:861] Adding offer 20140909-080235-16842879-44005-18112-2 with resources mem(*):768; disk(*):25116; ports(*):[31000-32000]; cpus(*):2 on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.831115 18127 master.cpp:3600] Sending 1 offers to framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.831248 18127 sched.cpp:544] Scheduler::resourceOffers took 18178ns I0909 08:02:35.831387 18112 master.cpp:650] Master terminating I0909 08:02:35.831441 18112 master.hpp:851] Removing task 0 with resources cpus(*):1; mem(*):256 on slave 20140909-080235-16842879-44005-18112-0 (precise) W0909 08:02:35.831488 18112 master.cpp:4419] Removing task 0 of framework 20140909-080235-16842879-44005-18112-0001 and slave 20140909-080235-16842879-44005-18112-0 in non-terminal state TASK_STAGING I0909 08:02:35.831573 18112 master.hpp:871] Removing offer 20140909-080235-16842879-44005-18112-2 with resources mem(*):768; disk(*):25116; ports(*):[31000-32000]; cpus(*):2 on slave 20140909-080235-16842879-44005-18112-0 (precise) I0909 08:02:35.832608 18112 slave.cpp:475] Slave terminating I0909 08:02:35.832630 18112 slave.cpp:1414] Asked to shut down framework 20140909-080235-16842879-44005-18112-0000 by @0.0.0.0:0 W0909 08:02:35.832643 18112 slave.cpp:1435] Ignoring shutdown framework 
20140909-080235-16842879-44005-18112-0000 because it is terminating I0909 08:02:35.832648 18112 slave.cpp:1414] Asked to shut down framework 20140909-080235-16842879-44005-18112-0001 by @0.0.0.0:0 I0909 08:02:35.832654 18112 slave.cpp:1439] Shutting down framework 20140909-080235-16842879-44005-18112-0001 I0909 08:02:35.832664 18112 slave.cpp:2882] Shutting down executor 'executor-2' of framework 20140909-080235-16842879-44005-18112-0001 tests/allocator_tests.cpp:1444: Failure Actual function call count doesn't match EXPECT_CALL(this->allocator, resourcesRecovered(_, _, _, _))... Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] AllocatorTest/0.FrameworkExited, where TypeParam = mesos::internal::master::allocator::HierarchicalAllocatorProcess (756 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1799","09/16/2014 21:04:42",3,"Reconciliation can send out-of-order updates. ""When a slave re-registers with the master, it currently sends the latest task state for all tasks that are not both terminal and acknowledged. However, reconciliation assumes that we always have the latest unacknowledged state of the task represented in the master. As a result, out-of-order updates are possible, e.g. (1) Slave has task T in TASK_FINISHED, with unacknowledged updates: [TASK_RUNNING, TASK_FINISHED]. (2) Master fails over. (3) New master re-registers the slave with T in TASK_FINISHED. (4) Reconciliation request arrives, master sends TASK_FINISHED. (5) Slave sends TASK_RUNNING to master, master sends TASK_RUNNING. I think the fix here is to preserve the task state invariants in the master, namely, that the master has the latest unacknowledged state of the task. This means when the slave re-registers, it should instead send the latest acknowledged state of each task.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1802","09/17/2014 01:32:34",5,"HealthCheckTest.HealthStatusChange is flaky on jenkins. 
""https://builds.apache.org/job/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Disable-Java-Disable-Python-Disable-Webui/2374/consoleFull """," [ RUN ] HealthCheckTest.HealthStatusChange Using temporary directory '/tmp/HealthCheckTest_HealthStatusChange_IYnlu2' I0916 22:56:14.034612 21026 leveldb.cpp:176] Opened db in 2.155713ms I0916 22:56:14.034965 21026 leveldb.cpp:183] Compacted db in 332489ns I0916 22:56:14.034984 21026 leveldb.cpp:198] Created db iterator in 3710ns I0916 22:56:14.034996 21026 leveldb.cpp:204] Seeked to beginning of db in 642ns I0916 22:56:14.035006 21026 leveldb.cpp:273] Iterated through 0 keys in the db in 343ns I0916 22:56:14.035023 21026 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0916 22:56:14.035200 21054 recover.cpp:425] Starting replica recovery I0916 22:56:14.035403 21041 recover.cpp:451] Replica is in EMPTY status I0916 22:56:14.035888 21045 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I0916 22:56:14.035969 21052 recover.cpp:188] Received a recover response from a replica in EMPTY status I0916 22:56:14.036118 21042 recover.cpp:542] Updating replica status to STARTING I0916 22:56:14.036603 21046 master.cpp:286] Master 20140916-225614-3125920579-47865-21026 (penates.apache.org) started on 67.195.81.186:47865 I0916 22:56:14.036634 21046 master.cpp:332] Master only allowing authenticated frameworks to register I0916 22:56:14.036648 21046 master.cpp:337] Master only allowing authenticated slaves to register I0916 22:56:14.036659 21046 credentials.hpp:36] Loading credentials for authentication from '/tmp/HealthCheckTest_HealthStatusChange_IYnlu2/credentials' I0916 22:56:14.036686 21045 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 480322ns I0916 22:56:14.036700 21045 replica.cpp:320] Persisted replica status to STARTING I0916 22:56:14.036769 21046 master.cpp:366] Authorization enabled I0916 22:56:14.036826 21045 recover.cpp:451] Replica is in STARTING status I0916 22:56:14.036944 21052 master.cpp:120] No whitelist given. Advertising offers for all slaves I0916 22:56:14.036968 21049 hierarchical_allocator_process.hpp:299] Initializing hierarchical allocator process with master : master@67.195.81.186:47865 I0916 22:56:14.037284 21054 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I0916 22:56:14.037312 21046 master.cpp:1212] The newly elected leader is master@67.195.81.186:47865 with id 20140916-225614-3125920579-47865-21026 I0916 22:56:14.037333 21046 master.cpp:1225] Elected as the leading master! 
I0916 22:56:14.037345 21046 master.cpp:1043] Recovering from registrar I0916 22:56:14.037504 21040 registrar.cpp:313] Recovering registrar I0916 22:56:14.037505 21053 recover.cpp:188] Received a recover response from a replica in STARTING status I0916 22:56:14.037681 21047 recover.cpp:542] Updating replica status to VOTING I0916 22:56:14.038072 21052 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 330251ns I0916 22:56:14.038087 21052 replica.cpp:320] Persisted replica status to VOTING I0916 22:56:14.038127 21053 recover.cpp:556] Successfully joined the Paxos group I0916 22:56:14.038202 21053 recover.cpp:440] Recover process terminated I0916 22:56:14.038364 21048 log.cpp:656] Attempting to start the writer I0916 22:56:14.038812 21053 replica.cpp:474] Replica received implicit promise request with proposal 1 I0916 22:56:14.038925 21053 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 92623ns I0916 22:56:14.038944 21053 replica.cpp:342] Persisted promised to 1 I0916 22:56:14.039201 21052 coordinator.cpp:230] Coordinator attemping to fill missing position I0916 22:56:14.039676 21047 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I0916 22:56:14.039836 21047 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 144215ns I0916 22:56:14.039850 21047 replica.cpp:676] Persisted action at 0 I0916 22:56:14.040243 21047 replica.cpp:508] Replica received write request for position 0 I0916 22:56:14.040267 21047 leveldb.cpp:438] Reading position from leveldb took 10323ns I0916 22:56:14.040362 21047 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 79471ns I0916 22:56:14.040375 21047 replica.cpp:676] Persisted action at 0 I0916 22:56:14.040556 21054 replica.cpp:655] Replica received learned notice for position 0 I0916 22:56:14.040658 21054 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 83975ns I0916 22:56:14.040676 21054 replica.cpp:676] Persisted action at 0 I0916 22:56:14.040689 21054 replica.cpp:661] Replica learned NOP action at position 0 I0916 22:56:14.041023 21043 log.cpp:672] Writer started with ending position 0 I0916 22:56:14.041342 21052 leveldb.cpp:438] Reading position from leveldb took 10642ns I0916 22:56:14.042325 21050 registrar.cpp:346] Successfully fetched the registry (0B) I0916 22:56:14.042346 21050 registrar.cpp:422] Attempting to update the 'registry' I0916 22:56:14.043306 21054 log.cpp:680] Attempting to append 140 bytes to the log I0916 22:56:14.043354 21050 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0916 22:56:14.043637 21047 replica.cpp:508] Replica received write request for position 1 I0916 22:56:14.044042 21047 leveldb.cpp:343] Persisting action (159 bytes) to leveldb took 386690ns I0916 22:56:14.044057 21047 replica.cpp:676] Persisted action at 1 I0916 22:56:14.044271 21040 replica.cpp:655] Replica received learned notice for position 1 I0916 22:56:14.044435 21040 leveldb.cpp:343] Persisting action (161 bytes) to leveldb took 145186ns I0916 22:56:14.044448 21040 replica.cpp:676] Persisted action at 1 I0916 22:56:14.044456 21040 replica.cpp:661] Replica learned APPEND action at position 1 I0916 22:56:14.044729 21055 registrar.cpp:479] Successfully updated 'registry' I0916 22:56:14.044776 21047 log.cpp:699] Attempting to truncate the log to 1 I0916 22:56:14.044795 21055 registrar.cpp:372] Successfully recovered registrar I0916 22:56:14.044831 21051 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0916 
22:56:14.044899 21053 master.cpp:1070] Recovered 0 slaves from the Registry (102B) ; allowing 10mins for slaves to re-register I0916 22:56:14.045133 21055 replica.cpp:508] Replica received write request for position 2 I0916 22:56:14.045450 21055 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 300867ns I0916 22:56:14.045465 21055 replica.cpp:676] Persisted action at 2 I0916 22:56:14.045725 21052 replica.cpp:655] Replica received learned notice for position 2 I0916 22:56:14.045925 21052 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 182657ns I0916 22:56:14.045948 21052 leveldb.cpp:401] Deleting ~1 keys from leveldb took 10733ns I0916 22:56:14.045958 21052 replica.cpp:676] Persisted action at 2 I0916 22:56:14.045964 21052 replica.cpp:661] Replica learned TRUNCATE action at position 2 I0916 22:56:14.055306 21026 containerizer.cpp:89] Using isolation: posix/cpu,posix/mem I0916 22:56:14.057139 21048 slave.cpp:169] Slave started on 102)@67.195.81.186:47865 I0916 22:56:14.057178 21048 credentials.hpp:84] Loading credential for authentication from '/tmp/HealthCheckTest_HealthStatusChange_cGKTiG/credential' I0916 22:56:14.057283 21048 slave.cpp:276] Slave using credential for: test-principal I0916 22:56:14.057354 21048 slave.cpp:289] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0916 22:56:14.057457 21048 slave.cpp:317] Slave hostname: penates.apache.org I0916 22:56:14.057468 21048 slave.cpp:318] Slave checkpoint: false I0916 22:56:14.057754 21043 state.cpp:33] Recovering state from '/tmp/HealthCheckTest_HealthStatusChange_cGKTiG/meta' I0916 22:56:14.057864 21042 status_update_manager.cpp:193] Recovering status update manager I0916 22:56:14.057958 21042 containerizer.cpp:252] Recovering containerizer I0916 22:56:14.058226 21042 slave.cpp:3219] Finished recovery I0916 22:56:14.058452 21047 slave.cpp:600] New master detected at master@67.195.81.186:47865 I0916 22:56:14.058485 21047 slave.cpp:674] Authenticating with master master@67.195.81.186:47865 I0916 22:56:14.058506 21042 status_update_manager.cpp:167] New master detected at master@67.195.81.186:47865 I0916 22:56:14.058539 21047 slave.cpp:647] Detecting new master I0916 22:56:14.058555 21042 authenticatee.hpp:128] Creating new client SASL connection I0916 22:56:14.058656 21043 master.cpp:3653] Authenticating slave(102)@67.195.81.186:47865 I0916 22:56:14.058737 21040 authenticator.hpp:156] Creating new server SASL connection I0916 22:56:14.058830 21047 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0916 22:56:14.058852 21047 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0916 22:56:14.058884 21047 authenticator.hpp:262] Received SASL authentication start I0916 22:56:14.058936 21047 authenticator.hpp:384] Authentication requires more steps I0916 22:56:14.058981 21047 authenticatee.hpp:265] Received SASL authentication step I0916 22:56:14.059052 21040 authenticator.hpp:290] Received SASL authentication step I0916 22:56:14.059074 21040 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'penates.apache.org' server FQDN: 'penates.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0916 22:56:14.059087 21040 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0916 22:56:14.059101 21040 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0916 22:56:14.059111 21040 auxprop.cpp:81] Request to lookup properties for user: 
'test-principal' realm: 'penates.apache.org' server FQDN: 'penates.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0916 22:56:14.059118 21040 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0916 22:56:14.059123 21040 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0916 22:56:14.059135 21040 authenticator.hpp:376] Authentication success I0916 22:56:14.059182 21047 authenticatee.hpp:305] Authentication success I0916 22:56:14.059192 21040 master.cpp:3693] Successfully authenticated principal 'test-principal' at slave(102)@67.195.81.186:47865 I0916 22:56:14.059309 21047 slave.cpp:731] Successfully authenticated with master master@67.195.81.186:47865 I0916 22:56:14.059348 21047 slave.cpp:994] Will retry registration in 12.6149ms if necessary I0916 22:56:14.059396 21040 master.cpp:2843] Registering slave at slave(102)@67.195.81.186:47865 (penates.apache.org) with id 20140916-225614-3125920579-47865-21026-0 I0916 22:56:14.059495 21054 registrar.cpp:422] Attempting to update the 'registry' I0916 22:56:14.059558 21026 sched.cpp:137] Version: 0.21.0 I0916 22:56:14.059710 21041 sched.cpp:233] New master detected at master@67.195.81.186:47865 I0916 22:56:14.059730 21041 sched.cpp:283] Authenticating with master master@67.195.81.186:47865 I0916 22:56:14.059788 21052 authenticatee.hpp:128] Creating new client SASL connection I0916 22:56:14.059890 21043 master.cpp:3653] Authenticating scheduler-59427aee-c9d1-45c7-96fc-12d0d48529a4@67.195.81.186:47865 I0916 22:56:14.059960 21055 authenticator.hpp:156] Creating new server SASL connection I0916 22:56:14.060039 21040 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0916 22:56:14.060061 21040 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0916 22:56:14.060107 21055 authenticator.hpp:262] Received SASL authentication start I0916 22:56:14.060158 21055 authenticator.hpp:384] Authentication requires more steps I0916 22:56:14.060189 21055 authenticatee.hpp:265] Received SASL authentication step I0916 22:56:14.060220 21055 authenticator.hpp:290] Received SASL authentication step I0916 22:56:14.060236 21055 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'penates.apache.org' server FQDN: 'penates.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0916 22:56:14.060250 21055 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0916 22:56:14.060277 21055 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0916 22:56:14.060288 21055 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'penates.apache.org' server FQDN: 'penates.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0916 22:56:14.060295 21055 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0916 22:56:14.060300 21055 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0916 22:56:14.060312 21055 authenticator.hpp:376] Authentication success I0916 22:56:14.060349 21040 authenticatee.hpp:305] Authentication success I0916 22:56:14.060364 21055 master.cpp:3693] Successfully authenticated principal 'test-principal' at scheduler-59427aee-c9d1-45c7-96fc-12d0d48529a4@67.195.81.186:47865 I0916 
22:56:14.060480 21046 sched.cpp:357] Successfully authenticated with master master@67.195.81.186:47865 I0916 22:56:14.060499 21046 sched.cpp:476] Sending registration request to master@67.195.81.186:47865 I0916 22:56:14.060564 21050 master.cpp:1331] Received registration request from scheduler-59427aee-c9d1-45c7-96fc-12d0d48529a4@67.195.81.186:47865 I0916 22:56:14.060593 21050 master.cpp:1291] Authorizing framework principal 'test-principal' to receive offers for role '*' I0916 22:56:14.060767 21053 master.cpp:1390] Registering framework 20140916-225614-3125920579-47865-21026-0000 at scheduler-59427aee-c9d1-45c7-96fc-12d0d48529a4@67.195.81.186:47865 I0916 22:56:14.060797 21049 log.cpp:680] Attempting to append 337 bytes to the log I0916 22:56:14.060873 21042 hierarchical_allocator_process.hpp:329] Added framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:14.060873 21040 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I0916 22:56:14.060899 21042 hierarchical_allocator_process.hpp:697] No resources available to allocate! I0916 22:56:14.060909 21042 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 11862ns I0916 22:56:14.061061 21044 sched.cpp:407] Framework registered with 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:14.061115 21044 sched.cpp:421] Scheduler::registered took 34395ns I0916 22:56:14.061173 21047 replica.cpp:508] Replica received write request for position 3 I0916 22:56:14.061298 21047 leveldb.cpp:343] Persisting action (356 bytes) to leveldb took 108843ns I0916 22:56:14.061311 21047 replica.cpp:676] Persisted action at 3 I0916 22:56:14.061553 21049 replica.cpp:655] Replica received learned notice for position 3 I0916 22:56:14.061965 21049 leveldb.cpp:343] Persisting action (358 bytes) to leveldb took 392670ns I0916 22:56:14.061985 21049 replica.cpp:676] Persisted action at 3 I0916 22:56:14.061996 21049 replica.cpp:661] Replica learned APPEND action at position 3 I0916 22:56:14.062268 21050 registrar.cpp:479] Successfully updated 'registry' I0916 22:56:14.062331 21051 log.cpp:699] Attempting to truncate the log to 3 I0916 22:56:14.062355 21040 master.cpp:2883] Registered slave 20140916-225614-3125920579-47865-21026-0 at slave(102)@67.195.81.186:47865 (penates.apache.org) I0916 22:56:14.062386 21043 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I0916 22:56:14.062376 21040 master.cpp:4126] Adding slave 20140916-225614-3125920579-47865-21026-0 at slave(102)@67.195.81.186:47865 (penates.apache.org) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0916 22:56:14.062510 21045 slave.cpp:765] Registered with master master@67.195.81.186:47865; given slave ID 20140916-225614-3125920579-47865-21026-0 I0916 22:56:14.062573 21045 slave.cpp:2346] Received ping from slave-observer(98)@67.195.81.186:47865 I0916 22:56:14.062599 21049 hierarchical_allocator_process.hpp:442] Added slave 20140916-225614-3125920579-47865-21026-0 (penates.apache.org) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] available) I0916 22:56:14.062669 21049 hierarchical_allocator_process.hpp:734] Offering cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140916-225614-3125920579-47865-21026-0 to framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:14.062764 21041 replica.cpp:508] Replica received write request for position 4 I0916 22:56:14.062788 21049 
hierarchical_allocator_process.hpp:679] Performed allocation for slave 20140916-225614-3125920579-47865-21026-0 in 145691ns I0916 22:56:14.062839 21050 master.hpp:861] Adding offer 20140916-225614-3125920579-47865-21026-0 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140916-225614-3125920579-47865-21026-0 (penates.apache.org) I0916 22:56:14.062891 21041 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 110169ns I0916 22:56:14.062907 21041 replica.cpp:676] Persisted action at 4 I0916 22:56:14.062911 21050 master.cpp:3600] Sending 1 offers to framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:14.063065 21043 sched.cpp:544] Scheduler::resourceOffers took 39808ns I0916 22:56:14.063163 21046 replica.cpp:655] Replica received learned notice for position 4 I0916 22:56:14.063272 21046 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 89981ns I0916 22:56:14.063308 21046 leveldb.cpp:401] Deleting ~2 keys from leveldb took 18542ns I0916 22:56:14.063323 21046 replica.cpp:676] Persisted action at 4 I0916 22:56:14.063333 21046 replica.cpp:661] Replica learned TRUNCATE action at position 4 I0916 22:56:14.063482 21044 master.hpp:871] Removing offer 20140916-225614-3125920579-47865-21026-0 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140916-225614-3125920579-47865-21026-0 (penates.apache.org) I0916 22:56:14.063535 21044 master.cpp:2201] Processing reply for offers: [ 20140916-225614-3125920579-47865-21026-0 ] on slave 20140916-225614-3125920579-47865-21026-0 at slave(102)@67.195.81.186:47865 (penates.apache.org) for framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:14.063561 21044 master.cpp:2284] Authorizing framework principal 'test-principal' to launch task 1 as user 'jenkins' I0916 22:56:14.063824 21040 master.hpp:833] Adding task 1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140916-225614-3125920579-47865-21026-0 (penates.apache.org) I0916 22:56:14.063860 21040 master.cpp:2350] Launching task 1 of framework 20140916-225614-3125920579-47865-21026-0000 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140916-225614-3125920579-47865-21026-0 at slave(102)@67.195.81.186:47865 (penates.apache.org) I0916 22:56:14.063943 21050 slave.cpp:1025] Got assigned task 1 for framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:14.064158 21050 slave.cpp:1135] Launching task 1 for framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:14.065439 21050 slave.cpp:1248] Queuing task '1' for executor 1 of framework '20140916-225614-3125920579-47865-21026-0000 I0916 22:56:14.065460 21041 containerizer.cpp:394] Starting container 'd383a013-89cf-47c6-ad8e-39e2f3e971fd' for executor '1' of framework '20140916-225614-3125920579-47865-21026-0000' I0916 22:56:14.065477 21050 slave.cpp:554] Successfully attached file '/tmp/HealthCheckTest_HealthStatusChange_cGKTiG/slaves/20140916-225614-3125920579-47865-21026-0/frameworks/20140916-225614-3125920579-47865-21026-0000/executors/1/runs/d383a013-89cf-47c6-ad8e-39e2f3e971fd' I0916 22:56:14.066735 21055 launcher.cpp:137] Forked child with pid '21858' for container 'd383a013-89cf-47c6-ad8e-39e2f3e971fd' I0916 22:56:14.067486 21044 containerizer.cpp:510] Fetching URIs for container 'd383a013-89cf-47c6-ad8e-39e2f3e971fd' using command 
'/home/jenkins/jenkins-slave/workspace/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Disable-Java-Disable-Python-Disable-Webui/build/src/mesos-fetcher' I0916 22:56:15.037449 21050 hierarchical_allocator_process.hpp:659] Performed allocation for 1 slaves in 43708ns I0916 22:56:15.038743 21054 slave.cpp:2559] Monitoring executor '1' of framework '20140916-225614-3125920579-47865-21026-0000' in container 'd383a013-89cf-47c6-ad8e-39e2f3e971fd' I0916 22:56:15.078441 21053 slave.cpp:1758] Got registration for executor '1' of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:15.078866 21053 slave.cpp:1876] Flushing queued task 1 for executor '1' of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:15.084800 21043 slave.cpp:2110] Handling status update TASK_RUNNING (UUID: a16d2819-e9f4-4119-bde6-f00ad33033e5) for task 1 of framework 20140916-225614-3125920579-47865-21026-0000 from executor(1)@67.195.81.186:35510 I0916 22:56:15.084969 21041 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: a16d2819-e9f4-4119-bde6-f00ad33033e5) for task 1 of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:15.084995 21041 status_update_manager.cpp:499] Creating StatusUpdate stream for task 1 of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:15.085160 21041 status_update_manager.cpp:373] Forwarding status update TASK_RUNNING (UUID: a16d2819-e9f4-4119-bde6-f00ad33033e5) for task 1 of framework 20140916-225614-3125920579-47865-21026-0000 to master@67.195.81.186:47865 I0916 22:56:15.085314 21043 slave.cpp:2267] Status update manager successfully handled status update TASK_RUNNING (UUID: a16d2819-e9f4-4119-bde6-f00ad33033e5) for task 1 of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:15.085332 21041 master.cpp:3212] Forwarding status update TASK_RUNNING (UUID: a16d2819-e9f4-4119-bde6-f00ad33033e5) for task 1 of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:15.085335 21043 slave.cpp:2273] Sending acknowledgement for status update TASK_RUNNING (UUID: a16d2819-e9f4-4119-bde6-f00ad33033e5) for task 1 of framework 20140916-225614-3125920579-47865-21026-0000 to executor(1)@67.195.81.186:35510 I0916 22:56:15.085435 21041 master.cpp:3178] Status update TASK_RUNNING (UUID: a16d2819-e9f4-4119-bde6-f00ad33033e5) for task 1 of framework 20140916-225614-3125920579-47865-21026-0000 from slave 20140916-225614-3125920579-47865-21026-0 at slave(102)@67.195.81.186:47865 (penates.apache.org) I0916 22:56:15.085675 21044 sched.cpp:635] Scheduler::statusUpdate took 113998ns I0916 22:56:15.085888 21052 master.cpp:2693] Forwarding status update acknowledgement a16d2819-e9f4-4119-bde6-f00ad33033e5 for task 1 of framework 20140916-225614-3125920579-47865-21026-0000 to slave 20140916-225614-3125920579-47865-21026-0 at slave(102)@67.195.81.186:47865 (penates.apache.org) I0916 22:56:15.086109 21051 status_update_manager.cpp:398] Received status update acknowledgement (UUID: a16d2819-e9f4-4119-bde6-f00ad33033e5) for task 1 of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:15.086205 21051 slave.cpp:1698] Status update manager successfully handled status update acknowledgement (UUID: a16d2819-e9f4-4119-bde6-f00ad33033e5) for task 1 of framework 20140916-225614-3125920579-47865-21026-0000 I../../src/tests/health_check_tests.cpp:330: Failure Failed to wait 10secs for statusHealth1 0916 22:56:16.038705 21049 hierarchical_allocator_process.hpp:659] Performed allocation for 1 slaves in 40061ns I0916 22:56:16.126260 
21052 slave.cpp:2110] Handling status update TASK_RUNNING (UUID: 792b8e42-0d72-451b-978a-7d1f29a15751) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 from executor(1)@67.195.81.186:35510 I0916 22:56:28.190274 21045 master.cpp:741] Framework 20140916-225614-3125920579-47865-21026-0000 disconnected I0916 22:56:28.190304 21045 master.cpp:1687] Deactivating framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:19.037235 21050 master.cpp:120] No whitelist given. Advertising offers for all slaves I0916 22:56:28.190394 21045 master.cpp:763] Giving framework 20140916-225614-3125920579-47865-21026-0000 0ns to failover ../../src/tests/health_check_tests.cpp:319: Failure Actual function call count doesn't match EXPECT_CALL(sched, statusUpdate(&driver, _))... Expected: to be called 4 times Actual: called once - unsatisfied and active I0916 22:56:28.190624 21052 slave.cpp:2110] Handling status update TASK_RUNNING (UUID: 5783bb6f-112f-4434-a160-a336e890398a) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 from executor(1)@67.195.81.186:35510 I0916 22:56:28.190757 21052 slave.cpp:2110] Handling status update TASK_RUNNING (UUID: b4a9f647-3894-47f3-b55e-49d0355b20f9) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 from executor(1)@67.195.81.186:35510 I0916 22:56:28.190773 21046 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 792b8e42-0d72-451b-978a-7d1f29a15751) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.190831 21040 hierarchical_allocator_process.hpp:405] Deactivated framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.190856 21054 master.cpp:3471] Framework failover timeout, removing framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.190846 21046 status_update_manager.cpp:373] Forwarding status update TASK_RUNNING (UUID: 792b8e42-0d72-451b-978a-7d1f29a15751) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 to master@67.195.81.186:47865 I0916 22:56:28.190887 21054 master.cpp:3976] Removing framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.190887 21052 slave.cpp:2110] Handling status update TASK_RUNNING (UUID: b5d5d6c7-e92c-4ca0-ab72-656542c14ade) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 from executor(1)@67.195.81.186:35510 I0916 22:56:28.190994 21052 slave.cpp:2110] Handling status update TASK_RUNNING (UUID: c0225c5c-b15e-4b5e-a063-07a29703ea12) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 from executor(1)@67.195.81.186:35510 I0916 22:56:28.190996 21046 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 5783bb6f-112f-4434-a160-a336e890398a) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.190999 21054 master.hpp:851] Removing task 1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140916-225614-3125920579-47865-21026-0 (penates.apache.org) I0916 22:56:28.191090 21046 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: b4a9f647-3894-47f3-b55e-49d0355b20f9) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 W0916 22:56:28.191141 21054 master.cpp:4419] Removing task 1 of framework 
20140916-225614-3125920579-47865-21026-0000 and slave 20140916-225614-3125920579-47865-21026-0 in non-terminal state TASK_RUNNING I0916 22:56:28.191093 21052 slave.cpp:2110] Handling status update TASK_RUNNING (UUID: e9e3fdb1-d8e0-4bfc-970b-fcd098cace13) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 from executor(1)@67.195.81.186:35510 I0916 22:56:28.191181 21046 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: b5d5d6c7-e92c-4ca0-ab72-656542c14ade) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.191256 21054 master.cpp:650] Master terminating I0916 22:56:28.191258 21043 hierarchical_allocator_process.hpp:563] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) on slave 20140916-225614-3125920579-47865-21026-0 from framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.369088 21046 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: c0225c5c-b15e-4b5e-a063-07a29703ea12) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.191319 21052 slave.cpp:2110] Handling status update TASK_RUNNING (UUID: 780d211b-6ecc-478d-93e9-6744ed0a2d33) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 from executor(1)@67.195.81.186:35510 I0916 22:56:28.369132 21043 hierarchical_allocator_process.hpp:360] Removed framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.369225 21046 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: e9e3fdb1-d8e0-4bfc-970b-fcd098cace13) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.369283 21052 slave.cpp:2110] Handling status update TASK_RUNNING (UUID: f41475b5-9b45-478e-8cd9-2cf7854627dd) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 from executor(1)@67.195.81.186:35510 I0916 22:56:28.369323 21046 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 780d211b-6ecc-478d-93e9-6744ed0a2d33) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.369415 21046 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: f41475b5-9b45-478e-8cd9-2cf7854627dd) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.369420 21052 slave.cpp:2110] Handling status update TASK_RUNNING (UUID: 5f3a1b44-51f0-4deb-ba4c-e7238f63f856) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 from executor(1)@67.195.81.186:35510 I0916 22:56:28.369536 21052 slave.cpp:2110] Handling status update TASK_RUNNING (UUID: 4663a09b-147f-455c-a577-3d967ddf5256) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 from executor(1)@67.195.81.186:35510 I0916 22:56:28.369642 21052 slave.cpp:2110] Handling status update TASK_RUNNING (UUID: 89cbabd7-0169-4b58-8df7-d8fd4bc4a287) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 from executor(1)@67.195.81.186:35510 I0916 22:56:28.369685 21055 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 5f3a1b44-51f0-4deb-ba4c-e7238f63f856) for task 1 in health state healthy of framework 
20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.369753 21052 slave.cpp:2110] Handling status update TASK_RUNNING (UUID: 3c491f72-95f1-4c52-b7ca-c6470f748eb5) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 from executor(1)@67.195.81.186:35510 I0916 22:56:28.369802 21055 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 4663a09b-147f-455c-a577-3d967ddf5256) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.369884 21055 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 89cbabd7-0169-4b58-8df7-d8fd4bc4a287) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.369889 21052 slave.cpp:2110] Handling status update TASK_RUNNING (UUID: 218be9bd-a229-4808-8fb6-1e507830cdaf) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 from executor(1)@67.195.81.186:35510 I0916 22:56:28.369943 21055 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 3c491f72-95f1-4c52-b7ca-c6470f748eb5) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.369978 21052 slave.cpp:1431] Asked to shut down framework 20140916-225614-3125920579-47865-21026-0000 by master@67.195.81.186:47865 I0916 22:56:28.369998 21052 slave.cpp:1456] Shutting down framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.370009 21055 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 218be9bd-a229-4808-8fb6-1e507830cdaf) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.370018 21052 slave.cpp:2899] Shutting down executor '1' of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.370183 21052 slave.cpp:2267] Status update manager successfully handled status update TASK_RUNNING (UUID: 792b8e42-0d72-451b-978a-7d1f29a15751) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.370206 21052 slave.cpp:2273] Sending acknowledgement for status update TASK_RUNNING (UUID: 792b8e42-0d72-451b-978a-7d1f29a15751) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 to executor(1)@67.195.81.186:35510 I0916 22:56:28.370426 21052 slave.cpp:2267] Status update manager successfully handled status update TASK_RUNNING (UUID: 5783bb6f-112f-4434-a160-a336e890398a) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.370447 21052 slave.cpp:2273] Sending acknowledgement for status update TASK_RUNNING (UUID: 5783bb6f-112f-4434-a160-a336e890398a) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 to executor(1)@67.195.81.186:35510 I0916 22:56:28.370635 21052 slave.cpp:2267] Status update manager successfully handled status update TASK_RUNNING (UUID: b4a9f647-3894-47f3-b55e-49d0355b20f9) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.370657 21052 slave.cpp:2273] Sending acknowledgement for status update TASK_RUNNING (UUID: b4a9f647-3894-47f3-b55e-49d0355b20f9) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 to executor(1)@67.195.81.186:35510 I0916 22:56:28.370815 21052 slave.cpp:2267] Status update manager successfully handled status update 
TASK_RUNNING (UUID: b5d5d6c7-e92c-4ca0-ab72-656542c14ade) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.370837 21052 slave.cpp:2273] Sending acknowledgement for status update TASK_RUNNING (UUID: b5d5d6c7-e92c-4ca0-ab72-656542c14ade) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 to executor(1)@67.195.81.186:35510 I0916 22:56:28.370972 21052 slave.cpp:2267] Status update manager successfully handled status update TASK_RUNNING (UUID: c0225c5c-b15e-4b5e-a063-07a29703ea12) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.371000 21052 slave.cpp:2273] Sending acknowledgement for status update TASK_RUNNING (UUID: c0225c5c-b15e-4b5e-a063-07a29703ea12) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 to executor(1)@67.195.81.186:35510 I0916 22:56:28.371155 21052 slave.cpp:2378] master@67.195.81.186:47865 exited W0916 22:56:28.371177 21052 slave.cpp:2381] Master disconnected! Waiting for a new master to be elected I0916 22:56:28.371202 21052 slave.cpp:2267] Status update manager successfully handled status update TASK_RUNNING (UUID: e9e3fdb1-d8e0-4bfc-970b-fcd098cace13) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.540035 21052 slave.cpp:2273] Sending acknowledgement for status update TASK_RUNNING (UUID: e9e3fdb1-d8e0-4bfc-970b-fcd098cace13) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 to executor(1)@67.195.81.186:35510 I0916 22:56:28.371701 21053 containerizer.cpp:882] Destroying container 'd383a013-89cf-47c6-ad8e-39e2f3e971fd' I0916 22:56:28.540177 21052 slave.cpp:2267] Status update manager successfully handled status update TASK_RUNNING (UUID: 780d211b-6ecc-478d-93e9-6744ed0a2d33) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.540196 21052 slave.cpp:2273] Sending acknowledgement for status update TASK_RUNNING (UUID: 780d211b-6ecc-478d-93e9-6744ed0a2d33) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 to executor(1)@67.195.81.186:35510 I0916 22:56:28.540324 21052 slave.cpp:2267] Status update manager successfully handled status update TASK_RUNNING (UUID: f41475b5-9b45-478e-8cd9-2cf7854627dd) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.540350 21052 slave.cpp:2273] Sending acknowledgement for status update TASK_RUNNING (UUID: f41475b5-9b45-478e-8cd9-2cf7854627dd) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 to executor(1)@67.195.81.186:35510 I0916 22:56:28.540403 21052 slave.cpp:2267] Status update manager successfully handled status update TASK_RUNNING (UUID: 5f3a1b44-51f0-4deb-ba4c-e7238f63f856) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.540421 21052 slave.cpp:2273] Sending acknowledgement for status update TASK_RUNNING (UUID: 5f3a1b44-51f0-4deb-ba4c-e7238f63f856) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 to executor(1)@67.195.81.186:35510 I0916 22:56:28.540530 21052 slave.cpp:2267] Status update manager successfully handled status update TASK_RUNNING (UUID: 4663a09b-147f-455c-a577-3d967ddf5256) for task 1 in health state unhealthy of framework 
20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.540556 21052 slave.cpp:2273] Sending acknowledgement for status update TASK_RUNNING (UUID: 4663a09b-147f-455c-a577-3d967ddf5256) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 to executor(1)@67.195.81.186:35510 I0916 22:56:28.540664 21052 slave.cpp:2267] Status update manager successfully handled status update TASK_RUNNING (UUID: 89cbabd7-0169-4b58-8df7-d8fd4bc4a287) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.540681 21052 slave.cpp:2273] Sending acknowledgement for status update TASK_RUNNING (UUID: 89cbabd7-0169-4b58-8df7-d8fd4bc4a287) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 to executor(1)@67.195.81.186:35510 I0916 22:56:28.540889 21052 slave.cpp:2267] Status update manager successfully handled status update TASK_RUNNING (UUID: 3c491f72-95f1-4c52-b7ca-c6470f748eb5) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.540918 21052 slave.cpp:2273] Sending acknowledgement for status update TASK_RUNNING (UUID: 3c491f72-95f1-4c52-b7ca-c6470f748eb5) for task 1 in health state unhealthy of framework 20140916-225614-3125920579-47865-21026-0000 to executor(1)@67.195.81.186:35510 I0916 22:56:28.541082 21052 slave.cpp:2267] Status update manager successfully handled status update TASK_RUNNING (UUID: 218be9bd-a229-4808-8fb6-1e507830cdaf) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:28.541111 21052 slave.cpp:2273] Sending acknowledgement for status update TASK_RUNNING (UUID: 218be9bd-a229-4808-8fb6-1e507830cdaf) for task 1 in health state healthy of framework 20140916-225614-3125920579-47865-21026-0000 to executor(1)@67.195.81.186:35510 I0916 22:56:29.047708 21053 containerizer.cpp:997] Executor for container 'd383a013-89cf-47c6-ad8e-39e2f3e971fd' has exited I0916 22:56:29.048037 21050 slave.cpp:2617] Executor '1' of framework 20140916-225614-3125920579-47865-21026-0000 terminated with signal Killed I0916 22:56:29.048197 21050 slave.cpp:2753] Cleaning up executor '1' of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:29.048373 21050 slave.cpp:2828] Cleaning up framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:29.048444 21043 status_update_manager.cpp:282] Closing status update streams for framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:29.048457 21050 slave.cpp:477] Slave terminating I0916 22:56:29.048476 21043 status_update_manager.cpp:530] Cleaning up status update stream for task 1 of framework 20140916-225614-3125920579-47865-21026-0000 I0916 22:56:29.048462 21041 gc.cpp:56] Scheduling '/tmp/HealthCheckTest_HealthStatusChange_cGKTiG/slaves/20140916-225614-3125920579-47865-21026-0/frameworks/20140916-225614-3125920579-47865-21026-0000/executors/1/runs/d383a013-89cf-47c6-ad8e-39e2f3e971fd' for gc 6.99999944121481days in the future I0916 22:56:29.048568 21041 gc.cpp:56] Scheduling '/tmp/HealthCheckTest_HealthStatusChange_cGKTiG/slaves/20140916-225614-3125920579-47865-21026-0/frameworks/20140916-225614-3125920579-47865-21026-0000/executors/1' for gc 6.99999944031111days in the future I0916 22:56:29.048607 21041 gc.cpp:56] Scheduling '/tmp/HealthCheckTest_HealthStatusChange_cGKTiG/slaves/20140916-225614-3125920579-47865-21026-0/frameworks/20140916-225614-3125920579-47865-21026-0000' for gc 6.99999943939852days in the 
future [ FAILED ] HealthCheckTest.HealthStatusChange (15019 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1808","09/17/2014 23:59:19",3,"Expose RTT in container stats ""As we expose the bandwidth, so we should expose the RTT as a measure of latency each container is experiencing. We can use {{ss}} to get the per-socket statistics and filter and aggregate accordingly to get a measure of RTT.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1813","09/18/2014 04:42:47",1,"Fail fast in example frameworks if task goes into unexpected state ""Most of the example frameworks launch a bunch of tasks and exit if *all* of them reach FINISHED state. But if there is a bug in the code resulting in TASK_LOST, the framework waits forever. Instead the framework should abort if an un-expected task state is encountered.""","",0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1814","09/18/2014 17:42:28",2,"Task attempted to use more offers than requested in example jave and python frameworks """""," [ RUN ] ExamplesTest.JavaFramework Using temporary directory '/tmp/ExamplesTest_JavaFramework_2PcFCh' Enabling authentication for the framework WARNING: Logging before InitGoogleLogging() is written to STDERR I0917 23:14:35.199069 31510 process.cpp:1771] libprocess is initialized on 127.0.1.1:34609 for 8 cpus I0917 23:14:35.199794 31510 logging.cpp:177] Logging to STDERR I0917 23:14:35.225342 31510 leveldb.cpp:176] Opened db in 22.197149ms I0917 23:14:35.231133 31510 leveldb.cpp:183] Compacted db in 5.601897ms I0917 23:14:35.231498 31510 leveldb.cpp:198] Created db iterator in 215441ns I0917 23:14:35.231608 31510 leveldb.cpp:204] Seeked to beginning of db in 11488ns I0917 23:14:35.231722 31510 leveldb.cpp:273] Iterated through 0 keys in the db in 14016ns I0917 23:14:35.231917 31510 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0917 23:14:35.233129 31526 recover.cpp:425] Starting replica recovery I0917 23:14:35.233614 31526 recover.cpp:451] Replica is in EMPTY status I0917 23:14:35.234994 31526 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I0917 23:14:35.240116 31519 recover.cpp:188] Received a recover response from a replica in EMPTY status I0917 23:14:35.240782 31519 recover.cpp:542] Updating replica status to STARTING I0917 23:14:35.242846 31524 master.cpp:286] Master 20140917-231435-16842879-34609-31503 (saucy) started on 127.0.1.1:34609 I0917 23:14:35.243191 31524 master.cpp:332] Master only allowing authenticated frameworks to register I0917 23:14:35.243288 31524 master.cpp:339] Master allowing unauthenticated slaves to register I0917 23:14:35.243399 31524 credentials.hpp:36] Loading credentials for authentication from '/tmp/ExamplesTest_JavaFramework_2PcFCh/credentials' W0917 23:14:35.243588 31524 credentials.hpp:51] Permissions on credentials file '/tmp/ExamplesTest_JavaFramework_2PcFCh/credentials' are too open. It is recommended that your credentials file is NOT accessible by others. I0917 23:14:35.243846 31524 master.cpp:366] Authorization enabled I0917 23:14:35.244882 31520 hierarchical_allocator_process.hpp:299] Initializing hierarchical allocator process with master : master@127.0.1.1:34609 I0917 23:14:35.245224 31520 master.cpp:120] No whitelist given. 
Advertising offers for all slaves I0917 23:14:35.246934 31524 master.cpp:1211] The newly elected leader is master@127.0.1.1:34609 with id 20140917-231435-16842879-34609-31503 I0917 23:14:35.247234 31524 master.cpp:1224] Elected as the leading master! I0917 23:14:35.247336 31524 master.cpp:1042] Recovering from registrar I0917 23:14:35.247542 31526 registrar.cpp:313] Recovering registrar I0917 23:14:35.250555 31510 containerizer.cpp:89] Using isolation: posix/cpu,posix/mem I0917 23:14:35.252326 31510 containerizer.cpp:89] Using isolation: posix/cpu,posix/mem I0917 23:14:35.252821 31520 slave.cpp:169] Slave started on 1)@127.0.1.1:34609 I0917 23:14:35.253552 31520 slave.cpp:289] Slave resources: cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] I0917 23:14:35.253906 31520 slave.cpp:317] Slave hostname: saucy I0917 23:14:35.254004 31520 slave.cpp:318] Slave checkpoint: true I0917 23:14:35.254818 31520 state.cpp:33] Recovering state from '/tmp/mesos-w8snRW/0/meta' I0917 23:14:35.255106 31519 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 13.99622ms I0917 23:14:35.255235 31519 replica.cpp:320] Persisted replica status to STARTING I0917 23:14:35.255419 31519 recover.cpp:451] Replica is in STARTING status I0917 23:14:35.255834 31519 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I0917 23:14:35.256000 31519 recover.cpp:188] Received a recover response from a replica in STARTING status I0917 23:14:35.256217 31519 recover.cpp:542] Updating replica status to VOTING I0917 23:14:35.256641 31520 status_update_manager.cpp:193] Recovering status update manager I0917 23:14:35.257064 31520 containerizer.cpp:252] Recovering containerizer I0917 23:14:35.257725 31520 slave.cpp:3220] Finished recovery I0917 23:14:35.258463 31520 slave.cpp:600] New master detected at master@127.0.1.1:34609 I0917 23:14:35.258769 31524 status_update_manager.cpp:167] New master detected at master@127.0.1.1:34609 I0917 23:14:35.258885 31520 slave.cpp:636] No credentials provided. 
Attempting to register without authentication I0917 23:14:35.259024 31520 slave.cpp:647] Detecting new master I0917 23:14:35.259863 31520 slave.cpp:169] Slave started on 2)@127.0.1.1:34609 I0917 23:14:35.260288 31520 slave.cpp:289] Slave resources: cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] I0917 23:14:35.260493 31520 slave.cpp:317] Slave hostname: saucy I0917 23:14:35.260588 31520 slave.cpp:318] Slave checkpoint: true I0917 23:14:35.265127 31510 containerizer.cpp:89] Using isolation: posix/cpu,posix/mem I0917 23:14:35.265877 31519 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 9.536278ms I0917 23:14:35.265983 31519 replica.cpp:320] Persisted replica status to VOTING I0917 23:14:35.266324 31519 recover.cpp:556] Successfully joined the Paxos group I0917 23:14:35.266511 31519 recover.cpp:440] Recover process terminated I0917 23:14:35.266978 31519 log.cpp:656] Attempting to start the writer I0917 23:14:35.268165 31523 replica.cpp:474] Replica received implicit promise request with proposal 1 I0917 23:14:35.269850 31525 slave.cpp:169] Slave started on 3)@127.0.1.1:34609 I0917 23:14:35.270365 31525 slave.cpp:289] Slave resources: cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] I0917 23:14:35.270658 31525 slave.cpp:317] Slave hostname: saucy I0917 23:14:35.270781 31525 slave.cpp:318] Slave checkpoint: true I0917 23:14:35.271332 31525 state.cpp:33] Recovering state from '/tmp/mesos-w8snRW/2/meta' I0917 23:14:35.271580 31522 status_update_manager.cpp:193] Recovering status update manager I0917 23:14:35.271838 31522 containerizer.cpp:252] Recovering containerizer I0917 23:14:35.272238 31525 slave.cpp:3220] Finished recovery I0917 23:14:35.273002 31525 slave.cpp:600] New master detected at master@127.0.1.1:34609 I0917 23:14:35.273252 31521 status_update_manager.cpp:167] New master detected at master@127.0.1.1:34609 I0917 23:14:35.273360 31525 slave.cpp:636] No credentials provided. Attempting to register without authentication I0917 23:14:35.273507 31525 slave.cpp:647] Detecting new master I0917 23:14:35.275413 31525 state.cpp:33] Recovering state from '/tmp/mesos-w8snRW/1/meta' I0917 23:14:35.278506 31523 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 10.232514ms I0917 23:14:35.278712 31523 replica.cpp:342] Persisted promised to 1 I0917 23:14:35.279585 31523 coordinator.cpp:230] Coordinator attemping to fill missing position I0917 23:14:35.280400 31523 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I0917 23:14:35.280900 31526 status_update_manager.cpp:193] Recovering status update manager I0917 23:14:35.281282 31519 containerizer.cpp:252] Recovering containerizer I0917 23:14:35.281615 31520 slave.cpp:3220] Finished recovery I0917 23:14:35.281891 31510 sched.cpp:137] Version: 0.21.0 I0917 23:14:35.282306 31526 sched.cpp:233] New master detected at master@127.0.1.1:34609 I0917 23:14:35.282464 31526 sched.cpp:283] Authenticating with master master@127.0.1.1:34609 I0917 23:14:35.282891 31526 authenticatee.hpp:104] Initializing client SASL I0917 23:14:35.284816 31526 authenticatee.hpp:128] Creating new client SASL connection I0917 23:14:35.285428 31519 master.cpp:873] Dropping 'mesos.internal.AuthenticateMessage' message since not recovered yet I0917 23:14:35.288007 31521 slave.cpp:600] New master detected at master@127.0.1.1:34609 I0917 23:14:35.288399 31521 slave.cpp:636] No credentials provided. 
Attempting to register without authentication I0917 23:14:35.288535 31521 slave.cpp:647] Detecting new master I0917 23:14:35.288501 31519 status_update_manager.cpp:167] New master detected at master@127.0.1.1:34609 I0917 23:14:35.289625 31523 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 8.997343ms I0917 23:14:35.289784 31523 replica.cpp:676] Persisted action at 0 I0917 23:14:35.292667 31521 replica.cpp:508] Replica received write request for position 0 I0917 23:14:35.293112 31521 leveldb.cpp:438] Reading position from leveldb took 325638ns I0917 23:14:35.301774 31521 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 8.576338ms I0917 23:14:35.301916 31521 replica.cpp:676] Persisted action at 0 I0917 23:14:35.302289 31521 replica.cpp:655] Replica received learned notice for position 0 I0917 23:14:35.310542 31521 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 8.087789ms I0917 23:14:35.310675 31521 replica.cpp:676] Persisted action at 0 I0917 23:14:35.310946 31521 replica.cpp:661] Replica learned NOP action at position 0 I0917 23:14:35.311254 31521 log.cpp:672] Writer started with ending position 0 I0917 23:14:35.311957 31521 leveldb.cpp:438] Reading position from leveldb took 35110ns I0917 23:14:35.320283 31521 registrar.cpp:346] Successfully fetched the registry (0B) I0917 23:14:35.320513 31521 registrar.cpp:422] Attempting to update the 'registry' I0917 23:14:35.322226 31525 log.cpp:680] Attempting to append 118 bytes to the log I0917 23:14:35.322549 31525 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0917 23:14:35.322931 31525 replica.cpp:508] Replica received write request for position 1 I0917 23:14:35.330169 31525 leveldb.cpp:343] Persisting action (135 bytes) to leveldb took 7.133053ms I0917 23:14:35.330340 31525 replica.cpp:676] Persisted action at 1 I0917 23:14:35.330890 31525 replica.cpp:655] Replica received learned notice for position 1 I0917 23:14:35.339218 31525 leveldb.cpp:343] Persisting action (137 bytes) to leveldb took 8.192024ms I0917 23:14:35.339380 31525 replica.cpp:676] Persisted action at 1 I0917 23:14:35.339715 31525 replica.cpp:661] Replica learned APPEND action at position 1 I0917 23:14:35.340615 31525 registrar.cpp:479] Successfully updated 'registry' I0917 23:14:35.340802 31525 registrar.cpp:372] Successfully recovered registrar I0917 23:14:35.341104 31525 log.cpp:699] Attempting to truncate the log to 1 I0917 23:14:35.341351 31525 master.cpp:1069] Recovered 0 slaves from the Registry (82B) ; allowing 10mins for slaves to re-register I0917 23:14:35.341527 31525 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0917 23:14:35.341964 31525 replica.cpp:508] Replica received write request for position 2 I0917 23:14:35.352336 31525 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 10.213086ms I0917 23:14:35.352494 31525 replica.cpp:676] Persisted action at 2 I0917 23:14:35.356258 31523 replica.cpp:655] Replica received learned notice for position 2 I0917 23:14:35.364992 31523 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 8.606522ms I0917 23:14:35.365166 31523 leveldb.cpp:401] Deleting ~1 keys from leveldb took 48378ns I0917 23:14:35.365404 31523 replica.cpp:676] Persisted action at 2 I0917 23:14:35.365537 31523 replica.cpp:661] Replica learned TRUNCATE action at position 2 I0917 23:14:35.568366 31523 slave.cpp:994] Will retry registration in 423.208575ms if necessary I0917 23:14:35.568840 31522 master.cpp:2870] Registering slave at 
slave(3)@127.0.1.1:34609 (saucy) with id 20140917-231435-16842879-34609-31503-0 I0917 23:14:35.569422 31522 registrar.cpp:422] Attempting to update the 'registry' I0917 23:14:35.572013 31522 log.cpp:680] Attempting to append 289 bytes to the log I0917 23:14:35.572273 31519 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I0917 23:14:35.572816 31519 replica.cpp:508] Replica received write request for position 3 I0917 23:14:35.579784 31519 leveldb.cpp:343] Persisting action (308 bytes) to leveldb took 6.809365ms I0917 23:14:35.579907 31519 replica.cpp:676] Persisted action at 3 I0917 23:14:35.580512 31519 replica.cpp:655] Replica received learned notice for position 3 I0917 23:14:35.588748 31519 leveldb.cpp:343] Persisting action (310 bytes) to leveldb took 8.112519ms I0917 23:14:35.588888 31519 replica.cpp:676] Persisted action at 3 I0917 23:14:35.588985 31519 replica.cpp:661] Replica learned APPEND action at position 3 I0917 23:14:35.589754 31519 registrar.cpp:479] Successfully updated 'registry' I0917 23:14:35.590070 31519 master.cpp:2910] Registered slave 20140917-231435-16842879-34609-31503-0 at slave(3)@127.0.1.1:34609 (saucy) I0917 23:14:35.590255 31519 master.cpp:4118] Adding slave 20140917-231435-16842879-34609-31503-0 at slave(3)@127.0.1.1:34609 (saucy) with cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] I0917 23:14:35.590831 31519 slave.cpp:765] Registered with master master@127.0.1.1:34609; given slave ID 20140917-231435-16842879-34609-31503-0 I0917 23:14:35.589913 31523 log.cpp:699] Attempting to truncate the log to 3 I0917 23:14:35.591414 31523 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I0917 23:14:35.591815 31523 replica.cpp:508] Replica received write request for position 4 I0917 23:14:35.591117 31521 hierarchical_allocator_process.hpp:442] Added slave 20140917-231435-16842879-34609-31503-0 (saucy) with cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] (and cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] available) I0917 23:14:35.592293 31521 hierarchical_allocator_process.hpp:679] Performed allocation for slave 20140917-231435-16842879-34609-31503-0 in 64364ns I0917 23:14:35.592953 31519 slave.cpp:778] Checkpointing SlaveInfo to '/tmp/mesos-w8snRW/2/meta/slaves/20140917-231435-16842879-34609-31503-0/slave.info' I0917 23:14:35.593475 31519 slave.cpp:2347] Received ping from slave-observer(1)@127.0.1.1:34609 I0917 23:14:35.601356 31523 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 9.420461ms I0917 23:14:35.601539 31523 replica.cpp:676] Persisted action at 4 I0917 23:14:35.602325 31523 replica.cpp:655] Replica received learned notice for position 4 I0917 23:14:35.610779 31523 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 8.34398ms I0917 23:14:35.611114 31523 leveldb.cpp:401] Deleting ~2 keys from leveldb took 66521ns I0917 23:14:35.611554 31523 replica.cpp:676] Persisted action at 4 I0917 23:14:35.611690 31523 replica.cpp:661] Replica learned TRUNCATE action at position 4 I0917 23:14:36.033941 31523 slave.cpp:994] Will retry registration in 322.705631ms if necessary I0917 23:14:36.034276 31521 master.cpp:2870] Registering slave at slave(1)@127.0.1.1:34609 (saucy) with id 20140917-231435-16842879-34609-31503-1 I0917 23:14:36.034536 31521 registrar.cpp:422] Attempting to update the 'registry' I0917 23:14:36.035889 31521 log.cpp:680] Attempting to append 454 bytes to the log I0917 23:14:36.036099 31524 coordinator.cpp:340] Coordinator 
attempting to write APPEND action at position 5 I0917 23:14:36.036416 31524 replica.cpp:508] Replica received write request for position 5 I0917 23:14:36.046672 31524 leveldb.cpp:343] Persisting action (473 bytes) to leveldb took 10.160627ms I0917 23:14:36.047035 31524 replica.cpp:676] Persisted action at 5 I0917 23:14:36.047613 31524 replica.cpp:655] Replica received learned notice for position 5 I0917 23:14:36.053006 31524 leveldb.cpp:343] Persisting action (475 bytes) to leveldb took 5.180742ms I0917 23:14:36.053246 31524 replica.cpp:676] Persisted action at 5 I0917 23:14:36.053678 31524 replica.cpp:661] Replica learned APPEND action at position 5 I0917 23:14:36.060384 31524 registrar.cpp:479] Successfully updated 'registry' I0917 23:14:36.061328 31524 master.cpp:2910] Registered slave 20140917-231435-16842879-34609-31503-1 at slave(1)@127.0.1.1:34609 (saucy) I0917 23:14:36.061537 31524 master.cpp:4118] Adding slave 20140917-231435-16842879-34609-31503-1 at slave(1)@127.0.1.1:34609 (saucy) with cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] I0917 23:14:36.061982 31524 slave.cpp:765] Registered with master master@127.0.1.1:34609; given slave ID 20140917-231435-16842879-34609-31503-1 I0917 23:14:36.062891 31524 slave.cpp:778] Checkpointing SlaveInfo to '/tmp/mesos-w8snRW/0/meta/slaves/20140917-231435-16842879-34609-31503-1/slave.info' I0917 23:14:36.061050 31525 log.cpp:699] Attempting to truncate the log to 5 I0917 23:14:36.063244 31525 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 6 I0917 23:14:36.063746 31525 replica.cpp:508] Replica received write request for position 6 I0917 23:14:36.062386 31520 hierarchical_allocator_process.hpp:442] Added slave 20140917-231435-16842879-34609-31503-1 (saucy) with cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] (and cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] available) I0917 23:14:36.064352 31520 hierarchical_allocator_process.hpp:679] Performed allocation for slave 20140917-231435-16842879-34609-31503-1 in 35730ns I0917 23:14:36.065166 31524 slave.cpp:2347] Received ping from slave-observer(2)@127.0.1.1:34609 I0917 23:14:36.070137 31525 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 6.242192ms I0917 23:14:36.070355 31525 replica.cpp:676] Persisted action at 6 I0917 23:14:36.071005 31525 replica.cpp:655] Replica received learned notice for position 6 I0917 23:14:36.076560 31525 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 5.368532ms I0917 23:14:36.077137 31525 leveldb.cpp:401] Deleting ~2 keys from leveldb took 371245ns I0917 23:14:36.077241 31525 replica.cpp:676] Persisted action at 6 I0917 23:14:36.077345 31525 replica.cpp:661] Replica learned TRUNCATE action at position 6 I0917 23:14:36.141270 31522 slave.cpp:994] Will retry registration in 1.857205901secs if necessary I0917 23:14:36.141644 31522 master.cpp:2870] Registering slave at slave(2)@127.0.1.1:34609 (saucy) with id 20140917-231435-16842879-34609-31503-2 I0917 23:14:36.141930 31522 registrar.cpp:422] Attempting to update the 'registry' I0917 23:14:36.143316 31521 log.cpp:680] Attempting to append 619 bytes to the log I0917 23:14:36.143646 31521 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 7 I0917 23:14:36.143954 31521 replica.cpp:508] Replica received write request for position 7 I0917 23:14:36.148875 31521 leveldb.cpp:343] Persisting action (638 bytes) to leveldb took 4.787834ms I0917 23:14:36.149085 31521 replica.cpp:676] Persisted action 
at 7 I0917 23:14:36.149673 31521 replica.cpp:655] Replica received learned notice for position 7 I0917 23:14:36.155232 31521 leveldb.cpp:343] Persisting action (640 bytes) to leveldb took 5.472209ms I0917 23:14:36.155522 31521 replica.cpp:676] Persisted action at 7 I0917 23:14:36.155936 31521 replica.cpp:661] Replica learned APPEND action at position 7 I0917 23:14:36.156481 31521 registrar.cpp:479] Successfully updated 'registry' I0917 23:14:36.156663 31526 log.cpp:699] Attempting to truncate the log to 7 I0917 23:14:36.156813 31526 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 8 I0917 23:14:36.157155 31526 replica.cpp:508] Replica received write request for position 8 I0917 23:14:36.157510 31520 master.cpp:2910] Registered slave 20140917-231435-16842879-34609-31503-2 at slave(2)@127.0.1.1:34609 (saucy) I0917 23:14:36.157645 31520 master.cpp:4118] Adding slave 20140917-231435-16842879-34609-31503-2 at slave(2)@127.0.1.1:34609 (saucy) with cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] I0917 23:14:36.157928 31520 slave.cpp:765] Registered with master master@127.0.1.1:34609; given slave ID 20140917-231435-16842879-34609-31503-2 I0917 23:14:36.158304 31520 slave.cpp:778] Checkpointing SlaveInfo to '/tmp/mesos-w8snRW/1/meta/slaves/20140917-231435-16842879-34609-31503-2/slave.info' I0917 23:14:36.158685 31520 hierarchical_allocator_process.hpp:442] Added slave 20140917-231435-16842879-34609-31503-2 (saucy) with cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] (and cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] available) I0917 23:14:36.158821 31520 hierarchical_allocator_process.hpp:679] Performed allocation for slave 20140917-231435-16842879-34609-31503-2 in 23287ns I0917 23:14:36.158965 31520 slave.cpp:2347] Received ping from slave-observer(3)@127.0.1.1:34609 I0917 23:14:36.167183 31526 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 9.894561ms I0917 23:14:36.167323 31526 replica.cpp:676] Persisted action at 8 I0917 23:14:36.167765 31526 replica.cpp:655] Replica received learned notice for position 8 I0917 23:14:36.177480 31526 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 9.585411ms I0917 23:14:36.177675 31526 leveldb.cpp:401] Deleting ~2 keys from leveldb took 37564ns I0917 23:14:36.177973 31526 replica.cpp:676] Persisted action at 8 I0917 23:14:36.178089 31526 replica.cpp:661] Replica learned TRUNCATE action at position 8 I0917 23:14:36.245735 31526 hierarchical_allocator_process.hpp:659] Performed allocation for 3 slaves in 96108ns I0917 23:14:37.246182 31519 hierarchical_allocator_process.hpp:659] Performed allocation for 3 slaves in 83121ns I0917 23:14:38.246640 31519 hierarchical_allocator_process.hpp:659] Performed allocation for 3 slaves in 126479ns I0917 23:14:39.247378 31526 hierarchical_allocator_process.hpp:659] Performed allocation for 3 slaves in 81524ns I0917 23:14:39.895262 31488 exec.cpp:86] Committing suicide by killing the process group I0917 23:14:39.900475 31494 exec.cpp:86] Committing suicide by killing the process group I0917 23:14:39.904479 31482 exec.cpp:86] Committing suicide by killing the process group I0917 23:14:40.246654 31520 master.cpp:120] No whitelist given. 
Advertising offers for all slaves I0917 23:14:40.247879 31524 hierarchical_allocator_process.hpp:659] Performed allocation for 3 slaves in 84970ns W0917 23:14:40.283375 31522 sched.cpp:378] Authentication timed out I0917 23:14:40.283819 31522 sched.cpp:338] Failed to authenticate with master master@127.0.1.1:34609: Authentication discarded I0917 23:14:40.283980 31522 sched.cpp:283] Authenticating with master master@127.0.1.1:34609 I0917 23:14:40.284317 31522 authenticatee.hpp:128] Creating new client SASL connection I0917 23:14:40.284855 31522 master.cpp:3669] Authenticating scheduler-4c79cf12-ea1b-49e5-bae1-d34f91605227@127.0.1.1:34609 I0917 23:14:40.285302 31522 authenticator.hpp:94] Initializing server SASL I0917 23:14:40.285907 31522 auxprop.cpp:45] Initialized in-memory auxiliary property plugin I0917 23:14:40.286022 31522 authenticator.hpp:156] Creating new server SASL connection I0917 23:14:40.286351 31525 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0917 23:14:40.286486 31525 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0917 23:14:40.286731 31522 authenticator.hpp:262] Received SASL authentication start I0917 23:14:40.286923 31522 authenticator.hpp:384] Authentication requires more steps I0917 23:14:40.287080 31524 authenticatee.hpp:265] Received SASL authentication step I0917 23:14:40.287230 31522 authenticator.hpp:290] Received SASL authentication step I0917 23:14:40.287385 31522 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'saucy' server FQDN: 'saucy' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0917 23:14:40.287508 31522 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0917 23:14:40.287664 31522 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0917 23:14:40.287777 31522 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'saucy' server FQDN: 'saucy' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0917 23:14:40.287912 31522 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0917 23:14:40.288020 31522 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0917 23:14:40.288214 31522 authenticator.hpp:376] Authentication success I0917 23:14:40.288396 31526 authenticatee.hpp:305] Authentication success I0917 23:14:40.291566 31526 sched.cpp:357] Successfully authenticated with master master@127.0.1.1:34609 I0917 23:14:40.291960 31521 master.cpp:3709] Successfully authenticated principal 'test-principal' at scheduler-4c79cf12-ea1b-49e5-bae1-d34f91605227@127.0.1.1:34609 I0917 23:14:40.292358 31526 sched.cpp:476] Sending registration request to master@127.0.1.1:34609 I0917 23:14:40.292563 31525 master.cpp:1330] Received registration request from scheduler-4c79cf12-ea1b-49e5-bae1-d34f91605227@127.0.1.1:34609 I0917 23:14:40.292762 31525 master.cpp:1290] Authorizing framework principal 'test-principal' to receive offers for role '*' I0917 23:14:40.293253 31525 master.cpp:1389] Registering framework 20140917-231435-16842879-34609-31503-0000 at scheduler-4c79cf12-ea1b-49e5-bae1-d34f91605227@127.0.1.1:34609 I0917 23:14:40.293767 31525 hierarchical_allocator_process.hpp:329] Added framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.294075 31525 hierarchical_allocator_process.hpp:734] Offering cpus(*):1; mem(*):1001; 
disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-0 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.294450 31525 hierarchical_allocator_process.hpp:734] Offering cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-1 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.294659 31525 hierarchical_allocator_process.hpp:734] Offering cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-2 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.295035 31520 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-0 with resources cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-0 (saucy) I0917 23:14:40.295318 31520 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-1 with resources cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-2 (saucy) I0917 23:14:40.295614 31520 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-2 with resources cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-1 (saucy) I0917 23:14:40.295778 31520 master.cpp:3616] Sending 3 offers to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.295984 31525 hierarchical_allocator_process.hpp:659] Performed allocation for 3 slaves in 1.975722ms I0917 23:14:40.296185 31526 sched.cpp:407] Framework registered with 20140917-231435-16842879-34609-31503-0000 Registered! ID = 20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.317849 31526 sched.cpp:421] Scheduler::registered took 21.576128ms Launching task 0 Launching task 1 Launching task 2 I0917 23:14:40.365347 31526 sched.cpp:544] Scheduler::resourceOffers took 47.086567ms I0917 23:14:40.365958 31523 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-0 with resources cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-0 (saucy) I0917 23:14:40.366112 31523 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-0 ] on slave 20140917-231435-16842879-34609-31503-0 at slave(3)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 W0917 23:14:40.366549 31523 master.cpp:1898] Executor default for task 0 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W0917 23:14:40.366693 31523 master.cpp:1909] Executor default for task 0 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. 
I0917 23:14:40.366950 31523 master.cpp:2311] Authorizing framework principal 'test-principal' to launch task 0 as user 'jenkins' I0917 23:14:40.367545 31523 master.hpp:839] Adding task 0 with resources cpus(*):1; mem(*):128 on slave 20140917-231435-16842879-34609-31503-0 (saucy) I0917 23:14:40.367696 31523 master.cpp:2377] Launching task 0 of framework 20140917-231435-16842879-34609-31503-0000 with resources cpus(*):1; mem(*):128 on slave 20140917-231435-16842879-34609-31503-0 at slave(3)@127.0.1.1:34609 (saucy) I0917 23:14:40.367955 31524 slave.cpp:1025] Got assigned task 0 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.368505 31524 slave.cpp:1135] Launching task 0 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.370308 31522 hierarchical_allocator_process.hpp:563] Recovered mem(*):873; disk(*):24988; ports(*):[31000-32000] (total allocatable: mem(*):873; disk(*):24988; ports(*):[31000-32000]) on slave 20140917-231435-16842879-34609-31503-0 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.370450 31522 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 filtered slave 20140917-231435-16842879-34609-31503-0 for 1secs I0917 23:14:40.371031 31520 containerizer.cpp:394] Starting container 'a70a0cfd-ee78-43a0-b0b7-26bcd205f142' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000' I0917 23:14:40.373828 31524 slave.cpp:1248] Queuing task '0' for executor default of framework '20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.374595 31524 slave.cpp:554] Successfully attached file '/tmp/mesos-w8snRW/2/slaves/20140917-231435-16842879-34609-31503-0/frameworks/20140917-231435-16842879-34609-31503-0000/executors/default/runs/a70a0cfd-ee78-43a0-b0b7-26bcd205f142' I0917 23:14:40.374486 31520 launcher.cpp:137] Forked child with pid '31534' for container 'a70a0cfd-ee78-43a0-b0b7-26bcd205f142' I0917 23:14:40.375830 31520 containerizer.cpp:510] Fetching URIs for container 'a70a0cfd-ee78-43a0-b0b7-26bcd205f142' using command '/var/jenkins/workspace/mesos-ubuntu-13.10-clang/src/mesos-fetcher' I0917 23:14:40.377596 31523 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-1 with resources cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-2 (saucy) I0917 23:14:40.377781 31523 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-1 ] on slave 20140917-231435-16842879-34609-31503-2 at slave(2)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 W0917 23:14:40.378062 31523 master.cpp:1898] Executor default for task 1 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W0917 23:14:40.378233 31523 master.cpp:1909] Executor default for task 1 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. 
I0917 23:14:40.378429 31523 master.cpp:2311] Authorizing framework principal 'test-principal' to launch task 1 as user 'jenkins' I0917 23:14:40.379017 31523 master.hpp:839] Adding task 1 with resources cpus(*):1; mem(*):128 on slave 20140917-231435-16842879-34609-31503-2 (saucy) I0917 23:14:40.379166 31523 master.cpp:2377] Launching task 1 of framework 20140917-231435-16842879-34609-31503-0000 with resources cpus(*):1; mem(*):128 on slave 20140917-231435-16842879-34609-31503-2 at slave(2)@127.0.1.1:34609 (saucy) I0917 23:14:40.379525 31525 slave.cpp:1025] Got assigned task 1 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.379914 31525 slave.cpp:1135] Launching task 1 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.381691 31519 containerizer.cpp:394] Starting container '1c6cf8b1-972b-4b73-8467-bc4503dd9332' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000' I0917 23:14:40.383585 31525 slave.cpp:1248] Queuing task '1' for executor default of framework '20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.384318 31522 hierarchical_allocator_process.hpp:563] Recovered mem(*):873; disk(*):24988; ports(*):[31000-32000] (total allocatable: mem(*):873; disk(*):24988; ports(*):[31000-32000]) on slave 20140917-231435-16842879-34609-31503-2 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.384598 31522 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 filtered slave 20140917-231435-16842879-34609-31503-2 for 1secs I0917 23:14:40.384814 31525 slave.cpp:554] Successfully attached file '/tmp/mesos-w8snRW/1/slaves/20140917-231435-16842879-34609-31503-2/frameworks/20140917-231435-16842879-34609-31503-0000/executors/default/runs/1c6cf8b1-972b-4b73-8467-bc4503dd9332' I0917 23:14:40.385326 31526 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-2 with resources cpus(*):1; mem(*):1001; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-1 (saucy) I0917 23:14:40.385509 31526 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-2 ] on slave 20140917-231435-16842879-34609-31503-1 at slave(1)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 W0917 23:14:40.385666 31526 master.cpp:1898] Executor default for task 2 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W0917 23:14:40.385856 31526 master.cpp:1909] Executor default for task 2 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. 
I0917 23:14:40.386008 31526 master.cpp:2311] Authorizing framework principal 'test-principal' to launch task 2 as user 'jenkins' I0917 23:14:40.386518 31526 master.hpp:839] Adding task 2 with resources cpus(*):1; mem(*):128 on slave 20140917-231435-16842879-34609-31503-1 (saucy) I0917 23:14:40.386818 31519 launcher.cpp:137] Forked child with pid '31536' for container '1c6cf8b1-972b-4b73-8467-bc4503dd9332' I0917 23:14:40.393805 31519 containerizer.cpp:510] Fetching URIs for container '1c6cf8b1-972b-4b73-8467-bc4503dd9332' using command '/var/jenkins/workspace/mesos-ubuntu-13.10-clang/src/mesos-fetcher' I0917 23:14:40.395980 31526 master.cpp:2377] Launching task 2 of framework 20140917-231435-16842879-34609-31503-0000 with resources cpus(*):1; mem(*):128 on slave 20140917-231435-16842879-34609-31503-1 at slave(1)@127.0.1.1:34609 (saucy) I0917 23:14:40.396574 31521 slave.cpp:1025] Got assigned task 2 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.396946 31521 slave.cpp:1135] Launching task 2 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.398582 31521 slave.cpp:1248] Queuing task '2' for executor default of framework '20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.400519 31521 slave.cpp:554] Successfully attached file '/tmp/mesos-w8snRW/0/slaves/20140917-231435-16842879-34609-31503-1/frameworks/20140917-231435-16842879-34609-31503-0000/executors/default/runs/6add4792-3bc4-4ac9-8225-969f09279561' I0917 23:14:40.400287 31525 containerizer.cpp:394] Starting container '6add4792-3bc4-4ac9-8225-969f09279561' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000' I0917 23:14:40.401595 31522 hierarchical_allocator_process.hpp:563] Recovered mem(*):873; disk(*):24988; ports(*):[31000-32000] (total allocatable: mem(*):873; disk(*):24988; ports(*):[31000-32000]) on slave 20140917-231435-16842879-34609-31503-1 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:40.403118 31522 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 filtered slave 20140917-231435-16842879-34609-31503-1 for 1secs I0917 23:14:40.431401 31525 launcher.cpp:137] Forked child with pid '31551' for container '6add4792-3bc4-4ac9-8225-969f09279561' I0917 23:14:40.436882 31525 containerizer.cpp:510] Fetching URIs for container '6add4792-3bc4-4ac9-8225-969f09279561' using command '/var/jenkins/workspace/mesos-ubuntu-13.10-clang/src/mesos-fetcher' I0917 23:14:41.248602 31523 hierarchical_allocator_process.hpp:816] Filtered mem(*):873; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-2 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:41.249043 31523 hierarchical_allocator_process.hpp:816] Filtered mem(*):873; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-0 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:41.249259 31523 hierarchical_allocator_process.hpp:816] Filtered mem(*):873; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-1 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:41.249377 31523 hierarchical_allocator_process.hpp:659] Performed allocation for 3 slaves in 1.040603ms I0917 23:14:41.384321 31526 slave.cpp:2560] Monitoring executor 'default' of framework '20140917-231435-16842879-34609-31503-0000' in container 'a70a0cfd-ee78-43a0-b0b7-26bcd205f142' I0917 23:14:41.395431 31526 slave.cpp:2560] Monitoring executor 'default' of 
framework '20140917-231435-16842879-34609-31503-0000' in container '1c6cf8b1-972b-4b73-8467-bc4503dd9332' I0917 23:14:41.413548 31522 slave.cpp:2560] Monitoring executor 'default' of framework '20140917-231435-16842879-34609-31503-0000' in container '6add4792-3bc4-4ac9-8225-969f09279561' WARNING: Logging before InitGoogleLogging() is written to STDERR WARNING: Logging before InitGoogleLogging() is written to STDERR WARNING: Logging before InitGoogleLogging() is written to STDERR I0917 23:14:41.935613 31641 process.cpp:1771] libprocess is initialized on 127.0.1.1:40557 for 8 cpus I0917 23:14:41.938254 31641 logging.cpp:177] Logging to STDERR I0917 23:14:41.934036 31643 process.cpp:1771] libprocess is initialized on 127.0.1.1:33558 for 8 cpus I0917 23:14:41.939977 31643 logging.cpp:177] Logging to STDERR I0917 23:14:41.937566 31644 process.cpp:1771] libprocess is initialized on 127.0.1.1:51898 for 8 cpus II0917 23:14:41.941807 31644 logging.cpp:177] Logging to STDERR II0917 23:14:41.943460 31641 exec.cpp:132] Version: 0.21.0 0917 23:14:41.941642 31643 exec.cpp:132] Version: 0.21.0 0917 23:14:41.943018 31644 exec.cpp:132] Version: 0.21.0 II0917 23:14:41.950268 31683 exec.cpp:182] Executor started at: executor(1)@127.0.1.1:40557 with pid 31636 II0917 23:14:41.951695 31522 slave.cpp:1758] Got registration for executor 'default' of framework 20140917-231435-16842879-34609-31503-0000 from executor(1)@127.0.1.1:40557 I0917 23:14:41.952024 31522 slave.cpp:1877] Flushing queued task 0 for executor 'default' of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:41.951012 31674 exec.cpp:182] Executor started at: executor(1)@127.0.1.1:33558 with pid 31639 I0917 23:14:41.954056 31526 slave.cpp:1758] Got registration for executor 'default' of framework 20140917-231435-16842879-34609-31503-0000 from executor(1)@127.0.1.1:33558 I0917 23:14:41.954582 31526 slave.cpp:1877] Flushing queued task 1 for executor 'default' of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:41.950121 31692 exec.cpp:182] Executor started at: executor(1)@127.0.1.1:51898 with pid 31642 I0917 23:14:41.955857 31521 slave.cpp:1758] Got registration for executor 'default' of framework 20140917-231435-16842879-34609-31503-0000 from executor(1)@127.0.1.1:51898 I0917 23:14:41.957639 31521 slave.cpp:1877] Flushing queued task 2 for executor 'default' of framework 20140917-231435-16842879-34609-31503-0000 0917 23:14:41.955232 31676 exec.cpp:206] Executor registered on slave 20140917-231435-16842879-34609-31503-2 0917 23:14:41.953346 31685 exec.cpp:206] Executor registered on slave 20140917-231435-16842879-34609-31503-0 I0917 23:14:41.968797 31694 exec.cpp:206] Executor registered on slave 20140917-231435-16842879-34609-31503-1 Registered executor on saucyRegistered executor on saucyRegistered executor on saucy II I0917 23:14:42.169991 31694 exec.cpp:218] Executor::registered took 199.821555ms I0917 23:14:42.170224 31676 exec.cpp:218] Executor::registered took 208.770192ms I0917 23:14:42.170574 31685 exec.cpp:218] Executor::registered took 205.42106ms I0917 23:14:42.171221 31676 exec.cpp:293] Executor asked to run task '1' 0917 23:14:42.170928 31694 exec.cpp:293] Executor asked to run task '2' 0917 23:14:42.171532 31685 exec.cpp:293] Executor asked to run task '0' III0917 23:14:42.194025 31685 exec.cpp:302] Executor::launchTask took 20.907244ms 0917 23:14:42.194463 31694 exec.cpp:302] Executor::launchTask took 22.463085ms 0917 23:14:42.193789 31676 exec.cpp:302] Executor::launchTask took 22.090512ms I0917 
23:14:42.249930 31520 hierarchical_allocator_process.hpp:734] Offering mem(*):873; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-0 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.250046 31520 hierarchical_allocator_process.hpp:734] Offering mem(*):873; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-1 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.250120 31520 hierarchical_allocator_process.hpp:734] Offering mem(*):873; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-2 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.250221 31520 hierarchical_allocator_process.hpp:659] Performed allocation for 3 slaves in 427470ns I0917 23:14:42.250301 31520 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-3 with resources mem(*):873; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-0 (saucy) I0917 23:14:42.250375 31520 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-4 with resources mem(*):873; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-2 (saucy) I0917 23:14:42.250427 31520 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-5 with resources mem(*):873; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-1 (saucy) I0917 23:14:42.250454 31520 master.cpp:3616] Sending 3 offers to framework 20140917-231435-16842879-34609-31503-0000 Launching task 3 Launching task 4 I0917 23:14:42.256711 31520 sched.cpp:544] Scheduler::resourceOffers took 6.128709ms I0917 23:14:42.257067 31519 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-3 with resources mem(*):873; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-0 (saucy) I0917 23:14:42.259182 31519 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-3 ] on slave 20140917-231435-16842879-34609-31503-0 at slave(3)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 II0917 23:14:42.263245 31519 master.cpp:3234] Sending status update TASK_LOST (UUID: 6c082b40-ec70-4c8a-a1b8-5580374e3340) for task 3 of framework 20140917-231435-16842879-34609-31503-0000 'Task 3 attempted to use cpus(*):1; mem(*):128 which is greater than offered mem(*):873; disk(*):24988; ports(*):[31000-32000]' I0917 23:14:42.263723 31523 hierarchical_allocator_process.hpp:563] Recovered mem(*):873; disk(*):24988; ports(*):[31000-32000] (total allocatable: mem(*):873; disk(*):24988; ports(*):[31000-32000]) on slave 20140917-231435-16842879-34609-31503-0 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.264802 31523 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 filtered slave 20140917-231435-16842879-34609-31503-0 for 1secs I0917 23:14:42.265305 31522 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-4 with resources mem(*):873; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-2 (saucy) II0917 23:14:42.266269 31522 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-4 ] on slave 20140917-231435-16842879-34609-31503-2 at slave(2)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.266079 31689 exec.cpp:525] Executor sending status update TASK_RUNNING (UUID: 
d4ac4252-ea9d-4c1d-b8c9-ea2d9704e6ad) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 0917 23:14:42.267781 31672 exec.cpp:525] Executor sending status update TASK_RUNNING (UUID: 3a405ad5-f8f8-4a7d-9f3d-ba0d02ab4cdb) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 0917 23:14:42.262806 31678 exec.cpp:525] Executor sending status update TASK_RUNNING (UUID: 774aa76c-87bd-4d39-a5e2-d880206fedb3) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.276619 31521 slave.cpp:2111] Handling status update TASK_RUNNING (UUID: 3a405ad5-f8f8-4a7d-9f3d-ba0d02ab4cdb) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 from executor(1)@127.0.1.1:33558 I0917 23:14:42.277762 31521 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 3a405ad5-f8f8-4a7d-9f3d-ba0d02ab4cdb) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.277570 31525 slave.cpp:2111] Handling status update TASK_RUNNING (UUID: 774aa76c-87bd-4d39-a5e2-d880206fedb3) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 from executor(1)@127.0.1.1:40557 I0917 23:14:42.277437 31526 slave.cpp:2111] Handling status update TASK_RUNNING (UUID: d4ac4252-ea9d-4c1d-b8c9-ea2d9704e6ad) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 from executor(1)@127.0.1.1:51898 I0917 23:14:42.280566 31526 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: d4ac4252-ea9d-4c1d-b8c9-ea2d9704e6ad) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.280719 31526 status_update_manager.cpp:499] Creating StatusUpdate stream for task 2 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.280987 31526 status_update_manager.cpp:373] Forwarding status update TASK_RUNNING (UUID: d4ac4252-ea9d-4c1d-b8c9-ea2d9704e6ad) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 to master@127.0.1.1:34609 I0917 23:14:42.281231 31526 slave.cpp:2268] Status update manager successfully handled status update TASK_RUNNING (UUID: d4ac4252-ea9d-4c1d-b8c9-ea2d9704e6ad) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.281352 31526 slave.cpp:2274] Sending acknowledgement for status update TASK_RUNNING (UUID: d4ac4252-ea9d-4c1d-b8c9-ea2d9704e6ad) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 to executor(1)@127.0.1.1:51898 I0917 23:14:42.280252 31525 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 774aa76c-87bd-4d39-a5e2-d880206fedb3) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.281714 31525 status_update_manager.cpp:499] Creating StatusUpdate stream for task 0 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.281883 31525 status_update_manager.cpp:373] Forwarding status update TASK_RUNNING (UUID: 774aa76c-87bd-4d39-a5e2-d880206fedb3) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 to master@127.0.1.1:34609 I0917 23:14:42.282068 31525 slave.cpp:2268] Status update manager successfully handled status update TASK_RUNNING (UUID: 774aa76c-87bd-4d39-a5e2-d880206fedb3) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.282349 31525 slave.cpp:2274] Sending acknowledgement for status update TASK_RUNNING (UUID: 774aa76c-87bd-4d39-a5e2-d880206fedb3) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 to executor(1)@127.0.1.1:40557 I0917 23:14:42.282624 31521 status_update_manager.cpp:499] 
Creating StatusUpdate stream for task 1 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.282802 31521 status_update_manager.cpp:373] Forwarding status update TASK_RUNNING (UUID: 3a405ad5-f8f8-4a7d-9f3d-ba0d02ab4cdb) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 to master@127.0.1.1:34609 I0917 23:14:42.282940 31522 master.cpp:3234] Sending status update TASK_LOST (UUID: 24d8174b-ab65-47d0-8e4b-9ad1b5c05030) for task 4 of framework 20140917-231435-16842879-34609-31503-0000 'Task 4 attempted to use cpus(*):1; mem(*):128 which is greater than offered mem(*):873; disk(*):24988; ports(*):[31000-32000]' I0917 23:14:42.283248 31524 slave.cpp:2268] Status update manager successfully handled status update TASK_RUNNING (UUID: 3a405ad5-f8f8-4a7d-9f3d-ba0d02ab4cdb) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 II0917 23:14:42.283634 31524 slave.cpp:2274] Sending acknowledgement for status update TASK_RUNNING (UUID: 3a405ad5-f8f8-4a7d-9f3d-ba0d02ab4cdb) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 to executor(1)@127.0.1.1:33558 I0917 23:14:42.284031 31522 master.cpp:3239] Forwarding status update TASK_RUNNING (UUID: d4ac4252-ea9d-4c1d-b8c9-ea2d9704e6ad) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 0917 23:14:42.283514 31684 exec.cpp:339] Executor received status update acknowledgement 774aa76c-87bd-4d39-a5e2-d880206fedb3 for task 0 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.285044 31522 master.cpp:3205] Status update TASK_RUNNING (UUID: d4ac4252-ea9d-4c1d-b8c9-ea2d9704e6ad) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 from slave 20140917-231435-16842879-34609-31503-1 at slave(1)@127.0.1.1:34609 (saucy) I0917 23:14:42.284976 31525 hierarchical_allocator_process.hpp:563] Recovered mem(*):873; disk(*):24988; ports(*):[31000-32000] (total allocatable: mem(*):873; disk(*):24988; ports(*):[31000-32000]) on slave 20140917-231435-16842879-34609-31503-2 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.285399 31525 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 filtered slave 20140917-231435-16842879-34609-31503-2 for 1secs I0917 23:14:42.285539 31522 master.cpp:3239] Forwarding status update TASK_RUNNING (UUID: 774aa76c-87bd-4d39-a5e2-d880206fedb3) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.286206 31522 master.cpp:3205] Status update TASK_RUNNING (UUID: 774aa76c-87bd-4d39-a5e2-d880206fedb3) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 from slave 20140917-231435-16842879-34609-31503-0 at slave(3)@127.0.1.1:34609 (saucy) I0917 23:14:42.286468 31522 master.cpp:3239] Forwarding status update TASK_RUNNING (UUID: 3a405ad5-f8f8-4a7d-9f3d-ba0d02ab4cdb) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.286607 31522 master.cpp:3205] Status update TASK_RUNNING (UUID: 3a405ad5-f8f8-4a7d-9f3d-ba0d02ab4cdb) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 from slave 20140917-231435-16842879-34609-31503-2 at slave(2)@127.0.1.1:34609 (saucy) I0917 23:14:42.286854 31524 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-5 with resources mem(*):873; disk(*):24988; ports(*):[31000-32000] on slave 20140917-231435-16842879-34609-31503-1 (saucy) II0917 23:14:42.287259 31524 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-5 ] on slave 
20140917-231435-16842879-34609-31503-1 at slave(1)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 0917 23:14:42.287132 31671 exec.cpp:339] Executor received status update acknowledgement 3a405ad5-f8f8-4a7d-9f3d-ba0d02ab4cdb for task 1 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.287842 31524 hierarchical_allocator_process.hpp:563] Recovered mem(*):873; disk(*):24988; ports(*):[31000-32000] (total allocatable: mem(*):873; disk(*):24988; ports(*):[31000-32000]) on slave 20140917-231435-16842879-34609-31503-1 from framework 20140917-231435-16842879-34609-31503-0000 II0917 23:14:42.291944 31524 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 filtered slave 20140917-231435-16842879-34609-31503-1 for 1secs 0917 23:14:42.288897 31688 exec.cpp:339] Executor received status update acknowledgement d4ac4252-ea9d-4c1d-b8c9-ea2d9704e6ad for task 2 of framework 20140917-231435-16842879-34609-31503-0000 Status update: task 3 is in state TASK_LOST I0917 23:14:42.314281 31520 sched.cpp:635] Scheduler::statusUpdate took 21.943967ms Status update: task 4 is in state TASK_LOST I0917 23:14:42.314991 31520 sched.cpp:635] Scheduler::statusUpdate took 513983ns Status update: task 2 is in state TASK_RUNNING I0917 23:14:42.316504 31520 sched.cpp:635] Scheduler::statusUpdate took 1.370298ms I0917 23:14:42.316826 31526 master.cpp:2720] Forwarding status update acknowledgement d4ac4252-ea9d-4c1d-b8c9-ea2d9704e6ad for task 2 of framework 20140917-231435-16842879-34609-31503-0000 to slave 20140917-231435-16842879-34609-31503-1 at slave(1)@127.0.1.1:34609 (saucy) I0917 23:14:42.318004 31526 status_update_manager.cpp:398] Received status update acknowledgement (UUID: d4ac4252-ea9d-4c1d-b8c9-ea2d9704e6ad) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.318320 31526 slave.cpp:1698] Status update manager successfully handled status update acknowledgement (UUID: d4ac4252-ea9d-4c1d-b8c9-ea2d9704e6ad) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 Status update: task 0 is in state TASK_RUNNING I0917 23:14:42.319246 31520 sched.cpp:635] Scheduler::statusUpdate took 546461ns I0917 23:14:42.319432 31521 master.cpp:2720] Forwarding status update acknowledgement 774aa76c-87bd-4d39-a5e2-d880206fedb3 for task 0 of framework 20140917-231435-16842879-34609-31503-0000 to slave 20140917-231435-16842879-34609-31503-0 at slave(3)@127.0.1.1:34609 (saucy) I0917 23:14:42.319612 31521 status_update_manager.cpp:398] Received status update acknowledgement (UUID: 774aa76c-87bd-4d39-a5e2-d880206fedb3) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.319789 31521 slave.cpp:1698] Status update manager successfully handled status update acknowledgement (UUID: 774aa76c-87bd-4d39-a5e2-d880206fedb3) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 Status update: task 1 is in state TASK_RUNNING I0917 23:14:42.325856 31520 sched.cpp:635] Scheduler::statusUpdate took 5.910398ms I0917 23:14:42.326161 31519 master.cpp:2720] Forwarding status update acknowledgement 3a405ad5-f8f8-4a7d-9f3d-ba0d02ab4cdb for task 1 of framework 20140917-231435-16842879-34609-31503-0000 to slave 20140917-231435-16842879-34609-31503-2 at slave(2)@127.0.1.1:34609 (saucy) I0917 23:14:42.326392 31522 status_update_manager.cpp:398] Received status update acknowledgement (UUID: 3a405ad5-f8f8-4a7d-9f3d-ba0d02ab4cdb) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 I0917 
23:14:42.326689 31522 slave.cpp:1698] Status update manager successfully handled status update acknowledgement (UUID: 3a405ad5-f8f8-4a7d-9f3d-ba0d02ab4cdb) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.385427 31524 monitor.cpp:140] Failed to collect resource usage for container 'a70a0cfd-ee78-43a0-b0b7-26bcd205f142' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000': Unknown container: a70a0cfd-ee78-43a0-b0b7-26bcd205f142 I0917 23:14:42.396478 31524 monitor.cpp:140] Failed to collect resource usage for container '1c6cf8b1-972b-4b73-8467-bc4503dd9332' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000': Unknown container: 1c6cf8b1-972b-4b73-8467-bc4503dd9332 Running task value: """"1"""" Running task value: """"0"""" Running task value: """"2"""" I0917 23:14:42.747750 31669 exec.cpp:525] Executor sending status update TASK_FINISHED (UUID: c3e603da-200e-4ac6-8ec4-ab09bd839be7) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.748468 31525 slave.cpp:2111] Handling status update TASK_FINISHED (UUID: c3e603da-200e-4ac6-8ec4-ab09bd839be7) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 from executor(1)@127.0.1.1:33558 I0917 23:14:42.748540 31525 slave.cpp:3938] Terminating task 1 I0917 23:14:42.748910 31525 status_update_manager.cpp:320] Received status update TASK_FINISHED (UUID: c3e603da-200e-4ac6-8ec4-ab09bd839be7) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.748955 31525 status_update_manager.cpp:373] Forwarding status update TASK_FINISHED (UUID: c3e603da-200e-4ac6-8ec4-ab09bd839be7) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 to master@127.0.1.1:34609 I0917 23:14:42.749049 31525 master.cpp:3239] Forwarding status update TASK_FINISHED (UUID: c3e603da-200e-4ac6-8ec4-ab09bd839be7) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.749084 31525 master.cpp:3205] Status update TASK_FINISHED (UUID: c3e603da-200e-4ac6-8ec4-ab09bd839be7) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 from slave 20140917-231435-16842879-34609-31503-2 at slave(2)@127.0.1.1:34609 (saucy) I0917 23:14:42.749125 31525 master.cpp:4385] Removing task 1 with resources cpus(*):1; mem(*):128 of framework 20140917-231435-16842879-34609-31503-0000 on slave 20140917-231435-16842879-34609-31503-2 at slave(2)@127.0.1.1:34609 (saucy) I0917 23:14:42.749271 31525 slave.cpp:2268] Status update manager successfully handled status update TASK_FINISHED (UUID: c3e603da-200e-4ac6-8ec4-ab09bd839be7) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.749285 31525 slave.cpp:2274] Sending acknowledgement for status update TASK_FINISHED (UUID: c3e603da-200e-4ac6-8ec4-ab09bd839be7) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 to executor(1)@127.0.1.1:33558 I0917 23:14:42.749987 31521 hierarchical_allocator_process.hpp:563] Recovered cpus(*):1; mem(*):128 (total allocatable: mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1) on slave 20140917-231435-16842879-34609-31503-2 from framework 20140917-231435-16842879-34609-31503-0000 Status update: task 1 is in state TASK_FINISHED Finished tasks: 1 I0917 23:14:42.751500 31525 sched.cpp:635] Scheduler::statusUpdate took 2.067671ms I0917 23:14:42.751701 31521 master.cpp:2720] Forwarding status update acknowledgement c3e603da-200e-4ac6-8ec4-ab09bd839be7 for task 1 of framework 
20140917-231435-16842879-34609-31503-0000 to slave 20140917-231435-16842879-34609-31503-2 at slave(2)@127.0.1.1:34609 (saucy) I0917 23:14:42.751891 31521 status_update_manager.cpp:398] Received status update acknowledgement (UUID: c3e603da-200e-4ac6-8ec4-ab09bd839be7) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.752101 31521 status_update_manager.cpp:530] Cleaning up status update stream for task 1 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.752358 31521 slave.cpp:1698] Status update manager successfully handled status update acknowledgement (UUID: c3e603da-200e-4ac6-8ec4-ab09bd839be7) for task 1 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.752497 31521 slave.cpp:3977] Completing task 1 III0917 23:14:42.753000 31669 exec.cpp:339] Executor received status update acknowledgement c3e603da-200e-4ac6-8ec4-ab09bd839be7 for task 1 of framework 20140917-231435-16842879-34609-31503-0000 0917 23:14:42.753515 31680 exec.cpp:525] Executor sending status update TASK_FINISHED (UUID: 7870de4c-5721-4dbb-b50c-0077513d6e78) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.755370 31519 slave.cpp:2111] Handling status update TASK_FINISHED (UUID: 7870de4c-5721-4dbb-b50c-0077513d6e78) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 from executor(1)@127.0.1.1:40557 I0917 23:14:42.755519 31519 slave.cpp:3938] Terminating task 0 I0917 23:14:42.755897 31519 status_update_manager.cpp:320] Received status update TASK_FINISHED (UUID: 7870de4c-5721-4dbb-b50c-0077513d6e78) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.756033 31519 status_update_manager.cpp:373] Forwarding status update TASK_FINISHED (UUID: 7870de4c-5721-4dbb-b50c-0077513d6e78) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 to master@127.0.1.1:34609 I0917 23:14:42.756324 31519 master.cpp:3239] Forwarding status update TASK_FINISHED (UUID: 7870de4c-5721-4dbb-b50c-0077513d6e78) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.756497 31519 master.cpp:3205] Status update TASK_FINISHED (UUID: 7870de4c-5721-4dbb-b50c-0077513d6e78) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 from slave 20140917-231435-16842879-34609-31503-0 at slave(3)@127.0.1.1:34609 (saucy) 0917 23:14:42.754186 31693 exec.cpp:525] Executor sending status update TASK_FINISHED (UUID: de91c600-abe2-4081-ad39-25fbd637b938) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 Status update: task 0 is in state TASK_FINISHEDI0917 23:14:42.764149 31519 master.cpp:4385] Removing task 0 with resources cpus(*):1; mem(*):128 of framework 20140917-231435-16842879-34609-31503-0000 on slave 20140917-231435-16842879-34609-31503-0 at slave(3)@127.0.1.1:34609 (saucy) I0917 23:14:42.756443 31523 slave.cpp:2268] Status update manager successfully handled status update TASK_FINISHED (UUID: 7870de4c-5721-4dbb-b50c-0077513d6e78) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.766275 31523 slave.cpp:2274] Sending acknowledgement for status update TASK_FINISHED (UUID: 7870de4c-5721-4dbb-b50c-0077513d6e78) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 to executor(1)@127.0.1.1:40557 I0917 23:14:42.766705 31523 hierarchical_allocator_process.hpp:563] Recovered cpus(*):1; mem(*):128 (total allocatable: mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1) on slave 20140917-231435-16842879-34609-31503-0 from 
framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.767081 31519 slave.cpp:2111] Handling status update TASK_FINISHED (UUID: de91c600-abe2-4081-ad39-25fbd637b938) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 from executor(1)@127.0.1.1:51898 I0917 23:14:42.767195 31519 slave.cpp:3938] Terminating task 2 I0917 23:14:42.767546 31519 status_update_manager.cpp:320] Received status update TASK_FINISHED (UUID: de91c600-abe2-4081-ad39-25fbd637b938) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 Finished tasks: 2 I0917 23:14:42.768301 31519 status_update_manager.cpp:373] Forwarding status update TASK_FINISHED (UUID: de91c600-abe2-4081-ad39-25fbd637b938) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 to master@127.0.1.1:34609 I0917 23:14:42.768503 31519 master.cpp:3239] Forwarding status update TASK_FINISHED (UUID: de91c600-abe2-4081-ad39-25fbd637b938) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.768614 31523 slave.cpp:2268] Status update manager successfully handled status update TASK_FINISHED (UUID: de91c600-abe2-4081-ad39-25fbd637b938) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.768723 31523 slave.cpp:2274] Sending acknowledgement for status update TASK_FINISHED (UUID: de91c600-abe2-4081-ad39-25fbd637b938) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 to executor(1)@127.0.1.1:51898 I0917 23:14:42.768264 31521 sched.cpp:635] Scheduler::statusUpdate took 7.95946ms I0917 23:14:42.769163 31519 master.cpp:3205] Status update TASK_FINISHED (UUID: de91c600-abe2-4081-ad39-25fbd637b938) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 from slave 20140917-231435-16842879-34609-31503-1 at slave(1)@127.0.1.1:34609 (saucy) Status update: task 2 is in state TASK_FINISHEDII0917 23:14:42.769947 31691 exec.cpp:339] Executor received status update acknowledgement de91c600-abe2-4081-ad39-25fbd637b938 for task 2 of framework 20140917-231435-16842879-34609-31503-0000 0917 23:14:42.770182 31682 exec.cpp:339] Executor received status update acknowledgement 7870de4c-5721-4dbb-b50c-0077513d6e78 for task 0 of framework 20140917-231435-16842879-34609-31503-0000 Finished tasks: 3 I0917 23:14:42.776676 31519 master.cpp:4385] Removing task 2 with resources cpus(*):1; mem(*):128 of framework 20140917-231435-16842879-34609-31503-0000 on slave 20140917-231435-16842879-34609-31503-1 at slave(1)@127.0.1.1:34609 (saucy) I0917 23:14:42.776577 31523 sched.cpp:635] Scheduler::statusUpdate took 7.246343ms I0917 23:14:42.777061 31519 master.cpp:2720] Forwarding status update acknowledgement 7870de4c-5721-4dbb-b50c-0077513d6e78 for task 0 of framework 20140917-231435-16842879-34609-31503-0000 to slave 20140917-231435-16842879-34609-31503-0 at slave(3)@127.0.1.1:34609 (saucy) I0917 23:14:42.777197 31523 hierarchical_allocator_process.hpp:563] Recovered cpus(*):1; mem(*):128 (total allocatable: mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1) on slave 20140917-231435-16842879-34609-31503-1 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.777384 31519 master.cpp:2720] Forwarding status update acknowledgement de91c600-abe2-4081-ad39-25fbd637b938 for task 2 of framework 20140917-231435-16842879-34609-31503-0000 to slave 20140917-231435-16842879-34609-31503-1 at slave(1)@127.0.1.1:34609 (saucy) I0917 23:14:42.777580 31519 status_update_manager.cpp:398] Received status update acknowledgement (UUID: 
de91c600-abe2-4081-ad39-25fbd637b938) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.777721 31519 status_update_manager.cpp:530] Cleaning up status update stream for task 2 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.777523 31523 status_update_manager.cpp:398] Received status update acknowledgement (UUID: 7870de4c-5721-4dbb-b50c-0077513d6e78) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.777992 31523 status_update_manager.cpp:530] Cleaning up status update stream for task 0 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.778156 31519 slave.cpp:1698] Status update manager successfully handled status update acknowledgement (UUID: de91c600-abe2-4081-ad39-25fbd637b938) for task 2 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.778313 31519 slave.cpp:3977] Completing task 2 I0917 23:14:42.778291 31521 slave.cpp:1698] Status update manager successfully handled status update acknowledgement (UUID: 7870de4c-5721-4dbb-b50c-0077513d6e78) for task 0 of framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:42.779542 31521 slave.cpp:3977] Completing task 0 I0917 23:14:43.250752 31520 hierarchical_allocator_process.hpp:734] Offering mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-0 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:43.251107 31520 hierarchical_allocator_process.hpp:734] Offering mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-1 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:43.251287 31520 hierarchical_allocator_process.hpp:734] Offering mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-2 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:43.251521 31520 hierarchical_allocator_process.hpp:659] Performed allocation for 3 slaves in 867376ns I0917 23:14:43.251665 31523 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-6 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-0 (saucy) I0917 23:14:43.251854 31523 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-7 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-2 (saucy) I0917 23:14:43.252030 31523 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-8 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-1 (saucy) I0917 23:14:43.252184 31523 master.cpp:3616] Sending 3 offers to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:43.253651 31523 sched.cpp:544] Scheduler::resourceOffers took 1.199343ms I0917 23:14:43.254057 31524 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-6 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-0 (saucy) I0917 23:14:43.254179 31524 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-6 ] on slave 20140917-231435-16842879-34609-31503-0 at slave(3)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:43.254555 31524 hierarchical_allocator_process.hpp:563] Recovered mem(*):1001; disk(*):24988; 
ports(*):[31000-32000]; cpus(*):1 (total allocatable: mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1) on slave 20140917-231435-16842879-34609-31503-0 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:43.254693 31524 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 filtered slave 20140917-231435-16842879-34609-31503-0 for 1secs I0917 23:14:43.254902 31521 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-7 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-2 (saucy) I0917 23:14:43.255035 31521 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-7 ] on slave 20140917-231435-16842879-34609-31503-2 at slave(2)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:43.255257 31521 hierarchical_allocator_process.hpp:563] Recovered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 (total allocatable: mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1) on slave 20140917-231435-16842879-34609-31503-2 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:43.255414 31521 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 filtered slave 20140917-231435-16842879-34609-31503-2 for 1secs I0917 23:14:43.255630 31522 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-8 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-1 (saucy) I0917 23:14:43.255754 31522 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-8 ] on slave 20140917-231435-16842879-34609-31503-1 at slave(1)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:43.255944 31522 hierarchical_allocator_process.hpp:563] Recovered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 (total allocatable: mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1) on slave 20140917-231435-16842879-34609-31503-1 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:43.256085 31522 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 filtered slave 20140917-231435-16842879-34609-31503-1 for 1secs I0917 23:14:43.386253 31522 monitor.cpp:140] Failed to collect resource usage for container 'a70a0cfd-ee78-43a0-b0b7-26bcd205f142' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000': Unknown container: a70a0cfd-ee78-43a0-b0b7-26bcd205f142 I0917 23:14:43.397584 31521 monitor.cpp:140] Failed to collect resource usage for container '1c6cf8b1-972b-4b73-8467-bc4503dd9332' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000': Unknown container: 1c6cf8b1-972b-4b73-8467-bc4503dd9332 I0917 23:14:44.252192 31525 hierarchical_allocator_process.hpp:816] Filtered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-0 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:44.252557 31525 hierarchical_allocator_process.hpp:816] Filtered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-2 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:44.252712 31525 hierarchical_allocator_process.hpp:816] Filtered mem(*):1001; disk(*):24988; 
ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-1 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:44.252879 31525 hierarchical_allocator_process.hpp:659] Performed allocation for 3 slaves in 791479ns I0917 23:14:44.386862 31523 monitor.cpp:140] Failed to collect resource usage for container 'a70a0cfd-ee78-43a0-b0b7-26bcd205f142' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000': Unknown container: a70a0cfd-ee78-43a0-b0b7-26bcd205f142 I0917 23:14:44.398048 31524 monitor.cpp:140] Failed to collect resource usage for container '1c6cf8b1-972b-4b73-8467-bc4503dd9332' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000': Unknown container: 1c6cf8b1-972b-4b73-8467-bc4503dd9332 I0917 23:14:45.247548 31520 master.cpp:120] No whitelist given. Advertising offers for all slaves I0917 23:14:45.253939 31521 hierarchical_allocator_process.hpp:734] Offering mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-0 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:45.254165 31521 hierarchical_allocator_process.hpp:734] Offering mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-2 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:45.254356 31521 hierarchical_allocator_process.hpp:734] Offering mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-1 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:45.254585 31521 hierarchical_allocator_process.hpp:659] Performed allocation for 3 slaves in 762921ns I0917 23:14:45.254741 31523 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-9 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-1 (saucy) I0917 23:14:45.254945 31523 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-10 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-0 (saucy) I0917 23:14:45.255112 31523 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-11 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-2 (saucy) I0917 23:14:45.255259 31523 master.cpp:3616] Sending 3 offers to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:45.256947 31523 sched.cpp:544] Scheduler::resourceOffers took 1.430971ms I0917 23:14:45.257366 31524 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-9 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-1 (saucy) I0917 23:14:45.257498 31524 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-9 ] on slave 20140917-231435-16842879-34609-31503-1 at slave(1)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:45.257766 31524 hierarchical_allocator_process.hpp:563] Recovered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 (total allocatable: mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1) on slave 20140917-231435-16842879-34609-31503-1 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:45.257901 31524 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 
filtered slave 20140917-231435-16842879-34609-31503-1 for 1secs I0917 23:14:45.258277 31522 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-10 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-0 (saucy) I0917 23:14:45.258407 31522 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-10 ] on slave 20140917-231435-16842879-34609-31503-0 at slave(3)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:45.258625 31522 hierarchical_allocator_process.hpp:563] Recovered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 (total allocatable: mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1) on slave 20140917-231435-16842879-34609-31503-0 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:45.258771 31522 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 filtered slave 20140917-231435-16842879-34609-31503-0 for 1secs I0917 23:14:45.258975 31522 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-11 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-2 (saucy) I0917 23:14:45.259114 31522 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-11 ] on slave 20140917-231435-16842879-34609-31503-2 at slave(2)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:45.259313 31522 hierarchical_allocator_process.hpp:563] Recovered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 (total allocatable: mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1) on slave 20140917-231435-16842879-34609-31503-2 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:45.259464 31522 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 filtered slave 20140917-231435-16842879-34609-31503-2 for 1secs I0917 23:14:45.388000 31524 monitor.cpp:140] Failed to collect resource usage for container 'a70a0cfd-ee78-43a0-b0b7-26bcd205f142' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000': Unknown container: a70a0cfd-ee78-43a0-b0b7-26bcd205f142 I0917 23:14:45.399271 31521 monitor.cpp:140] Failed to collect resource usage for container '1c6cf8b1-972b-4b73-8467-bc4503dd9332' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000': Unknown container: 1c6cf8b1-972b-4b73-8467-bc4503dd9332 I0917 23:14:46.254964 31523 hierarchical_allocator_process.hpp:816] Filtered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-2 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:46.255506 31523 hierarchical_allocator_process.hpp:816] Filtered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-1 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:46.255667 31523 hierarchical_allocator_process.hpp:816] Filtered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-0 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:46.255867 31523 hierarchical_allocator_process.hpp:659] Performed allocation for 3 slaves in 1.022627ms I0917 23:14:46.388937 31525 monitor.cpp:140] Failed to collect resource usage for container 
'a70a0cfd-ee78-43a0-b0b7-26bcd205f142' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000': Unknown container: a70a0cfd-ee78-43a0-b0b7-26bcd205f142 I0917 23:14:46.400216 31520 monitor.cpp:140] Failed to collect resource usage for container '1c6cf8b1-972b-4b73-8467-bc4503dd9332' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000': Unknown container: 1c6cf8b1-972b-4b73-8467-bc4503dd9332 I0917 23:14:47.256769 31523 hierarchical_allocator_process.hpp:734] Offering mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-0 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:47.257114 31523 hierarchical_allocator_process.hpp:734] Offering mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-1 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:47.257302 31523 hierarchical_allocator_process.hpp:734] Offering mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-2 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:47.257544 31523 hierarchical_allocator_process.hpp:659] Performed allocation for 3 slaves in 848269ns I0917 23:14:47.257709 31521 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-12 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-0 (saucy) I0917 23:14:47.257920 31521 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-13 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-2 (saucy) I0917 23:14:47.258103 31521 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-14 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-1 (saucy) I0917 23:14:47.258236 31521 master.cpp:3616] Sending 3 offers to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:47.260226 31521 sched.cpp:544] Scheduler::resourceOffers took 1.743624ms I0917 23:14:47.260470 31525 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-12 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-0 (saucy) I0917 23:14:47.260594 31525 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-12 ] on slave 20140917-231435-16842879-34609-31503-0 at slave(3)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:47.260891 31525 hierarchical_allocator_process.hpp:563] Recovered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 (total allocatable: mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1) on slave 20140917-231435-16842879-34609-31503-0 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:47.261092 31525 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 filtered slave 20140917-231435-16842879-34609-31503-0 for 1secs I0917 23:14:47.261380 31519 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-13 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-2 (saucy) I0917 23:14:47.261514 31519 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-13 ] on slave 
20140917-231435-16842879-34609-31503-2 at slave(2)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:47.261744 31519 hierarchical_allocator_process.hpp:563] Recovered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 (total allocatable: mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1) on slave 20140917-231435-16842879-34609-31503-2 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:47.261873 31519 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 filtered slave 20140917-231435-16842879-34609-31503-2 for 1secs I0917 23:14:47.262086 31520 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-14 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-1 (saucy) I0917 23:14:47.262212 31520 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-14 ] on slave 20140917-231435-16842879-34609-31503-1 at slave(1)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:47.262408 31520 hierarchical_allocator_process.hpp:563] Recovered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 (total allocatable: mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1) on slave 20140917-231435-16842879-34609-31503-1 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:47.262526 31520 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 filtered slave 20140917-231435-16842879-34609-31503-1 for 1secs I0917 23:14:47.390128 31520 monitor.cpp:140] Failed to collect resource usage for container 'a70a0cfd-ee78-43a0-b0b7-26bcd205f142' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000': Unknown container: a70a0cfd-ee78-43a0-b0b7-26bcd205f142 I0917 23:14:47.401535 31520 monitor.cpp:140] Failed to collect resource usage for container '1c6cf8b1-972b-4b73-8467-bc4503dd9332' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000': Unknown container: 1c6cf8b1-972b-4b73-8467-bc4503dd9332 I0917 23:14:48.258159 31520 hierarchical_allocator_process.hpp:816] Filtered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-0 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:48.258453 31520 hierarchical_allocator_process.hpp:816] Filtered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-2 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:48.258616 31520 hierarchical_allocator_process.hpp:816] Filtered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-1 for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:48.258735 31520 hierarchical_allocator_process.hpp:659] Performed allocation for 3 slaves in 726125ns I0917 23:14:48.390913 31519 monitor.cpp:140] Failed to collect resource usage for container 'a70a0cfd-ee78-43a0-b0b7-26bcd205f142' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000': Unknown container: a70a0cfd-ee78-43a0-b0b7-26bcd205f142 I0917 23:14:48.402195 31523 monitor.cpp:140] Failed to collect resource usage for container '1c6cf8b1-972b-4b73-8467-bc4503dd9332' for executor 'default' of framework '20140917-231435-16842879-34609-31503-0000': Unknown container: 
1c6cf8b1-972b-4b73-8467-bc4503dd9332 I0917 23:14:49.260015 31519 hierarchical_allocator_process.hpp:734] Offering mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-2 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:49.260422 31519 hierarchical_allocator_process.hpp:734] Offering mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-1 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:49.260701 31519 hierarchical_allocator_process.hpp:734] Offering mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-0 to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:49.260953 31519 hierarchical_allocator_process.hpp:659] Performed allocation for 3 slaves in 1.015379ms I0917 23:14:49.261144 31520 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-15 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-2 (saucy) I0917 23:14:49.261371 31520 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-16 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-0 (saucy) I0917 23:14:49.261576 31520 master.hpp:862] Adding offer 20140917-231435-16842879-34609-31503-17 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-1 (saucy) I0917 23:14:49.261729 31520 master.cpp:3616] Sending 3 offers to framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:49.263228 31520 sched.cpp:544] Scheduler::resourceOffers took 1.214892ms I0917 23:14:49.263566 31524 master.hpp:871] Removing offer 20140917-231435-16842879-34609-31503-15 with resources mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 on slave 20140917-231435-16842879-34609-31503-2 (saucy) I0917 23:14:49.263701 31524 master.cpp:2228] Processing reply for offers: [ 20140917-231435-16842879-34609-31503-15 ] on slave 20140917-231435-16842879-34609-31503-2 at slave(2)@127.0.1.1:34609 (saucy) for framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:49.264742 31524 hierarchical_allocator_process.hpp:563] Recovered mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1 (total allocatable: mem(*):1001; disk(*):24988; ports(*):[31000-32000]; cpus(*):1) on slave 20140917-231435-16842879-34609-31503-2 from framework 20140917-231435-16842879-34609-31503-0000 I0917 23:14:49.264775 31524 hierarchical_allocator_process.hpp:599] Framework 20140917-231435-16842879-34609-31503-0000 filtered slave 20140917-231435-16842879-34609-31503-2 for 1secs ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1815","09/18/2014 18:33:45",3,"Create a guide to becoming a committer ""We have a committer's guide, but the process by which one becomes a committer is unclear. We should set some guidelines and a process by which we can grow contributors into committers.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1830","09/25/2014 19:36:44",5,"Expose master stats differentiating between master-generated and slave-generated LOST tasks ""The master exports a monotonically-increasing counter of tasks transitioned to TASK_LOST. This loses fidelity of the source of the lost task. 
A first step in exposing the source of lost tasks might be to just differentiate between TASK_LOST transitions initiated by the master vs the slave (and maybe bad input from the scheduler).""","",0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1844","09/30/2014 02:31:25",1,"AllocatorTest/0.SlaveLost is flaky """""," [ RUN ] AllocatorTest/0.SlaveLost Using temporary directory '/tmp/AllocatorTest_0_SlaveLost_Z2oazw' I0929 16:58:29.484141 3486 leveldb.cpp:176] Opened db in 604109ns I0929 16:58:29.484629 3486 leveldb.cpp:183] Compacted db in 172697ns I0929 16:58:29.484912 3486 leveldb.cpp:198] Created db iterator in 6429ns I0929 16:58:29.485133 3486 leveldb.cpp:204] Seeked to beginning of db in 1618ns I0929 16:58:29.485337 3486 leveldb.cpp:273] Iterated through 0 keys in the db in 752ns I0929 16:58:29.485595 3486 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0929 16:58:29.486017 3500 recover.cpp:425] Starting replica recovery I0929 16:58:29.486304 3500 recover.cpp:451] Replica is in EMPTY status I0929 16:58:29.486793 3500 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I0929 16:58:29.487205 3500 recover.cpp:188] Received a recover response from a replica in EMPTY status I0929 16:58:29.487540 3500 recover.cpp:542] Updating replica status to STARTING I0929 16:58:29.487911 3500 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 36629ns I0929 16:58:29.488173 3500 replica.cpp:320] Persisted replica status to STARTING I0929 16:58:29.488438 3500 recover.cpp:451] Replica is in STARTING status I0929 16:58:29.488891 3500 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I0929 16:58:29.489187 3500 recover.cpp:188] Received a recover response from a replica in STARTING status I0929 16:58:29.489516 3500 recover.cpp:542] Updating replica status to VOTING I0929 16:58:29.489887 3502 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 32099ns I0929 16:58:29.490124 3502 replica.cpp:320] Persisted replica status to VOTING I0929 16:58:29.490381 3500 recover.cpp:556] Successfully joined the Paxos group I0929 16:58:29.490713 3500 recover.cpp:440] Recover process terminated I0929 16:58:29.493401 3506 master.cpp:312] Master 20140929-165829-2759502016-55618-3486 (fedora-20) started on 192.168.122.164:55618 I0929 16:58:29.493700 3506 master.cpp:358] Master only allowing authenticated frameworks to register I0929 16:58:29.493921 3506 master.cpp:363] Master only allowing authenticated slaves to register I0929 16:58:29.494123 3506 credentials.hpp:36] Loading credentials for authentication from '/tmp/AllocatorTest_0_SlaveLost_Z2oazw/credentials' I0929 16:58:29.494500 3506 master.cpp:392] Authorization enabled I0929 16:58:29.495249 3506 master.cpp:120] No whitelist given. Advertising offers for all slaves I0929 16:58:29.495728 3502 hierarchical_allocator_process.hpp:299] Initializing hierarchical allocator process with master : master@192.168.122.164:55618 I0929 16:58:29.496196 3506 master.cpp:1241] The newly elected leader is master@192.168.122.164:55618 with id 20140929-165829-2759502016-55618-3486 I0929 16:58:29.496469 3506 master.cpp:1254] Elected as the leading master! 
I0929 16:58:29.496713 3506 master.cpp:1072] Recovering from registrar I0929 16:58:29.497020 3506 registrar.cpp:312] Recovering registrar I0929 16:58:29.497486 3506 log.cpp:656] Attempting to start the writer I0929 16:58:29.498105 3506 replica.cpp:474] Replica received implicit promise request with proposal 1 I0929 16:58:29.498373 3506 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 27145ns I0929 16:58:29.498605 3506 replica.cpp:342] Persisted promised to 1 I0929 16:58:29.500880 3500 coordinator.cpp:230] Coordinator attemping to fill missing position I0929 16:58:29.501404 3500 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I0929 16:58:29.501687 3500 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 57971ns I0929 16:58:29.501935 3500 replica.cpp:676] Persisted action at 0 I0929 16:58:29.504905 3507 replica.cpp:508] Replica received write request for position 0 I0929 16:58:29.505130 3507 leveldb.cpp:438] Reading position from leveldb took 18418ns I0929 16:58:29.505377 3507 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 19998ns I0929 16:58:29.505571 3507 replica.cpp:676] Persisted action at 0 I0929 16:58:29.505957 3507 replica.cpp:655] Replica received learned notice for position 0 I0929 16:58:29.506186 3507 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 21648ns I0929 16:58:29.506433 3507 replica.cpp:676] Persisted action at 0 I0929 16:58:29.506767 3507 replica.cpp:661] Replica learned NOP action at position 0 I0929 16:58:29.507199 3507 log.cpp:672] Writer started with ending position 0 I0929 16:58:29.507730 3507 leveldb.cpp:438] Reading position from leveldb took 11532ns I0929 16:58:29.508915 3507 registrar.cpp:345] Successfully fetched the registry (0B) I0929 16:58:29.509230 3507 registrar.cpp:421] Attempting to update the 'registry' I0929 16:58:29.510516 3500 log.cpp:680] Attempting to append 130 bytes to the log I0929 16:58:29.510949 3500 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0929 16:58:29.511363 3500 replica.cpp:508] Replica received write request for position 1 I0929 16:58:29.511697 3500 leveldb.cpp:343] Persisting action (149 bytes) to leveldb took 66530ns I0929 16:58:29.512039 3500 replica.cpp:676] Persisted action at 1 I0929 16:58:29.512460 3500 replica.cpp:655] Replica received learned notice for position 1 I0929 16:58:29.512778 3500 leveldb.cpp:343] Persisting action (151 bytes) to leveldb took 24121ns I0929 16:58:29.513013 3500 replica.cpp:676] Persisted action at 1 I0929 16:58:29.513239 3500 replica.cpp:661] Replica learned APPEND action at position 1 I0929 16:58:29.513674 3500 log.cpp:699] Attempting to truncate the log to 1 I0929 16:58:29.513954 3500 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0929 16:58:29.514385 3500 replica.cpp:508] Replica received write request for position 2 I0929 16:58:29.514680 3500 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 65014ns I0929 16:58:29.514991 3500 replica.cpp:676] Persisted action at 2 I0929 16:58:29.516978 3501 replica.cpp:655] Replica received learned notice for position 2 I0929 16:58:29.517319 3501 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 24103ns I0929 16:58:29.517546 3501 leveldb.cpp:401] Deleting ~1 keys from leveldb took 16533ns I0929 16:58:29.517801 3501 replica.cpp:676] Persisted action at 2 I0929 16:58:29.518039 3501 replica.cpp:661] Replica learned TRUNCATE action at position 2 I0929 16:58:29.518539 3507 
registrar.cpp:478] Successfully updated 'registry' I0929 16:58:29.518885 3507 registrar.cpp:371] Successfully recovered registrar I0929 16:58:29.519201 3507 master.cpp:1099] Recovered 0 slaves from the Registry (94B) ; allowing 10mins for slaves to re-register I0929 16:58:29.533073 3505 slave.cpp:169] Slave started on 57)@192.168.122.164:55618 I0929 16:58:29.533500 3505 credentials.hpp:84] Loading credential for authentication from '/tmp/AllocatorTest_0_SlaveLost_xdXHfg/credential' I0929 16:58:29.533834 3505 slave.cpp:276] Slave using credential for: test-principal I0929 16:58:29.534168 3505 slave.cpp:289] Slave resources: cpus(*):2; mem(*):1024; disk(*):752; ports(*):[31000-32000] I0929 16:58:29.534751 3505 slave.cpp:317] Slave hostname: fedora-20 I0929 16:58:29.534965 3505 slave.cpp:318] Slave checkpoint: false I0929 16:58:29.535557 3505 state.cpp:33] Recovering state from '/tmp/AllocatorTest_0_SlaveLost_xdXHfg/meta' I0929 16:58:29.535951 3505 status_update_manager.cpp:193] Recovering status update manager I0929 16:58:29.536290 3505 slave.cpp:3271] Finished recovery I0929 16:58:29.536782 3505 slave.cpp:598] New master detected at master@192.168.122.164:55618 I0929 16:58:29.537122 3505 slave.cpp:672] Authenticating with master master@192.168.122.164:55618 I0929 16:58:29.537492 3505 slave.cpp:645] Detecting new master I0929 16:58:29.537294 3506 status_update_manager.cpp:167] New master detected at master@192.168.122.164:55618 I0929 16:58:29.537642 3507 authenticatee.hpp:128] Creating new client SASL connection I0929 16:58:29.538769 3502 master.cpp:3737] Authenticating slave(57)@192.168.122.164:55618 I0929 16:58:29.539091 3502 authenticator.hpp:156] Creating new server SASL connection I0929 16:58:29.539710 3503 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0929 16:58:29.539943 3503 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0929 16:58:29.540206 3502 authenticator.hpp:262] Received SASL authentication start I0929 16:58:29.540457 3502 authenticator.hpp:384] Authentication requires more steps I0929 16:58:29.540757 3502 authenticatee.hpp:265] Received SASL authentication step I0929 16:58:29.541121 3502 authenticator.hpp:290] Received SASL authentication step I0929 16:58:29.541368 3502 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'fedora-20' server FQDN: 'fedora-20' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0929 16:58:29.541599 3502 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0929 16:58:29.541874 3502 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0929 16:58:29.542129 3502 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'fedora-20' server FQDN: 'fedora-20' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0929 16:58:29.542333 3502 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0929 16:58:29.542553 3502 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0929 16:58:29.542785 3502 authenticator.hpp:376] Authentication success I0929 16:58:29.543047 3502 authenticatee.hpp:305] Authentication success I0929 16:58:29.543381 3502 slave.cpp:729] Successfully authenticated with master master@192.168.122.164:55618 I0929 16:58:29.543707 3502 slave.cpp:992] Will retry registration in 11.795692ms if necessary I0929 16:58:29.543179 3503 
master.cpp:3777] Successfully authenticated principal 'test-principal' at slave(57)@192.168.122.164:55618 I0929 16:58:29.544255 3503 master.cpp:2930] Registering slave at slave(57)@192.168.122.164:55618 (fedora-20) with id 20140929-165829-2759502016-55618-3486-0 I0929 16:58:29.544587 3503 registrar.cpp:421] Attempting to update the 'registry' I0929 16:58:29.545816 3500 log.cpp:680] Attempting to append 299 bytes to the log I0929 16:58:29.546267 3500 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I0929 16:58:29.546749 3500 replica.cpp:508] Replica received write request for position 3 I0929 16:58:29.547030 3500 leveldb.cpp:343] Persisting action (318 bytes) to leveldb took 31759ns I0929 16:58:29.547236 3500 replica.cpp:676] Persisted action at 3 I0929 16:58:29.548902 3506 replica.cpp:655] Replica received learned notice for position 3 I0929 16:58:29.549139 3506 leveldb.cpp:343] Persisting action (320 bytes) to leveldb took 25595ns I0929 16:58:29.549343 3506 replica.cpp:676] Persisted action at 3 I0929 16:58:29.549607 3506 replica.cpp:661] Replica learned APPEND action at position 3 I0929 16:58:29.550081 3506 log.cpp:699] Attempting to truncate the log to 3 I0929 16:58:29.550497 3506 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I0929 16:58:29.550943 3506 replica.cpp:508] Replica received write request for position 4 I0929 16:58:29.551198 3506 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 20852ns I0929 16:58:29.551409 3506 replica.cpp:676] Persisted action at 4 I0929 16:58:29.551795 3506 replica.cpp:655] Replica received learned notice for position 4 I0929 16:58:29.552094 3506 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 22182ns I0929 16:58:29.552320 3506 leveldb.cpp:401] Deleting ~2 keys from leveldb took 18503ns I0929 16:58:29.552525 3506 replica.cpp:676] Persisted action at 4 I0929 16:58:29.552781 3506 replica.cpp:661] Replica learned TRUNCATE action at position 4 I0929 16:58:29.550289 3503 registrar.cpp:478] Successfully updated 'registry' I0929 16:58:29.553553 3503 master.cpp:2970] Registered slave 20140929-165829-2759502016-55618-3486-0 at slave(57)@192.168.122.164:55618 (fedora-20) I0929 16:58:29.553807 3503 master.cpp:4180] Adding slave 20140929-165829-2759502016-55618-3486-0 at slave(57)@192.168.122.164:55618 (fedora-20) with cpus(*):2; mem(*):1024; disk(*):752; ports(*):[31000-32000] I0929 16:58:29.554152 3503 slave.cpp:763] Registered with master master@192.168.122.164:55618; given slave ID 20140929-165829-2759502016-55618-3486-0 I0929 16:58:29.554455 3503 slave.cpp:2345] Received ping from slave-observer(56)@192.168.122.164:55618 I0929 16:58:29.554707 3504 hierarchical_allocator_process.hpp:442] Added slave 20140929-165829-2759502016-55618-3486-0 (fedora-20) with cpus(*):2; mem(*):1024; disk(*):752; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):752; ports(*):[31000-32000] available) I0929 16:58:29.555064 3504 hierarchical_allocator_process.hpp:679] Performed allocation for slave 20140929-165829-2759502016-55618-3486-0 in 13111ns I0929 16:58:29.558220 3486 sched.cpp:137] Version: 0.21.0 I0929 16:58:29.558821 3501 sched.cpp:233] New master detected at master@192.168.122.164:55618 I0929 16:58:29.559054 3501 sched.cpp:283] Authenticating with master master@192.168.122.164:55618 I0929 16:58:29.559360 3501 authenticatee.hpp:128] Creating new client SASL connection I0929 16:58:29.560096 3501 master.cpp:3737] Authenticating 
scheduler-c8df3f3b-2552-476f-9daf-9aa2f012ad28@192.168.122.164:55618 I0929 16:58:29.560430 3501 authenticator.hpp:156] Creating new server SASL connection I0929 16:58:29.561141 3501 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0929 16:58:29.561465 3501 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0929 16:58:29.561743 3501 authenticator.hpp:262] Received SASL authentication start I0929 16:58:29.562098 3501 authenticator.hpp:384] Authentication requires more steps I0929 16:58:29.562353 3501 authenticatee.hpp:265] Received SASL authentication step I0929 16:58:29.562721 3507 authenticator.hpp:290] Received SASL authentication step I0929 16:58:29.563022 3507 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'fedora-20' server FQDN: 'fedora-20' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0929 16:58:29.563254 3507 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0929 16:58:29.563484 3507 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0929 16:58:29.563736 3507 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'fedora-20' server FQDN: 'fedora-20' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0929 16:58:29.563976 3507 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0929 16:58:29.564188 3507 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0929 16:58:29.564415 3507 authenticator.hpp:376] Authentication success I0929 16:58:29.564673 3507 master.cpp:3777] Successfully authenticated principal 'test-principal' at scheduler-c8df3f3b-2552-476f-9daf-9aa2f012ad28@192.168.122.164:55618 I0929 16:58:29.568681 3501 authenticatee.hpp:305] Authentication success I0929 16:58:29.569046 3501 sched.cpp:357] Successfully authenticated with master master@192.168.122.164:55618 I0929 16:58:29.569286 3501 sched.cpp:476] Sending registration request to master@192.168.122.164:55618 I0929 16:58:29.569581 3507 master.cpp:1360] Received registration request from scheduler-c8df3f3b-2552-476f-9daf-9aa2f012ad28@192.168.122.164:55618 I0929 16:58:29.569846 3507 master.cpp:1320] Authorizing framework principal 'test-principal' to receive offers for role '*' I0929 16:58:29.570219 3507 master.cpp:1419] Registering framework 20140929-165829-2759502016-55618-3486-0000 at scheduler-c8df3f3b-2552-476f-9daf-9aa2f012ad28@192.168.122.164:55618 I0929 16:58:29.570543 3506 sched.cpp:407] Framework registered with 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.570811 3506 sched.cpp:421] Scheduler::registered took 13811ns I0929 16:58:29.571135 3502 hierarchical_allocator_process.hpp:329] Added framework 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.571393 3502 hierarchical_allocator_process.hpp:734] Offering cpus(*):2; mem(*):1024; disk(*):752; ports(*):[31000-32000] on slave 20140929-165829-2759502016-55618-3486-0 to framework 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.571723 3502 hierarchical_allocator_process.hpp:659] Performed allocation for 1 slaves in 368547ns I0929 16:58:29.572125 3507 master.hpp:868] Adding offer 20140929-165829-2759502016-55618-3486-0 with resources cpus(*):2; mem(*):1024; disk(*):752; ports(*):[31000-32000] on slave 20140929-165829-2759502016-55618-3486-0 (fedora-20) I0929 16:58:29.572374 3507 master.cpp:3679] Sending 1 
offers to framework 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.572841 3503 sched.cpp:544] Scheduler::resourceOffers took 114306ns I0929 16:58:29.573197 3507 master.hpp:877] Removing offer 20140929-165829-2759502016-55618-3486-0 with resources cpus(*):2; mem(*):1024; disk(*):752; ports(*):[31000-32000] on slave 20140929-165829-2759502016-55618-3486-0 (fedora-20) I0929 16:58:29.573457 3507 master.cpp:2274] Processing reply for offers: [ 20140929-165829-2759502016-55618-3486-0 ] on slave 20140929-165829-2759502016-55618-3486-0 at slave(57)@192.168.122.164:55618 (fedora-20) for framework 20140929-165829-2759502016-55618-3486-0000 W0929 16:58:29.573717 3507 master.cpp:1944] Executor default for task 0 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W0929 16:58:29.573953 3507 master.cpp:1955] Executor default for task 0 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. I0929 16:58:29.574177 3507 master.cpp:2357] Authorizing framework principal 'test-principal' to launch task 0 as user 'jenkins' I0929 16:58:29.574745 3507 master.hpp:845] Adding task 0 with resources cpus(*):2; mem(*):512 on slave 20140929-165829-2759502016-55618-3486-0 (fedora-20) I0929 16:58:29.574992 3507 master.cpp:2423] Launching task 0 of framework 20140929-165829-2759502016-55618-3486-0000 with resources cpus(*):2; mem(*):512 on slave 20140929-165829-2759502016-55618-3486-0 at slave(57)@192.168.122.164:55618 (fedora-20) I0929 16:58:29.575315 3503 slave.cpp:1023] Got assigned task 0 for framework 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.575724 3503 slave.cpp:1133] Launching task 0 for framework 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.578129 3503 exec.cpp:132] Version: 0.21.0 I0929 16:58:29.578505 3504 exec.cpp:182] Executor started at: executor(30)@192.168.122.164:55618 with pid 3486 I0929 16:58:29.578867 3503 slave.cpp:1246] Queuing task '0' for executor default of framework '20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.579144 3503 slave.cpp:554] Successfully attached file '/tmp/AllocatorTest_0_SlaveLost_xdXHfg/slaves/20140929-165829-2759502016-55618-3486-0/frameworks/20140929-165829-2759502016-55618-3486-0000/executors/default/runs/b0de9759-7054-4763-90f4-889ddc3a8524' I0929 16:58:29.579401 3503 slave.cpp:1756] Got registration for executor 'default' of framework 20140929-165829-2759502016-55618-3486-0000 from executor(30)@192.168.122.164:55618 I0929 16:58:29.579879 3506 exec.cpp:206] Executor registered on slave 20140929-165829-2759502016-55618-3486-0 I0929 16:58:29.580921 3506 exec.cpp:218] Executor::registered took 17644ns I0929 16:58:29.581188 3503 slave.cpp:1875] Flushing queued task 0 for executor 'default' of framework 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.581526 3504 exec.cpp:293] Executor asked to run task '0' I0929 16:58:29.581807 3504 exec.cpp:302] Executor::launchTask took 42649ns I0929 16:58:29.583133 3504 exec.cpp:525] Executor sending status update TASK_RUNNING (UUID: 454bdb88-fd27-4201-b2c7-4ea03a6d00b3) for task 0 of framework 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.586869 3503 slave.cpp:2611] Monitoring executor 'default' of framework '20140929-165829-2759502016-55618-3486-0000' in container 'b0de9759-7054-4763-90f4-889ddc3a8524' I0929 16:58:29.587252 3503 slave.cpp:2109] Handling status update TASK_RUNNING (UUID: 
454bdb88-fd27-4201-b2c7-4ea03a6d00b3) for task 0 of framework 20140929-165829-2759502016-55618-3486-0000 from executor(30)@192.168.122.164:55618 I0929 16:58:29.587723 3502 hierarchical_allocator_process.hpp:563] Recovered mem(*):512; disk(*):752; ports(*):[31000-32000] (total allocatable: mem(*):512; disk(*):752; ports(*):[31000-32000]) on slave 20140929-165829-2759502016-55618-3486-0 from framework 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.588127 3502 hierarchical_allocator_process.hpp:599] Framework 20140929-165829-2759502016-55618-3486-0000 filtered slave 20140929-165829-2759502016-55618-3486-0 for 5secs I0929 16:58:29.588433 3506 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 454bdb88-fd27-4201-b2c7-4ea03a6d00b3) for task 0 of framework 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.588767 3506 status_update_manager.cpp:499] Creating StatusUpdate stream for task 0 of framework 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.589054 3506 status_update_manager.cpp:373] Forwarding status update TASK_RUNNING (UUID: 454bdb88-fd27-4201-b2c7-4ea03a6d00b3) for task 0 of framework 20140929-165829-2759502016-55618-3486-0000 to master@192.168.122.164:55618 I0929 16:58:29.589400 3506 master.cpp:3301] Forwarding status update TASK_RUNNING (UUID: 454bdb88-fd27-4201-b2c7-4ea03a6d00b3) for task 0 of framework 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.589702 3506 master.cpp:3273] Status update TASK_RUNNING (UUID: 454bdb88-fd27-4201-b2c7-4ea03a6d00b3) for task 0 of framework 20140929-165829-2759502016-55618-3486-0000 from slave 20140929-165829-2759502016-55618-3486-0 at slave(57)@192.168.122.164:55618 (fedora-20) I0929 16:58:29.589923 3500 sched.cpp:635] Scheduler::statusUpdate took 36034ns I0929 16:58:29.590337 3500 master.cpp:2777] Forwarding status update acknowledgement 454bdb88-fd27-4201-b2c7-4ea03a6d00b3 for task 0 of framework 20140929-165829-2759502016-55618-3486-0000 to slave 20140929-165829-2759502016-55618-3486-0 at slave(57)@192.168.122.164:55618 (fedora-20) I0929 16:58:29.590643 3503 slave.cpp:477] Slave terminating I0929 16:58:29.590893 3503 slave.cpp:1429] Asked to shut down framework 20140929-165829-2759502016-55618-3486-0000 by @0.0.0.0:0 I0929 16:58:29.591136 3503 slave.cpp:1454] Shutting down framework 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.591367 3503 slave.cpp:2951] Shutting down executor 'default' of framework 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.591701 3501 master.cpp:817] Slave 20140929-165829-2759502016-55618-3486-0 at slave(57)@192.168.122.164:55618 (fedora-20) disconnected I0929 16:58:29.591917 3501 master.cpp:821] Removing disconnected slave 20140929-165829-2759502016-55618-3486-0 at slave(57)@192.168.122.164:55618 (fedora-20) because it is not checkpointing! 
I0929 16:58:29.592149 3501 master.cpp:4301] Removing slave 20140929-165829-2759502016-55618-3486-0 at slave(57)@192.168.122.164:55618 (fedora-20) I0929 16:58:29.593868 3505 hierarchical_allocator_process.hpp:467] Removed slave 20140929-165829-2759502016-55618-3486-0 I0929 16:58:29.594907 3486 containerizer.cpp:89] Using isolation: posix/cpu,posix/mem I0929 16:58:29.595091 3501 master.cpp:4485] Removing task 0 with resources cpus(*):2; mem(*):512 of framework 20140929-165829-2759502016-55618-3486-0000 on slave 20140929-165829-2759502016-55618-3486-0 at slave(57)@192.168.122.164:55618 (fedora-20) I0929 16:58:29.595960 3501 master.cpp:4514] Removing executor 'default' with resources of framework 20140929-165829-2759502016-55618-3486-0000 on slave 20140929-165829-2759502016-55618-3486-0 at slave(57)@192.168.122.164:55618 (fedora-20) tests/allocator_tests.cpp:1552: Failure Mock function called more times than expected - taking default action specified at: ./tests/mesos.hpp:616: Function call: resourcesRecovered(@0x7f958007f590 20140929-165829-2759502016-55618-3486-0000, @0x7f958007f5b0 20140929-165829-2759502016-55618-3486-0, @0x7f958007f5d0 {}, @0x7f958007f5e8 16-byte object <01-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00>) Expected: to be called twice I0929 16:58:29.596640 3506 registrar.cpp:421] Attempting to update the 'registry' Actual: called 3 times - over-saturated and active I0929 16:58:29.598697 3506 log.cpp:680] Attempting to append 133 bytes to the log I0929 16:58:29.598984 3500 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 5 I0929 16:58:29.599422 3500 replica.cpp:508] Replica received write request for position 5 I0929 16:58:29.599712 3500 leveldb.cpp:343] Persisting action (152 bytes) to leveldb took 65914ns I0929 16:58:29.599931 3500 replica.cpp:676] Persisted action at 5 I0929 16:58:29.600332 3500 replica.cpp:655] Replica received learned notice for position 5 I0929 16:58:29.600621 3500 leveldb.cpp:343] Persisting action (154 bytes) to leveldb took 24641ns I0929 16:58:29.600858 3500 replica.cpp:676] Persisted action at 5 I0929 16:58:29.601060 3500 replica.cpp:661] Replica learned APPEND action at position 5 I0929 16:58:29.601588 3506 registrar.cpp:478] Successfully updated 'registry' I0929 16:58:29.601765 3500 log.cpp:699] Attempting to truncate the log to 5 I0929 16:58:29.602308 3501 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 6 I0929 16:58:29.602736 3505 replica.cpp:508] Replica received write request for position 6 I0929 16:58:29.602967 3505 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 22681ns I0929 16:58:29.603175 3505 replica.cpp:676] Persisted action at 6 I0929 16:58:29.603591 3501 replica.cpp:655] Replica received learned notice for position 6 I0929 16:58:29.603903 3501 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 23564ns I0929 16:58:29.604161 3501 leveldb.cpp:401] Deleting ~2 keys from leveldb took 18683ns I0929 16:58:29.604378 3501 replica.cpp:676] Persisted action at 6 I0929 16:58:29.604575 3501 replica.cpp:661] Replica learned TRUNCATE action at position 6 I0929 16:58:29.604970 3502 master.cpp:4393] Removed slave 20140929-165829-2759502016-55618-3486-0 (fedora-20) I0929 16:58:29.605197 3502 master.cpp:3296] Sending status update TASK_LOST (UUID: cfc350bc-4ebf-4ea1-9fe4-27f53825c787) for task 0 of framework 20140929-165829-2759502016-55618-3486-0000 'Slave fedora-20 removed' I0929 16:58:29.605445 3502 master.cpp:4411] Notifying framework 
20140929-165829-2759502016-55618-3486-0000 of lost slave 20140929-165829-2759502016-55618-3486-0 (fedora-20) after recovering I0929 16:58:29.605756 3502 sched.cpp:635] Scheduler::statusUpdate took 9369ns I0929 16:58:29.605996 3502 sched.cpp:686] Lost slave 20140929-165829-2759502016-55618-3486-0 I0929 16:58:29.606210 3502 sched.cpp:697] Scheduler::slaveLost took 13761ns I0929 16:58:29.607326 3501 slave.cpp:169] Slave started on 58)@192.168.122.164:55618 I0929 16:58:29.607640 3501 credentials.hpp:84] Loading credential for authentication from '/tmp/AllocatorTest_0_SlaveLost_NcoJ6Z/credential' I0929 16:58:29.607975 3501 slave.cpp:276] Slave using credential for: test-principal I0929 16:58:29.608253 3501 slave.cpp:289] Slave resources: cpus(*):3; mem(*):256; disk(*):1024; ports(*):[31000-32000] I0929 16:58:29.608832 3501 slave.cpp:317] Slave hostname: fedora-20 I0929 16:58:29.608989 3501 slave.cpp:318] Slave checkpoint: false I0929 16:58:29.609542 3501 state.cpp:33] Recovering state from '/tmp/AllocatorTest_0_SlaveLost_NcoJ6Z/meta' I0929 16:58:29.609904 3500 status_update_manager.cpp:193] Recovering status update manager I0929 16:58:29.610119 3500 containerizer.cpp:252] Recovering containerizer I0929 16:58:29.610589 3507 slave.cpp:3271] Finished recovery I0929 16:58:29.611037 3507 slave.cpp:598] New master detected at master@192.168.122.164:55618 I0929 16:58:29.611264 3507 slave.cpp:672] Authenticating with master master@192.168.122.164:55618 I0929 16:58:29.611529 3507 slave.cpp:645] Detecting new master I0929 16:58:29.611385 3506 status_update_manager.cpp:167] New master detected at master@192.168.122.164:55618 I0929 16:58:29.611719 3503 authenticatee.hpp:128] Creating new client SASL connection I0929 16:58:29.612570 3503 master.cpp:3737] Authenticating slave(58)@192.168.122.164:55618 I0929 16:58:29.612843 3503 authenticator.hpp:156] Creating new server SASL connection I0929 16:58:29.613394 3503 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5 I0929 16:58:29.613706 3503 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5' I0929 16:58:29.614083 3503 authenticator.hpp:262] Received SASL authentication start I0929 16:58:29.614326 3503 authenticator.hpp:384] Authentication requires more steps I0929 16:58:29.614552 3503 authenticatee.hpp:265] Received SASL authentication step I0929 16:58:29.614828 3503 authenticator.hpp:290] Received SASL authentication step I0929 16:58:29.615067 3503 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'fedora-20' server FQDN: 'fedora-20' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0929 16:58:29.615314 3503 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I0929 16:58:29.615562 3503 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0929 16:58:29.615766 3503 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'fedora-20' server FQDN: 'fedora-20' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0929 16:58:29.616060 3503 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0929 16:58:29.616387 3503 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0929 16:58:29.616631 3503 authenticator.hpp:376] Authentication success I0929 16:58:29.616929 3503 authenticatee.hpp:305] Authentication success I0929 16:58:29.617081 3501 master.cpp:3777] 
Successfully authenticated principal 'test-principal' at slave(58)@192.168.122.164:55618 I0929 16:58:29.620779 3500 slave.cpp:729] Successfully authenticated with master master@192.168.122.164:55618 I0929 16:58:29.621150 3500 slave.cpp:992] Will retry registration in 15.66596ms if necessary I0929 16:58:29.621526 3501 master.cpp:2930] Registering slave at slave(58)@192.168.122.164:55618 (fedora-20) with id 20140929-165829-2759502016-55618-3486-1 I0929 16:58:29.621976 3501 registrar.cpp:421] Attempting to update the 'registry' I0929 16:58:29.623364 3506 log.cpp:680] Attempting to append 299 bytes to the log I0929 16:58:29.623780 3506 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 7 I0929 16:58:29.624407 3506 replica.cpp:508] Replica received write request for position 7 I0929 16:58:29.624712 3506 leveldb.cpp:343] Persisting action (318 bytes) to leveldb took 64462ns I0929 16:58:29.624984 3506 replica.cpp:676] Persisted action at 7 I0929 16:58:29.625460 3506 replica.cpp:655] Replica received learned notice for position 7 I0929 16:58:29.625838 3506 leveldb.cpp:343] Persisting action (320 bytes) to leveldb took 30316ns I0929 16:58:29.626093 3506 replica.cpp:676] Persisted action at 7 I0929 16:58:29.626382 3506 replica.cpp:661] Replica learned APPEND action at position 7 I0929 16:58:29.626832 3506 log.cpp:699] Attempting to truncate the log to 7 I0929 16:58:29.627231 3506 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 8 I0929 16:58:29.627789 3506 replica.cpp:508] Replica received write request for position 8 I0929 16:58:29.628073 3506 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 26181ns I0929 16:58:29.628347 3506 replica.cpp:676] Persisted action at 8 I0929 16:58:29.628829 3506 replica.cpp:655] Replica received learned notice for position 8 I0929 16:58:29.629323 3506 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 28559ns I0929 16:58:29.629581 3506 leveldb.cpp:401] Deleting ~2 keys from leveldb took 22253ns I0929 16:58:29.629897 3506 replica.cpp:676] Persisted action at 8 I0929 16:58:29.630159 3506 replica.cpp:661] Replica learned TRUNCATE action at position 8 I0929 16:58:29.630910 3501 registrar.cpp:478] Successfully updated 'registry' I0929 16:58:29.631356 3501 master.cpp:2970] Registered slave 20140929-165829-2759502016-55618-3486-1 at slave(58)@192.168.122.164:55618 (fedora-20) I0929 16:58:29.631624 3501 master.cpp:4180] Adding slave 20140929-165829-2759502016-55618-3486-1 at slave(58)@192.168.122.164:55618 (fedora-20) with cpus(*):3; mem(*):256; disk(*):1024; ports(*):[31000-32000] I0929 16:58:29.632066 3501 slave.cpp:763] Registered with master master@192.168.122.164:55618; given slave ID 20140929-165829-2759502016-55618-3486-1 I0929 16:58:29.632493 3501 slave.cpp:2345] Received ping from slave-observer(57)@192.168.122.164:55618 I0929 16:58:29.632298 3506 hierarchical_allocator_process.hpp:442] Added slave 20140929-165829-2759502016-55618-3486-1 (fedora-20) with cpus(*):3; mem(*):256; disk(*):1024; ports(*):[31000-32000] (and cpus(*):3; mem(*):256; disk(*):1024; ports(*):[31000-32000] available) I0929 16:58:29.633102 3506 hierarchical_allocator_process.hpp:734] Offering cpus(*):3; mem(*):256; disk(*):1024; ports(*):[31000-32000] on slave 20140929-165829-2759502016-55618-3486-1 to framework 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.633496 3507 master.hpp:868] Adding offer 20140929-165829-2759502016-55618-3486-1 with resources cpus(*):3; mem(*):256; disk(*):1024; 
ports(*):[31000-32000] on slave 20140929-165829-2759502016-55618-3486-1 (fedora-20) I0929 16:58:29.633833 3507 master.cpp:3679] Sending 1 offers to framework 20140929-165829-2759502016-55618-3486-0000 I0929 16:58:29.634218 3507 sched.cpp:544] Scheduler::resourceOffers took 32550ns I0929 16:58:29.634784 3507 sched.cpp:745] Stopping framework '20140929-165829-2759502016-55618-3486-0000' I0929 16:58:29.634558 3486 master.cpp:676] Master terminating I0929 16:58:29.635319 3486 master.hpp:877] Removing offer 20140929-165829-2759502016-55618-3486-1 with resources cpus(*):3; mem(*):256; disk(*):1024; ports(*):[31000-32000] on slave 20140929-165829-2759502016-55618-3486-1 (fedora-20) I0929 16:58:29.635725 3506 hierarchical_allocator_process.hpp:679] Performed allocation for slave 20140929-165829-2759502016-55618-3486-1 in 2.656855ms I0929 16:58:29.644737 3503 slave.cpp:2430] master@192.168.122.164:55618 exited W0929 16:58:29.645407 3503 slave.cpp:2433] Master disconnected! Waiting for a new master to be elected I0929 16:58:29.656318 3486 slave.cpp:477] Slave terminating tests/allocator_tests.cpp:1532: Failure Actual function call count doesn't match EXPECT_CALL(this->allocator, resourcesRecovered(_, _, _, _))... Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] AllocatorTest/0.SlaveLost, where TypeParam = mesos::internal::master::allocator::HierarchicalAllocatorProcess (179 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1853","10/01/2014 18:36:46",3,"Remove /proc and /sys remounts from port_mapping isolator ""/proc/net reflects a new network namespace regardless and remount doesn't actually do what we expected anyway, i.e., it's not sufficient for a new pid namespace and a new mount is required.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1855","10/01/2014 21:24:54",1,"Mesos 0.20.1 doesn't compile ""The compilation of Mesos 0.20.1 fails on Ubuntu Trusty with the following error - slave/containerizer/mesos/containerizer.cpp -fPIC -DPIC -o slave/containerizer/mesos/.libs/libmesos_no_3rdparty_la-containerizer.o In file included from ./linux/routing/filter/ip.hpp:36:0, from ./slave/containerizer/isolators/network/port_mapping.hpp:42, from slave/containerizer/mesos/containerizer.cpp:44: ./linux/routing/filter/filter.hpp:29:43: fatal error: linux/routing/filter/handle.hpp: No such file or directory #include """"linux/routing/filter/handle.hpp"""" ^""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1862","10/03/2014 20:25:58",3,"Performance regression in the Master's http metrics. ""As part of the change to hold on to terminal unacknowledged tasks in the master, we introduced a performance regression during the following patch: https://github.com/apache/mesos/commit/0760b007ad65bc91e8cea377339978c78d36d247 Rather than keeping a running count of allocated resources, we now compute resources on-demand. This was done in order to ignore terminal task's resources. As a result of this change, the /stats.json and /metrics/snapshot endpoints on the master have slowed down substantially on large clusters. {{perf top}} reveals some of the resource computation during a request to stats.json: {noformat: perf top} Events: 36K cycles 10.53% libc-2.5.so [.] _int_free 9.90% libc-2.5.so [.] malloc 8.56% libmesos-0.21.0.so [.] std::_Rb_tree, std::less, std::allocator >:: 8.23% libc-2.5.so [.] 
_int_malloc 5.80% libstdc++.so.6.0.8 [.] std::_Rb_tree_increment(std::_Rb_tree_node_base*) 5.33% [kernel] [k] _raw_spin_lock 3.13% libstdc++.so.6.0.8 [.] std::string::assign(std::string const&) 2.95% libmesos-0.21.0.so [.] process::SocketManager::exited(process::ProcessBase*) 2.43% libmesos-0.21.0.so [.] mesos::Resource::MergeFrom(mesos::Resource const&) 1.88% libmesos-0.21.0.so [.] mesos::internal::master::Slave::used() const 1.48% libstdc++.so.6.0.8 [.] __gnu_cxx::__atomic_add(int volatile*, int) 1.45% [kernel] [k] find_busiest_group 1.41% libc-2.5.so [.] free 1.38% libmesos-0.21.0.so [.] mesos::Value_Range::MergeFrom(mesos::Value_Range const&) 1.13% libmesos-0.21.0.so [.] mesos::Value_Scalar::MergeFrom(mesos::Value_Scalar const&) 1.12% libmesos-0.21.0.so [.] mesos::Resource::SharedDtor() 1.07% libstdc++.so.6.0.8 [.] __gnu_cxx::__exchange_and_add(int volatile*, int) 0.94% libmesos-0.21.0.so [.] google::protobuf::UnknownFieldSet::MergeFrom(google::protobuf::UnknownFieldSet const&) 0.92% libstdc++.so.6.0.8 [.] operator new(unsigned long) 0.88% libmesos-0.21.0.so [.] mesos::Value_Ranges::MergeFrom(mesos::Value_Ranges const&) 0.75% libmesos-0.21.0.so [.] mesos::matches(mesos::Resource const&, mesos::Resource const&) {noformat}"""," commit 0760b007ad65bc91e8cea377339978c78d36d247 Author: Benjamin Mahler Date: Thu Sep 11 10:48:20 2014 -0700 Minor cleanups to the Master code. Review: https://reviews.apache.org/r/25566 $ time curl localhost:5050/health real 0m0.004s user 0m0.001s sys 0m0.002s $ time curl localhost:5050/stats.json > /dev/null real 0m15.402s user 0m0.001s sys 0m0.003s $ time curl localhost:5050/metrics/snapshot > /dev/null real 0m6.059s user 0m0.002s sys 0m0.002s Events: 36K cycles 10.53% libc-2.5.so [.] _int_free 9.90% libc-2.5.so [.] malloc 8.56% libmesos-0.21.0.so [.] std::_Rb_tree, std::less, std::allocator >:: 8.23% libc-2.5.so [.] _int_malloc 5.80% libstdc++.so.6.0.8 [.] std::_Rb_tree_increment(std::_Rb_tree_node_base*) 5.33% [kernel] [k] _raw_spin_lock 3.13% libstdc++.so.6.0.8 [.] std::string::assign(std::string const&) 2.95% libmesos-0.21.0.so [.] process::SocketManager::exited(process::ProcessBase*) 2.43% libmesos-0.21.0.so [.] mesos::Resource::MergeFrom(mesos::Resource const&) 1.88% libmesos-0.21.0.so [.] mesos::internal::master::Slave::used() const 1.48% libstdc++.so.6.0.8 [.] __gnu_cxx::__atomic_add(int volatile*, int) 1.45% [kernel] [k] find_busiest_group 1.41% libc-2.5.so [.] free 1.38% libmesos-0.21.0.so [.] mesos::Value_Range::MergeFrom(mesos::Value_Range const&) 1.13% libmesos-0.21.0.so [.] mesos::Value_Scalar::MergeFrom(mesos::Value_Scalar const&) 1.12% libmesos-0.21.0.so [.] mesos::Resource::SharedDtor() 1.07% libstdc++.so.6.0.8 [.] __gnu_cxx::__exchange_and_add(int volatile*, int) 0.94% libmesos-0.21.0.so [.] google::protobuf::UnknownFieldSet::MergeFrom(google::protobuf::UnknownFieldSet const&) 0.92% libstdc++.so.6.0.8 [.] operator new(unsigned long) 0.88% libmesos-0.21.0.so [.] mesos::Value_Ranges::MergeFrom(mesos::Value_Ranges const&) 0.75% libmesos-0.21.0.so [.] mesos::matches(mesos::Resource const&, mesos::Resource const&) ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1865","10/04/2014 00:09:23",3,"Redirect to the leader master when current master is not a leader ""Some of the API endpoints, for example /master/tasks.json, will return bogus information if you query a non-leading master: This is very hard for end-users to work around. 
For example if I query """"which master is leading"""" followed by """"leader: which tasks are running"""" it is possible that the leader fails over in between, leaving me with an incorrect answer and no way to know that this happened. In my opinion the API should return the correct response (by asking the current leader?) or an error (500 Not the leader?) but it's unacceptable to return a successful wrong answer. """," [steven@Anesthetize:~]% curl http://master1.mesos-vpcqa.otenv.com:5050/master/tasks.json | jq . | head -n 10 { """"tasks"""": [] } [steven@Anesthetize:~]% curl http://master2.mesos-vpcqa.otenv.com:5050/master/tasks.json | jq . | head -n 10 { """"tasks"""": [] } [steven@Anesthetize:~]% curl http://master3.mesos-vpcqa.otenv.com:5050/master/tasks.json | jq . | head -n 10 { """"tasks"""": [ { """"executor_id"""": """""""", """"framework_id"""": """"20140724-231003-419644938-5050-1707-0000"""", """"id"""": """"pp.guestcenterwebhealthmonitor.606cd6ee-4b50-11e4-825b-5212e05f35db"""", """"name"""": """"pp.guestcenterwebhealthmonitor.606cd6ee-4b50-11e4-825b-5212e05f35db"""", """"resources"""": { """"cpus"""": 0.25, """"disk"""": 0, ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-1941","10/16/2014 23:52:34",3,"Make executor's user owner of executor's cgroup directory ""Currently, when cgroups are enabled, and executor is spawned, it's mounted under, for ex: /sys/fs/cgroup/cpu/mesos/. This directory in current implementation is only writable by root user. This prevents process launched by executor to mount its child processes under this cgroup, because the cgroup directory is only writable by root. To enable a executor spawned process to mount it's child processes under it's cgroup directory, the cgroup directory should be made writable by the user which spawns the executor.""","",0,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2007","10/29/2014 17:42:16",2,"AllocatorTest/0.SlaveReregistersFirst is flaky ""{noformat:title=} [ RUN ] AllocatorTest/0.SlaveReregistersFirst Using temporary directory '/tmp/AllocatorTest_0_SlaveReregistersFirst_YPe61d' I1028 23:48:22.360447 31190 leveldb.cpp:176] Opened db in 2.192575ms I1028 23:48:22.361253 31190 leveldb.cpp:183] Compacted db in 760753ns I1028 23:48:22.361320 31190 leveldb.cpp:198] Created db iterator in 22188ns I1028 23:48:22.361340 31190 leveldb.cpp:204] Seeked to beginning of db in 1950ns I1028 23:48:22.361351 31190 leveldb.cpp:273] Iterated through 0 keys in the db in 345ns I1028 23:48:22.361403 31190 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I1028 23:48:22.362185 31217 recover.cpp:437] Starting replica recovery I1028 23:48:22.362764 31219 recover.cpp:463] Replica is in EMPTY status I1028 23:48:22.363955 31210 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I1028 23:48:22.364320 31217 recover.cpp:188] Received a recover response from a replica in EMPTY status I1028 23:48:22.364820 31211 recover.cpp:554] Updating replica status to STARTING I1028 23:48:22.365365 31215 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 418991ns I1028 23:48:22.365391 31215 replica.cpp:320] Persisted replica status to STARTING I1028 23:48:22.365617 31217 recover.cpp:463] Replica is in STARTING status I1028 23:48:22.366328 31206 master.cpp:312] Master 20141028-234822-3193029443-50043-31190 (pietas.apache.org) started on 67.195.81.190:50043 I1028 23:48:22.366377 
31206 master.cpp:358] Master only allowing authenticated frameworks to register I1028 23:48:22.366391 31206 master.cpp:363] Master only allowing authenticated slaves to register I1028 23:48:22.366402 31206 credentials.hpp:36] Loading credentials for authentication from '/tmp/AllocatorTest_0_SlaveReregistersFirst_YPe61d/credentials' I1028 23:48:22.366708 31206 master.cpp:392] Authorization enabled I1028 23:48:22.366886 31209 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I1028 23:48:22.367311 31208 master.cpp:120] No whitelist given. Advertising offers for all slaves I1028 23:48:22.367312 31207 recover.cpp:188] Received a recover response from a replica in STARTING status I1028 23:48:22.367686 31211 hierarchical_allocator_process.hpp:299] Initializing hierarchical allocator process with master : master@67.195.81.190:50043 I1028 23:48:22.367863 31212 recover.cpp:554] Updating replica status to VOTING I1028 23:48:22.368477 31218 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 375527ns I1028 23:48:22.368505 31218 replica.cpp:320] Persisted replica status to VOTING I1028 23:48:22.368517 31204 master.cpp:1242] The newly elected leader is master@67.195.81.190:50043 with id 20141028-234822-3193029443-50043-31190 I1028 23:48:22.368549 31204 master.cpp:1255] Elected as the leading master! I1028 23:48:22.368567 31204 master.cpp:1073] Recovering from registrar I1028 23:48:22.368621 31215 recover.cpp:568] Successfully joined the Paxos group I1028 23:48:22.368716 31219 registrar.cpp:313] Recovering registrar I1028 23:48:22.369000 31215 recover.cpp:452] Recover process terminated I1028 23:48:22.369523 31208 log.cpp:656] Attempting to start the writer I1028 23:48:22.370909 31205 replica.cpp:474] Replica received implicit promise request with proposal 1 I1028 23:48:22.371266 31205 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 325016ns I1028 23:48:22.371290 31205 replica.cpp:342] Persisted promised to 1 I1028 23:48:22.371979 31218 coordinator.cpp:230] Coordinator attemping to fill missing position I1028 23:48:22.373378 31210 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I1028 23:48:22.373746 31210 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 329018ns I1028 23:48:22.373772 31210 replica.cpp:676] Persisted action at 0 I1028 23:48:22.374897 31214 replica.cpp:508] Replica received write request for position 0 I1028 23:48:22.374951 31214 leveldb.cpp:438] Reading position from leveldb took 26002ns I1028 23:48:22.375272 31214 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 289094ns I1028 23:48:22.375298 31214 replica.cpp:676] Persisted action at 0 I1028 23:48:22.375886 31204 replica.cpp:655] Replica received learned notice for position 0 I1028 23:48:22.376258 31204 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 346650ns I1028 23:48:22.376277 31204 replica.cpp:676] Persisted action at 0 I1028 23:48:22.376298 31204 replica.cpp:661] Replica learned NOP action at position 0 I1028 23:48:22.376843 31215 log.cpp:672] Writer started with ending position 0 I1028 23:48:22.378056 31205 leveldb.cpp:438] Reading position from leveldb took 28265ns I1028 23:48:22.380323 31217 registrar.cpp:346] Successfully fetched the registry (0B) in 11.55584ms I1028 23:48:22.380466 31217 registrar.cpp:445] Applied 1 operations in 50632ns; attempting to update the 'registry' I1028 23:48:22.382472 31217 log.cpp:680] Attempting to append 139 bytes to the log I1028 23:48:22.382715 31210 
coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I1028 23:48:22.383463 31210 replica.cpp:508] Replica received write request for position 1 I1028 23:48:22.383857 31210 leveldb.cpp:343] Persisting action (158 bytes) to leveldb took 363758ns I1028 23:48:22.383875 31210 replica.cpp:676] Persisted action at 1 I1028 23:48:22.384397 31218 replica.cpp:655] Replica received learned notice for position 1 I1028 23:48:22.384840 31218 leveldb.cpp:343] Persisting action (160 bytes) to leveldb took 420161ns I1028 23:48:22.384862 31218 replica.cpp:676] Persisted action at 1 I1028 23:48:22.384882 31218 replica.cpp:661] Replica learned APPEND action at position 1 I1028 23:48:22.385684 31211 registrar.cpp:490] Successfully updated the 'registry' in 5.158144ms I1028 23:48:22.385818 31211 registrar.cpp:376] Successfully recovered registrar I1028 23:48:22.385912 31214 log.cpp:699] Attempting to truncate the log to 1 I1028 23:48:22.386101 31218 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I1028 23:48:22.386124 31211 master.cpp:1100] Recovered 0 slaves from the Registry (101B) ; allowing 10mins for slaves to re-register I1028 23:48:22.387398 31209 replica.cpp:508] Replica received write request for position 2 I1028 23:48:22.387758 31209 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 334969ns I1028 23:48:22.387776 31209 replica.cpp:676] Persisted action at 2 I1028 23:48:22.388272 31204 replica.cpp:655] Replica received learned notice for position 2 I1028 23:48:22.388453 31204 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 159390ns I1028 23:48:22.388501 31204 leveldb.cpp:401] Deleting ~1 keys from leveldb took 30409ns I1028 23:48:22.388516 31204 replica.cpp:676] Persisted action at 2 I1028 23:48:22.388531 31204 replica.cpp:661] Replica learned TRUNCATE action at position 2 I1028 23:48:22.400737 31207 slave.cpp:169] Slave started on 34)@67.195.81.190:50043 I1028 23:48:22.400786 31207 credentials.hpp:84] Loading credential for authentication from '/tmp/AllocatorTest_0_SlaveReregistersFirst_QPPV21/credential' I1028 23:48:22.400996 31207 slave.cpp:276] Slave using credential for: test-principal I1028 23:48:22.401304 31207 slave.cpp:289] Slave resources: cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] I1028 23:48:22.401413 31207 slave.cpp:318] Slave hostname: pietas.apache.org I1028 23:48:22.401520 31207 slave.cpp:319] Slave checkpoint: false W1028 23:48:22.401535 31207 slave.cpp:321] Disabling checkpointing is deprecated and the --checkpoint flag will be removed in a future release. 
Please avoid using this flag I1028 23:48:22.402349 31207 state.cpp:33] Recovering state from '/tmp/AllocatorTest_0_SlaveReregistersFirst_QPPV21/meta' I1028 23:48:22.402678 31207 status_update_manager.cpp:197] Recovering status update manager I1028 23:48:22.403048 31211 slave.cpp:3456] Finished recovery I1028 23:48:22.403815 31215 slave.cpp:602] New master detected at master@67.195.81.190:50043 I1028 23:48:22.403852 31215 slave.cpp:665] Authenticating with master master@67.195.81.190:50043 I1028 23:48:22.403875 31206 status_update_manager.cpp:171] Pausing sending status updates I1028 23:48:22.403961 31215 slave.cpp:638] Detecting new master I1028 23:48:22.404016 31211 authenticatee.hpp:133] Creating new client SASL connection I1028 23:48:22.404230 31204 master.cpp:3853] Authenticating slave(34)@67.195.81.190:50043 I1028 23:48:22.404464 31205 authenticator.hpp:161] Creating new server SASL connection I1028 23:48:22.404613 31211 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1028 23:48:22.404649 31211 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1028 23:48:22.404734 31211 authenticator.hpp:267] Received SASL authentication start I1028 23:48:22.404783 31211 authenticator.hpp:389] Authentication requires more steps I1028 23:48:22.404898 31215 authenticatee.hpp:270] Received SASL authentication step I1028 23:48:22.404999 31215 authenticator.hpp:295] Received SASL authentication step I1028 23:48:22.405030 31215 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1028 23:48:22.405047 31215 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1028 23:48:22.405086 31215 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1028 23:48:22.405109 31215 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1028 23:48:22.405122 31215 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1028 23:48:22.405129 31215 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1028 23:48:22.405146 31215 authenticator.hpp:381] Authentication success I1028 23:48:22.405243 31213 authenticatee.hpp:310] Authentication success I1028 23:48:22.405253 31214 master.cpp:3893] Successfully authenticated principal 'test-principal' at slave(34)@67.195.81.190:50043 I1028 23:48:22.405505 31213 slave.cpp:722] Successfully authenticated with master master@67.195.81.190:50043 I1028 23:48:22.405619 31213 slave.cpp:1050] Will retry registration in 17.050994ms if necessary I1028 23:48:22.405819 31215 master.cpp:3032] Registering slave at slave(34)@67.195.81.190:50043 (pietas.apache.org) with id 20141028-234822-3193029443-50043-31190-S0 I1028 23:48:22.406262 31216 registrar.cpp:445] Applied 1 operations in 52647ns; attempting to update the 'registry' I1028 23:48:22.406697 31190 sched.cpp:137] Version: 0.21.0 I1028 23:48:22.407083 31211 sched.cpp:233] New master detected at master@67.195.81.190:50043 I1028 23:48:22.407114 31211 sched.cpp:283] Authenticating with master master@67.195.81.190:50043 I1028 23:48:22.407290 31214 authenticatee.hpp:133] Creating new client SASL connection I1028 23:48:22.407424 31214 
master.cpp:3853] Authenticating scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 I1028 23:48:22.407659 31207 authenticator.hpp:161] Creating new server SASL connection I1028 23:48:22.407757 31207 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1028 23:48:22.407774 31207 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1028 23:48:22.407830 31207 authenticator.hpp:267] Received SASL authentication start I1028 23:48:22.407868 31207 authenticator.hpp:389] Authentication requires more steps I1028 23:48:22.407927 31207 authenticatee.hpp:270] Received SASL authentication step I1028 23:48:22.408015 31212 authenticator.hpp:295] Received SASL authentication step I1028 23:48:22.408037 31212 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1028 23:48:22.408046 31212 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1028 23:48:22.408072 31212 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1028 23:48:22.408092 31212 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1028 23:48:22.408100 31212 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1028 23:48:22.408105 31212 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1028 23:48:22.408116 31212 authenticator.hpp:381] Authentication success I1028 23:48:22.408192 31210 authenticatee.hpp:310] Authentication success I1028 23:48:22.408210 31217 master.cpp:3893] Successfully authenticated principal 'test-principal' at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 I1028 23:48:22.408419 31210 sched.cpp:357] Successfully authenticated with master master@67.195.81.190:50043 I1028 23:48:22.408460 31210 sched.cpp:476] Sending registration request to master@67.195.81.190:50043 I1028 23:48:22.408568 31217 master.cpp:1362] Received registration request for framework 'default' at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 I1028 23:48:22.408617 31217 master.cpp:1321] Authorizing framework principal 'test-principal' to receive offers for role '*' I1028 23:48:22.408937 31214 master.cpp:1426] Registering framework 20141028-234822-3193029443-50043-31190-0000 (default) at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 I1028 23:48:22.409265 31213 sched.cpp:407] Framework registered with 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.409267 31212 hierarchical_allocator_process.hpp:329] Added framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.409312 31212 hierarchical_allocator_process.hpp:697] No resources available to allocate! 
I1028 23:48:22.409324 31215 log.cpp:680] Attempting to append 316 bytes to the log I1028 23:48:22.409333 31213 sched.cpp:421] Scheduler::registered took 38591ns I1028 23:48:22.409327 31212 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 24107ns I1028 23:48:22.409518 31205 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I1028 23:48:22.410127 31206 replica.cpp:508] Replica received write request for position 3 I1028 23:48:22.410706 31206 leveldb.cpp:343] Persisting action (335 bytes) to leveldb took 554098ns I1028 23:48:22.410725 31206 replica.cpp:676] Persisted action at 3 I1028 23:48:22.411151 31217 replica.cpp:655] Replica received learned notice for position 3 I1028 23:48:22.411499 31217 leveldb.cpp:343] Persisting action (337 bytes) to leveldb took 326572ns I1028 23:48:22.411519 31217 replica.cpp:676] Persisted action at 3 I1028 23:48:22.411533 31217 replica.cpp:661] Replica learned APPEND action at position 3 I1028 23:48:22.412292 31219 registrar.cpp:490] Successfully updated the 'registry' in 5.972992ms I1028 23:48:22.412518 31218 log.cpp:699] Attempting to truncate the log to 3 I1028 23:48:22.412621 31213 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I1028 23:48:22.412734 31219 slave.cpp:2522] Received ping from slave-observer(38)@67.195.81.190:50043 I1028 23:48:22.412787 31206 master.cpp:3086] Registered slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) with cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] I1028 23:48:22.412858 31219 slave.cpp:756] Registered with master master@67.195.81.190:50043; given slave ID 20141028-234822-3193029443-50043-31190-S0 I1028 23:48:22.412994 31210 status_update_manager.cpp:178] Resuming sending status updates I1028 23:48:22.413014 31211 hierarchical_allocator_process.hpp:442] Added slave 20141028-234822-3193029443-50043-31190-S0 (pietas.apache.org) with cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] available) I1028 23:48:22.413159 31211 hierarchical_allocator_process.hpp:734] Offering cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] on slave 20141028-234822-3193029443-50043-31190-S0 to framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.413290 31208 replica.cpp:508] Replica received write request for position 4 I1028 23:48:22.413421 31211 hierarchical_allocator_process.hpp:679] Performed allocation for slave 20141028-234822-3193029443-50043-31190-S0 in 346658ns I1028 23:48:22.413650 31208 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 336067ns I1028 23:48:22.413668 31208 replica.cpp:676] Persisted action at 4 I1028 23:48:22.413797 31216 master.cpp:3795] Sending 1 offers to framework 20141028-234822-3193029443-50043-31190-0000 (default) at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 I1028 23:48:22.414077 31212 replica.cpp:655] Replica received learned notice for position 4 I1028 23:48:22.414356 31212 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 260401ns I1028 23:48:22.414403 31212 leveldb.cpp:401] Deleting ~2 keys from leveldb took 28541ns I1028 23:48:22.414417 31212 replica.cpp:676] Persisted action at 4 I1028 23:48:22.414446 31212 replica.cpp:661] Replica learned TRUNCATE action at position 4 I1028 23:48:22.414422 31207 sched.cpp:544] Scheduler::resourceOffers took 310278ns I1028 23:48:22.415086 
31214 master.cpp:2321] Processing reply for offers: [ 20141028-234822-3193029443-50043-31190-O0 ] on slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) for framework 20141028-234822-3193029443-50043-31190-0000 (default) at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 W1028 23:48:22.415163 31214 master.cpp:1969] Executor default for task 0 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W1028 23:48:22.415186 31214 master.cpp:1980] Executor default for task 0 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. I1028 23:48:22.415256 31214 master.cpp:2417] Authorizing framework principal 'test-principal' to launch task 0 as user 'jenkins' I1028 23:48:22.416033 31219 master.hpp:877] Adding task 0 with resources cpus(*):1; mem(*):500 on slave 20141028-234822-3193029443-50043-31190-S0 (pietas.apache.org) I1028 23:48:22.416084 31219 master.cpp:2480] Launching task 0 of framework 20141028-234822-3193029443-50043-31190-0000 (default) at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 with resources cpus(*):1; mem(*):500 on slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) I1028 23:48:22.416317 31214 slave.cpp:1081] Got assigned task 0 for framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.416679 31215 hierarchical_allocator_process.hpp:563] Recovered cpus(*):1; mem(*):524; disk(*):3.70122e+06; ports(*):[31000-32000] (total allocatable: cpus(*):1; mem(*):524; disk(*):3.70122e+06; ports(*):[31000-32000]) on slave 20141028-234822-3193029443-50043-31190-S0 from framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.416721 31215 hierarchical_allocator_process.hpp:599] Framework 20141028-234822-3193029443-50043-31190-0000 filtered slave 20141028-234822-3193029443-50043-31190-S0 for 5secs I1028 23:48:22.416724 31214 slave.cpp:1191] Launching task 0 for framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.418534 31214 slave.cpp:3871] Launching executor default of framework 20141028-234822-3193029443-50043-31190-0000 in work directory '/tmp/AllocatorTest_0_SlaveReregistersFirst_QPPV21/slaves/20141028-234822-3193029443-50043-31190-S0/frameworks/20141028-234822-3193029443-50043-31190-0000/executors/default/runs/d593f433-3c16-4678-8f76-4038fe2841c4' I1028 23:48:22.420557 31214 exec.cpp:132] Version: 0.21.0 I1028 23:48:22.420755 31213 exec.cpp:182] Executor started at: executor(22)@67.195.81.190:50043 with pid 31190 I1028 23:48:22.420903 31214 slave.cpp:1317] Queuing task '0' for executor default of framework '20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.420997 31214 slave.cpp:555] Successfully attached file '/tmp/AllocatorTest_0_SlaveReregistersFirst_QPPV21/slaves/20141028-234822-3193029443-50043-31190-S0/frameworks/20141028-234822-3193029443-50043-31190-0000/executors/default/runs/d593f433-3c16-4678-8f76-4038fe2841c4' I1028 23:48:22.421058 31214 slave.cpp:1849] Got registration for executor 'default' of framework 20141028-234822-3193029443-50043-31190-0000 from executor(22)@67.195.81.190:50043 I1028 23:48:22.421295 31214 slave.cpp:1968] Flushing queued task 0 for executor 'default' of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.421391 31205 exec.cpp:206] Executor registered on slave 20141028-234822-3193029443-50043-31190-S0 
I1028 23:48:22.421495 31214 slave.cpp:2802] Monitoring executor 'default' of framework '20141028-234822-3193029443-50043-31190-0000' in container 'd593f433-3c16-4678-8f76-4038fe2841c4' I1028 23:48:22.422873 31205 exec.cpp:218] Executor::registered took 19148ns I1028 23:48:22.422991 31205 exec.cpp:293] Executor asked to run task '0' I1028 23:48:22.423085 31205 exec.cpp:302] Executor::launchTask took 76519ns I1028 23:48:22.424541 31205 exec.cpp:525] Executor sending status update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.424724 31205 slave.cpp:2202] Handling status update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 from executor(22)@67.195.81.190:50043 I1028 23:48:22.424932 31213 status_update_manager.cpp:317] Received status update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.424963 31213 status_update_manager.cpp:494] Creating StatusUpdate stream for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.425122 31213 status_update_manager.cpp:371] Forwarding update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 to the slave I1028 23:48:22.425257 31205 slave.cpp:2442] Forwarding the update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 to master@67.195.81.190:50043 I1028 23:48:22.425398 31205 slave.cpp:2369] Status update manager successfully handled status update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.425420 31205 slave.cpp:2375] Sending acknowledgement for status update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 to executor(22)@67.195.81.190:50043 I1028 23:48:22.425583 31212 master.cpp:3410] Forwarding status update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.425621 31206 exec.cpp:339] Executor received status update acknowledgement 10174aa0-0e5a-4f9d-a530-dee64e93f222 for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.425786 31212 master.cpp:3382] Status update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 from slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) I1028 23:48:22.425832 31212 master.cpp:4617] Updating the latest state of task 0 of framework 20141028-234822-3193029443-50043-31190-0000 to TASK_RUNNING I1028 23:48:22.425885 31208 sched.cpp:635] Scheduler::statusUpdate took 49727ns I1028 23:48:22.426082 31208 master.cpp:2882] Forwarding status update acknowledgement 10174aa0-0e5a-4f9d-a530-dee64e93f222 for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 (default) at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 to slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) I1028 23:48:22.426360 31206 status_update_manager.cpp:389] Received status update acknowledgement (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for 
task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.426623 31206 slave.cpp:1789] Status update manager successfully handled status update acknowledgement (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.426893 31210 master.cpp:677] Master terminating W1028 23:48:22.427028 31210 master.cpp:4662] Removing task 0 with resources cpus(*):1; mem(*):500 of framework 20141028-234822-3193029443-50043-31190-0000 on slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) in non-terminal state TASK_RUNNING I1028 23:48:22.427397 31209 hierarchical_allocator_process.hpp:563] Recovered cpus(*):1; mem(*):500 (total allocatable: cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000]) on slave 20141028-234822-3193029443-50043-31190-S0 from framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.427512 31210 master.cpp:4705] Removing executor 'default' with resources of framework 20141028-234822-3193029443-50043-31190-0000 on slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) I1028 23:48:22.428129 31206 slave.cpp:2607] master@67.195.81.190:50043 exited W1028 23:48:22.428153 31206 slave.cpp:2610] Master disconnected! Waiting for a new master to be elected I1028 23:48:22.434645 31190 leveldb.cpp:176] Opened db in 2.551453ms I1028 23:48:22.437157 31190 leveldb.cpp:183] Compacted db in 2.484612ms I1028 23:48:22.437203 31190 leveldb.cpp:198] Created db iterator in 19171ns I1028 23:48:22.437235 31190 leveldb.cpp:204] Seeked to beginning of db in 18300ns I1028 23:48:22.437306 31190 leveldb.cpp:273] Iterated through 3 keys in the db in 59465ns I1028 23:48:22.437347 31190 replica.cpp:741] Replica recovered with log positions 3 -> 4 with 0 holes and 0 unlearned I1028 23:48:22.437827 31216 recover.cpp:437] Starting replica recovery I1028 23:48:22.438127 31216 recover.cpp:463] Replica is in VOTING status I1028 23:48:22.438443 31216 recover.cpp:452] Recover process terminated I1028 23:48:22.439877 31212 master.cpp:312] Master 20141028-234822-3193029443-50043-31190 (pietas.apache.org) started on 67.195.81.190:50043 I1028 23:48:22.439916 31212 master.cpp:358] Master only allowing authenticated frameworks to register I1028 23:48:22.439931 31212 master.cpp:363] Master only allowing authenticated slaves to register I1028 23:48:22.439946 31212 credentials.hpp:36] Loading credentials for authentication from '/tmp/AllocatorTest_0_SlaveReregistersFirst_YPe61d/credentials' I1028 23:48:22.440142 31212 master.cpp:392] Authorization enabled I1028 23:48:22.440439 31218 master.cpp:120] No whitelist given. Advertising offers for all slaves I1028 23:48:22.440901 31213 hierarchical_allocator_process.hpp:299] Initializing hierarchical allocator process with master : master@67.195.81.190:50043 I1028 23:48:22.441395 31206 master.cpp:1242] The newly elected leader is master@67.195.81.190:50043 with id 20141028-234822-3193029443-50043-31190 I1028 23:48:22.441421 31206 master.cpp:1255] Elected as the leading master! 
I1028 23:48:22.441457 31206 master.cpp:1073] Recovering from registrar I1028 23:48:22.441623 31205 registrar.cpp:313] Recovering registrar I1028 23:48:22.442172 31219 log.cpp:656] Attempting to start the writer I1028 23:48:22.443235 31219 replica.cpp:474] Replica received implicit promise request with proposal 2 I1028 23:48:22.443685 31219 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 427888ns I1028 23:48:22.443703 31219 replica.cpp:342] Persisted promised to 2 I1028 23:48:22.444371 31213 coordinator.cpp:230] Coordinator attemping to fill missing position I1028 23:48:22.444687 31209 log.cpp:672] Writer started with ending position 4 I1028 23:48:22.445754 31215 leveldb.cpp:438] Reading position from leveldb took 47909ns I1028 23:48:22.445826 31215 leveldb.cpp:438] Reading position from leveldb took 30611ns I1028 23:48:22.446941 31218 registrar.cpp:346] Successfully fetched the registry (277B) in 5.213184ms I1028 23:48:22.447118 31218 registrar.cpp:445] Applied 1 operations in 42362ns; attempting to update the 'registry' I1028 23:48:22.449329 31204 log.cpp:680] Attempting to append 316 bytes to the log I1028 23:48:22.449477 31218 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 5 I1028 23:48:22.450187 31215 replica.cpp:508] Replica received write request for position 5 I1028 23:48:22.450767 31215 leveldb.cpp:343] Persisting action (335 bytes) to leveldb took 554400ns I1028 23:48:22.450788 31215 replica.cpp:676] Persisted action at 5 I1028 23:48:22.451561 31215 replica.cpp:655] Replica received learned notice for position 5 I1028 23:48:22.451979 31215 leveldb.cpp:343] Persisting action (337 bytes) to leveldb took 397219ns I1028 23:48:22.452000 31215 replica.cpp:676] Persisted action at 5 I1028 23:48:22.452020 31215 replica.cpp:661] Replica learned APPEND action at position 5 I1028 23:48:22.452993 31213 registrar.cpp:490] Successfully updated the 'registry' in 5.816832ms I1028 23:48:22.453136 31213 registrar.cpp:376] Successfully recovered registrar I1028 23:48:22.453238 31208 log.cpp:699] Attempting to truncate the log to 5 I1028 23:48:22.453384 31214 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 6 I1028 23:48:22.453518 31215 master.cpp:1100] Recovered 1 slaves from the Registry (277B) ; allowing 10mins for slaves to re-register I1028 23:48:22.454116 31207 replica.cpp:508] Replica received write request for position 6 I1028 23:48:22.454570 31207 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 427424ns I1028 23:48:22.454589 31207 replica.cpp:676] Persisted action at 6 I1028 23:48:22.455095 31219 replica.cpp:655] Replica received learned notice for position 6 I1028 23:48:22.455399 31219 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 282466ns I1028 23:48:22.455462 31219 leveldb.cpp:401] Deleting ~2 keys from leveldb took 43939ns I1028 23:48:22.455478 31219 replica.cpp:676] Persisted action at 6 I1028 23:48:22.455494 31219 replica.cpp:661] Replica learned TRUNCATE action at position 6 I1028 23:48:22.465553 31213 status_update_manager.cpp:171] Pausing sending status updates I1028 23:48:22.465566 31216 slave.cpp:602] New master detected at master@67.195.81.190:50043 I1028 23:48:22.465612 31216 slave.cpp:665] Authenticating with master master@67.195.81.190:50043 I1028 23:48:23.441506 31206 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1028 23:48:27.441004 31214 master.cpp:120] No whitelist given. 
Advertising offers for all slaves I1028 23:48:30.101379 31206 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 6.659877806secs I1028 23:48:30.101568 31216 slave.cpp:638] Detecting new master I1028 23:48:30.101632 31214 authenticatee.hpp:133] Creating new client SASL connection I1028 23:48:30.102021 31218 master.cpp:3853] Authenticating slave(34)@67.195.81.190:50043 I1028 23:48:30.102329 31212 authenticator.hpp:161] Creating new server SASL connection I1028 23:48:30.102505 31216 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1028 23:48:30.102545 31216 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1028 23:48:30.102638 31216 authenticator.hpp:267] Received SASL authentication start I1028 23:48:30.102709 31216 authenticator.hpp:389] Authentication requires more steps I1028 23:48:30.102812 31216 authenticatee.hpp:270] Received SASL authentication step I1028 23:48:30.102957 31204 authenticator.hpp:295] Received SASL authentication step I1028 23:48:30.102982 31204 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1028 23:48:30.102993 31204 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1028 23:48:30.103032 31204 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1028 23:48:30.103049 31204 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1028 23:48:30.103056 31204 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1028 23:48:30.103061 31204 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1028 23:48:30.103073 31204 authenticator.hpp:381] Authentication success I1028 23:48:30.103149 31209 authenticatee.hpp:310] Authentication success I1028 23:48:30.103153 31204 master.cpp:3893] Successfully authenticated principal 'test-principal' at slave(34)@67.195.81.190:50043 I1028 23:48:30.103371 31209 slave.cpp:722] Successfully authenticated with master master@67.195.81.190:50043 I1028 23:48:30.103773 31209 slave.cpp:1050] Will retry registration in 12.861518ms if necessary I1028 23:48:30.104068 31219 master.cpp:3210] Re-registering slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) I1028 23:48:30.104760 31216 registrar.cpp:445] Applied 1 operations in 71655ns; attempting to update the 'registry' I1028 23:48:30.107877 31205 log.cpp:680] Attempting to append 316 bytes to the log I1028 23:48:30.108070 31219 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 7 I1028 23:48:30.109110 31211 replica.cpp:508] Replica received write request for position 7 I1028 23:48:30.109434 31211 leveldb.cpp:343] Persisting action (335 bytes) to leveldb took 281545ns I1028 23:48:30.109484 31211 replica.cpp:676] Persisted action at 7 I1028 23:48:30.110124 31219 replica.cpp:655] Replica received learned notice for position 7 I1028 23:48:30.110903 31219 leveldb.cpp:343] Persisting action (337 bytes) to leveldb took 750414ns I1028 23:48:30.110927 31219 replica.cpp:676] Persisted action at 7 I1028 23:48:30.110950 31219 replica.cpp:661] Replica learned APPEND action at position 7 
I1028 23:48:30.112160 31205 registrar.cpp:490] Successfully updated the 'registry' in 7.33824ms I1028 23:48:30.112529 31217 log.cpp:699] Attempting to truncate the log to 7 I1028 23:48:30.112714 31207 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 8 I1028 23:48:30.112870 31210 master.hpp:877] Adding task 0 with resources cpus(*):1; mem(*):500 on slave 20141028-234822-3193029443-50043-31190-S0 (pietas.apache.org) W1028 23:48:30.113136 31210 master.cpp:4394] Possibly orphaned task 0 of framework 20141028-234822-3193029443-50043-31190-0000 running on slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) I1028 23:48:30.113198 31219 slave.cpp:2522] Received ping from slave-observer(39)@67.195.81.190:50043 I1028 23:48:30.113340 31210 master.cpp:3278] Re-registered slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) with cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] I1028 23:48:30.113499 31219 slave.cpp:824] Re-registered with master master@67.195.81.190:50043 I1028 23:48:30.113636 31219 replica.cpp:508] Replica received write request for position 8 I1028 23:48:30.113652 31210 status_update_manager.cpp:178] Resuming sending status updates I1028 23:48:30.113759 31212 hierarchical_allocator_process.hpp:442] Added slave 20141028-234822-3193029443-50043-31190-S0 (pietas.apache.org) with cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] (and cpus(*):1; mem(*):524; disk(*):3.70122e+06; ports(*):[31000-32000] available) I1028 23:48:30.113904 31212 hierarchical_allocator_process.hpp:679] Performed allocation for slave 20141028-234822-3193029443-50043-31190-S0 in 74698ns I1028 23:48:30.114116 31219 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 452165ns I1028 23:48:30.114142 31219 replica.cpp:676] Persisted action at 8 I1028 23:48:30.114786 31213 replica.cpp:655] Replica received learned notice for position 8 I1028 23:48:30.115337 31213 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 525187ns I1028 23:48:30.115399 31213 leveldb.cpp:401] Deleting ~2 keys from leveldb took 37689ns I1028 23:48:30.115418 31213 replica.cpp:676] Persisted action at 8 I1028 23:48:30.115484 31213 replica.cpp:661] Replica learned TRUNCATE action at position 8 I1028 23:48:30.116603 31212 sched.cpp:227] Scheduler::disconnected took 16969ns I1028 23:48:30.116624 31212 sched.cpp:233] New master detected at master@67.195.81.190:50043 I1028 23:48:30.116657 31212 sched.cpp:283] Authenticating with master master@67.195.81.190:50043 I1028 23:48:30.116870 31205 authenticatee.hpp:133] Creating new client SASL connection I1028 23:48:30.117084 31207 master.cpp:3853] Authenticating scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 I1028 23:48:30.117279 31212 authenticator.hpp:161] Creating new server SASL connection I1028 23:48:30.117410 31210 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1028 23:48:30.117507 31210 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1028 23:48:30.117604 31214 authenticator.hpp:267] Received SASL authentication start I1028 23:48:30.117652 31214 authenticator.hpp:389] Authentication requires more steps I1028 23:48:30.117738 31210 authenticatee.hpp:270] Received SASL authentication step I1028 23:48:30.117905 31208 authenticator.hpp:295] Received SASL authentication step I1028 23:48:30.117935 31208 auxprop.cpp:81] Request to lookup properties for 
user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1028 23:48:30.117947 31208 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1028 23:48:30.117979 31208 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1028 23:48:30.118001 31208 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I../../src/tests/allocator_tests.cpp:2405: Failure 1028 23:48:30.118013 31208 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true Failed to wait 10secs for resourceOffers2 I1028 23:48:31.101976 31212 hierarchical_allocator_process.hpp:659] Performed allocation for 1 slaves in 124354ns I1028 23:48:58.775811 31208 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true W1028 23:48:35.117725 31214 sched.cpp:378] Authentication timed out W1028 23:48:35.117784 31219 master.cpp:3911] Authentication timed out I1028 23:48:45.114322 31213 slave.cpp:2522] Received ping from slave-observer(39)@67.195.81.190:50043 I1028 23:48:35.102212 31206 master.cpp:120] No whitelist given. Advertising offers for all slaves I1028 23:48:58.775874 31208 authenticator.hpp:381] Authentication success I1028 23:48:58.776267 31214 sched.cpp:338] Failed to authenticate with master master@67.195.81.190:50043: Authentication discarded ../../src/tests/allocator_tests.cpp:2396: Failure Actual function call count doesn't match EXPECT_CALL(allocator2, frameworkAdded(_, _, _))... Expected: to be called once Actual: never called - unsatisfied and active I1028 23:48:58.776526 31204 master.cpp:3893] Successfully authenticated principal 'test-principal' at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 I1028 23:48:58.776626 31214 sched.cpp:283] Authenticating with master master@67.195.81.190:50043 I1028 23:48:58.776928 31204 authenticatee.hpp:133] Creating new client SASL connection I1028 23:48:58.777194 31210 master.cpp:3853] Authenticating scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 W1028 23:48:58.777528 31210 master.cpp:3888] Failed to authenticate scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043: Failed to communicate with authenticatee ../../src/tests/allocator_tests.cpp:2399: Failure Actual function call count doesn't match EXPECT_CALL(sched, resourceOffers(&driver, _))... Expected: to be called once Actual: never called - unsatisfied and active ../../src/tests/allocator_tests.cpp:2394: Failure Actual function call count doesn't match EXPECT_CALL(sched, registered(&driver, _, _))... 
Expected: to be called once Actual: never called - unsatisfied and active I1028 23:48:58.778053 31205 slave.cpp:591] Re-detecting master I1028 23:48:58.778084 31205 slave.cpp:638] Detecting new master I1028 23:48:58.778115 31207 status_update_manager.cpp:171] Pausing sending status updates F1028 23:48:58.778115 31205 logging.cpp:57] RAW: Pure virtual method called I1028 23:48:58.778724 31210 master.cpp:677] Master terminating W1028 23:48:58.778919 31210 master.cpp:4662] Removing task 0 with resources cpus(*):1; mem(*):500 of framework 20141028-234822-3193029443-50043-31190-0000 on slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) in non-terminal state TASK_RUNNING *** Aborted at 1414540138 (unix time) try """"date -d @1414540138"""" if you are using GNU date *** PC: @ 0x91bc86 process::PID<>::PID() *** SIGSEGV (@0x0) received by PID 31190 (TID 0x2b20a6d95700) from PID 0; stack trace: *** @ 0x2b20a41ff340 (unknown) @ 0x2b20a1f2a188 google::LogMessage::Fail() @ 0x2b20a1f2f87c google::RawLog__() @ 0x91bc86 process::PID<>::PID() @ 0x91bf24 process::Process<>::self() @ 0x2b20a15d5c06 __cxa_pure_virtual @ 0x2b20a1877752 mesos::internal::slave::Slave::detected() @ 0x2b20a1671f24 process::dispatch<>() @ 0x2b20a18b35f9 _ZZN7process8dispatchIN5mesos8internal5slave5SlaveERKNS_6FutureI6OptionINS1_10MasterInfoEEEES9_EEvRKNS_3PIDIT_EEMSD_FvT0_ET1_ENKUlPNS_11ProcessBaseEE_clESM_ @ 0x2b20a1663217 mesos::internal::master::allocator::Allocator::resourcesRecovered() @ 0x2b20a1650d01 mesos::internal::master::Master::removeTask() @ 0x2b20a162fb41 mesos::internal::master::Master::finalize() @ 0x2b20a1eb69a1 process::ProcessBase::visit() @ 0x2b20a1ec0464 process::TerminateEvent::visit() @ 0x8e0812 process::ProcessBase::serve() @ 0x2b20a18da89e _ZNSt17_Function_handlerIFvPN7process11ProcessBaseEEZNS0_8dispatchIN5mesos8internal5slave5SlaveERKNS0_6FutureI6OptionINS5_10MasterInfoEEEESD_EEvRKNS0_3PIDIT_EEMSH_FvT0_ET1_EUlS2_E_E9_M_invokeERKSt9_Any_dataS2_ @ 0x2b20a1eb1ca0 process::ProcessManager::resume() @ 0x2b20a1ea8365 process::schedule() @ 0x2b20a41f7182 start_thread @ 0x2b20a4507fbd (unknown) make[3]: *** [check-local] Segmentation fault {noformat}"""," [ RUN ] AllocatorTest/0.SlaveReregistersFirst Using temporary directory '/tmp/AllocatorTest_0_SlaveReregistersFirst_YPe61d' I1028 23:48:22.360447 31190 leveldb.cpp:176] Opened db in 2.192575ms I1028 23:48:22.361253 31190 leveldb.cpp:183] Compacted db in 760753ns I1028 23:48:22.361320 31190 leveldb.cpp:198] Created db iterator in 22188ns I1028 23:48:22.361340 31190 leveldb.cpp:204] Seeked to beginning of db in 1950ns I1028 23:48:22.361351 31190 leveldb.cpp:273] Iterated through 0 keys in the db in 345ns I1028 23:48:22.361403 31190 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I1028 23:48:22.362185 31217 recover.cpp:437] Starting replica recovery I1028 23:48:22.362764 31219 recover.cpp:463] Replica is in EMPTY status I1028 23:48:22.363955 31210 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I1028 23:48:22.364320 31217 recover.cpp:188] Received a recover response from a replica in EMPTY status I1028 23:48:22.364820 31211 recover.cpp:554] Updating replica status to STARTING I1028 23:48:22.365365 31215 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 418991ns I1028 23:48:22.365391 31215 replica.cpp:320] Persisted replica status to STARTING I1028 23:48:22.365617 31217 recover.cpp:463] Replica is in STARTING status I1028 
23:48:22.366328 31206 master.cpp:312] Master 20141028-234822-3193029443-50043-31190 (pietas.apache.org) started on 67.195.81.190:50043 I1028 23:48:22.366377 31206 master.cpp:358] Master only allowing authenticated frameworks to register I1028 23:48:22.366391 31206 master.cpp:363] Master only allowing authenticated slaves to register I1028 23:48:22.366402 31206 credentials.hpp:36] Loading credentials for authentication from '/tmp/AllocatorTest_0_SlaveReregistersFirst_YPe61d/credentials' I1028 23:48:22.366708 31206 master.cpp:392] Authorization enabled I1028 23:48:22.366886 31209 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I1028 23:48:22.367311 31208 master.cpp:120] No whitelist given. Advertising offers for all slaves I1028 23:48:22.367312 31207 recover.cpp:188] Received a recover response from a replica in STARTING status I1028 23:48:22.367686 31211 hierarchical_allocator_process.hpp:299] Initializing hierarchical allocator process with master : master@67.195.81.190:50043 I1028 23:48:22.367863 31212 recover.cpp:554] Updating replica status to VOTING I1028 23:48:22.368477 31218 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 375527ns I1028 23:48:22.368505 31218 replica.cpp:320] Persisted replica status to VOTING I1028 23:48:22.368517 31204 master.cpp:1242] The newly elected leader is master@67.195.81.190:50043 with id 20141028-234822-3193029443-50043-31190 I1028 23:48:22.368549 31204 master.cpp:1255] Elected as the leading master! I1028 23:48:22.368567 31204 master.cpp:1073] Recovering from registrar I1028 23:48:22.368621 31215 recover.cpp:568] Successfully joined the Paxos group I1028 23:48:22.368716 31219 registrar.cpp:313] Recovering registrar I1028 23:48:22.369000 31215 recover.cpp:452] Recover process terminated I1028 23:48:22.369523 31208 log.cpp:656] Attempting to start the writer I1028 23:48:22.370909 31205 replica.cpp:474] Replica received implicit promise request with proposal 1 I1028 23:48:22.371266 31205 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 325016ns I1028 23:48:22.371290 31205 replica.cpp:342] Persisted promised to 1 I1028 23:48:22.371979 31218 coordinator.cpp:230] Coordinator attemping to fill missing position I1028 23:48:22.373378 31210 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I1028 23:48:22.373746 31210 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 329018ns I1028 23:48:22.373772 31210 replica.cpp:676] Persisted action at 0 I1028 23:48:22.374897 31214 replica.cpp:508] Replica received write request for position 0 I1028 23:48:22.374951 31214 leveldb.cpp:438] Reading position from leveldb took 26002ns I1028 23:48:22.375272 31214 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 289094ns I1028 23:48:22.375298 31214 replica.cpp:676] Persisted action at 0 I1028 23:48:22.375886 31204 replica.cpp:655] Replica received learned notice for position 0 I1028 23:48:22.376258 31204 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 346650ns I1028 23:48:22.376277 31204 replica.cpp:676] Persisted action at 0 I1028 23:48:22.376298 31204 replica.cpp:661] Replica learned NOP action at position 0 I1028 23:48:22.376843 31215 log.cpp:672] Writer started with ending position 0 I1028 23:48:22.378056 31205 leveldb.cpp:438] Reading position from leveldb took 28265ns I1028 23:48:22.380323 31217 registrar.cpp:346] Successfully fetched the registry (0B) in 11.55584ms I1028 23:48:22.380466 31217 registrar.cpp:445] Applied 1 operations in 
50632ns; attempting to update the 'registry' I1028 23:48:22.382472 31217 log.cpp:680] Attempting to append 139 bytes to the log I1028 23:48:22.382715 31210 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I1028 23:48:22.383463 31210 replica.cpp:508] Replica received write request for position 1 I1028 23:48:22.383857 31210 leveldb.cpp:343] Persisting action (158 bytes) to leveldb took 363758ns I1028 23:48:22.383875 31210 replica.cpp:676] Persisted action at 1 I1028 23:48:22.384397 31218 replica.cpp:655] Replica received learned notice for position 1 I1028 23:48:22.384840 31218 leveldb.cpp:343] Persisting action (160 bytes) to leveldb took 420161ns I1028 23:48:22.384862 31218 replica.cpp:676] Persisted action at 1 I1028 23:48:22.384882 31218 replica.cpp:661] Replica learned APPEND action at position 1 I1028 23:48:22.385684 31211 registrar.cpp:490] Successfully updated the 'registry' in 5.158144ms I1028 23:48:22.385818 31211 registrar.cpp:376] Successfully recovered registrar I1028 23:48:22.385912 31214 log.cpp:699] Attempting to truncate the log to 1 I1028 23:48:22.386101 31218 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I1028 23:48:22.386124 31211 master.cpp:1100] Recovered 0 slaves from the Registry (101B) ; allowing 10mins for slaves to re-register I1028 23:48:22.387398 31209 replica.cpp:508] Replica received write request for position 2 I1028 23:48:22.387758 31209 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 334969ns I1028 23:48:22.387776 31209 replica.cpp:676] Persisted action at 2 I1028 23:48:22.388272 31204 replica.cpp:655] Replica received learned notice for position 2 I1028 23:48:22.388453 31204 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 159390ns I1028 23:48:22.388501 31204 leveldb.cpp:401] Deleting ~1 keys from leveldb took 30409ns I1028 23:48:22.388516 31204 replica.cpp:676] Persisted action at 2 I1028 23:48:22.388531 31204 replica.cpp:661] Replica learned TRUNCATE action at position 2 I1028 23:48:22.400737 31207 slave.cpp:169] Slave started on 34)@67.195.81.190:50043 I1028 23:48:22.400786 31207 credentials.hpp:84] Loading credential for authentication from '/tmp/AllocatorTest_0_SlaveReregistersFirst_QPPV21/credential' I1028 23:48:22.400996 31207 slave.cpp:276] Slave using credential for: test-principal I1028 23:48:22.401304 31207 slave.cpp:289] Slave resources: cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] I1028 23:48:22.401413 31207 slave.cpp:318] Slave hostname: pietas.apache.org I1028 23:48:22.401520 31207 slave.cpp:319] Slave checkpoint: false W1028 23:48:22.401535 31207 slave.cpp:321] Disabling checkpointing is deprecated and the --checkpoint flag will be removed in a future release. 
Please avoid using this flag I1028 23:48:22.402349 31207 state.cpp:33] Recovering state from '/tmp/AllocatorTest_0_SlaveReregistersFirst_QPPV21/meta' I1028 23:48:22.402678 31207 status_update_manager.cpp:197] Recovering status update manager I1028 23:48:22.403048 31211 slave.cpp:3456] Finished recovery I1028 23:48:22.403815 31215 slave.cpp:602] New master detected at master@67.195.81.190:50043 I1028 23:48:22.403852 31215 slave.cpp:665] Authenticating with master master@67.195.81.190:50043 I1028 23:48:22.403875 31206 status_update_manager.cpp:171] Pausing sending status updates I1028 23:48:22.403961 31215 slave.cpp:638] Detecting new master I1028 23:48:22.404016 31211 authenticatee.hpp:133] Creating new client SASL connection I1028 23:48:22.404230 31204 master.cpp:3853] Authenticating slave(34)@67.195.81.190:50043 I1028 23:48:22.404464 31205 authenticator.hpp:161] Creating new server SASL connection I1028 23:48:22.404613 31211 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1028 23:48:22.404649 31211 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1028 23:48:22.404734 31211 authenticator.hpp:267] Received SASL authentication start I1028 23:48:22.404783 31211 authenticator.hpp:389] Authentication requires more steps I1028 23:48:22.404898 31215 authenticatee.hpp:270] Received SASL authentication step I1028 23:48:22.404999 31215 authenticator.hpp:295] Received SASL authentication step I1028 23:48:22.405030 31215 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1028 23:48:22.405047 31215 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1028 23:48:22.405086 31215 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1028 23:48:22.405109 31215 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1028 23:48:22.405122 31215 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1028 23:48:22.405129 31215 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1028 23:48:22.405146 31215 authenticator.hpp:381] Authentication success I1028 23:48:22.405243 31213 authenticatee.hpp:310] Authentication success I1028 23:48:22.405253 31214 master.cpp:3893] Successfully authenticated principal 'test-principal' at slave(34)@67.195.81.190:50043 I1028 23:48:22.405505 31213 slave.cpp:722] Successfully authenticated with master master@67.195.81.190:50043 I1028 23:48:22.405619 31213 slave.cpp:1050] Will retry registration in 17.050994ms if necessary I1028 23:48:22.405819 31215 master.cpp:3032] Registering slave at slave(34)@67.195.81.190:50043 (pietas.apache.org) with id 20141028-234822-3193029443-50043-31190-S0 I1028 23:48:22.406262 31216 registrar.cpp:445] Applied 1 operations in 52647ns; attempting to update the 'registry' I1028 23:48:22.406697 31190 sched.cpp:137] Version: 0.21.0 I1028 23:48:22.407083 31211 sched.cpp:233] New master detected at master@67.195.81.190:50043 I1028 23:48:22.407114 31211 sched.cpp:283] Authenticating with master master@67.195.81.190:50043 I1028 23:48:22.407290 31214 authenticatee.hpp:133] Creating new client SASL connection I1028 23:48:22.407424 31214 
master.cpp:3853] Authenticating scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 I1028 23:48:22.407659 31207 authenticator.hpp:161] Creating new server SASL connection I1028 23:48:22.407757 31207 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1028 23:48:22.407774 31207 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1028 23:48:22.407830 31207 authenticator.hpp:267] Received SASL authentication start I1028 23:48:22.407868 31207 authenticator.hpp:389] Authentication requires more steps I1028 23:48:22.407927 31207 authenticatee.hpp:270] Received SASL authentication step I1028 23:48:22.408015 31212 authenticator.hpp:295] Received SASL authentication step I1028 23:48:22.408037 31212 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1028 23:48:22.408046 31212 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1028 23:48:22.408072 31212 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1028 23:48:22.408092 31212 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1028 23:48:22.408100 31212 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1028 23:48:22.408105 31212 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1028 23:48:22.408116 31212 authenticator.hpp:381] Authentication success I1028 23:48:22.408192 31210 authenticatee.hpp:310] Authentication success I1028 23:48:22.408210 31217 master.cpp:3893] Successfully authenticated principal 'test-principal' at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 I1028 23:48:22.408419 31210 sched.cpp:357] Successfully authenticated with master master@67.195.81.190:50043 I1028 23:48:22.408460 31210 sched.cpp:476] Sending registration request to master@67.195.81.190:50043 I1028 23:48:22.408568 31217 master.cpp:1362] Received registration request for framework 'default' at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 I1028 23:48:22.408617 31217 master.cpp:1321] Authorizing framework principal 'test-principal' to receive offers for role '*' I1028 23:48:22.408937 31214 master.cpp:1426] Registering framework 20141028-234822-3193029443-50043-31190-0000 (default) at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 I1028 23:48:22.409265 31213 sched.cpp:407] Framework registered with 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.409267 31212 hierarchical_allocator_process.hpp:329] Added framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.409312 31212 hierarchical_allocator_process.hpp:697] No resources available to allocate! 
I1028 23:48:22.409324 31215 log.cpp:680] Attempting to append 316 bytes to the log I1028 23:48:22.409333 31213 sched.cpp:421] Scheduler::registered took 38591ns I1028 23:48:22.409327 31212 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 24107ns I1028 23:48:22.409518 31205 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I1028 23:48:22.410127 31206 replica.cpp:508] Replica received write request for position 3 I1028 23:48:22.410706 31206 leveldb.cpp:343] Persisting action (335 bytes) to leveldb took 554098ns I1028 23:48:22.410725 31206 replica.cpp:676] Persisted action at 3 I1028 23:48:22.411151 31217 replica.cpp:655] Replica received learned notice for position 3 I1028 23:48:22.411499 31217 leveldb.cpp:343] Persisting action (337 bytes) to leveldb took 326572ns I1028 23:48:22.411519 31217 replica.cpp:676] Persisted action at 3 I1028 23:48:22.411533 31217 replica.cpp:661] Replica learned APPEND action at position 3 I1028 23:48:22.412292 31219 registrar.cpp:490] Successfully updated the 'registry' in 5.972992ms I1028 23:48:22.412518 31218 log.cpp:699] Attempting to truncate the log to 3 I1028 23:48:22.412621 31213 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I1028 23:48:22.412734 31219 slave.cpp:2522] Received ping from slave-observer(38)@67.195.81.190:50043 I1028 23:48:22.412787 31206 master.cpp:3086] Registered slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) with cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] I1028 23:48:22.412858 31219 slave.cpp:756] Registered with master master@67.195.81.190:50043; given slave ID 20141028-234822-3193029443-50043-31190-S0 I1028 23:48:22.412994 31210 status_update_manager.cpp:178] Resuming sending status updates I1028 23:48:22.413014 31211 hierarchical_allocator_process.hpp:442] Added slave 20141028-234822-3193029443-50043-31190-S0 (pietas.apache.org) with cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] available) I1028 23:48:22.413159 31211 hierarchical_allocator_process.hpp:734] Offering cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] on slave 20141028-234822-3193029443-50043-31190-S0 to framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.413290 31208 replica.cpp:508] Replica received write request for position 4 I1028 23:48:22.413421 31211 hierarchical_allocator_process.hpp:679] Performed allocation for slave 20141028-234822-3193029443-50043-31190-S0 in 346658ns I1028 23:48:22.413650 31208 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 336067ns I1028 23:48:22.413668 31208 replica.cpp:676] Persisted action at 4 I1028 23:48:22.413797 31216 master.cpp:3795] Sending 1 offers to framework 20141028-234822-3193029443-50043-31190-0000 (default) at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 I1028 23:48:22.414077 31212 replica.cpp:655] Replica received learned notice for position 4 I1028 23:48:22.414356 31212 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 260401ns I1028 23:48:22.414403 31212 leveldb.cpp:401] Deleting ~2 keys from leveldb took 28541ns I1028 23:48:22.414417 31212 replica.cpp:676] Persisted action at 4 I1028 23:48:22.414446 31212 replica.cpp:661] Replica learned TRUNCATE action at position 4 I1028 23:48:22.414422 31207 sched.cpp:544] Scheduler::resourceOffers took 310278ns I1028 23:48:22.415086 
31214 master.cpp:2321] Processing reply for offers: [ 20141028-234822-3193029443-50043-31190-O0 ] on slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) for framework 20141028-234822-3193029443-50043-31190-0000 (default) at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 W1028 23:48:22.415163 31214 master.cpp:1969] Executor default for task 0 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W1028 23:48:22.415186 31214 master.cpp:1980] Executor default for task 0 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. I1028 23:48:22.415256 31214 master.cpp:2417] Authorizing framework principal 'test-principal' to launch task 0 as user 'jenkins' I1028 23:48:22.416033 31219 master.hpp:877] Adding task 0 with resources cpus(*):1; mem(*):500 on slave 20141028-234822-3193029443-50043-31190-S0 (pietas.apache.org) I1028 23:48:22.416084 31219 master.cpp:2480] Launching task 0 of framework 20141028-234822-3193029443-50043-31190-0000 (default) at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 with resources cpus(*):1; mem(*):500 on slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) I1028 23:48:22.416317 31214 slave.cpp:1081] Got assigned task 0 for framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.416679 31215 hierarchical_allocator_process.hpp:563] Recovered cpus(*):1; mem(*):524; disk(*):3.70122e+06; ports(*):[31000-32000] (total allocatable: cpus(*):1; mem(*):524; disk(*):3.70122e+06; ports(*):[31000-32000]) on slave 20141028-234822-3193029443-50043-31190-S0 from framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.416721 31215 hierarchical_allocator_process.hpp:599] Framework 20141028-234822-3193029443-50043-31190-0000 filtered slave 20141028-234822-3193029443-50043-31190-S0 for 5secs I1028 23:48:22.416724 31214 slave.cpp:1191] Launching task 0 for framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.418534 31214 slave.cpp:3871] Launching executor default of framework 20141028-234822-3193029443-50043-31190-0000 in work directory '/tmp/AllocatorTest_0_SlaveReregistersFirst_QPPV21/slaves/20141028-234822-3193029443-50043-31190-S0/frameworks/20141028-234822-3193029443-50043-31190-0000/executors/default/runs/d593f433-3c16-4678-8f76-4038fe2841c4' I1028 23:48:22.420557 31214 exec.cpp:132] Version: 0.21.0 I1028 23:48:22.420755 31213 exec.cpp:182] Executor started at: executor(22)@67.195.81.190:50043 with pid 31190 I1028 23:48:22.420903 31214 slave.cpp:1317] Queuing task '0' for executor default of framework '20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.420997 31214 slave.cpp:555] Successfully attached file '/tmp/AllocatorTest_0_SlaveReregistersFirst_QPPV21/slaves/20141028-234822-3193029443-50043-31190-S0/frameworks/20141028-234822-3193029443-50043-31190-0000/executors/default/runs/d593f433-3c16-4678-8f76-4038fe2841c4' I1028 23:48:22.421058 31214 slave.cpp:1849] Got registration for executor 'default' of framework 20141028-234822-3193029443-50043-31190-0000 from executor(22)@67.195.81.190:50043 I1028 23:48:22.421295 31214 slave.cpp:1968] Flushing queued task 0 for executor 'default' of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.421391 31205 exec.cpp:206] Executor registered on slave 20141028-234822-3193029443-50043-31190-S0 
I1028 23:48:22.421495 31214 slave.cpp:2802] Monitoring executor 'default' of framework '20141028-234822-3193029443-50043-31190-0000' in container 'd593f433-3c16-4678-8f76-4038fe2841c4' I1028 23:48:22.422873 31205 exec.cpp:218] Executor::registered took 19148ns I1028 23:48:22.422991 31205 exec.cpp:293] Executor asked to run task '0' I1028 23:48:22.423085 31205 exec.cpp:302] Executor::launchTask took 76519ns I1028 23:48:22.424541 31205 exec.cpp:525] Executor sending status update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.424724 31205 slave.cpp:2202] Handling status update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 from executor(22)@67.195.81.190:50043 I1028 23:48:22.424932 31213 status_update_manager.cpp:317] Received status update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.424963 31213 status_update_manager.cpp:494] Creating StatusUpdate stream for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.425122 31213 status_update_manager.cpp:371] Forwarding update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 to the slave I1028 23:48:22.425257 31205 slave.cpp:2442] Forwarding the update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 to master@67.195.81.190:50043 I1028 23:48:22.425398 31205 slave.cpp:2369] Status update manager successfully handled status update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.425420 31205 slave.cpp:2375] Sending acknowledgement for status update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 to executor(22)@67.195.81.190:50043 I1028 23:48:22.425583 31212 master.cpp:3410] Forwarding status update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.425621 31206 exec.cpp:339] Executor received status update acknowledgement 10174aa0-0e5a-4f9d-a530-dee64e93f222 for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.425786 31212 master.cpp:3382] Status update TASK_RUNNING (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 from slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) I1028 23:48:22.425832 31212 master.cpp:4617] Updating the latest state of task 0 of framework 20141028-234822-3193029443-50043-31190-0000 to TASK_RUNNING I1028 23:48:22.425885 31208 sched.cpp:635] Scheduler::statusUpdate took 49727ns I1028 23:48:22.426082 31208 master.cpp:2882] Forwarding status update acknowledgement 10174aa0-0e5a-4f9d-a530-dee64e93f222 for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 (default) at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 to slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) I1028 23:48:22.426360 31206 status_update_manager.cpp:389] Received status update acknowledgement (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for 
task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.426623 31206 slave.cpp:1789] Status update manager successfully handled status update acknowledgement (UUID: 10174aa0-0e5a-4f9d-a530-dee64e93f222) for task 0 of framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.426893 31210 master.cpp:677] Master terminating W1028 23:48:22.427028 31210 master.cpp:4662] Removing task 0 with resources cpus(*):1; mem(*):500 of framework 20141028-234822-3193029443-50043-31190-0000 on slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) in non-terminal state TASK_RUNNING I1028 23:48:22.427397 31209 hierarchical_allocator_process.hpp:563] Recovered cpus(*):1; mem(*):500 (total allocatable: cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000]) on slave 20141028-234822-3193029443-50043-31190-S0 from framework 20141028-234822-3193029443-50043-31190-0000 I1028 23:48:22.427512 31210 master.cpp:4705] Removing executor 'default' with resources of framework 20141028-234822-3193029443-50043-31190-0000 on slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) I1028 23:48:22.428129 31206 slave.cpp:2607] master@67.195.81.190:50043 exited W1028 23:48:22.428153 31206 slave.cpp:2610] Master disconnected! Waiting for a new master to be elected I1028 23:48:22.434645 31190 leveldb.cpp:176] Opened db in 2.551453ms I1028 23:48:22.437157 31190 leveldb.cpp:183] Compacted db in 2.484612ms I1028 23:48:22.437203 31190 leveldb.cpp:198] Created db iterator in 19171ns I1028 23:48:22.437235 31190 leveldb.cpp:204] Seeked to beginning of db in 18300ns I1028 23:48:22.437306 31190 leveldb.cpp:273] Iterated through 3 keys in the db in 59465ns I1028 23:48:22.437347 31190 replica.cpp:741] Replica recovered with log positions 3 -> 4 with 0 holes and 0 unlearned I1028 23:48:22.437827 31216 recover.cpp:437] Starting replica recovery I1028 23:48:22.438127 31216 recover.cpp:463] Replica is in VOTING status I1028 23:48:22.438443 31216 recover.cpp:452] Recover process terminated I1028 23:48:22.439877 31212 master.cpp:312] Master 20141028-234822-3193029443-50043-31190 (pietas.apache.org) started on 67.195.81.190:50043 I1028 23:48:22.439916 31212 master.cpp:358] Master only allowing authenticated frameworks to register I1028 23:48:22.439931 31212 master.cpp:363] Master only allowing authenticated slaves to register I1028 23:48:22.439946 31212 credentials.hpp:36] Loading credentials for authentication from '/tmp/AllocatorTest_0_SlaveReregistersFirst_YPe61d/credentials' I1028 23:48:22.440142 31212 master.cpp:392] Authorization enabled I1028 23:48:22.440439 31218 master.cpp:120] No whitelist given. Advertising offers for all slaves I1028 23:48:22.440901 31213 hierarchical_allocator_process.hpp:299] Initializing hierarchical allocator process with master : master@67.195.81.190:50043 I1028 23:48:22.441395 31206 master.cpp:1242] The newly elected leader is master@67.195.81.190:50043 with id 20141028-234822-3193029443-50043-31190 I1028 23:48:22.441421 31206 master.cpp:1255] Elected as the leading master! 
I1028 23:48:22.441457 31206 master.cpp:1073] Recovering from registrar I1028 23:48:22.441623 31205 registrar.cpp:313] Recovering registrar I1028 23:48:22.442172 31219 log.cpp:656] Attempting to start the writer I1028 23:48:22.443235 31219 replica.cpp:474] Replica received implicit promise request with proposal 2 I1028 23:48:22.443685 31219 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 427888ns I1028 23:48:22.443703 31219 replica.cpp:342] Persisted promised to 2 I1028 23:48:22.444371 31213 coordinator.cpp:230] Coordinator attemping to fill missing position I1028 23:48:22.444687 31209 log.cpp:672] Writer started with ending position 4 I1028 23:48:22.445754 31215 leveldb.cpp:438] Reading position from leveldb took 47909ns I1028 23:48:22.445826 31215 leveldb.cpp:438] Reading position from leveldb took 30611ns I1028 23:48:22.446941 31218 registrar.cpp:346] Successfully fetched the registry (277B) in 5.213184ms I1028 23:48:22.447118 31218 registrar.cpp:445] Applied 1 operations in 42362ns; attempting to update the 'registry' I1028 23:48:22.449329 31204 log.cpp:680] Attempting to append 316 bytes to the log I1028 23:48:22.449477 31218 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 5 I1028 23:48:22.450187 31215 replica.cpp:508] Replica received write request for position 5 I1028 23:48:22.450767 31215 leveldb.cpp:343] Persisting action (335 bytes) to leveldb took 554400ns I1028 23:48:22.450788 31215 replica.cpp:676] Persisted action at 5 I1028 23:48:22.451561 31215 replica.cpp:655] Replica received learned notice for position 5 I1028 23:48:22.451979 31215 leveldb.cpp:343] Persisting action (337 bytes) to leveldb took 397219ns I1028 23:48:22.452000 31215 replica.cpp:676] Persisted action at 5 I1028 23:48:22.452020 31215 replica.cpp:661] Replica learned APPEND action at position 5 I1028 23:48:22.452993 31213 registrar.cpp:490] Successfully updated the 'registry' in 5.816832ms I1028 23:48:22.453136 31213 registrar.cpp:376] Successfully recovered registrar I1028 23:48:22.453238 31208 log.cpp:699] Attempting to truncate the log to 5 I1028 23:48:22.453384 31214 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 6 I1028 23:48:22.453518 31215 master.cpp:1100] Recovered 1 slaves from the Registry (277B) ; allowing 10mins for slaves to re-register I1028 23:48:22.454116 31207 replica.cpp:508] Replica received write request for position 6 I1028 23:48:22.454570 31207 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 427424ns I1028 23:48:22.454589 31207 replica.cpp:676] Persisted action at 6 I1028 23:48:22.455095 31219 replica.cpp:655] Replica received learned notice for position 6 I1028 23:48:22.455399 31219 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 282466ns I1028 23:48:22.455462 31219 leveldb.cpp:401] Deleting ~2 keys from leveldb took 43939ns I1028 23:48:22.455478 31219 replica.cpp:676] Persisted action at 6 I1028 23:48:22.455494 31219 replica.cpp:661] Replica learned TRUNCATE action at position 6 I1028 23:48:22.465553 31213 status_update_manager.cpp:171] Pausing sending status updates I1028 23:48:22.465566 31216 slave.cpp:602] New master detected at master@67.195.81.190:50043 I1028 23:48:22.465612 31216 slave.cpp:665] Authenticating with master master@67.195.81.190:50043 I1028 23:48:23.441506 31206 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1028 23:48:27.441004 31214 master.cpp:120] No whitelist given. 
Advertising offers for all slaves I1028 23:48:30.101379 31206 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 6.659877806secs I1028 23:48:30.101568 31216 slave.cpp:638] Detecting new master I1028 23:48:30.101632 31214 authenticatee.hpp:133] Creating new client SASL connection I1028 23:48:30.102021 31218 master.cpp:3853] Authenticating slave(34)@67.195.81.190:50043 I1028 23:48:30.102329 31212 authenticator.hpp:161] Creating new server SASL connection I1028 23:48:30.102505 31216 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1028 23:48:30.102545 31216 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1028 23:48:30.102638 31216 authenticator.hpp:267] Received SASL authentication start I1028 23:48:30.102709 31216 authenticator.hpp:389] Authentication requires more steps I1028 23:48:30.102812 31216 authenticatee.hpp:270] Received SASL authentication step I1028 23:48:30.102957 31204 authenticator.hpp:295] Received SASL authentication step I1028 23:48:30.102982 31204 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1028 23:48:30.102993 31204 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1028 23:48:30.103032 31204 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1028 23:48:30.103049 31204 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1028 23:48:30.103056 31204 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1028 23:48:30.103061 31204 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1028 23:48:30.103073 31204 authenticator.hpp:381] Authentication success I1028 23:48:30.103149 31209 authenticatee.hpp:310] Authentication success I1028 23:48:30.103153 31204 master.cpp:3893] Successfully authenticated principal 'test-principal' at slave(34)@67.195.81.190:50043 I1028 23:48:30.103371 31209 slave.cpp:722] Successfully authenticated with master master@67.195.81.190:50043 I1028 23:48:30.103773 31209 slave.cpp:1050] Will retry registration in 12.861518ms if necessary I1028 23:48:30.104068 31219 master.cpp:3210] Re-registering slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) I1028 23:48:30.104760 31216 registrar.cpp:445] Applied 1 operations in 71655ns; attempting to update the 'registry' I1028 23:48:30.107877 31205 log.cpp:680] Attempting to append 316 bytes to the log I1028 23:48:30.108070 31219 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 7 I1028 23:48:30.109110 31211 replica.cpp:508] Replica received write request for position 7 I1028 23:48:30.109434 31211 leveldb.cpp:343] Persisting action (335 bytes) to leveldb took 281545ns I1028 23:48:30.109484 31211 replica.cpp:676] Persisted action at 7 I1028 23:48:30.110124 31219 replica.cpp:655] Replica received learned notice for position 7 I1028 23:48:30.110903 31219 leveldb.cpp:343] Persisting action (337 bytes) to leveldb took 750414ns I1028 23:48:30.110927 31219 replica.cpp:676] Persisted action at 7 I1028 23:48:30.110950 31219 replica.cpp:661] Replica learned APPEND action at position 7 
I1028 23:48:30.112160 31205 registrar.cpp:490] Successfully updated the 'registry' in 7.33824ms I1028 23:48:30.112529 31217 log.cpp:699] Attempting to truncate the log to 7 I1028 23:48:30.112714 31207 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 8 I1028 23:48:30.112870 31210 master.hpp:877] Adding task 0 with resources cpus(*):1; mem(*):500 on slave 20141028-234822-3193029443-50043-31190-S0 (pietas.apache.org) W1028 23:48:30.113136 31210 master.cpp:4394] Possibly orphaned task 0 of framework 20141028-234822-3193029443-50043-31190-0000 running on slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) I1028 23:48:30.113198 31219 slave.cpp:2522] Received ping from slave-observer(39)@67.195.81.190:50043 I1028 23:48:30.113340 31210 master.cpp:3278] Re-registered slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) with cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] I1028 23:48:30.113499 31219 slave.cpp:824] Re-registered with master master@67.195.81.190:50043 I1028 23:48:30.113636 31219 replica.cpp:508] Replica received write request for position 8 I1028 23:48:30.113652 31210 status_update_manager.cpp:178] Resuming sending status updates I1028 23:48:30.113759 31212 hierarchical_allocator_process.hpp:442] Added slave 20141028-234822-3193029443-50043-31190-S0 (pietas.apache.org) with cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] (and cpus(*):1; mem(*):524; disk(*):3.70122e+06; ports(*):[31000-32000] available) I1028 23:48:30.113904 31212 hierarchical_allocator_process.hpp:679] Performed allocation for slave 20141028-234822-3193029443-50043-31190-S0 in 74698ns I1028 23:48:30.114116 31219 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 452165ns I1028 23:48:30.114142 31219 replica.cpp:676] Persisted action at 8 I1028 23:48:30.114786 31213 replica.cpp:655] Replica received learned notice for position 8 I1028 23:48:30.115337 31213 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 525187ns I1028 23:48:30.115399 31213 leveldb.cpp:401] Deleting ~2 keys from leveldb took 37689ns I1028 23:48:30.115418 31213 replica.cpp:676] Persisted action at 8 I1028 23:48:30.115484 31213 replica.cpp:661] Replica learned TRUNCATE action at position 8 I1028 23:48:30.116603 31212 sched.cpp:227] Scheduler::disconnected took 16969ns I1028 23:48:30.116624 31212 sched.cpp:233] New master detected at master@67.195.81.190:50043 I1028 23:48:30.116657 31212 sched.cpp:283] Authenticating with master master@67.195.81.190:50043 I1028 23:48:30.116870 31205 authenticatee.hpp:133] Creating new client SASL connection I1028 23:48:30.117084 31207 master.cpp:3853] Authenticating scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 I1028 23:48:30.117279 31212 authenticator.hpp:161] Creating new server SASL connection I1028 23:48:30.117410 31210 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1028 23:48:30.117507 31210 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1028 23:48:30.117604 31214 authenticator.hpp:267] Received SASL authentication start I1028 23:48:30.117652 31214 authenticator.hpp:389] Authentication requires more steps I1028 23:48:30.117738 31210 authenticatee.hpp:270] Received SASL authentication step I1028 23:48:30.117905 31208 authenticator.hpp:295] Received SASL authentication step I1028 23:48:30.117935 31208 auxprop.cpp:81] Request to lookup properties for 
user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1028 23:48:30.117947 31208 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1028 23:48:30.117979 31208 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1028 23:48:30.118001 31208 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pietas.apache.org' server FQDN: 'pietas.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I../../src/tests/allocator_tests.cpp:2405: Failure 1028 23:48:30.118013 31208 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true Failed to wait 10secs for resourceOffers2 I1028 23:48:31.101976 31212 hierarchical_allocator_process.hpp:659] Performed allocation for 1 slaves in 124354ns I1028 23:48:58.775811 31208 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true W1028 23:48:35.117725 31214 sched.cpp:378] Authentication timed out W1028 23:48:35.117784 31219 master.cpp:3911] Authentication timed out I1028 23:48:45.114322 31213 slave.cpp:2522] Received ping from slave-observer(39)@67.195.81.190:50043 I1028 23:48:35.102212 31206 master.cpp:120] No whitelist given. Advertising offers for all slaves I1028 23:48:58.775874 31208 authenticator.hpp:381] Authentication success I1028 23:48:58.776267 31214 sched.cpp:338] Failed to authenticate with master master@67.195.81.190:50043: Authentication discarded ../../src/tests/allocator_tests.cpp:2396: Failure Actual function call count doesn't match EXPECT_CALL(allocator2, frameworkAdded(_, _, _))... Expected: to be called once Actual: never called - unsatisfied and active I1028 23:48:58.776526 31204 master.cpp:3893] Successfully authenticated principal 'test-principal' at scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 I1028 23:48:58.776626 31214 sched.cpp:283] Authenticating with master master@67.195.81.190:50043 I1028 23:48:58.776928 31204 authenticatee.hpp:133] Creating new client SASL connection I1028 23:48:58.777194 31210 master.cpp:3853] Authenticating scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043 W1028 23:48:58.777528 31210 master.cpp:3888] Failed to authenticate scheduler-0aa33fc7-0d29-487c-80eb-f933681f9c95@67.195.81.190:50043: Failed to communicate with authenticatee ../../src/tests/allocator_tests.cpp:2399: Failure Actual function call count doesn't match EXPECT_CALL(sched, resourceOffers(&driver, _))... Expected: to be called once Actual: never called - unsatisfied and active ../../src/tests/allocator_tests.cpp:2394: Failure Actual function call count doesn't match EXPECT_CALL(sched, registered(&driver, _, _))... 
Expected: to be called once Actual: never called - unsatisfied and active I1028 23:48:58.778053 31205 slave.cpp:591] Re-detecting master I1028 23:48:58.778084 31205 slave.cpp:638] Detecting new master I1028 23:48:58.778115 31207 status_update_manager.cpp:171] Pausing sending status updates F1028 23:48:58.778115 31205 logging.cpp:57] RAW: Pure virtual method called I1028 23:48:58.778724 31210 master.cpp:677] Master terminating W1028 23:48:58.778919 31210 master.cpp:4662] Removing task 0 with resources cpus(*):1; mem(*):500 of framework 20141028-234822-3193029443-50043-31190-0000 on slave 20141028-234822-3193029443-50043-31190-S0 at slave(34)@67.195.81.190:50043 (pietas.apache.org) in non-terminal state TASK_RUNNING *** Aborted at 1414540138 (unix time) try """"date -d @1414540138"""" if you are using GNU date *** PC: @ 0x91bc86 process::PID<>::PID() *** SIGSEGV (@0x0) received by PID 31190 (TID 0x2b20a6d95700) from PID 0; stack trace: *** @ 0x2b20a41ff340 (unknown) @ 0x2b20a1f2a188 google::LogMessage::Fail() @ 0x2b20a1f2f87c google::RawLog__() @ 0x91bc86 process::PID<>::PID() @ 0x91bf24 process::Process<>::self() @ 0x2b20a15d5c06 __cxa_pure_virtual @ 0x2b20a1877752 mesos::internal::slave::Slave::detected() @ 0x2b20a1671f24 process::dispatch<>() @ 0x2b20a18b35f9 _ZZN7process8dispatchIN5mesos8internal5slave5SlaveERKNS_6FutureI6OptionINS1_10MasterInfoEEEES9_EEvRKNS_3PIDIT_EEMSD_FvT0_ET1_ENKUlPNS_11ProcessBaseEE_clESM_ @ 0x2b20a1663217 mesos::internal::master::allocator::Allocator::resourcesRecovered() @ 0x2b20a1650d01 mesos::internal::master::Master::removeTask() @ 0x2b20a162fb41 mesos::internal::master::Master::finalize() @ 0x2b20a1eb69a1 process::ProcessBase::visit() @ 0x2b20a1ec0464 process::TerminateEvent::visit() @ 0x8e0812 process::ProcessBase::serve() @ 0x2b20a18da89e _ZNSt17_Function_handlerIFvPN7process11ProcessBaseEEZNS0_8dispatchIN5mesos8internal5slave5SlaveERKNS0_6FutureI6OptionINS5_10MasterInfoEEEESD_EEvRKNS0_3PIDIT_EEMSH_FvT0_ET1_EUlS2_E_E9_M_invokeERKSt9_Any_dataS2_ @ 0x2b20a1eb1ca0 process::ProcessManager::resume() @ 0x2b20a1ea8365 process::schedule() @ 0x2b20a41f7182 start_thread @ 0x2b20a4507fbd (unknown) make[3]: *** [check-local] Segmentation fault ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2008","10/29/2014 18:17:52",2,"MasterAuthorizationTest.DuplicateReregistration is flaky ""{noformat:title=} [ RUN ] MasterAuthorizationTest.DuplicateReregistration Using temporary directory '/tmp/MasterAuthorizationTest_DuplicateReregistration_DLOmYX' I1029 08:25:26.021766 32232 leveldb.cpp:176] Opened db in 3.066621ms I1029 08:25:26.022734 32232 leveldb.cpp:183] Compacted db in 935019ns I1029 08:25:26.022766 32232 leveldb.cpp:198] Created db iterator in 4350ns I1029 08:25:26.022785 32232 leveldb.cpp:204] Seeked to beginning of db in 902ns I1029 08:25:26.022799 32232 leveldb.cpp:273] Iterated through 0 keys in the db in 387ns I1029 08:25:26.022831 32232 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I1029 08:25:26.023305 32248 recover.cpp:437] Starting replica recovery I1029 08:25:26.023598 32248 recover.cpp:463] Replica is in EMPTY status I1029 08:25:26.025059 32260 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I1029 08:25:26.025320 32247 recover.cpp:188] Received a recover response from a replica in EMPTY status I1029 08:25:26.025585 32256 recover.cpp:554] Updating replica status to STARTING I1029 08:25:26.026546 32249 master.cpp:312] Master 
20141029-082526-3142697795-40696-32232 (pomona.apache.org) started on 67.195.81.187:40696 I1029 08:25:26.026561 32261 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 694444ns I1029 08:25:26.026592 32249 master.cpp:358] Master only allowing authenticated frameworks to register I1029 08:25:26.026592 32261 replica.cpp:320] Persisted replica status to STARTING I1029 08:25:26.026605 32249 master.cpp:363] Master only allowing authenticated slaves to register I1029 08:25:26.026639 32249 credentials.hpp:36] Loading credentials for authentication from '/tmp/MasterAuthorizationTest_DuplicateReregistration_DLOmYX/credentials' I1029 08:25:26.026877 32249 master.cpp:392] Authorization enabled I1029 08:25:26.026901 32260 recover.cpp:463] Replica is in STARTING status I1029 08:25:26.027498 32261 master.cpp:120] No whitelist given. Advertising offers for all slaves I1029 08:25:26.027541 32248 hierarchical_allocator_process.hpp:299] Initializing hierarchical allocator process with master : master@67.195.81.187:40696 I1029 08:25:26.028055 32252 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I1029 08:25:26.028451 32247 recover.cpp:188] Received a recover response from a replica in STARTING status I1029 08:25:26.028733 32249 master.cpp:1242] The newly elected leader is master@67.195.81.187:40696 with id 20141029-082526-3142697795-40696-32232 I1029 08:25:26.028764 32249 master.cpp:1255] Elected as the leading master! I1029 08:25:26.028781 32249 master.cpp:1073] Recovering from registrar I1029 08:25:26.028904 32246 recover.cpp:554] Updating replica status to VOTING I1029 08:25:26.029163 32257 registrar.cpp:313] Recovering registrar I1029 08:25:26.029556 32251 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 485711ns I1029 08:25:26.029588 32251 replica.cpp:320] Persisted replica status to VOTING I1029 08:25:26.029726 32253 recover.cpp:568] Successfully joined the Paxos group I1029 08:25:26.029932 32253 recover.cpp:452] Recover process terminated I1029 08:25:26.030436 32250 log.cpp:656] Attempting to start the writer I1029 08:25:26.032152 32248 replica.cpp:474] Replica received implicit promise request with proposal 1 I1029 08:25:26.032778 32248 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 597030ns I1029 08:25:26.032807 32248 replica.cpp:342] Persisted promised to 1 I1029 08:25:26.033481 32254 coordinator.cpp:230] Coordinator attemping to fill missing position I1029 08:25:26.035429 32247 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I1029 08:25:26.036154 32247 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 690208ns I1029 08:25:26.036181 32247 replica.cpp:676] Persisted action at 0 I1029 08:25:26.037344 32249 replica.cpp:508] Replica received write request for position 0 I1029 08:25:26.037395 32249 leveldb.cpp:438] Reading position from leveldb took 22607ns I1029 08:25:26.038074 32249 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 647429ns I1029 08:25:26.038105 32249 replica.cpp:676] Persisted action at 0 I1029 08:25:26.038683 32247 replica.cpp:655] Replica received learned notice for position 0 I1029 08:25:26.039378 32247 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 664911ns I1029 08:25:26.039407 32247 replica.cpp:676] Persisted action at 0 I1029 08:25:26.039433 32247 replica.cpp:661] Replica learned NOP action at position 0 I1029 08:25:26.040045 32252 log.cpp:672] Writer started with ending position 0 I1029 08:25:26.041378 32251 
leveldb.cpp:438] Reading position from leveldb took 25625ns I1029 08:25:26.044642 32246 registrar.cpp:346] Successfully fetched the registry (0B) in 15.433984ms I1029 08:25:26.044742 32246 registrar.cpp:445] Applied 1 operations in 16444ns; attempting to update the 'registry' I1029 08:25:26.047538 32256 log.cpp:680] Attempting to append 139 bytes to the log I1029 08:25:26.156330 32247 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I1029 08:25:26.158460 32261 replica.cpp:508] Replica received write request for position 1 I1029 08:25:26.159277 32261 leveldb.cpp:343] Persisting action (158 bytes) to leveldb took 782308ns I1029 08:25:26.159328 32261 replica.cpp:676] Persisted action at 1 I1029 08:25:26.160267 32255 replica.cpp:655] Replica received learned notice for position 1 I1029 08:25:26.161070 32255 leveldb.cpp:343] Persisting action (160 bytes) to leveldb took 750259ns I1029 08:25:26.161100 32255 replica.cpp:676] Persisted action at 1 I1029 08:25:26.161125 32255 replica.cpp:661] Replica learned APPEND action at position 1 I1029 08:25:26.162199 32253 registrar.cpp:490] Successfully updated the 'registry' in 117.40416ms I1029 08:25:26.162400 32253 registrar.cpp:376] Successfully recovered registrar I1029 08:25:26.162724 32249 master.cpp:1100] Recovered 0 slaves from the Registry (101B) ; allowing 10mins for slaves to re-register I1029 08:25:26.162757 32253 log.cpp:699] Attempting to truncate the log to 1 I1029 08:25:26.162919 32256 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I1029 08:25:26.163949 32250 replica.cpp:508] Replica received write request for position 2 I1029 08:25:26.164589 32250 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 603175ns I1029 08:25:26.164618 32250 replica.cpp:676] Persisted action at 2 I1029 08:25:26.165385 32251 replica.cpp:655] Replica received learned notice for position 2 I1029 08:25:26.166007 32251 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 594003ns I1029 08:25:26.166056 32251 leveldb.cpp:401] Deleting ~1 keys from leveldb took 23309ns I1029 08:25:26.166077 32251 replica.cpp:676] Persisted action at 2 I1029 08:25:26.166100 32251 replica.cpp:661] Replica learned TRUNCATE action at position 2 I1029 08:25:26.178493 32232 sched.cpp:137] Version: 0.21.0 I1029 08:25:26.179029 32256 sched.cpp:233] New master detected at master@67.195.81.187:40696 I1029 08:25:26.179078 32256 sched.cpp:283] Authenticating with master master@67.195.81.187:40696 I1029 08:25:26.179424 32246 authenticatee.hpp:133] Creating new client SASL connection I1029 08:25:26.179678 32259 master.cpp:3853] Authenticating scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:26.179970 32250 authenticator.hpp:161] Creating new server SASL connection I1029 08:25:26.180165 32250 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1029 08:25:26.180191 32250 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1029 08:25:26.180272 32250 authenticator.hpp:267] Received SASL authentication start I1029 08:25:26.180378 32250 authenticator.hpp:389] Authentication requires more steps I1029 08:25:26.180557 32260 authenticatee.hpp:270] Received SASL authentication step I1029 08:25:26.180704 32254 authenticator.hpp:295] Received SASL authentication step I1029 08:25:26.180737 32254 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' 
SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1029 08:25:26.180748 32254 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1029 08:25:26.180780 32254 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1029 08:25:26.180804 32254 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1029 08:25:26.180816 32254 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1029 08:25:26.180824 32254 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1029 08:25:26.180841 32254 authenticator.hpp:381] Authentication success I1029 08:25:26.180937 32259 authenticatee.hpp:310] Authentication success I1029 08:25:26.180991 32260 master.cpp:3893] Successfully authenticated principal 'test-principal' at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:26.181422 32259 sched.cpp:357] Successfully authenticated with master master@67.195.81.187:40696 I1029 08:25:26.181449 32259 sched.cpp:476] Sending registration request to master@67.195.81.187:40696 I1029 08:25:26.181697 32260 master.cpp:1362] Received registration request for framework 'default' at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:26.181758 32260 master.cpp:1321] Authorizing framework principal 'test-principal' to receive offers for role '*' I1029 08:25:26.182063 32260 master.cpp:1426] Registering framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:26.182430 32248 hierarchical_allocator_process.hpp:329] Added framework 20141029-082526-3142697795-40696-32232-0000 I1029 08:25:26.182462 32248 hierarchical_allocator_process.hpp:697] No resources available to allocate! 
I1029 08:25:26.182462 32261 sched.cpp:407] Framework registered with 20141029-082526-3142697795-40696-32232-0000 I1029 08:25:26.182473 32248 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 15372ns I1029 08:25:26.182554 32261 sched.cpp:421] Scheduler::registered took 60059ns I1029 08:25:26.185515 32260 sched.cpp:227] Scheduler::disconnected took 16607ns I1029 08:25:26.185538 32260 sched.cpp:233] New master detected at master@67.195.81.187:40696 I1029 08:25:26.185567 32260 sched.cpp:283] Authenticating with master master@67.195.81.187:40696 I1029 08:25:26.185783 32246 authenticatee.hpp:133] Creating new client SASL connection I1029 08:25:26.186218 32250 master.cpp:3853] Authenticating scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:26.186456 32247 authenticator.hpp:161] Creating new server SASL connection I1029 08:25:26.186594 32250 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1029 08:25:26.186621 32250 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1029 08:25:26.186745 32259 authenticator.hpp:267] Received SASL authentication start I1029 08:25:26.186800 32259 authenticator.hpp:389] Authentication requires more steps I1029 08:25:26.186936 32260 authenticatee.hpp:270] Received SASL authentication step I1029 08:25:26.187062 32249 authenticator.hpp:295] Received SASL authentication step I1029 08:25:26.187095 32249 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1029 08:25:26.187108 32249 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1029 08:25:26.187137 32249 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1029 08:25:26.187162 32249 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1029 08:25:26.187175 32249 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1029 08:25:26.187182 32249 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1029 08:25:26.187199 32249 authenticator.hpp:381] Authentication success I1029 08:25:26.187327 32249 authenticatee.hpp:310] Authentication success I1029 08:25:26.187366 32260 master.cpp:3893] Successfully authenticated principal 'test-principal' at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:26.187631 32249 sched.cpp:357] Successfully authenticated with master master@67.195.81.187:40696 I1029 08:25:26.187659 32249 sched.cpp:476] Sending registration request to master@67.195.81.187:40696 I1029 08:25:27.028445 32251 hierarchical_allocator_process.hpp:697] No resources available to allocate! 
I1029 08:25:28.045682 32251 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 1.017231941secs I1029 08:25:28.045760 32249 sched.cpp:476] Sending registration request to master@67.195.81.187:40696 I1029 08:25:28.045900 32253 master.cpp:1499] Received re-registration request from framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:28.045989 32253 master.cpp:1321] Authorizing framework principal 'test-principal' to receive offers for role '*' I1029 08:25:28.046455 32253 master.cpp:1499] Received re-registration request from framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:28.046529 32253 master.cpp:1321] Authorizing framework principal 'test-principal' to receive offers for role '*' I1029 08:25:28.050155 32247 sched.cpp:233] New master detected at master@67.195.81.187:40696 I1029 08:25:28.050217 32247 sched.cpp:283] Authenticating with master master@67.195.81.187:40696 I1029 08:25:28.050405 32252 master.cpp:1552] Re-registering framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:28.050509 32253 authenticatee.hpp:133] Creating new client SASL connection I1029 08:25:28.050566 32252 master.cpp:1592] Allowing framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 to re-register with an already used id I1029 08:25:28.051084 32257 sched.cpp:449] Framework re-registered with 20141029-082526-3142697795-40696-32232-0000 I1029 08:25:28.051151 32252 master.cpp:3853] Authenticating scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:28.051167 32257 sched.cpp:463] Scheduler::reregistered took 52801ns I1029 08:25:28.051723 32261 authenticator.hpp:161] Creating new server SASL connection I1029 08:25:28.052042 32249 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1029 08:25:28.052077 32249 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1029 08:25:28.052170 32249 master.cpp:1534] Dropping re-registration request of framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 because new authentication attempt is in progress I1029 08:25:28.052218 32257 authenticator.hpp:267] Received SASL authentication start I1029 08:25:28.052325 32257 authenticator.hpp:389] Authentication requires more steps I1029 08:25:28.052428 32257 authenticatee.hpp:270] Received SASL authentication step I1029 08:25:28.052641 32246 authenticator.hpp:295] Received SASL authentication step I1029 08:25:28.052685 32246 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1029 08:25:28.052701 32246 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1029 08:25:28.052739 32246 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1029 08:25:28.052767 32246 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1029 08:25:28.052779 
32246 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1029 08:25:28.052788 32246 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1029 08:25:28.052804 32246 authenticator.hpp:381] Authentication success I1029 08:25:28.052947 32252 authenticatee.hpp:310] Authentication success I1029 08:25:28.053020 32246 master.cpp:3893] Successfully authenticated principal 'test-principal' at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:28.053462 32247 sched.cpp:357] Successfully authenticated with master master@67.195.81.187:40696 I1029 08:25:29.046855 32261 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:29.046880 32261 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 35632ns I1029 08:25:30.047458 32253 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:30.047487 32253 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 43031ns I1029 08:25:31.028373 32261 master.cpp:120] No whitelist given. Advertising offers for all slaves I1029 08:25:31.048673 32249 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:31.048702 32249 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 44769ns I1029 08:25:32.049576 32259 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:32.049604 32259 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 51919ns I1029 08:25:33.050864 32249 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:33.050896 32249 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 38019ns I1029 08:25:34.051961 32251 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:34.051993 32251 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 64619ns I1029 08:25:35.052196 32249 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:35.052223 32249 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 34475ns I1029 08:25:36.029101 32259 master.cpp:120] No whitelist given. Advertising offers for all slaves I1029 08:25:36.053067 32249 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:36.053095 32249 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 38354ns I1029 08:25:37.053506 32259 hierarchical_allocator_process.hpp:697] No resources available to allocate! 
I1029 08:25:37.053536 32259 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 38249ns tests/master_authorization_tests.cpp:877: Failure Failed to wait 10secs for frameworkReregisteredMessage I1029 08:25:38.053241 32259 master.cpp:768] Framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 disconnected I1029 08:25:38.053375 32259 master.cpp:1731] Disconnecting framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:38.053426 32259 master.cpp:1747] Deactivating framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:38.053932 32259 master.cpp:790] Giving framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 0ns to failover I1029 08:25:38.054072 32257 hierarchical_allocator_process.hpp:405] Deactivated framework 20141029-082526-3142697795-40696-32232-0000 I1029 08:25:38.054208 32257 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:38.054236 32257 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 38534ns I1029 08:25:38.054508 32258 master.cpp:3665] Framework failover timeout, removing framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:38.054549 32258 master.cpp:4201] Removing framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:38.055179 32252 master.cpp:677] Master terminating I1029 08:25:38.055181 32254 hierarchical_allocator_process.hpp:360] Removed framework 20141029-082526-3142697795-40696-32232-0000 ../3rdparty/libprocess/include/process/gmock.hpp:345: Failure Actual function call count doesn't match EXPECT_CALL(filter->mock, filter(testing::A()))... 
Expected args: message matcher (8-byte object , 1-byte object <95>, 1-byte object <30>) Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] MasterAuthorizationTest.DuplicateReregistration (12042 ms) {noformat}"""," [ RUN ] MasterAuthorizationTest.DuplicateReregistration Using temporary directory '/tmp/MasterAuthorizationTest_DuplicateReregistration_DLOmYX' I1029 08:25:26.021766 32232 leveldb.cpp:176] Opened db in 3.066621ms I1029 08:25:26.022734 32232 leveldb.cpp:183] Compacted db in 935019ns I1029 08:25:26.022766 32232 leveldb.cpp:198] Created db iterator in 4350ns I1029 08:25:26.022785 32232 leveldb.cpp:204] Seeked to beginning of db in 902ns I1029 08:25:26.022799 32232 leveldb.cpp:273] Iterated through 0 keys in the db in 387ns I1029 08:25:26.022831 32232 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I1029 08:25:26.023305 32248 recover.cpp:437] Starting replica recovery I1029 08:25:26.023598 32248 recover.cpp:463] Replica is in EMPTY status I1029 08:25:26.025059 32260 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I1029 08:25:26.025320 32247 recover.cpp:188] Received a recover response from a replica in EMPTY status I1029 08:25:26.025585 32256 recover.cpp:554] Updating replica status to STARTING I1029 08:25:26.026546 32249 master.cpp:312] Master 20141029-082526-3142697795-40696-32232 (pomona.apache.org) started on 67.195.81.187:40696 I1029 08:25:26.026561 32261 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 694444ns I1029 08:25:26.026592 32249 master.cpp:358] Master only allowing authenticated frameworks to register I1029 08:25:26.026592 32261 replica.cpp:320] Persisted replica status to STARTING I1029 08:25:26.026605 32249 master.cpp:363] Master only allowing authenticated slaves to register I1029 08:25:26.026639 32249 credentials.hpp:36] Loading credentials for authentication from '/tmp/MasterAuthorizationTest_DuplicateReregistration_DLOmYX/credentials' I1029 08:25:26.026877 32249 master.cpp:392] Authorization enabled I1029 08:25:26.026901 32260 recover.cpp:463] Replica is in STARTING status I1029 08:25:26.027498 32261 master.cpp:120] No whitelist given. Advertising offers for all slaves I1029 08:25:26.027541 32248 hierarchical_allocator_process.hpp:299] Initializing hierarchical allocator process with master : master@67.195.81.187:40696 I1029 08:25:26.028055 32252 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I1029 08:25:26.028451 32247 recover.cpp:188] Received a recover response from a replica in STARTING status I1029 08:25:26.028733 32249 master.cpp:1242] The newly elected leader is master@67.195.81.187:40696 with id 20141029-082526-3142697795-40696-32232 I1029 08:25:26.028764 32249 master.cpp:1255] Elected as the leading master! 
I1029 08:25:26.028781 32249 master.cpp:1073] Recovering from registrar I1029 08:25:26.028904 32246 recover.cpp:554] Updating replica status to VOTING I1029 08:25:26.029163 32257 registrar.cpp:313] Recovering registrar I1029 08:25:26.029556 32251 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 485711ns I1029 08:25:26.029588 32251 replica.cpp:320] Persisted replica status to VOTING I1029 08:25:26.029726 32253 recover.cpp:568] Successfully joined the Paxos group I1029 08:25:26.029932 32253 recover.cpp:452] Recover process terminated I1029 08:25:26.030436 32250 log.cpp:656] Attempting to start the writer I1029 08:25:26.032152 32248 replica.cpp:474] Replica received implicit promise request with proposal 1 I1029 08:25:26.032778 32248 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 597030ns I1029 08:25:26.032807 32248 replica.cpp:342] Persisted promised to 1 I1029 08:25:26.033481 32254 coordinator.cpp:230] Coordinator attemping to fill missing position I1029 08:25:26.035429 32247 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I1029 08:25:26.036154 32247 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 690208ns I1029 08:25:26.036181 32247 replica.cpp:676] Persisted action at 0 I1029 08:25:26.037344 32249 replica.cpp:508] Replica received write request for position 0 I1029 08:25:26.037395 32249 leveldb.cpp:438] Reading position from leveldb took 22607ns I1029 08:25:26.038074 32249 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 647429ns I1029 08:25:26.038105 32249 replica.cpp:676] Persisted action at 0 I1029 08:25:26.038683 32247 replica.cpp:655] Replica received learned notice for position 0 I1029 08:25:26.039378 32247 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 664911ns I1029 08:25:26.039407 32247 replica.cpp:676] Persisted action at 0 I1029 08:25:26.039433 32247 replica.cpp:661] Replica learned NOP action at position 0 I1029 08:25:26.040045 32252 log.cpp:672] Writer started with ending position 0 I1029 08:25:26.041378 32251 leveldb.cpp:438] Reading position from leveldb took 25625ns I1029 08:25:26.044642 32246 registrar.cpp:346] Successfully fetched the registry (0B) in 15.433984ms I1029 08:25:26.044742 32246 registrar.cpp:445] Applied 1 operations in 16444ns; attempting to update the 'registry' I1029 08:25:26.047538 32256 log.cpp:680] Attempting to append 139 bytes to the log I1029 08:25:26.156330 32247 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I1029 08:25:26.158460 32261 replica.cpp:508] Replica received write request for position 1 I1029 08:25:26.159277 32261 leveldb.cpp:343] Persisting action (158 bytes) to leveldb took 782308ns I1029 08:25:26.159328 32261 replica.cpp:676] Persisted action at 1 I1029 08:25:26.160267 32255 replica.cpp:655] Replica received learned notice for position 1 I1029 08:25:26.161070 32255 leveldb.cpp:343] Persisting action (160 bytes) to leveldb took 750259ns I1029 08:25:26.161100 32255 replica.cpp:676] Persisted action at 1 I1029 08:25:26.161125 32255 replica.cpp:661] Replica learned APPEND action at position 1 I1029 08:25:26.162199 32253 registrar.cpp:490] Successfully updated the 'registry' in 117.40416ms I1029 08:25:26.162400 32253 registrar.cpp:376] Successfully recovered registrar I1029 08:25:26.162724 32249 master.cpp:1100] Recovered 0 slaves from the Registry (101B) ; allowing 10mins for slaves to re-register I1029 08:25:26.162757 32253 log.cpp:699] Attempting to truncate the log to 1 I1029 08:25:26.162919 
32256 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I1029 08:25:26.163949 32250 replica.cpp:508] Replica received write request for position 2 I1029 08:25:26.164589 32250 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 603175ns I1029 08:25:26.164618 32250 replica.cpp:676] Persisted action at 2 I1029 08:25:26.165385 32251 replica.cpp:655] Replica received learned notice for position 2 I1029 08:25:26.166007 32251 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 594003ns I1029 08:25:26.166056 32251 leveldb.cpp:401] Deleting ~1 keys from leveldb took 23309ns I1029 08:25:26.166077 32251 replica.cpp:676] Persisted action at 2 I1029 08:25:26.166100 32251 replica.cpp:661] Replica learned TRUNCATE action at position 2 I1029 08:25:26.178493 32232 sched.cpp:137] Version: 0.21.0 I1029 08:25:26.179029 32256 sched.cpp:233] New master detected at master@67.195.81.187:40696 I1029 08:25:26.179078 32256 sched.cpp:283] Authenticating with master master@67.195.81.187:40696 I1029 08:25:26.179424 32246 authenticatee.hpp:133] Creating new client SASL connection I1029 08:25:26.179678 32259 master.cpp:3853] Authenticating scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:26.179970 32250 authenticator.hpp:161] Creating new server SASL connection I1029 08:25:26.180165 32250 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1029 08:25:26.180191 32250 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1029 08:25:26.180272 32250 authenticator.hpp:267] Received SASL authentication start I1029 08:25:26.180378 32250 authenticator.hpp:389] Authentication requires more steps I1029 08:25:26.180557 32260 authenticatee.hpp:270] Received SASL authentication step I1029 08:25:26.180704 32254 authenticator.hpp:295] Received SASL authentication step I1029 08:25:26.180737 32254 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1029 08:25:26.180748 32254 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1029 08:25:26.180780 32254 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1029 08:25:26.180804 32254 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1029 08:25:26.180816 32254 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1029 08:25:26.180824 32254 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1029 08:25:26.180841 32254 authenticator.hpp:381] Authentication success I1029 08:25:26.180937 32259 authenticatee.hpp:310] Authentication success I1029 08:25:26.180991 32260 master.cpp:3893] Successfully authenticated principal 'test-principal' at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:26.181422 32259 sched.cpp:357] Successfully authenticated with master master@67.195.81.187:40696 I1029 08:25:26.181449 32259 sched.cpp:476] Sending registration request to master@67.195.81.187:40696 I1029 08:25:26.181697 32260 master.cpp:1362] Received registration request for framework 'default' at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 
08:25:26.181758 32260 master.cpp:1321] Authorizing framework principal 'test-principal' to receive offers for role '*' I1029 08:25:26.182063 32260 master.cpp:1426] Registering framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:26.182430 32248 hierarchical_allocator_process.hpp:329] Added framework 20141029-082526-3142697795-40696-32232-0000 I1029 08:25:26.182462 32248 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:26.182462 32261 sched.cpp:407] Framework registered with 20141029-082526-3142697795-40696-32232-0000 I1029 08:25:26.182473 32248 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 15372ns I1029 08:25:26.182554 32261 sched.cpp:421] Scheduler::registered took 60059ns I1029 08:25:26.185515 32260 sched.cpp:227] Scheduler::disconnected took 16607ns I1029 08:25:26.185538 32260 sched.cpp:233] New master detected at master@67.195.81.187:40696 I1029 08:25:26.185567 32260 sched.cpp:283] Authenticating with master master@67.195.81.187:40696 I1029 08:25:26.185783 32246 authenticatee.hpp:133] Creating new client SASL connection I1029 08:25:26.186218 32250 master.cpp:3853] Authenticating scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:26.186456 32247 authenticator.hpp:161] Creating new server SASL connection I1029 08:25:26.186594 32250 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1029 08:25:26.186621 32250 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1029 08:25:26.186745 32259 authenticator.hpp:267] Received SASL authentication start I1029 08:25:26.186800 32259 authenticator.hpp:389] Authentication requires more steps I1029 08:25:26.186936 32260 authenticatee.hpp:270] Received SASL authentication step I1029 08:25:26.187062 32249 authenticator.hpp:295] Received SASL authentication step I1029 08:25:26.187095 32249 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1029 08:25:26.187108 32249 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1029 08:25:26.187137 32249 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1029 08:25:26.187162 32249 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1029 08:25:26.187175 32249 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1029 08:25:26.187182 32249 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1029 08:25:26.187199 32249 authenticator.hpp:381] Authentication success I1029 08:25:26.187327 32249 authenticatee.hpp:310] Authentication success I1029 08:25:26.187366 32260 master.cpp:3893] Successfully authenticated principal 'test-principal' at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:26.187631 32249 sched.cpp:357] Successfully authenticated with master master@67.195.81.187:40696 I1029 08:25:26.187659 32249 sched.cpp:476] Sending registration request to master@67.195.81.187:40696 I1029 08:25:27.028445 32251 hierarchical_allocator_process.hpp:697] No resources available to 
allocate! I1029 08:25:28.045682 32251 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 1.017231941secs I1029 08:25:28.045760 32249 sched.cpp:476] Sending registration request to master@67.195.81.187:40696 I1029 08:25:28.045900 32253 master.cpp:1499] Received re-registration request from framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:28.045989 32253 master.cpp:1321] Authorizing framework principal 'test-principal' to receive offers for role '*' I1029 08:25:28.046455 32253 master.cpp:1499] Received re-registration request from framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:28.046529 32253 master.cpp:1321] Authorizing framework principal 'test-principal' to receive offers for role '*' I1029 08:25:28.050155 32247 sched.cpp:233] New master detected at master@67.195.81.187:40696 I1029 08:25:28.050217 32247 sched.cpp:283] Authenticating with master master@67.195.81.187:40696 I1029 08:25:28.050405 32252 master.cpp:1552] Re-registering framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:28.050509 32253 authenticatee.hpp:133] Creating new client SASL connection I1029 08:25:28.050566 32252 master.cpp:1592] Allowing framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 to re-register with an already used id I1029 08:25:28.051084 32257 sched.cpp:449] Framework re-registered with 20141029-082526-3142697795-40696-32232-0000 I1029 08:25:28.051151 32252 master.cpp:3853] Authenticating scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:28.051167 32257 sched.cpp:463] Scheduler::reregistered took 52801ns I1029 08:25:28.051723 32261 authenticator.hpp:161] Creating new server SASL connection I1029 08:25:28.052042 32249 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1029 08:25:28.052077 32249 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1029 08:25:28.052170 32249 master.cpp:1534] Dropping re-registration request of framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 because new authentication attempt is in progress I1029 08:25:28.052218 32257 authenticator.hpp:267] Received SASL authentication start I1029 08:25:28.052325 32257 authenticator.hpp:389] Authentication requires more steps I1029 08:25:28.052428 32257 authenticatee.hpp:270] Received SASL authentication step I1029 08:25:28.052641 32246 authenticator.hpp:295] Received SASL authentication step I1029 08:25:28.052685 32246 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1029 08:25:28.052701 32246 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1029 08:25:28.052739 32246 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1029 08:25:28.052767 32246 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1029 
08:25:28.052779 32246 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1029 08:25:28.052788 32246 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1029 08:25:28.052804 32246 authenticator.hpp:381] Authentication success I1029 08:25:28.052947 32252 authenticatee.hpp:310] Authentication success I1029 08:25:28.053020 32246 master.cpp:3893] Successfully authenticated principal 'test-principal' at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:28.053462 32247 sched.cpp:357] Successfully authenticated with master master@67.195.81.187:40696 I1029 08:25:29.046855 32261 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:29.046880 32261 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 35632ns I1029 08:25:30.047458 32253 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:30.047487 32253 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 43031ns I1029 08:25:31.028373 32261 master.cpp:120] No whitelist given. Advertising offers for all slaves I1029 08:25:31.048673 32249 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:31.048702 32249 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 44769ns I1029 08:25:32.049576 32259 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:32.049604 32259 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 51919ns I1029 08:25:33.050864 32249 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:33.050896 32249 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 38019ns I1029 08:25:34.051961 32251 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:34.051993 32251 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 64619ns I1029 08:25:35.052196 32249 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:35.052223 32249 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 34475ns I1029 08:25:36.029101 32259 master.cpp:120] No whitelist given. Advertising offers for all slaves I1029 08:25:36.053067 32249 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:36.053095 32249 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 38354ns I1029 08:25:37.053506 32259 hierarchical_allocator_process.hpp:697] No resources available to allocate! 
I1029 08:25:37.053536 32259 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 38249ns tests/master_authorization_tests.cpp:877: Failure Failed to wait 10secs for frameworkReregisteredMessage I1029 08:25:38.053241 32259 master.cpp:768] Framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 disconnected I1029 08:25:38.053375 32259 master.cpp:1731] Disconnecting framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:38.053426 32259 master.cpp:1747] Deactivating framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:38.053932 32259 master.cpp:790] Giving framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 0ns to failover I1029 08:25:38.054072 32257 hierarchical_allocator_process.hpp:405] Deactivated framework 20141029-082526-3142697795-40696-32232-0000 I1029 08:25:38.054208 32257 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1029 08:25:38.054236 32257 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 38534ns I1029 08:25:38.054508 32258 master.cpp:3665] Framework failover timeout, removing framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:38.054549 32258 master.cpp:4201] Removing framework 20141029-082526-3142697795-40696-32232-0000 (default) at scheduler-9ba6b803-40b4-48b9-bcef-45a329f6b2a4@67.195.81.187:40696 I1029 08:25:38.055179 32252 master.cpp:677] Master terminating I1029 08:25:38.055181 32254 hierarchical_allocator_process.hpp:360] Removed framework 20141029-082526-3142697795-40696-32232-0000 ../3rdparty/libprocess/include/process/gmock.hpp:345: Failure Actual function call count doesn't match EXPECT_CALL(filter->mock, filter(testing::A()))... 
Expected args: message matcher (8-byte object , 1-byte object <95>, 1-byte object <30>) Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] MasterAuthorizationTest.DuplicateReregistration (12042 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2017","10/30/2014 17:16:19",5,"Segfault with ""Pure virtual method called"" when tests fail ""The most recent one: {noformat:title=DRFAllocatorTest.DRFAllocatorProcess} [ RUN ] DRFAllocatorTest.DRFAllocatorProcess Using temporary directory '/tmp/DRFAllocatorTest_DRFAllocatorProcess_BI905j' I1030 05:55:06.934813 24459 leveldb.cpp:176] Opened db in 3.175202ms I1030 05:55:06.935925 24459 leveldb.cpp:183] Compacted db in 1.077924ms I1030 05:55:06.935976 24459 leveldb.cpp:198] Created db iterator in 16460ns I1030 05:55:06.935995 24459 leveldb.cpp:204] Seeked to beginning of db in 2018ns I1030 05:55:06.936005 24459 leveldb.cpp:273] Iterated through 0 keys in the db in 335ns I1030 05:55:06.936039 24459 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I1030 05:55:06.936705 24480 recover.cpp:437] Starting replica recovery I1030 05:55:06.937023 24480 recover.cpp:463] Replica is in EMPTY status I1030 05:55:06.938158 24475 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I1030 05:55:06.938859 24482 recover.cpp:188] Received a recover response from a replica in EMPTY status I1030 05:55:06.939486 24474 recover.cpp:554] Updating replica status to STARTING I1030 05:55:06.940249 24489 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 591981ns I1030 05:55:06.940274 24489 replica.cpp:320] Persisted replica status to STARTING I1030 05:55:06.940752 24481 recover.cpp:463] Replica is in STARTING status I1030 05:55:06.940820 24489 master.cpp:312] Master 20141030-055506-3142697795-40429-24459 (pomona.apache.org) started on 67.195.81.187:40429 I1030 05:55:06.940871 24489 master.cpp:358] Master only allowing authenticated frameworks to register I1030 05:55:06.940891 24489 master.cpp:363] Master only allowing authenticated slaves to register I1030 05:55:06.940908 24489 credentials.hpp:36] Loading credentials for authentication from '/tmp/DRFAllocatorTest_DRFAllocatorProcess_BI905j/credentials' I1030 05:55:06.941215 24489 master.cpp:392] Authorization enabled I1030 05:55:06.941751 24475 master.cpp:120] No whitelist given. Advertising offers for all slaves I1030 05:55:06.942227 24474 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I1030 05:55:06.942401 24476 hierarchical_allocator_process.hpp:299] Initializing hierarchical allocator process with master : master@67.195.81.187:40429 I1030 05:55:06.942895 24483 recover.cpp:188] Received a recover response from a replica in STARTING status I1030 05:55:06.943035 24474 master.cpp:1242] The newly elected leader is master@67.195.81.187:40429 with id 20141030-055506-3142697795-40429-24459 I1030 05:55:06.943063 24474 master.cpp:1255] Elected as the leading master! 
I1030 05:55:06.943079 24474 master.cpp:1073] Recovering from registrar I1030 05:55:06.943313 24480 registrar.cpp:313] Recovering registrar I1030 05:55:06.943455 24475 recover.cpp:554] Updating replica status to VOTING I1030 05:55:06.944144 24474 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 536365ns I1030 05:55:06.944172 24474 replica.cpp:320] Persisted replica status to VOTING I1030 05:55:06.944355 24489 recover.cpp:568] Successfully joined the Paxos group I1030 05:55:06.944576 24489 recover.cpp:452] Recover process terminated I1030 05:55:06.945155 24486 log.cpp:656] Attempting to start the writer I1030 05:55:06.947013 24473 replica.cpp:474] Replica received implicit promise request with proposal 1 I1030 05:55:06.947854 24473 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 806463ns I1030 05:55:06.947883 24473 replica.cpp:342] Persisted promised to 1 I1030 05:55:06.948547 24481 coordinator.cpp:230] Coordinator attemping to fill missing position I1030 05:55:06.950269 24479 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I1030 05:55:06.950933 24479 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 603843ns I1030 05:55:06.950961 24479 replica.cpp:676] Persisted action at 0 I1030 05:55:06.952180 24476 replica.cpp:508] Replica received write request for position 0 I1030 05:55:06.952239 24476 leveldb.cpp:438] Reading position from leveldb took 28437ns I1030 05:55:06.952896 24476 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 623980ns I1030 05:55:06.952926 24476 replica.cpp:676] Persisted action at 0 I1030 05:55:06.953543 24485 replica.cpp:655] Replica received learned notice for position 0 I1030 05:55:06.954082 24485 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 511807ns I1030 05:55:06.954107 24485 replica.cpp:676] Persisted action at 0 I1030 05:55:06.954128 24485 replica.cpp:661] Replica learned NOP action at position 0 I1030 05:55:06.954710 24473 log.cpp:672] Writer started with ending position 0 I1030 05:55:06.956215 24478 leveldb.cpp:438] Reading position from leveldb took 33085ns I1030 05:55:06.959481 24475 registrar.cpp:346] Successfully fetched the registry (0B) in 16.11904ms I1030 05:55:06.959616 24475 registrar.cpp:445] Applied 1 operations in 28239ns; attempting to update the 'registry' I1030 05:55:06.962514 24487 log.cpp:680] Attempting to append 139 bytes to the log I1030 05:55:06.962646 24474 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I1030 05:55:06.964146 24486 replica.cpp:508] Replica received write request for position 1 I1030 05:55:06.964962 24486 leveldb.cpp:343] Persisting action (158 bytes) to leveldb took 743389ns I1030 05:55:06.964993 24486 replica.cpp:676] Persisted action at 1 I1030 05:55:06.965895 24473 replica.cpp:655] Replica received learned notice for position 1 I1030 05:55:06.966531 24473 leveldb.cpp:343] Persisting action (160 bytes) to leveldb took 607242ns I1030 05:55:06.966555 24473 replica.cpp:676] Persisted action at 1 I1030 05:55:06.966578 24473 replica.cpp:661] Replica learned APPEND action at position 1 I1030 05:55:06.967706 24481 registrar.cpp:490] Successfully updated the 'registry' in 8.036096ms I1030 05:55:06.967895 24481 registrar.cpp:376] Successfully recovered registrar I1030 05:55:06.967993 24482 log.cpp:699] Attempting to truncate the log to 1 I1030 05:55:06.968258 24479 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I1030 05:55:06.968268 24475 master.cpp:1100] 
Recovered 0 slaves from the Registry (101B) ; allowing 10mins for slaves to re-register I1030 05:55:06.969156 24476 replica.cpp:508] Replica received write request for position 2 I1030 05:55:06.969678 24476 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 491913ns I1030 05:55:06.969703 24476 replica.cpp:676] Persisted action at 2 I1030 05:55:06.970459 24478 replica.cpp:655] Replica received learned notice for position 2 I1030 05:55:06.971060 24478 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 573076ns I1030 05:55:06.971124 24478 leveldb.cpp:401] Deleting ~1 keys from leveldb took 35339ns I1030 05:55:06.971145 24478 replica.cpp:676] Persisted action at 2 I1030 05:55:06.971168 24478 replica.cpp:661] Replica learned TRUNCATE action at position 2 I1030 05:55:06.980211 24459 containerizer.cpp:100] Using isolation: posix/cpu,posix/mem I1030 05:55:06.984153 24473 slave.cpp:169] Slave started on 203)@67.195.81.187:40429 I1030 05:55:07.055308 24473 credentials.hpp:84] Loading credential for authentication from '/tmp/DRFAllocatorTest_DRFAllocatorProcess_wULx31/credential' I1030 05:55:06.988750 24459 sched.cpp:137] Version: 0.21.0 I1030 05:55:07.055521 24473 slave.cpp:276] Slave using credential for: test-principal I1030 05:55:07.055726 24473 slave.cpp:289] Slave resources: cpus(*):2; mem(*):1024; disk(*):0; ports(*):[31000-32000] I1030 05:55:07.055865 24473 slave.cpp:318] Slave hostname: pomona.apache.org I1030 05:55:07.055881 24473 slave.cpp:319] Slave checkpoint: false W1030 05:55:07.055889 24473 slave.cpp:321] Disabling checkpointing is deprecated and the --checkpoint flag will be removed in a future release. Please avoid using this flag I1030 05:55:07.056172 24485 sched.cpp:233] New master detected at master@67.195.81.187:40429 I1030 05:55:07.056222 24485 sched.cpp:283] Authenticating with master master@67.195.81.187:40429 I1030 05:55:07.056717 24485 state.cpp:33] Recovering state from '/tmp/DRFAllocatorTest_DRFAllocatorProcess_wULx31/meta' I1030 05:55:07.056851 24475 authenticatee.hpp:133] Creating new client SASL connection I1030 05:55:07.057003 24473 status_update_manager.cpp:197] Recovering status update manager I1030 05:55:07.057252 24488 master.cpp:3853] Authenticating scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 I1030 05:55:07.057502 24489 containerizer.cpp:281] Recovering containerizer I1030 05:55:07.057524 24475 authenticator.hpp:161] Creating new server SASL connection I1030 05:55:07.057688 24475 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1030 05:55:07.057719 24475 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1030 05:55:07.057919 24481 authenticator.hpp:267] Received SASL authentication start I1030 05:55:07.057968 24481 authenticator.hpp:389] Authentication requires more steps I1030 05:55:07.058070 24473 authenticatee.hpp:270] Received SASL authentication step I1030 05:55:07.058199 24485 authenticator.hpp:295] Received SASL authentication step I1030 05:55:07.058223 24485 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1030 05:55:07.058233 24485 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1030 05:55:07.058259 24485 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1030 05:55:07.058290 24485 auxprop.cpp:81] Request to lookup properties for user: 
'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1030 05:55:07.058302 24485 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1030 05:55:07.058307 24485 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1030 05:55:07.058320 24485 authenticator.hpp:381] Authentication success I1030 05:55:07.058467 24480 master.cpp:3893] Successfully authenticated principal 'test-principal' at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 I1030 05:55:07.058493 24485 slave.cpp:3456] Finished recovery I1030 05:55:07.058593 24478 authenticatee.hpp:310] Authentication success I1030 05:55:07.058838 24478 sched.cpp:357] Successfully authenticated with master master@67.195.81.187:40429 I1030 05:55:07.058861 24478 sched.cpp:476] Sending registration request to master@67.195.81.187:40429 I1030 05:55:07.058969 24475 slave.cpp:602] New master detected at master@67.195.81.187:40429 I1030 05:55:07.058969 24487 status_update_manager.cpp:171] Pausing sending status updates I1030 05:55:07.059026 24475 slave.cpp:665] Authenticating with master master@67.195.81.187:40429 I1030 05:55:07.059061 24481 master.cpp:1362] Received registration request for framework 'framework1' at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 I1030 05:55:07.059131 24481 master.cpp:1321] Authorizing framework principal 'test-principal' to receive offers for role 'role1' I1030 05:55:07.059171 24475 slave.cpp:638] Detecting new master I1030 05:55:07.059214 24482 authenticatee.hpp:133] Creating new client SASL connection I1030 05:55:07.059550 24481 master.cpp:3853] Authenticating slave(203)@67.195.81.187:40429 I1030 05:55:07.059787 24487 authenticator.hpp:161] Creating new server SASL connection I1030 05:55:07.059922 24481 master.cpp:1426] Registering framework 20141030-055506-3142697795-40429-24459-0000 (framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 I1030 05:55:07.059996 24474 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1030 05:55:07.060034 24474 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1030 05:55:07.060117 24474 authenticator.hpp:267] Received SASL authentication start I1030 05:55:07.060165 24474 authenticator.hpp:389] Authentication requires more steps I1030 05:55:07.060377 24476 hierarchical_allocator_process.hpp:329] Added framework 20141030-055506-3142697795-40429-24459-0000 I1030 05:55:07.060394 24488 sched.cpp:407] Framework registered with 20141030-055506-3142697795-40429-24459-0000 I1030 05:55:07.060403 24476 hierarchical_allocator_process.hpp:697] No resources available to allocate! 
I1030 05:55:07.060431 24476 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 29857ns I1030 05:55:07.060443 24488 sched.cpp:421] Scheduler::registered took 19407ns I1030 05:55:07.060545 24478 authenticatee.hpp:270] Received SASL authentication step I1030 05:55:07.060645 24478 authenticator.hpp:295] Received SASL authentication step I1030 05:55:07.060673 24478 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1030 05:55:07.060685 24478 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1030 05:55:07.060714 24478 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1030 05:55:07.060740 24478 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1030 05:55:07.060760 24478 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1030 05:55:07.060770 24478 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1030 05:55:07.060788 24478 authenticator.hpp:381] Authentication success I1030 05:55:07.060920 24474 authenticatee.hpp:310] Authentication success I1030 05:55:07.060945 24485 master.cpp:3893] Successfully authenticated principal 'test-principal' at slave(203)@67.195.81.187:40429 I1030 05:55:07.061388 24489 slave.cpp:722] Successfully authenticated with master master@67.195.81.187:40429 I1030 05:55:07.061504 24489 slave.cpp:1050] Will retry registration in 4.778336ms if necessary I1030 05:55:07.061718 24480 master.cpp:3032] Registering slave at slave(203)@67.195.81.187:40429 (pomona.apache.org) with id 20141030-055506-3142697795-40429-24459-S0 I1030 05:55:07.062119 24489 registrar.cpp:445] Applied 1 operations in 53691ns; attempting to update the 'registry' I1030 05:55:07.065182 24479 log.cpp:680] Attempting to append 316 bytes to the log I1030 05:55:07.065337 24487 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I1030 05:55:07.066359 24474 replica.cpp:508] Replica received write request for position 3 I1030 05:55:07.066643 24474 leveldb.cpp:343] Persisting action (335 bytes) to leveldb took 249579ns I1030 05:55:07.066671 24474 replica.cpp:676] Persisted action at 3 I../../src/tests/allocator_tests.cpp:120: Failure Failed to wait 10secs for offers1 1030 05:55:07.067101 24477 slave.cpp:1050] Will retry registration in 24.08243ms if necessary I1030 05:55:07.067140 24473 master.cpp:3020] Ignoring register slave message from slave(203)@67.195.81.187:40429 (pomona.apache.org) as admission is already in progress I1030 05:55:07.067395 24488 replica.cpp:655] Replica received learned notice for position 3 I1030 05:55:07.943416 24478 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1030 05:55:19.804687 24478 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 11.861261123secs I1030 05:55:11.942713 24474 master.cpp:120] No whitelist given. 
Advertising offers for all slaves I1030 05:55:19.805850 24488 leveldb.cpp:343] Persisting action (337 bytes) to leveldb took 1.067224ms I1030 05:55:19.806012 24488 replica.cpp:676] Persisted action at 3 ../../src/tests/allocator_tests.cpp:115: Failure Actual function call count doesn't match EXPECT_CALL(sched1, resourceOffers(_, _))... Expected: to be called once Actual: never called - unsatisfied and active I1030 05:55:19.806144 24488 replica.cpp:661] Replica learned APPEND action at position 3 I1030 05:55:19.806695 24473 master.cpp:768] Framework 20141030-055506-3142697795-40429-24459-0000 (framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 disconnected I1030 05:55:19.806726 24473 master.cpp:1731] Disconnecting framework 20141030-055506-3142697795-40429-24459-0000 (framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 I1030 05:55:19.806751 24473 master.cpp:1747] Deactivating framework 20141030-055506-3142697795-40429-24459-0000 (framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 I1030 05:55:19.806967 24473 master.cpp:790] Giving framework 20141030-055506-3142697795-40429-24459-0000 (framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 0ns to failover ../../src/tests/allocator_tests.cpp:94: Failure Actual function call count doesn't match EXPECT_CALL(allocator, slaveAdded(_, _, _))... Expected: to be called once Actual: never called - unsatisfied and active F1030 05:55:19.806967 24480 logging.cpp:57] RAW: Pure virtual method called I1030 05:55:19.807348 24488 master.cpp:3665] Framework failover timeout, removing framework 20141030-055506-3142697795-40429-24459-0000 (framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 I1030 05:55:19.807370 24488 master.cpp:4201] Removing framework 20141030-055506-3142697795-40429-24459-0000 (framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 *** Aborted at 1414648519 (unix time) try """"date -d @1414648519"""" if you are using GNU date *** PC: @ 0x91bc86 process::PID<>::PID() *** SIGSEGV (@0x0) received by PID 24459 (TID 0x2b86c919a700) from PID 0; stack trace: *** I1030 05:55:19.808631 24489 registrar.cpp:490] Successfully updated the 'registry' in 12.746377984secs @ 0x2b86c55fc340 (unknown) I1030 05:55:19.808938 24473 log.cpp:699] Attempting to truncate the log to 3 @ 0x2b86c3327174 google::LogMessage::Fail() I1030 05:55:19.809084 24481 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 @ 0x91bc86 process::PID<>::PID() @ 0x2b86c332c868 google::RawLog__() I1030 05:55:19.810191 24479 replica.cpp:508] Replica received write request for position 4 I1030 05:55:19.810899 24479 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 678090ns I1030 05:55:19.810919 24479 replica.cpp:676] Persisted action at 4 @ 0x91bf24 process::Process<>::self() I1030 05:55:19.811635 24485 replica.cpp:655] Replica received learned notice for position 4 I1030 05:55:19.812180 24485 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 523927ns I1030 05:55:19.812228 24485 leveldb.cpp:401] Deleting ~2 keys from leveldb took 29523ns I1030 05:55:19.812242 24485 replica.cpp:676] Persisted action at 4 I @ 0x2b86c29d2a36 __cxa_pure_virtual 1030 05:55:19.812258 24485 replica.cpp:661] Replica learned TRUNCATE action at position 4 @ 0x1046936 testing::internal::UntypedFunctionMockerBase::UntypedInvokeWith() I1030 05:55:19.829655 24474 slave.cpp:1050] Will retry 
registration in 31.785967ms if necessary @ 0x9c0633 testing::internal::FunctionMockerBase<>::InvokeWith() @ 0x9b6152 testing::internal::FunctionMocker<>::Invoke() @ 0x9abdeb mesos::internal::tests::MockAllocatorProcess<>::frameworkDeactivated() @ 0x91c78f _ZZN7process8dispatchIN5mesos8internal6master9allocator16AllocatorProcessERKNS1_11FrameworkIDES6_EEvRKNS_3PIDIT_EEMSA_FvT0_ET1_ENKUlPNS_11ProcessBaseEE_clESJ_ @ 0x959ad7 _ZNSt17_Function_handlerIFvPN7process11ProcessBaseEEZNS0_8dispatchIN5mesos8internal6master9allocator16AllocatorProcessERKNS5_11FrameworkIDESA_EEvRKNS0_3PIDIT_EEMSE_FvT0_ET1_EUlS2_E_E9_M_invokeERKSt9_Any_dataS2_ @ 0x2b86c32d174f std::function<>::operator()() @ 0x2b86c32b2a17 process::ProcessBase::visit() @ 0x2b86c32bd34c process::DispatchEvent::visit() @ 0x8e0812 process::ProcessBase::serve() @ 0x2b86c32aec8c process::ProcessManager::resume() I1030 05:55:22.050081 24478 slave.cpp:1050] Will retry registration in 25.327301ms if necessary @ 0x2b86c32a5351 process::schedule() @ 0x2b86c55f4182 start_thread @ 0x2b86c5904fbd (unknown) {noformat}"""," [ RUN ] DRFAllocatorTest.DRFAllocatorProcess Using temporary directory '/tmp/DRFAllocatorTest_DRFAllocatorProcess_BI905j' I1030 05:55:06.934813 24459 leveldb.cpp:176] Opened db in 3.175202ms I1030 05:55:06.935925 24459 leveldb.cpp:183] Compacted db in 1.077924ms I1030 05:55:06.935976 24459 leveldb.cpp:198] Created db iterator in 16460ns I1030 05:55:06.935995 24459 leveldb.cpp:204] Seeked to beginning of db in 2018ns I1030 05:55:06.936005 24459 leveldb.cpp:273] Iterated through 0 keys in the db in 335ns I1030 05:55:06.936039 24459 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I1030 05:55:06.936705 24480 recover.cpp:437] Starting replica recovery I1030 05:55:06.937023 24480 recover.cpp:463] Replica is in EMPTY status I1030 05:55:06.938158 24475 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request I1030 05:55:06.938859 24482 recover.cpp:188] Received a recover response from a replica in EMPTY status I1030 05:55:06.939486 24474 recover.cpp:554] Updating replica status to STARTING I1030 05:55:06.940249 24489 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 591981ns I1030 05:55:06.940274 24489 replica.cpp:320] Persisted replica status to STARTING I1030 05:55:06.940752 24481 recover.cpp:463] Replica is in STARTING status I1030 05:55:06.940820 24489 master.cpp:312] Master 20141030-055506-3142697795-40429-24459 (pomona.apache.org) started on 67.195.81.187:40429 I1030 05:55:06.940871 24489 master.cpp:358] Master only allowing authenticated frameworks to register I1030 05:55:06.940891 24489 master.cpp:363] Master only allowing authenticated slaves to register I1030 05:55:06.940908 24489 credentials.hpp:36] Loading credentials for authentication from '/tmp/DRFAllocatorTest_DRFAllocatorProcess_BI905j/credentials' I1030 05:55:06.941215 24489 master.cpp:392] Authorization enabled I1030 05:55:06.941751 24475 master.cpp:120] No whitelist given. 
Advertising offers for all slaves I1030 05:55:06.942227 24474 replica.cpp:638] Replica in STARTING status received a broadcasted recover request I1030 05:55:06.942401 24476 hierarchical_allocator_process.hpp:299] Initializing hierarchical allocator process with master : master@67.195.81.187:40429 I1030 05:55:06.942895 24483 recover.cpp:188] Received a recover response from a replica in STARTING status I1030 05:55:06.943035 24474 master.cpp:1242] The newly elected leader is master@67.195.81.187:40429 with id 20141030-055506-3142697795-40429-24459 I1030 05:55:06.943063 24474 master.cpp:1255] Elected as the leading master! I1030 05:55:06.943079 24474 master.cpp:1073] Recovering from registrar I1030 05:55:06.943313 24480 registrar.cpp:313] Recovering registrar I1030 05:55:06.943455 24475 recover.cpp:554] Updating replica status to VOTING I1030 05:55:06.944144 24474 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 536365ns I1030 05:55:06.944172 24474 replica.cpp:320] Persisted replica status to VOTING I1030 05:55:06.944355 24489 recover.cpp:568] Successfully joined the Paxos group I1030 05:55:06.944576 24489 recover.cpp:452] Recover process terminated I1030 05:55:06.945155 24486 log.cpp:656] Attempting to start the writer I1030 05:55:06.947013 24473 replica.cpp:474] Replica received implicit promise request with proposal 1 I1030 05:55:06.947854 24473 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 806463ns I1030 05:55:06.947883 24473 replica.cpp:342] Persisted promised to 1 I1030 05:55:06.948547 24481 coordinator.cpp:230] Coordinator attemping to fill missing position I1030 05:55:06.950269 24479 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2 I1030 05:55:06.950933 24479 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 603843ns I1030 05:55:06.950961 24479 replica.cpp:676] Persisted action at 0 I1030 05:55:06.952180 24476 replica.cpp:508] Replica received write request for position 0 I1030 05:55:06.952239 24476 leveldb.cpp:438] Reading position from leveldb took 28437ns I1030 05:55:06.952896 24476 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 623980ns I1030 05:55:06.952926 24476 replica.cpp:676] Persisted action at 0 I1030 05:55:06.953543 24485 replica.cpp:655] Replica received learned notice for position 0 I1030 05:55:06.954082 24485 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 511807ns I1030 05:55:06.954107 24485 replica.cpp:676] Persisted action at 0 I1030 05:55:06.954128 24485 replica.cpp:661] Replica learned NOP action at position 0 I1030 05:55:06.954710 24473 log.cpp:672] Writer started with ending position 0 I1030 05:55:06.956215 24478 leveldb.cpp:438] Reading position from leveldb took 33085ns I1030 05:55:06.959481 24475 registrar.cpp:346] Successfully fetched the registry (0B) in 16.11904ms I1030 05:55:06.959616 24475 registrar.cpp:445] Applied 1 operations in 28239ns; attempting to update the 'registry' I1030 05:55:06.962514 24487 log.cpp:680] Attempting to append 139 bytes to the log I1030 05:55:06.962646 24474 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I1030 05:55:06.964146 24486 replica.cpp:508] Replica received write request for position 1 I1030 05:55:06.964962 24486 leveldb.cpp:343] Persisting action (158 bytes) to leveldb took 743389ns I1030 05:55:06.964993 24486 replica.cpp:676] Persisted action at 1 I1030 05:55:06.965895 24473 replica.cpp:655] Replica received learned notice for position 1 I1030 05:55:06.966531 24473 
leveldb.cpp:343] Persisting action (160 bytes) to leveldb took 607242ns I1030 05:55:06.966555 24473 replica.cpp:676] Persisted action at 1 I1030 05:55:06.966578 24473 replica.cpp:661] Replica learned APPEND action at position 1 I1030 05:55:06.967706 24481 registrar.cpp:490] Successfully updated the 'registry' in 8.036096ms I1030 05:55:06.967895 24481 registrar.cpp:376] Successfully recovered registrar I1030 05:55:06.967993 24482 log.cpp:699] Attempting to truncate the log to 1 I1030 05:55:06.968258 24479 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I1030 05:55:06.968268 24475 master.cpp:1100] Recovered 0 slaves from the Registry (101B) ; allowing 10mins for slaves to re-register I1030 05:55:06.969156 24476 replica.cpp:508] Replica received write request for position 2 I1030 05:55:06.969678 24476 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 491913ns I1030 05:55:06.969703 24476 replica.cpp:676] Persisted action at 2 I1030 05:55:06.970459 24478 replica.cpp:655] Replica received learned notice for position 2 I1030 05:55:06.971060 24478 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 573076ns I1030 05:55:06.971124 24478 leveldb.cpp:401] Deleting ~1 keys from leveldb took 35339ns I1030 05:55:06.971145 24478 replica.cpp:676] Persisted action at 2 I1030 05:55:06.971168 24478 replica.cpp:661] Replica learned TRUNCATE action at position 2 I1030 05:55:06.980211 24459 containerizer.cpp:100] Using isolation: posix/cpu,posix/mem I1030 05:55:06.984153 24473 slave.cpp:169] Slave started on 203)@67.195.81.187:40429 I1030 05:55:07.055308 24473 credentials.hpp:84] Loading credential for authentication from '/tmp/DRFAllocatorTest_DRFAllocatorProcess_wULx31/credential' I1030 05:55:06.988750 24459 sched.cpp:137] Version: 0.21.0 I1030 05:55:07.055521 24473 slave.cpp:276] Slave using credential for: test-principal I1030 05:55:07.055726 24473 slave.cpp:289] Slave resources: cpus(*):2; mem(*):1024; disk(*):0; ports(*):[31000-32000] I1030 05:55:07.055865 24473 slave.cpp:318] Slave hostname: pomona.apache.org I1030 05:55:07.055881 24473 slave.cpp:319] Slave checkpoint: false W1030 05:55:07.055889 24473 slave.cpp:321] Disabling checkpointing is deprecated and the --checkpoint flag will be removed in a future release. 
Please avoid using this flag I1030 05:55:07.056172 24485 sched.cpp:233] New master detected at master@67.195.81.187:40429 I1030 05:55:07.056222 24485 sched.cpp:283] Authenticating with master master@67.195.81.187:40429 I1030 05:55:07.056717 24485 state.cpp:33] Recovering state from '/tmp/DRFAllocatorTest_DRFAllocatorProcess_wULx31/meta' I1030 05:55:07.056851 24475 authenticatee.hpp:133] Creating new client SASL connection I1030 05:55:07.057003 24473 status_update_manager.cpp:197] Recovering status update manager I1030 05:55:07.057252 24488 master.cpp:3853] Authenticating scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 I1030 05:55:07.057502 24489 containerizer.cpp:281] Recovering containerizer I1030 05:55:07.057524 24475 authenticator.hpp:161] Creating new server SASL connection I1030 05:55:07.057688 24475 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1030 05:55:07.057719 24475 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1030 05:55:07.057919 24481 authenticator.hpp:267] Received SASL authentication start I1030 05:55:07.057968 24481 authenticator.hpp:389] Authentication requires more steps I1030 05:55:07.058070 24473 authenticatee.hpp:270] Received SASL authentication step I1030 05:55:07.058199 24485 authenticator.hpp:295] Received SASL authentication step I1030 05:55:07.058223 24485 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1030 05:55:07.058233 24485 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1030 05:55:07.058259 24485 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1030 05:55:07.058290 24485 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1030 05:55:07.058302 24485 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1030 05:55:07.058307 24485 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1030 05:55:07.058320 24485 authenticator.hpp:381] Authentication success I1030 05:55:07.058467 24480 master.cpp:3893] Successfully authenticated principal 'test-principal' at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 I1030 05:55:07.058493 24485 slave.cpp:3456] Finished recovery I1030 05:55:07.058593 24478 authenticatee.hpp:310] Authentication success I1030 05:55:07.058838 24478 sched.cpp:357] Successfully authenticated with master master@67.195.81.187:40429 I1030 05:55:07.058861 24478 sched.cpp:476] Sending registration request to master@67.195.81.187:40429 I1030 05:55:07.058969 24475 slave.cpp:602] New master detected at master@67.195.81.187:40429 I1030 05:55:07.058969 24487 status_update_manager.cpp:171] Pausing sending status updates I1030 05:55:07.059026 24475 slave.cpp:665] Authenticating with master master@67.195.81.187:40429 I1030 05:55:07.059061 24481 master.cpp:1362] Received registration request for framework 'framework1' at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 I1030 05:55:07.059131 24481 master.cpp:1321] Authorizing framework principal 'test-principal' to receive offers for role 'role1' I1030 05:55:07.059171 24475 slave.cpp:638] Detecting new master 
I1030 05:55:07.059214 24482 authenticatee.hpp:133] Creating new client SASL connection I1030 05:55:07.059550 24481 master.cpp:3853] Authenticating slave(203)@67.195.81.187:40429 I1030 05:55:07.059787 24487 authenticator.hpp:161] Creating new server SASL connection I1030 05:55:07.059922 24481 master.cpp:1426] Registering framework 20141030-055506-3142697795-40429-24459-0000 (framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 I1030 05:55:07.059996 24474 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1030 05:55:07.060034 24474 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' I1030 05:55:07.060117 24474 authenticator.hpp:267] Received SASL authentication start I1030 05:55:07.060165 24474 authenticator.hpp:389] Authentication requires more steps I1030 05:55:07.060377 24476 hierarchical_allocator_process.hpp:329] Added framework 20141030-055506-3142697795-40429-24459-0000 I1030 05:55:07.060394 24488 sched.cpp:407] Framework registered with 20141030-055506-3142697795-40429-24459-0000 I1030 05:55:07.060403 24476 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1030 05:55:07.060431 24476 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 29857ns I1030 05:55:07.060443 24488 sched.cpp:421] Scheduler::registered took 19407ns I1030 05:55:07.060545 24478 authenticatee.hpp:270] Received SASL authentication step I1030 05:55:07.060645 24478 authenticator.hpp:295] Received SASL authentication step I1030 05:55:07.060673 24478 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1030 05:55:07.060685 24478 auxprop.cpp:153] Looking up auxiliary property '*userPassword' I1030 05:55:07.060714 24478 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1030 05:55:07.060740 24478 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1030 05:55:07.060760 24478 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1030 05:55:07.060770 24478 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1030 05:55:07.060788 24478 authenticator.hpp:381] Authentication success I1030 05:55:07.060920 24474 authenticatee.hpp:310] Authentication success I1030 05:55:07.060945 24485 master.cpp:3893] Successfully authenticated principal 'test-principal' at slave(203)@67.195.81.187:40429 I1030 05:55:07.061388 24489 slave.cpp:722] Successfully authenticated with master master@67.195.81.187:40429 I1030 05:55:07.061504 24489 slave.cpp:1050] Will retry registration in 4.778336ms if necessary I1030 05:55:07.061718 24480 master.cpp:3032] Registering slave at slave(203)@67.195.81.187:40429 (pomona.apache.org) with id 20141030-055506-3142697795-40429-24459-S0 I1030 05:55:07.062119 24489 registrar.cpp:445] Applied 1 operations in 53691ns; attempting to update the 'registry' I1030 05:55:07.065182 24479 log.cpp:680] Attempting to append 316 bytes to the log I1030 05:55:07.065337 24487 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I1030 05:55:07.066359 24474 replica.cpp:508] Replica received write request for position 3 
I1030 05:55:07.066643 24474 leveldb.cpp:343] Persisting action (335 bytes) to leveldb took 249579ns I1030 05:55:07.066671 24474 replica.cpp:676] Persisted action at 3 I../../src/tests/allocator_tests.cpp:120: Failure Failed to wait 10secs for offers1 1030 05:55:07.067101 24477 slave.cpp:1050] Will retry registration in 24.08243ms if necessary I1030 05:55:07.067140 24473 master.cpp:3020] Ignoring register slave message from slave(203)@67.195.81.187:40429 (pomona.apache.org) as admission is already in progress I1030 05:55:07.067395 24488 replica.cpp:655] Replica received learned notice for position 3 I1030 05:55:07.943416 24478 hierarchical_allocator_process.hpp:697] No resources available to allocate! I1030 05:55:19.804687 24478 hierarchical_allocator_process.hpp:659] Performed allocation for 0 slaves in 11.861261123secs I1030 05:55:11.942713 24474 master.cpp:120] No whitelist given. Advertising offers for all slaves I1030 05:55:19.805850 24488 leveldb.cpp:343] Persisting action (337 bytes) to leveldb took 1.067224ms I1030 05:55:19.806012 24488 replica.cpp:676] Persisted action at 3 ../../src/tests/allocator_tests.cpp:115: Failure Actual function call count doesn't match EXPECT_CALL(sched1, resourceOffers(_, _))... Expected: to be called once Actual: never called - unsatisfied and active I1030 05:55:19.806144 24488 replica.cpp:661] Replica learned APPEND action at position 3 I1030 05:55:19.806695 24473 master.cpp:768] Framework 20141030-055506-3142697795-40429-24459-0000 (framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 disconnected I1030 05:55:19.806726 24473 master.cpp:1731] Disconnecting framework 20141030-055506-3142697795-40429-24459-0000 (framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 I1030 05:55:19.806751 24473 master.cpp:1747] Deactivating framework 20141030-055506-3142697795-40429-24459-0000 (framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 I1030 05:55:19.806967 24473 master.cpp:790] Giving framework 20141030-055506-3142697795-40429-24459-0000 (framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 0ns to failover ../../src/tests/allocator_tests.cpp:94: Failure Actual function call count doesn't match EXPECT_CALL(allocator, slaveAdded(_, _, _))... 
Expected: to be called once Actual: never called - unsatisfied and active F1030 05:55:19.806967 24480 logging.cpp:57] RAW: Pure virtual method called I1030 05:55:19.807348 24488 master.cpp:3665] Framework failover timeout, removing framework 20141030-055506-3142697795-40429-24459-0000 (framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 I1030 05:55:19.807370 24488 master.cpp:4201] Removing framework 20141030-055506-3142697795-40429-24459-0000 (framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 *** Aborted at 1414648519 (unix time) try """"date -d @1414648519"""" if you are using GNU date *** PC: @ 0x91bc86 process::PID<>::PID() *** SIGSEGV (@0x0) received by PID 24459 (TID 0x2b86c919a700) from PID 0; stack trace: *** I1030 05:55:19.808631 24489 registrar.cpp:490] Successfully updated the 'registry' in 12.746377984secs @ 0x2b86c55fc340 (unknown) I1030 05:55:19.808938 24473 log.cpp:699] Attempting to truncate the log to 3 @ 0x2b86c3327174 google::LogMessage::Fail() I1030 05:55:19.809084 24481 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 @ 0x91bc86 process::PID<>::PID() @ 0x2b86c332c868 google::RawLog__() I1030 05:55:19.810191 24479 replica.cpp:508] Replica received write request for position 4 I1030 05:55:19.810899 24479 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 678090ns I1030 05:55:19.810919 24479 replica.cpp:676] Persisted action at 4 @ 0x91bf24 process::Process<>::self() I1030 05:55:19.811635 24485 replica.cpp:655] Replica received learned notice for position 4 I1030 05:55:19.812180 24485 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 523927ns I1030 05:55:19.812228 24485 leveldb.cpp:401] Deleting ~2 keys from leveldb took 29523ns I1030 05:55:19.812242 24485 replica.cpp:676] Persisted action at 4 I @ 0x2b86c29d2a36 __cxa_pure_virtual 1030 05:55:19.812258 24485 replica.cpp:661] Replica learned TRUNCATE action at position 4 @ 0x1046936 testing::internal::UntypedFunctionMockerBase::UntypedInvokeWith() I1030 05:55:19.829655 24474 slave.cpp:1050] Will retry registration in 31.785967ms if necessary @ 0x9c0633 testing::internal::FunctionMockerBase<>::InvokeWith() @ 0x9b6152 testing::internal::FunctionMocker<>::Invoke() @ 0x9abdeb mesos::internal::tests::MockAllocatorProcess<>::frameworkDeactivated() @ 0x91c78f _ZZN7process8dispatchIN5mesos8internal6master9allocator16AllocatorProcessERKNS1_11FrameworkIDES6_EEvRKNS_3PIDIT_EEMSA_FvT0_ET1_ENKUlPNS_11ProcessBaseEE_clESJ_ @ 0x959ad7 _ZNSt17_Function_handlerIFvPN7process11ProcessBaseEEZNS0_8dispatchIN5mesos8internal6master9allocator16AllocatorProcessERKNS5_11FrameworkIDESA_EEvRKNS0_3PIDIT_EEMSE_FvT0_ET1_EUlS2_E_E9_M_invokeERKSt9_Any_dataS2_ @ 0x2b86c32d174f std::function<>::operator()() @ 0x2b86c32b2a17 process::ProcessBase::visit() @ 0x2b86c32bd34c process::DispatchEvent::visit() @ 0x8e0812 process::ProcessBase::serve() @ 0x2b86c32aec8c process::ProcessManager::resume() I1030 05:55:22.050081 24478 slave.cpp:1050] Will retry registration in 25.327301ms if necessary @ 0x2b86c32a5351 process::schedule() @ 0x2b86c55f4182 start_thread @ 0x2b86c5904fbd (unknown) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2030","11/03/2014 18:16:30",3,"Maintain persistent disk resources in master memory. ""Maintain an in-memory data structure to track persistent disk resources on each slave. 
Update this data structure when slaves register/re-register/disconnect, etc.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2032","11/03/2014 19:29:08",13,"Update Maintenance design to account for persistent resources. ""With persistent resources and dynamic reservations, frameworks need to know how long the resources will be unavailable for maintenance operations. This is because for persistent resources, the framework needs to understand how long the persistent resource will be unavailable. For example, if there will be a 10 minute reboot for a kernel upgrade, the framework will not want to re-replicate all of it's persistent data on the machine. Rather, tolerating one unavailable replica for the maintenance window would be preferred. I'd like to do a revisit of the design to ensure it works well for persistent resources as well.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2035","11/03/2014 22:29:26",5,"Add reason to containerizer proto Termination ""When an isolator kills a task, the reason is unknown. As part of MESOS-1830, the reason is set to a general one but ideally we would have the termination reason to pass through to the status update.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2043","11/04/2014 20:16:11",5,"Framework auth fail with timeout error and never get authenticated ""I'm facing this issue in master as of https://github.com/apache/mesos/commit/74ea59e144d131814c66972fb0cc14784d3503d4 As [~adam-mesos] mentioned in IRC, this sounds similar to MESOS-1866. I'm running 1 master and 1 scheduler (aurora). The framework authentication fail due to time out: error on mesos master: scheduler error: Looks like 2 instances {{scheduler-20f88a53-5945-4977-b5af-28f6c52d3c94}} & {{scheduler-d2d4437b-d375-4467-a583-362152fe065a}} of same framework is trying to authenticate and fail. Restarting master and scheduler didn't fix it. 
This particular issue happen with 1 master and 1 scheduler after MESOS-1866 is fixed."""," I1104 19:37:17.741449 8329 master.cpp:3874] Authenticating scheduler-d2d4437b-d375-4467-a583-362152fe065a@SCHEDULER_IP:8083 I1104 19:37:17.741585 8329 master.cpp:3885] Using default CRAM-MD5 authenticator I1104 19:37:17.742106 8336 authenticator.hpp:169] Creating new server SASL connection W1104 19:37:22.742959 8329 master.cpp:3953] Authentication timed out W1104 19:37:22.743548 8329 master.cpp:3930] Failed to authenticate scheduler-d2d4437b-d375-4467-a583-362152fe065a@SCHEDULER_IP:8083: Authentication discarded I1104 19:38:57.885486 49012 sched.cpp:283] Authenticating with master master@MASTER_IP:PORT I1104 19:38:57.885928 49002 authenticatee.hpp:133] Creating new client SASL connection I1104 19:38:57.890581 49007 authenticatee.hpp:224] Received SASL authentication mechanisms: CRAM-MD5 I1104 19:38:57.890656 49007 authenticatee.hpp:250] Attempting to authenticate with mechanism 'CRAM-MD5' W1104 19:39:02.891196 49005 sched.cpp:378] Authentication timed out I1104 19:39:02.891850 49018 sched.cpp:338] Failed to authenticate with master master@MASTER_IP:PORT: Authentication discarded W1104 19:36:30.769420 8319 master.cpp:3930] Failed to authenticate scheduler-20f88a53-5945-4977-b5af-28f6c52d3c94@SCHEDULER_IP:8083: Failed to communicate with authenticatee I1104 19:36:42.701441 8328 master.cpp:3860] Queuing up authentication request from scheduler-d2d4437b-d375-4467-a583-362152fe065a@SCHEDULER_IP:8083 because authentication is still in progress ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2052","11/07/2014 19:16:53",1,"RunState::recover should always recover 'completed' ""RunState::recover() will return partial state if it cannot find or open the libprocess pid file. Specifically, it does not recover the 'completed' flag. However, if the slave has removed the executor (because launch failed or the executor failed to register) the sentinel flag will be set and this fact should be recovered. This ensures that container recovery is not attempted later. This was discovered when the LinuxLauncher failed to recover because it was asked to recover two containers with the same forkedPid. Investigation showed the executors both OOM'ed before registering, i.e., no libprocess pid file was present. However, the containerizer had detected the OOM, destroyed the container, and notified the slave which cleaned everything up: failing the task and calling removeExecutor (which writes the completed sentinel file.)""","",0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2056","11/10/2014 17:30:20",1,"Refactor fetcher code in preparation for fetcher cache ""Refactor/rearrange fetcher-related code so that cache functionality can be dropped in. One could do both together in one go. This is splitting up reviews into smaller chunks. It will not immediately be obvious how this change will be used later, but it will look better-factored and still do the exact same thing as before. 
In particular, a download routine to be reused several times in launcher/fetcher will be factored out and the remainder of fetcher-related code can be moved from the containerizer realm into fetcher.cpp.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2057","11/10/2014 17:32:28",8,"Concurrency control for fetcher cache ""Having added a URI flag to CommandInfo messages (in MESOS-2069) that indicates caching, caching files downloaded by the fetcher in a repository, now ensure that when a URI is """"cached"""", it is only ever downloaded once for the same user on the same slave as long as the slave keeps running. This even holds if multiple tasks request the same URI concurrently. If multiple requests for the same URI occur, perform only one of them and reuse the result. Make concurrent requests for the same URI wait for the one download. Different URIs from different CommandInfos can be downloaded concurrently. No cache eviction, cleanup or failover will be handled for now. Additional tickets will be filed for these enhancements. (So don't use this feature in production until the whole epic is complete.) Note that implementing this does not suffice for production use. This ticket contains the main part of the fetcher logic, though. See the epic MESOS-336 for the rest of the features that lead to a fully functional fetcher cache. The proposed general approach is to keep all bookkeeping about what is in which stage of being fetched and where it resides in the slave's MesosContainerizerProcess, so that all concurrent access is disambiguated and controlled by an """"actor"""" (aka libprocess """"process""""). Depends on MESOS-2056 and MESOS-2069. ""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2058","11/10/2014 21:29:03",1,"Deprecate stats.json endpoints for Master and Slave ""With the introduction of the libprocess {{/metrics/snapshot}} endpoint, metrics are now duplicated in the Master and Slave between this and {{stats.json}}. We should deprecate the {{stats.json}} endpoints. Manual inspection of {{stats.json}} shows that all metrics are now covered by the new endpoint for Master and Slave.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2062","11/10/2014 23:57:08",3,"Add InverseOffer to Event/Call API. ""The initial use case for InverseOffer in the framework API will be the maintenance primitives in mesos: MESOS-1474. One way to add this is to tack it on to the OFFERS Event: """," message Offers { repeated Offer offers = 1; repeated InverseOffer inverse_offers = 2; } ",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2063","11/11/2014 00:07:23",5,"Add InverseOffer to C++ Scheduler API. ""The initial use case for InverseOffer in the framework API will be the maintenance primitives in mesos: MESOS-1474. One way to add these to the C++ Scheduler API is to add a new callback: libmesos compatibility will need to be figured out here. We may want to leave the C++ binding untouched in favor of Event/Call, in order to not break API compatibility for schedulers."""," virtual void inverseResourceOffers( SchedulerDriver* driver, const std::vector& inverseOffers) = 0; ",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2064","11/11/2014 00:07:25",5,"Add InverseOffer to Java Scheduler API. 
""The initial use case for InverseOffer in the framework API will be the maintenance primitives in mesos: MESOS-1474. One way to add these to the Java Scheduler API is to add a new callback: JAR / libmesos compatibility will need to be figured out here. We may want to leave the Java binding untouched in favor of Event/Call, in order to not break API compatibility for schedulers."""," void inverseResourceOffers( SchedulerDriver driver, List inverseOffers); ",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2065","11/11/2014 00:07:27",5,"Add InverseOffer to Python Scheduler API. ""The initial use case for InverseOffer in the framework API will be the maintenance primitives in mesos: MESOS-1474. One way to add these to the Python Scheduler API is to add a new callback: Egg / libmesos compatibility will need to be figured out here. We may want to leave the Python binding untouched in favor of Event/Call, in order to not break API compatibility for schedulers."""," def inverseResourceOffers(self, driver, inverse_offers): ",0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2067","11/11/2014 02:19:30",8,"Add HTTP API to the master for maintenance operations. ""Based on MESOS-1474, we'd like to provide an HTTP API on the master for the maintenance primitives in mesos. For the MVP, we'll want something like this for manipulating the schedule: (Note: The slashes in URLs might not be supported yet.) A schedule might look like: There should be firewall settings such that only those with access to master can use these endpoints."""," /maintenance/schedule GET - returns the schedule, which will include the various maintenance windows. POST - create or update the schedule with a JSON blob (see below). /maintenance/status GET - returns a list of machines and their maintenance mode. /maintenance/start POST - Transition a set of machines from Draining into Deactivated mode. /maintenance/stop POST - Transition a set of machines from Deactivated into Normal mode. /maintenance/consensus <- (Not sure what the right name is. matrix? acceptance?) GET - Returns the latest info on which frameworks have accepted or declined the maintenance schedule. { """"windows"""" : [ { """"machines"""" : [ { """"ip"""" : """"192.168.0.1"""" }, { """"hostname"""" : """"localhost"""" }, ... ], """"unavailability"""" : { """"start"""" : 12345, // Epoch seconds. """"duration"""" : 1000 // Seconds. } }, ... ] } ",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2069","11/11/2014 11:58:20",8,"Basic fetcher cache functionality ""Add a flag to CommandInfo URI protobufs that indicates that files downloaded by the fetcher shall be cached in a repository. To be followed by MESOS-2057 for concurrency control. Also see MESOS-336 for the overall goals for the fetcher cache.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2070","11/11/2014 11:58:34",2,"Implement simple slave recovery behavior for fetcher cache ""Clean the fetcher cache completely upon slave restart/recovery. This implements correct, albeit not ideal behavior. More efficient schemes that restore knowledge about cached files or even resume downloads can be added later. 
""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2072","11/11/2014 15:00:33",8,"Fetcher cache eviction ""Delete files from the fetcher cache so that a given cache size is never exceeded. Succeed in doing so while concurrent downloads are on their way and new requests are pouring in. Idea: measure the size of each download before it begins, make enough room before the download. This means that only download mechanisms that divulge the size before the main download will be supported. AFAWK, those in use so far have this property. The calculation of how much space to free needs to be under concurrency control, accumulating all space needed for competing, incomplete download requests. (The Python script that performs fetcher caching for Aurora does not seem to implement this. See https://gist.github.com/zmanji/f41df77510ef9d00265a, imagine several of these programs running concurrently, each one's _cache_eviction() call succeeding, each perceiving the SAME free space being available.) Ultimately, a conflict resolution strategy is needed if just the downloads underway already exceed the cache capacity. Then, as a fallback, direct download into the work directory will be used for some tasks. TBD how to pick which task gets treated how. At first, only support copying of any downloaded files to the work directory for task execution. This isolates the task life cycle after starting a task from cache eviction considerations. (Later, we can add symbolic links that avoid copying. But then eviction of fetched files used by ongoing tasks must be blocked, which adds complexity. another future extension is MESOS-1667 """"Extract from URI while downloading into work dir""""). ""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2074","11/11/2014 16:59:11",5,"Fetcher cache test fixture ""To accelerate providing good test coverage for the fetcher cache (MESOS-336), we can provide a framework that canonicalizes creating and running a number of tasks and allows easy parametrization with combinations of the following: - whether to cache or not - whether make what has been downloaded executable or not - whether to extract from an archive or not - whether to download from a file system, http, or... We can create a simple HHTP server in the test fixture to support the latter. Furthermore, the tests need to be robust wrt. varying numbers of StatusUpdate messages. An accumulating update message sink that reports the final state is needed. All this has already been programmed in this patch, just needs to be rebased: https://reviews.apache.org/r/21316/""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2075","11/11/2014 20:14:42",13,"Add maintenance information to the replicated registry. ""To achieve fault-tolerance for the maintenance primitives, we will need to add the maintenance information to the registry. The registry currently stores all of the slave information, which is quite large (~ 17MB for 50,000 slaves from my testing), which results in a protobuf object that is extremely expensive to copy. As far as I can tell, reads / writes to maintenance information is independent of reads / writes to the existing 'registry' information. So there are two approach here: h4. Add maintenance information to 'maintenance' key: # The advantage of this approach is that we don't further grow the large Registry object. 
# This approach assumes that writes to 'maintenance' are independent of writes to the 'registry'. -If these writes are not independent, this approach requires that we add transactional support to the State abstraction.- # -This approach requires adding compaction to LogStorage.- # This approach likely requires some refactoring to the Registrar. h4. Add maintenance information to 'registry' key: (This is the chosen method.) # The advantage of this approach is that it's the easiest to implement. # This will further grow the single 'registry' object, but doesn't preclude it being split apart in the future. # This approach may require using the diff support in LogStorage and/or adding compression support to LogStorage snapshots to deal with the increased size of the registry.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2076","11/11/2014 20:42:54",13,"Implement maintenance primitives in the Master. ""The master will need to do a number of things to implement the maintenance primitives: # For machines that have a maintenance window: #* Disambiguate machines to agents. #* For unused resources, offers must be augmented with an Unavailability. #* For used resources, inverse offers must be sent. # For inverse offers: #* Filter them before sending them again. #* For declined inverse offers, do something with the reason (store or log). # Recover the maintenance information upon failover. Note: Some amount of this logic will need to be placed in the allocator.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2078","11/11/2014 21:07:19",3,"Scheduler driver may ACK status updates when the scheduler threw an exception ""[~vinodkone] discovered that this can happen if the scheduler calls {{SchedulerDriver#stop}} before or while handling {{Scheduler#statusUpdate}}. In src/sched/sched.cpp: The driver invokes {{statusUpdate}} and later checks the {{aborted}} flag to determine whether to send an ACK. In src/java/jni/org_apache_mesos_MesosSchedulerDriver.cpp: The {{statusUpdate}} implementation checks for an exception and invokes {{driver->abort()}}. In src/sched/sched.cpp: The {{abort()}} implementation exits early if {{status != DRIVER_RUNNING}}, and *does not set the aborted flag*. As a result, the code will ACK despite an exception being thrown."""," void statusUpdate( const UPID& from, const StatusUpdate& update, const UPID& pid) { ... scheduler->statusUpdate(driver, status); VLOG(1) << """"Scheduler::statusUpdate took """" << stopwatch.elapsed(); // Note that we need to look at the volatile 'aborted' here to // so that we don't acknowledge the update if the driver was // aborted during the processing of the update. if (aborted) { VLOG(1) << """"Not sending status update acknowledgment message because """" << """"the driver is aborted!""""; return; } ... 
void JNIScheduler::statusUpdate(SchedulerDriver* driver, const TaskStatus& status) { jvm->AttachCurrentThread(JNIENV_CAST(&env), NULL); jclass clazz = env->GetObjectClass(jdriver); jfieldID scheduler = env->GetFieldID(clazz, """"scheduler"""", """"Lorg/apache/mesos/Scheduler;""""); jobject jscheduler = env->GetObjectField(jdriver, scheduler); clazz = env->GetObjectClass(jscheduler); // scheduler.statusUpdate(driver, status); jmethodID statusUpdate = env->GetMethodID(clazz, """"statusUpdate"""", """"(Lorg/apache/mesos/SchedulerDriver;"""" """"Lorg/apache/mesos/Protos$TaskStatus;)V""""); jobject jstatus = convert(env, status); env->ExceptionClear(); env->CallVoidMethod(jscheduler, statusUpdate, jdriver, jstatus); if (env->ExceptionCheck()) { env->ExceptionDescribe(); env->ExceptionClear(); jvm->DetachCurrentThread(); driver->abort(); return; } jvm->DetachCurrentThread(); } Status MesosSchedulerDriver::abort() { Lock lock(&mutex); if (status != DRIVER_RUNNING) { return status; } CHECK(process != NULL); // We set the volatile aborted to true here to prevent any further // messages from being processed in the SchedulerProcess. However, // if abort() is called from another thread as the SchedulerProcess, // there may be at most one additional message processed. // TODO(bmahler): Use an atomic boolean. process->aborted = true; // Dispatching here ensures that we still process the outstanding // requests *from* the scheduler, since those do proceed when // aborted is true. dispatch(process, &SchedulerProcess::abort); return status = DRIVER_ABORTED; } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2082","11/12/2014 00:01:18",5,"Update the webui to include maintenance information. ""The simplest thing here would probably be to include another tab in the header for maintenance information. We could also consider adding maintenance information inline to the slaves table. Depending on how this is done, the maintenance tab could actually be a subset of the slaves table; only those slaves for which there is maintenance information.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2083","11/12/2014 00:43:52",8,"Add documentation for maintenance primitives. ""We should provide some guiding documentation around the upcoming maintenance primitives in Mesos. Specifically, we should ensure that general users, framework developers, and operators understand the notion of maintenance in Mesos. Some guidance and recommendations for the latter two audiences will be necessary.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2099","11/13/2014 00:57:06",8,"Support acquiring/releasing resources with DiskInfo in allocator. ""The allocator needs to be changed because the resources are changing while we acquiring or releasing persistent disk resources (resources with DiskInfo). For example, when we release a persistent disk resource, we are changing the release with DiskInfo to a resource with the DiskInfo.""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2100","11/13/2014 01:00:36",8,"Implement master to slave protocol for persistent disk resources. ""We need to do the following: 1) Slave needs to send persisted resources when registering (or re-registering). 
2) Master needs to send total persisted resources to slave by either re-using RunTask/UpdateFrameworkInfo or introduce new type of messages (like UpdateResources).""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2103","11/13/2014 19:17:47",2,"Expose number of processes and threads in a container ""The CFS cpu statistics (cpus_nr_throttled, cpus_nr_periods, cpus_throttled_time) are difficult to interpret. 1) nr_throttled is the number of intervals where *any* throttling occurred 2) throttled_time is the aggregate time *across all runnable tasks* (tasks in the Linux sense). For example, in a typical 60 second sampling interval: nr_periods = 600, nr_throttled could be 60, i.e., 10% of intervals, but throttled_time could be much higher than (60/600) * 60 = 6 seconds if there is more than one task that is runnable but throttled. *Each* throttled task contributes to the total throttled time. Small test to demonstrate throttled_time > nr_periods * quota_interval: 5 x {{'openssl speed'}} running with quota=100ms: All 10 intervals throttled (100%) for total time of 2.8 seconds in 1 second (""""more than 100%"""" of the time interval) It would be helpful to expose the number of processes and tasks in the container cgroup. This would be at a very coarse granularity but would give some guidance."""," cat cpu.stat && sleep 1 && cat cpu.stat nr_periods 3228 nr_throttled 1276 throttled_time 528843772540 nr_periods 3238 nr_throttled 1286 throttled_time 531668964667 ",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2104","11/13/2014 22:50:21",3,"Correct naming of cgroup memory statistics ""mem_rss_bytes is *not* RSS but is the total memory usage (memory.usage_in_bytes) of the cgroup, including file cache etc. Actual RSS is reported as mem_anon_bytes. These, and others, should be consistently named.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2110","11/14/2014 02:11:07",8,"Configurable Ping Timeouts ""After a series of ping-failures, the master considers the slave lost and calls shutdownSlave, requiring such a slave that reconnects to kill its tasks and re-register as a new slaveId. On the other side, after a similar timeout, the slave will consider the master lost and try to detect a new master. These timeouts are currently hardcoded constants (5 * 15s), which may not be well-suited for all scenarios. - Some clusters may tolerate a longer slave process restart period, and wouldn't want tasks to be killed upon reconnect. - Some clusters may have higher-latency networks (e.g. cross-datacenter, or for volunteer computing efforts), and would like to tolerate longer periods without communication. We should provide flags/mechanisms on the master to control its tolerance for non-communicative slaves, and (less importantly?) on the slave to tolerate missing masters.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2127","11/18/2014 22:56:01",3,"killTask() should perform reconciliation for unknown tasks. ""Currently, {{killTask}} uses its own reconciliation logic, which has diverged from the {{reconcileTasks}} logic. Specifically, when the task is unknown and a non-strict registry is in use, {{killTask}} will not send TASK_LOST whereas {{reconcileTask}} will. We should make these consistent. 
""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2128","11/19/2014 01:18:43",2,"Turning on cgroups_limit_swap effectively disables memory isolation ""Our test runs show that enabling cgroups_limit_swap effectively disables memory isolation altogether. Per: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/sec-memory.html """"It is important to set the memory.limit_in_bytes parameter before setting the memory.memsw.limit_in_bytes parameter: attempting to do so in the reverse order results in an error. This is because memory.memsw.limit_in_bytes becomes available only after all memory limitations (previously set in memory.limit_in_bytes) are exhausted."""" Looks like the flag sets """"memory.memsw.limit_in_bytes"""" if true and """"memory.limit_in_bytes"""" if false, but should always set """"memory.limit_in_bytes"""" and in addition set """"memory.memsw.limit_in_bytes"""" if true. Otherwise the limits won't be set and enforced. See: https://github.com/apache/mesos/blob/c8598f7f5a24a01b6a68e0f060b79662ee97af89/src/slave/containerizer/isolators/cgroups/mem.cpp#L365 ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2136","11/19/2014 22:21:22",5,"Expose per-cgroup memory pressure ""The cgroup memory controller can provide information on the memory pressure of a cgroup. This is in the form of an event based notification where events of (low, medium, critical) are generated when the kernel makes specific actions to allocate memory. This signal is probably more informative than comparing memory usage to memory limit. ""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2139","11/20/2014 00:13:59",5,"Enable the master to handle reservation operations ""master's {{_accept}} function currently only handles {{Create}} and {{Destroy}} operations which exist for persistent volumes. We need to handle the {{Reserve}} and {{Unreserve}} operations for dynamic reservations as well. In addition, we need to add {{validate}} functions for the reservation operations.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2144","11/20/2014 18:56:19",8,"Segmentation Fault in ExamplesTest.LowLevelSchedulerPthread ""Occured on review bot review of: https://reviews.apache.org/r/28262/#review62333 The review doesn't touch code related to the test (And doesn't break libprocess in general) [ RUN ] ExamplesTest.LowLevelSchedulerPthread ../../src/tests/script.cpp:83: Failure Failed low_level_scheduler_pthread_test.sh terminated with signal Segmentation fault [ FAILED ] ExamplesTest.LowLevelSchedulerPthread (7561 ms) The test ""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2176","12/05/2014 19:36:38",5,"Hierarchical allocator inconsistently accounts for reserved resources. ""Looking through the allocator code for MESOS-2099, I see an issue with respect to accounting reserved resources in the sorters: Within {{HierarchicalAllocatorProcess::allocate}}, only unreserved resources are accounted for in the sorters, whereas everywhere else (add/remove framework, add/remove slave) we account for both reserved and unreserved. From git blame, it looks like this issue was introduced over a long course of refactoring and fixes to the allocator. 
My guess is that this was never caught due to the lack of unit-testability of the allocator (unnecessarily requires a master PID to use an allocator). From my understanding, the two levels of the hierarchical sorter should have the following semantics: # Level 1 sorts across roles. Only unreserved resources are shared across roles, and therefore the """"role sorter"""" for level 1 should only account for the unreserved resource pool. # Level 2 sorts across frameworks, within a role. Both unreserved and reserved resources are shared across frameworks within a role, and therefore the """"framework sorters"""" for level 2 should each account for the reserved resource pool for the role, as well as the unreserved resources _allocated_ inside the role.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2182","12/09/2014 01:04:03",3,"Performance issue in libprocess SocketManager. ""Noticed an issue in production under which the master is slow to respond after failover for ~15 minutes. After looking at some perf data, the top offender is: It appears that in the SocketManager, whenever an internal Process exits, we loop over all the links unnecessarily: On clusters with 10,000s of slaves, this means we hold the socket manager lock for a very expensive loop erasing nothing from a set! This is because, the master contains links from the Master Process to each slave. However, when a random ephemeral Process terminates, we don't need to loop over each slave link. While we hold this lock, the following calls will block: As a result, the slave observers and the master can block calling send()! Short term, we will try to fix this issue by removing the unnecessary looping. Longer term, it would be nice to avoid all this locking when sending on independent sockets."""," 12.02% mesos-master libmesos-0.21.0-rc3.so [.] std::_Rb_tree, std::less, std::allocator >::erase(process::ProcessBase* const&) ... 3.29% mesos-master libmesos-0.21.0-rc3.so [.] process::SocketManager::exited(process::ProcessBase*) void SocketManager::exited(ProcessBase* process) { // An exited event is enough to cause the process to get deleted // (e.g., by the garbage collector), which means we can't // dereference process (or even use the address) after we enqueue at // least one exited event. Thus, we save the process pid. const UPID pid = process->pid; // Likewise, we need to save the current time of the process so we // can update the clocks of linked processes as appropriate. const Time time = Clock::now(process); synchronized (this) { // Iterate through the links, removing any links the process might // have had and creating exited events for any linked processes. foreachpair (const UPID& linkee, set& processes, links) { processes.erase(process); if (linkee == pid) { foreach (ProcessBase* linker, processes) { CHECK(linker != process) << """"Process linked with itself""""; synchronized (timeouts) { if (Clock::paused()) { Clock::update(linker, time); } } linker->enqueue(new ExitedEvent(linkee)); } } } links.erase(pid); } } class SocketManager { public: Socket accepted(int s); void link(ProcessBase* process, const UPID& to); PID proxy(const Socket& socket); void send(Encoder* encoder, bool persist); void send(const Response& response, const Request& request, const Socket& socket); void send(Message* message); Encoder* next(int s); void close(int s); void exited(const Node& node); void exited(ProcessBase* process); ... 
",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2184","12/09/2014 22:24:36",1,"deprecate unused flag 'cgroups_subsystems' ""cgroups_subsystems is a slave flag that is no longer used and should be deprecated.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2191","12/12/2014 05:12:22",3,"Add ContainerId to the TaskStatus message ""{{TaskStatus}} provides the frameworks with certain information ({{executorId}}, {{slaveId}}, etc.) which is useful when collecting statistics about cluster performance; however, it is difficult to associate tasks to the container it is executed since this information stays always within mesos itself. Therefore it would be good to provide the framework scheduler with this information, adding a new field in the {{TaskStatus}} message. See comments for a use case.""","",0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2199","12/19/2014 01:31:56",2,"Failing test: SlaveTest.ROOT_RunTaskWithCommandInfoWithUser ""Appears that running the executor as {{nobody}} is not supported. [~nnielsen] can you take a look? Executor log: Test output: """," [root@hostname build]# cat /tmp/SlaveTest_ROOT_RunTaskWithCommandInfoWithUser_cxF1dY/slaves/20141219-005206-2081170186-60487-11862-S0/frameworks/20141219-005206-2081170186-60 487-11862-0000/executors/1/runs/latest/std* sh: /home/idownes/workspace/mesos/build/src/mesos-executor: Permission denied [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from SlaveTest [ RUN ] SlaveTest.ROOT_RunTaskWithCommandInfoWithUser ../../src/tests/slave_tests.cpp:680: Failure Value of: statusRunning.get().state() Actual: TASK_FAILED Expected: TASK_RUNNING ../../src/tests/slave_tests.cpp:682: Failure Failed to wait 10secs for statusFinished ../../src/tests/slave_tests.cpp:673: Failure Actual function call count doesn't match EXPECT_CALL(sched, statusUpdate(&driver, _))... Expected: to be called twice Actual: called once - unsatisfied and active [ FAILED ] SlaveTest.ROOT_RunTaskWithCommandInfoWithUser (10641 ms) [----------] 1 test from SlaveTest (10641 ms total) [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. (10658 ms total) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2201","12/22/2014 22:22:36",3,"ReplicaTest.Restore fails with leveldb greater than v1.7. ""I wanted to configure Mesos with system provided leveldb libraries when I ran into this issue. Apparently, if one does {{../configure --with-leveldb=/path/to/leveldb}}, compilation succeeds, however the """"ReplicaTest_Restore"""" test fails with the following back trace: The bundled version of leveldb is v1.4. I tested version 1.5 and that seems to work. However, v1.6 had some build issues and us unusable with Mesos. 
The next version v1.7, allows Mesos to compile fine but results in the above error."""," [ RUN ] ReplicaTest.Restore Using temporary directory '/tmp/ReplicaTest_Restore_IZbbRR' I1222 14:16:49.517500 2927 leveldb.cpp:176] Opened db in 10.758917ms I1222 14:16:49.526495 2927 leveldb.cpp:183] Compacted db in 8.931146ms I1222 14:16:49.526523 2927 leveldb.cpp:198] Created db iterator in 5787ns I1222 14:16:49.526531 2927 leveldb.cpp:204] Seeked to beginning of db in 511ns I1222 14:16:49.526535 2927 leveldb.cpp:273] Iterated through 0 keys in the db in 197ns I1222 14:16:49.526623 2927 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I1222 14:16:49.530972 2945 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 3.084458ms I1222 14:16:49.531008 2945 replica.cpp:320] Persisted replica status to VOTING I1222 14:16:49.541263 2927 leveldb.cpp:176] Opened db in 9.980586ms I1222 14:16:49.551636 2927 leveldb.cpp:183] Compacted db in 10.348096ms I1222 14:16:49.551683 2927 leveldb.cpp:198] Created db iterator in 3405ns I1222 14:16:49.551693 2927 leveldb.cpp:204] Seeked to beginning of db in 3559ns I1222 14:16:49.551728 2927 leveldb.cpp:273] Iterated through 1 keys in the db in 29722ns I1222 14:16:49.551751 2927 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I1222 14:16:49.551996 2947 replica.cpp:474] Replica received implicit promise request with proposal 1 I1222 14:16:49.560921 2947 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 8.899591ms I1222 14:16:49.560940 2947 replica.cpp:342] Persisted promised to 1 I1222 14:16:49.561338 2943 replica.cpp:508] Replica received write request for position 1 I1222 14:16:49.568677 2943 leveldb.cpp:343] Persisting action (27 bytes) to leveldb took 7.287155ms I1222 14:16:49.568692 2943 replica.cpp:676] Persisted action at 1 I1222 14:16:49.569042 2942 leveldb.cpp:438] Reading position from leveldb took 26339ns F1222 14:16:49.569411 2927 replica.cpp:721] CHECK_SOME(state): IO error: lock /tmp/ReplicaTest_Restore_IZbbRR/.log/LOCK: already held by process Failed to recover the log *** Check failure stack trace: *** @ 0x7f7f6c53e688 google::LogMessage::Fail() @ 0x7f7f6c53e5e7 google::LogMessage::SendToLog() @ 0x7f7f6c53dff8 google::LogMessage::Flush() @ 0x7f7f6c540d2c google::LogMessageFatal::~LogMessageFatal() @ 0x90a520 _CheckFatal::~_CheckFatal() @ 0x7f7f6c400f4d mesos::internal::log::ReplicaProcess::restore() @ 0x7f7f6c3fd763 mesos::internal::log::ReplicaProcess::ReplicaProcess() @ 0x7f7f6c401271 mesos::internal::log::Replica::Replica() @ 0xcd7ca3 ReplicaTest_Restore_Test::TestBody() @ 0x10934b2 testing::internal::HandleSehExceptionsInMethodIfSupported<>() @ 0x108e584 testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x10768fd testing::Test::Run() @ 0x1077020 testing::TestInfo::Run() @ 0x10775a8 testing::TestCase::Run() @ 0x107c324 testing::internal::UnitTestImpl::RunAllTests() @ 0x1094348 testing::internal::HandleSehExceptionsInMethodIfSupported<>() @ 0x108f2b7 testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x107b1d4 testing::UnitTest::Run() @ 0xd344a9 main @ 0x7f7f66fdfb45 __libc_start_main @ 0x8f3549 (unknown) @ (nil) (unknown) [2] 2927 abort (core dumped) GLOG_logtostderr=1 GTEST_v=10 ./bin/mesos-tests.sh --verbose ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2205","12/30/2014 02:01:38",2,"Add user documentation for reservations ""Add a user guide for reservations which describes 
basic usage of them, how ACLs are used to specify who can unreserve whose resources, and few advanced usage cases.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2215","01/13/2015 16:06:16",8,"The Docker containerizer attempts to recover any task when checkpointing is enabled, not just docker tasks. ""Once the slave restarts and recovers the task, I see this error in the log for all tasks that were recovered every second or so. Note, these were NOT docker tasks: W0113 16:01:00.790323 773142 monitor.cpp:213] Failed to get resource usage for container 7b729b89-dc7e-4d08-af97-8cd1af560a21 for executor thermos-1421085237813-slipstream-prod-agent-3-8f769514-1835-4151-90d0-3f55dcc940dd of framework 20150109-161713-715350282-5050-290797-0000: Failed to 'docker inspect mesos-7b729b89-dc7e-4d08-af97-8cd1af560a21': exit status = exited with status 1 stderr = Error: No such image or container: mesos-7b729b89-dc7e-4d08-af97-8cd1af560a21 However the tasks themselves are still healthy and running. The slave was launched with --containerizers=mesos,docker ----- More info: it looks like the docker containerizer is a little too ambitious about recovering containers, again this was not a docker task: I0113 15:59:59.476145 773142 docker.cpp:814] Recovering container '7b729b89-dc7e-4d08-af97-8cd1af560a21' for executor 'thermos-1421085237813-slipstream-prod-agent-3-8f769514-1835-4151-90d0-3f55dcc940dd' of framework 20150109-161713-715350282-5050-290797-0000 Looking into the source, it looks like the problem is that the ComposingContainerize runs recover in parallel, but neither the docker containerizer nor mesos containerizer check if they should recover the task or not (because they were the ones that launched it). Perhaps this needs to be written into the checkpoint somewhere?""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2225","01/15/2015 20:23:01",2,"FaultToleranceTest.ReregisterFrameworkExitedExecutor is flaky ""Observed this on internal CI. 
"""," [ RUN ] FaultToleranceTest.ReregisterFrameworkExitedExecutor Using temporary directory '/tmp/FaultToleranceTest_ReregisterFrameworkExitedExecutor_yNprKi' I0114 18:50:51.461186 4720 leveldb.cpp:176] Opened db in 4.866948ms I0114 18:50:51.462057 4720 leveldb.cpp:183] Compacted db in 472256ns I0114 18:50:51.462514 4720 leveldb.cpp:198] Created db iterator in 42905ns I0114 18:50:51.462784 4720 leveldb.cpp:204] Seeked to beginning of db in 21630ns I0114 18:50:51.463068 4720 leveldb.cpp:273] Iterated through 0 keys in the db in 19967ns I0114 18:50:51.463485 4720 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0114 18:50:51.464555 4737 recover.cpp:449] Starting replica recovery I0114 18:50:51.465188 4737 recover.cpp:475] Replica is in EMPTY status I0114 18:50:51.467324 4741 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request I0114 18:50:51.470118 4736 recover.cpp:195] Received a recover response from a replica in EMPTY status I0114 18:50:51.475424 4739 recover.cpp:566] Updating replica status to STARTING I0114 18:50:51.476553 4739 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 107545ns I0114 18:50:51.476862 4739 replica.cpp:323] Persisted replica status to STARTING I0114 18:50:51.477309 4739 recover.cpp:475] Replica is in STARTING status I0114 18:50:51.479109 4734 replica.cpp:641] Replica in STARTING status received a broadcasted recover request I0114 18:50:51.481274 4738 recover.cpp:195] Received a recover response from a replica in STARTING status I0114 18:50:51.482324 4738 recover.cpp:566] Updating replica status to VOTING I0114 18:50:51.482913 4738 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 66011ns I0114 18:50:51.483186 4738 replica.cpp:323] Persisted replica status to VOTING I0114 18:50:51.483608 4738 recover.cpp:580] Successfully joined the Paxos group I0114 18:50:51.484031 4738 recover.cpp:464] Recover process terminated I0114 18:50:51.554949 4734 master.cpp:262] Master 20150114-185051-2272962752-57018-4720 (fedora-19) started on 192.168.122.135:57018 I0114 18:50:51.555785 4734 master.cpp:308] Master only allowing authenticated frameworks to register I0114 18:50:51.556046 4734 master.cpp:313] Master only allowing authenticated slaves to register I0114 18:50:51.556426 4734 credentials.hpp:36] Loading credentials for authentication from '/tmp/FaultToleranceTest_ReregisterFrameworkExitedExecutor_yNprKi/credentials' I0114 18:50:51.557003 4734 master.cpp:357] Authorization enabled I0114 18:50:51.558007 4737 hierarchical_allocator_process.hpp:285] Initialized hierarchical allocator process I0114 18:50:51.558521 4741 whitelist_watcher.cpp:65] No whitelist given I0114 18:50:51.562185 4734 master.cpp:1219] The newly elected leader is master@192.168.122.135:57018 with id 20150114-185051-2272962752-57018-4720 I0114 18:50:51.562680 4734 master.cpp:1232] Elected as the leading master! 
I0114 18:50:51.562950 4734 master.cpp:1050] Recovering from registrar I0114 18:50:51.564506 4736 registrar.cpp:313] Recovering registrar I0114 18:50:51.566162 4737 log.cpp:660] Attempting to start the writer I0114 18:50:51.568691 4741 replica.cpp:477] Replica received implicit promise request with proposal 1 I0114 18:50:51.569154 4741 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 106885ns I0114 18:50:51.569504 4741 replica.cpp:345] Persisted promised to 1 I0114 18:50:51.573277 4740 coordinator.cpp:230] Coordinator attemping to fill missing position I0114 18:50:51.575623 4739 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 I0114 18:50:51.576133 4739 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 86360ns I0114 18:50:51.576449 4739 replica.cpp:679] Persisted action at 0 I0114 18:50:51.586966 4736 replica.cpp:511] Replica received write request for position 0 I0114 18:50:51.587666 4736 leveldb.cpp:438] Reading position from leveldb took 60621ns I0114 18:50:51.588043 4736 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 81094ns I0114 18:50:51.588374 4736 replica.cpp:679] Persisted action at 0 I0114 18:50:51.589418 4736 replica.cpp:658] Replica received learned notice for position 0 I0114 18:50:51.590428 4736 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 106648ns I0114 18:50:51.590840 4736 replica.cpp:679] Persisted action at 0 I0114 18:50:51.591104 4736 replica.cpp:664] Replica learned NOP action at position 0 I0114 18:50:51.592260 4734 log.cpp:676] Writer started with ending position 0 I0114 18:50:51.594172 4739 leveldb.cpp:438] Reading position from leveldb took 52163ns I0114 18:50:51.600744 4736 registrar.cpp:346] Successfully fetched the registry (0B) in 35968us I0114 18:50:51.601646 4736 registrar.cpp:445] Applied 1 operations in 184502ns; attempting to update the 'registry' I0114 18:50:51.604329 4737 log.cpp:684] Attempting to append 130 bytes to the log I0114 18:50:51.604966 4737 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0114 18:50:51.606449 4737 replica.cpp:511] Replica received write request for position 1 I0114 18:50:51.606937 4737 leveldb.cpp:343] Persisting action (149 bytes) to leveldb took 84877ns I0114 18:50:51.607199 4737 replica.cpp:679] Persisted action at 1 I0114 18:50:51.611934 4741 replica.cpp:658] Replica received learned notice for position 1 I0114 18:50:51.612423 4741 leveldb.cpp:343] Persisting action (151 bytes) to leveldb took 113059ns I0114 18:50:51.612794 4741 replica.cpp:679] Persisted action at 1 I0114 18:50:51.613056 4741 replica.cpp:664] Replica learned APPEND action at position 1 I0114 18:50:51.614598 4741 log.cpp:703] Attempting to truncate the log to 1 I0114 18:50:51.615157 4741 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0114 18:50:51.616458 4737 replica.cpp:511] Replica received write request for position 2 I0114 18:50:51.616902 4737 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 71716ns I0114 18:50:51.617168 4737 replica.cpp:679] Persisted action at 2 I0114 18:50:51.618505 4740 replica.cpp:658] Replica received learned notice for position 2 I0114 18:50:51.619031 4740 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 78481ns I0114 18:50:51.619567 4740 leveldb.cpp:401] Deleting ~1 keys from leveldb took 59638ns I0114 18:50:51.619832 4740 replica.cpp:679] Persisted action at 2 I0114 18:50:51.620101 4740 replica.cpp:664] Replica learned TRUNCATE action at 
position 2 I0114 18:50:51.621757 4736 registrar.cpp:490] Successfully updated the 'registry' in 19.78496ms I0114 18:50:51.622658 4736 registrar.cpp:376] Successfully recovered registrar I0114 18:50:51.623261 4736 master.cpp:1077] Recovered 0 slaves from the Registry (94B) ; allowing 10mins for slaves to re-register I0114 18:50:51.670349 4739 slave.cpp:173] Slave started on 115)@192.168.122.135:57018 I0114 18:50:51.671133 4739 credentials.hpp:84] Loading credential for authentication from '/tmp/FaultToleranceTest_ReregisterFrameworkExitedExecutor_ONrVug/credential' I0114 18:50:51.671685 4739 slave.cpp:282] Slave using credential for: test-principal I0114 18:50:51.672245 4739 slave.cpp:300] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0114 18:50:51.673360 4739 slave.cpp:329] Slave hostname: fedora-19 I0114 18:50:51.673660 4739 slave.cpp:330] Slave checkpoint: false W0114 18:50:51.674052 4739 slave.cpp:332] Disabling checkpointing is deprecated and the --checkpoint flag will be removed in a future release. Please avoid using this flag I0114 18:50:51.677234 4737 state.cpp:33] Recovering state from '/tmp/FaultToleranceTest_ReregisterFrameworkExitedExecutor_ONrVug/meta' I0114 18:50:51.684973 4739 status_update_manager.cpp:197] Recovering status update manager I0114 18:50:51.687644 4739 slave.cpp:3519] Finished recovery I0114 18:50:51.688698 4737 slave.cpp:613] New master detected at master@192.168.122.135:57018 I0114 18:50:51.688902 4734 status_update_manager.cpp:171] Pausing sending status updates I0114 18:50:51.689482 4737 slave.cpp:676] Authenticating with master master@192.168.122.135:57018 I0114 18:50:51.689910 4737 slave.cpp:681] Using default CRAM-MD5 authenticatee I0114 18:50:51.690577 4741 authenticatee.hpp:138] Creating new client SASL connection I0114 18:50:51.691453 4737 slave.cpp:649] Detecting new master I0114 18:50:51.691864 4741 master.cpp:4130] Authenticating slave(115)@192.168.122.135:57018 I0114 18:50:51.692369 4741 master.cpp:4141] Using default CRAM-MD5 authenticator I0114 18:50:51.693208 4741 authenticator.hpp:170] Creating new server SASL connection I0114 18:50:51.694598 4738 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0114 18:50:51.694893 4738 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0114 18:50:51.695329 4741 authenticator.hpp:276] Received SASL authentication start I0114 18:50:51.695641 4741 authenticator.hpp:398] Authentication requires more steps I0114 18:50:51.696028 4736 authenticatee.hpp:275] Received SASL authentication step I0114 18:50:51.696486 4741 authenticator.hpp:304] Received SASL authentication step I0114 18:50:51.696753 4741 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0114 18:50:51.697041 4741 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0114 18:50:51.697343 4741 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0114 18:50:51.697685 4741 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0114 18:50:51.697998 4741 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0114 18:50:51.698251 4741 auxprop.cpp:121] Skipping auxiliary property 
'*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0114 18:50:51.698580 4741 authenticator.hpp:390] Authentication success I0114 18:50:51.698927 4735 authenticatee.hpp:315] Authentication success I0114 18:50:51.705123 4741 master.cpp:4188] Successfully authenticated principal 'test-principal' at slave(115)@192.168.122.135:57018 I0114 18:50:51.705847 4720 sched.cpp:151] Version: 0.22.0 I0114 18:50:51.707159 4736 sched.cpp:248] New master detected at master@192.168.122.135:57018 I0114 18:50:51.707523 4736 sched.cpp:304] Authenticating with master master@192.168.122.135:57018 I0114 18:50:51.707792 4736 sched.cpp:311] Using default CRAM-MD5 authenticatee I0114 18:50:51.708412 4736 authenticatee.hpp:138] Creating new client SASL connection I0114 18:50:51.709316 4735 slave.cpp:747] Successfully authenticated with master master@192.168.122.135:57018 I0114 18:50:51.709723 4737 master.cpp:4130] Authenticating scheduler-092fbbec-0938-4355-8187-fb92e5174c64@192.168.122.135:57018 I0114 18:50:51.710274 4737 master.cpp:4141] Using default CRAM-MD5 authenticator I0114 18:50:51.710739 4735 slave.cpp:1075] Will retry registration in 17.028024ms if necessary I0114 18:50:51.711304 4737 master.cpp:3276] Registering slave at slave(115)@192.168.122.135:57018 (fedora-19) with id 20150114-185051-2272962752-57018-4720-S0 I0114 18:50:51.711459 4738 authenticator.hpp:170] Creating new server SASL connection I0114 18:50:51.713142 4739 registrar.cpp:445] Applied 1 operations in 100530ns; attempting to update the 'registry' I0114 18:50:51.713465 4738 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0114 18:50:51.715435 4738 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0114 18:50:51.715963 4740 authenticator.hpp:276] Received SASL authentication start I0114 18:50:51.716258 4740 authenticator.hpp:398] Authentication requires more steps I0114 18:50:51.716524 4740 authenticatee.hpp:275] Received SASL authentication step I0114 18:50:51.716784 4740 authenticator.hpp:304] Received SASL authentication step I0114 18:50:51.716979 4740 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0114 18:50:51.717139 4740 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0114 18:50:51.717315 4740 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0114 18:50:51.717542 4740 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0114 18:50:51.717703 4740 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0114 18:50:51.717864 4740 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0114 18:50:51.718040 4740 authenticator.hpp:390] Authentication success I0114 18:50:51.718292 4740 authenticatee.hpp:315] Authentication success I0114 18:50:51.718454 4738 master.cpp:4188] Successfully authenticated principal 'test-principal' at scheduler-092fbbec-0938-4355-8187-fb92e5174c64@192.168.122.135:57018 I0114 18:50:51.719012 4740 sched.cpp:392] Successfully authenticated with master master@192.168.122.135:57018 I0114 18:50:51.719364 4740 sched.cpp:515] Sending registration request to master@192.168.122.135:57018 I0114 18:50:51.719702 4740 
sched.cpp:548] Will retry registration in 746.539282ms if necessary I0114 18:50:51.719902 4735 master.cpp:1417] Received registration request for framework 'default' at scheduler-092fbbec-0938-4355-8187-fb92e5174c64@192.168.122.135:57018 I0114 18:50:51.720232 4735 master.cpp:1298] Authorizing framework principal 'test-principal' to receive offers for role '*' I0114 18:50:51.722206 4735 master.cpp:1481] Registering framework 20150114-185051-2272962752-57018-4720-0000 (default) at scheduler-092fbbec-0938-4355-8187-fb92e5174c64@192.168.122.135:57018 I0114 18:50:51.720927 4737 log.cpp:684] Attempting to append 300 bytes to the log I0114 18:50:51.722924 4737 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I0114 18:50:51.724269 4737 replica.cpp:511] Replica received write request for position 3 I0114 18:50:51.724817 4737 leveldb.cpp:343] Persisting action (319 bytes) to leveldb took 116638ns I0114 18:50:51.728560 4737 replica.cpp:679] Persisted action at 3 I0114 18:50:51.726066 4736 sched.cpp:442] Framework registered with 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.728879 4736 sched.cpp:456] Scheduler::registered took 34885ns I0114 18:50:51.725520 4735 hierarchical_allocator_process.hpp:319] Added framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.731864 4735 hierarchical_allocator_process.hpp:839] No resources available to allocate! I0114 18:50:51.732038 4735 hierarchical_allocator_process.hpp:746] Performed allocation for 0 slaves in 214728ns I0114 18:50:51.733106 4738 replica.cpp:658] Replica received learned notice for position 3 I0114 18:50:51.733340 4738 leveldb.cpp:343] Persisting action (321 bytes) to leveldb took 83165ns I0114 18:50:51.733538 4738 replica.cpp:679] Persisted action at 3 I0114 18:50:51.733705 4738 replica.cpp:664] Replica learned APPEND action at position 3 I0114 18:50:51.735610 4738 registrar.cpp:490] Successfully updated the 'registry' in 21.936128ms I0114 18:50:51.735805 4739 log.cpp:703] Attempting to truncate the log to 3 I0114 18:50:51.736445 4739 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I0114 18:50:51.737664 4739 replica.cpp:511] Replica received write request for position 4 I0114 18:50:51.738013 4739 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 72906ns I0114 18:50:51.738255 4739 replica.cpp:679] Persisted action at 4 I0114 18:50:51.743397 4734 replica.cpp:658] Replica received learned notice for position 4 I0114 18:50:51.743628 4734 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 78832ns I0114 18:50:51.743837 4734 leveldb.cpp:401] Deleting ~2 keys from leveldb took 63991ns I0114 18:50:51.744004 4734 replica.cpp:679] Persisted action at 4 I0114 18:50:51.744168 4734 replica.cpp:664] Replica learned TRUNCATE action at position 4 I0114 18:50:51.745537 4738 master.cpp:3330] Registered slave 20150114-185051-2272962752-57018-4720-S0 at slave(115)@192.168.122.135:57018 (fedora-19) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0114 18:50:51.745968 4734 hierarchical_allocator_process.hpp:453] Added slave 20150114-185051-2272962752-57018-4720-S0 (fedora-19) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] available) I0114 18:50:51.746070 4735 slave.cpp:781] Registered with master master@192.168.122.135:57018; given slave ID 20150114-185051-2272962752-57018-4720-S0 I0114 18:50:51.751437 4741 status_update_manager.cpp:178] Resuming sending 
status updates I0114 18:50:51.752428 4740 master.cpp:4072] Sending 1 offers to framework 20150114-185051-2272962752-57018-4720-0000 (default) at scheduler-092fbbec-0938-4355-8187-fb92e5174c64@192.168.122.135:57018 I0114 18:50:51.753764 4740 sched.cpp:605] Scheduler::resourceOffers took 751714ns I0114 18:50:51.754812 4740 master.cpp:2541] Processing reply for offers: [ 20150114-185051-2272962752-57018-4720-O0 ] on slave 20150114-185051-2272962752-57018-4720-S0 at slave(115)@192.168.122.135:57018 (fedora-19) for framework 20150114-185051-2272962752-57018-4720-0000 (default) at scheduler-092fbbec-0938-4355-8187-fb92e5174c64@192.168.122.135:57018 I0114 18:50:51.755040 4740 master.cpp:2647] Authorizing framework principal 'test-principal' to launch task 0 as user 'jenkins' W0114 18:50:51.756431 4741 master.cpp:2124] Executor default for task 0 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W0114 18:50:51.756652 4741 master.cpp:2136] Executor default for task 0 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. I0114 18:50:51.757284 4741 master.hpp:766] Adding task 0 with resources cpus(*):1; mem(*):16 on slave 20150114-185051-2272962752-57018-4720-S0 (fedora-19) I0114 18:50:51.757733 4734 hierarchical_allocator_process.hpp:764] Performed allocation for slave 20150114-185051-2272962752-57018-4720-S0 in 9.535066ms I0114 18:50:51.758117 4735 slave.cpp:2588] Received ping from slave-observer(95)@192.168.122.135:57018 I0114 18:50:51.758630 4741 master.cpp:2897] Launching task 0 of framework 20150114-185051-2272962752-57018-4720-0000 (default) at scheduler-092fbbec-0938-4355-8187-fb92e5174c64@192.168.122.135:57018 with resources cpus(*):1; mem(*):16 on slave 20150114-185051-2272962752-57018-4720-S0 at slave(115)@192.168.122.135:57018 (fedora-19) I0114 18:50:51.759526 4741 hierarchical_allocator_process.hpp:610] Updated allocation of framework 20150114-185051-2272962752-57018-4720-0000 on slave 20150114-185051-2272962752-57018-4720-S0 from cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] to cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0114 18:50:51.759796 4737 slave.cpp:1130] Got assigned task 0 for framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.761184 4737 slave.cpp:1245] Launching task 0 for framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.763586 4741 hierarchical_allocator_process.hpp:653] Recovered cpus(*):1; mem(*):1008; disk(*):1024; ports(*):[31000-32000] (total allocatable: cpus(*):1; mem(*):1008; disk(*):1024; ports(*):[31000-32000]) on slave 20150114-185051-2272962752-57018-4720-S0 from framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.764034 4741 hierarchical_allocator_process.hpp:689] Framework 20150114-185051-2272962752-57018-4720-0000 filtered slave 20150114-185051-2272962752-57018-4720-S0 for 5secs I0114 18:50:51.764984 4737 slave.cpp:3921] Launching executor default of framework 20150114-185051-2272962752-57018-4720-0000 in work directory '/tmp/FaultToleranceTest_ReregisterFrameworkExitedExecutor_ONrVug/slaves/20150114-185051-2272962752-57018-4720-S0/frameworks/20150114-185051-2272962752-57018-4720-0000/executors/default/runs/dd104b76-b838-431e-ada9-ff7a2b07e694' I0114 18:50:51.775048 4737 exec.cpp:147] Version: 0.22.0 I0114 18:50:51.778069 4736 exec.cpp:197] Executor started at: executor(29)@192.168.122.135:57018 with pid 4720 I0114 
18:50:51.778722 4737 slave.cpp:1368] Queuing task '0' for executor default of framework '20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.779103 4737 slave.cpp:566] Successfully attached file '/tmp/FaultToleranceTest_ReregisterFrameworkExitedExecutor_ONrVug/slaves/20150114-185051-2272962752-57018-4720-S0/frameworks/20150114-185051-2272962752-57018-4720-0000/executors/default/runs/dd104b76-b838-431e-ada9-ff7a2b07e694' I0114 18:50:51.779470 4737 slave.cpp:1912] Got registration for executor 'default' of framework 20150114-185051-2272962752-57018-4720-0000 from executor(29)@192.168.122.135:57018 I0114 18:50:51.780288 4740 exec.cpp:221] Executor registered on slave 20150114-185051-2272962752-57018-4720-S0 I0114 18:50:51.782098 4740 exec.cpp:233] Executor::registered took 61371ns I0114 18:50:51.782616 4737 slave.cpp:2031] Flushing queued task 0 for executor 'default' of framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.783262 4741 exec.cpp:308] Executor asked to run task '0' I0114 18:50:51.783614 4741 exec.cpp:317] Executor::launchTask took 97020ns I0114 18:50:51.785373 4741 exec.cpp:540] Executor sending status update TASK_RUNNING (UUID: 3f6824a3-8a23-4029-8505-8eb5f72e472b) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.785995 4737 slave.cpp:2890] Monitoring executor 'default' of framework '20150114-185051-2272962752-57018-4720-0000' in container 'dd104b76-b838-431e-ada9-ff7a2b07e694' I0114 18:50:51.789064 4737 slave.cpp:2265] Handling status update TASK_RUNNING (UUID: 3f6824a3-8a23-4029-8505-8eb5f72e472b) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 from executor(29)@192.168.122.135:57018 I0114 18:50:51.789553 4735 status_update_manager.cpp:317] Received status update TASK_RUNNING (UUID: 3f6824a3-8a23-4029-8505-8eb5f72e472b) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.789827 4735 status_update_manager.cpp:494] Creating StatusUpdate stream for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.790329 4735 status_update_manager.cpp:371] Forwarding update TASK_RUNNING (UUID: 3f6824a3-8a23-4029-8505-8eb5f72e472b) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 to the slave I0114 18:50:51.790875 4737 slave.cpp:2508] Forwarding the update TASK_RUNNING (UUID: 3f6824a3-8a23-4029-8505-8eb5f72e472b) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 to master@192.168.122.135:57018 I0114 18:50:51.791442 4736 master.cpp:3653] Forwarding status update TASK_RUNNING (UUID: 3f6824a3-8a23-4029-8505-8eb5f72e472b) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.791813 4736 master.cpp:3625] Status update TASK_RUNNING (UUID: 3f6824a3-8a23-4029-8505-8eb5f72e472b) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 from slave 20150114-185051-2272962752-57018-4720-S0 at slave(115)@192.168.122.135:57018 (fedora-19) I0114 18:50:51.792140 4736 master.cpp:4935] Updating the latest state of task 0 of framework 20150114-185051-2272962752-57018-4720-0000 to TASK_RUNNING I0114 18:50:51.792690 4736 sched.cpp:696] Scheduler::statusUpdate took 70266ns I0114 18:50:51.793184 4739 master.cpp:3126] Forwarding status update acknowledgement 3f6824a3-8a23-4029-8505-8eb5f72e472b for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 (default) at scheduler-092fbbec-0938-4355-8187-fb92e5174c64@192.168.122.135:57018 to slave 20150114-185051-2272962752-57018-4720-S0 at 
slave(115)@192.168.122.135:57018 (fedora-19) I0114 18:50:51.794311 4720 master.cpp:654] Master terminating W0114 18:50:51.794908 4720 master.cpp:4980] Removing task 0 with resources cpus(*):1; mem(*):16 of framework 20150114-185051-2272962752-57018-4720-0000 on slave 20150114-185051-2272962752-57018-4720-S0 at slave(115)@192.168.122.135:57018 (fedora-19) in non-terminal state TASK_RUNNING I0114 18:50:51.795251 4739 slave.cpp:2435] Status update manager successfully handled status update TASK_RUNNING (UUID: 3f6824a3-8a23-4029-8505-8eb5f72e472b) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.795881 4739 slave.cpp:2441] Sending acknowledgement for status update TASK_RUNNING (UUID: 3f6824a3-8a23-4029-8505-8eb5f72e472b) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 to executor(29)@192.168.122.135:57018 I0114 18:50:51.796308 4739 exec.cpp:354] Executor received status update acknowledgement 3f6824a3-8a23-4029-8505-8eb5f72e472b for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.795326 4741 status_update_manager.cpp:389] Received status update acknowledgement (UUID: 3f6824a3-8a23-4029-8505-8eb5f72e472b) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.797854 4741 slave.cpp:1852] Status update manager successfully handled status update acknowledgement (UUID: 3f6824a3-8a23-4029-8505-8eb5f72e472b) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.797144 4720 master.cpp:5023] Removing executor 'default' with resources of framework 20150114-185051-2272962752-57018-4720-0000 on slave 20150114-185051-2272962752-57018-4720-S0 at slave(115)@192.168.122.135:57018 (fedora-19) I0114 18:50:51.796748 4734 hierarchical_allocator_process.hpp:653] Recovered cpus(*):1; mem(*):16 (total allocatable: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) on slave 20150114-185051-2272962752-57018-4720-S0 from framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:51.802438 4739 slave.cpp:2673] master@192.168.122.135:57018 exited W0114 18:50:51.802707 4739 slave.cpp:2676] Master disconnected! 
Waiting for a new master to be elected I0114 18:50:51.849522 4720 leveldb.cpp:176] Opened db in 1.773376ms I0114 18:50:51.860327 4720 leveldb.cpp:183] Compacted db in 1.475626ms I0114 18:50:51.860661 4720 leveldb.cpp:198] Created db iterator in 58499ns I0114 18:50:51.861027 4720 leveldb.cpp:204] Seeked to beginning of db in 53681ns I0114 18:50:51.861476 4720 leveldb.cpp:273] Iterated through 3 keys in the db in 195975ns I0114 18:50:51.861803 4720 replica.cpp:744] Replica recovered with log positions 3 -> 4 with 0 holes and 0 unlearned I0114 18:50:51.862931 4737 recover.cpp:449] Starting replica recovery I0114 18:50:51.863837 4737 recover.cpp:475] Replica is in VOTING status I0114 18:50:51.864320 4737 recover.cpp:464] Recover process terminated I0114 18:50:51.912767 4734 master.cpp:262] Master 20150114-185051-2272962752-57018-4720 (fedora-19) started on 192.168.122.135:57018 I0114 18:50:51.913460 4734 master.cpp:308] Master only allowing authenticated frameworks to register I0114 18:50:51.913712 4734 master.cpp:313] Master only allowing authenticated slaves to register I0114 18:50:51.914023 4734 credentials.hpp:36] Loading credentials for authentication from '/tmp/FaultToleranceTest_ReregisterFrameworkExitedExecutor_yNprKi/credentials' I0114 18:50:51.914626 4734 master.cpp:357] Authorization enabled I0114 18:50:51.915576 4739 hierarchical_allocator_process.hpp:285] Initialized hierarchical allocator process I0114 18:50:51.916064 4735 whitelist_watcher.cpp:65] No whitelist given I0114 18:50:51.919319 4734 master.cpp:1219] The newly elected leader is master@192.168.122.135:57018 with id 20150114-185051-2272962752-57018-4720 I0114 18:50:51.921718 4734 master.cpp:1232] Elected as the leading master! I0114 18:50:51.921975 4734 master.cpp:1050] Recovering from registrar I0114 18:50:51.922523 4738 registrar.cpp:313] Recovering registrar I0114 18:50:51.924142 4738 log.cpp:660] Attempting to start the writer I0114 18:50:51.926363 4739 replica.cpp:477] Replica received implicit promise request with proposal 2 I0114 18:50:51.927110 4739 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 147268ns I0114 18:50:51.927486 4739 replica.cpp:345] Persisted promised to 2 I0114 18:50:51.935008 4741 coordinator.cpp:230] Coordinator attemping to fill missing position I0114 18:50:51.935816 4741 log.cpp:676] Writer started with ending position 4 I0114 18:50:51.937769 4739 leveldb.cpp:438] Reading position from leveldb took 108522ns I0114 18:50:51.938480 4739 leveldb.cpp:438] Reading position from leveldb took 171418ns I0114 18:50:51.942811 4740 registrar.cpp:346] Successfully fetched the registry (261B) in 19.91296ms I0114 18:50:51.943493 4740 registrar.cpp:445] Applied 1 operations in 96988ns; attempting to update the 'registry' I0114 18:50:51.946138 4737 log.cpp:684] Attempting to append 300 bytes to the log I0114 18:50:51.950773 4737 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 5 I0114 18:50:51.954259 4739 replica.cpp:511] Replica received write request for position 5 I0114 18:50:51.958901 4739 leveldb.cpp:343] Persisting action (319 bytes) to leveldb took 351525ns I0114 18:50:51.959277 4739 replica.cpp:679] Persisted action at 5 I0114 18:50:51.966125 4736 replica.cpp:658] Replica received learned notice for position 5 I0114 18:50:51.966882 4736 leveldb.cpp:343] Persisting action (321 bytes) to leveldb took 114790ns I0114 18:50:51.967159 4736 replica.cpp:679] Persisted action at 5 I0114 18:50:51.967515 4736 replica.cpp:664] Replica learned APPEND action at position 5 
I0114 18:50:51.971989 4739 registrar.cpp:490] Successfully updated the 'registry' in 28.18304ms I0114 18:50:51.972854 4739 registrar.cpp:376] Successfully recovered registrar I0114 18:50:51.973675 4737 master.cpp:1077] Recovered 1 slaves from the Registry (261B) ; allowing 10mins for slaves to re-register I0114 18:50:51.974957 4737 log.cpp:703] Attempting to truncate the log to 5 I0114 18:50:51.975620 4740 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 6 I0114 18:50:51.977298 4740 replica.cpp:511] Replica received write request for position 6 I0114 18:50:51.978060 4740 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 108071ns I0114 18:50:51.978374 4740 replica.cpp:679] Persisted action at 6 I0114 18:50:51.982532 4737 replica.cpp:658] Replica received learned notice for position 6 I0114 18:50:51.983160 4737 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 89982ns I0114 18:50:51.983505 4737 leveldb.cpp:401] Deleting ~2 keys from leveldb took 64662ns I0114 18:50:51.983806 4737 replica.cpp:679] Persisted action at 6 I0114 18:50:51.984136 4737 replica.cpp:664] Replica learned TRUNCATE action at position 6 I0114 18:50:51.997160 4740 slave.cpp:613] New master detected at master@192.168.122.135:57018 I0114 18:50:51.998111 4740 slave.cpp:676] Authenticating with master master@192.168.122.135:57018 I0114 18:50:51.998437 4740 slave.cpp:681] Using default CRAM-MD5 authenticatee I0114 18:50:51.999161 4734 authenticatee.hpp:138] Creating new client SASL connection I0114 18:50:51.997766 4735 status_update_manager.cpp:171] Pausing sending status updates I0114 18:50:52.000628 4740 slave.cpp:649] Detecting new master I0114 18:50:52.001258 4734 master.cpp:4130] Authenticating slave(115)@192.168.122.135:57018 I0114 18:50:52.002085 4734 master.cpp:4141] Using default CRAM-MD5 authenticator I0114 18:50:52.003057 4734 authenticator.hpp:170] Creating new server SASL connection I0114 18:50:52.004458 4735 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0114 18:50:52.004762 4735 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0114 18:50:52.005210 4734 authenticator.hpp:276] Received SASL authentication start I0114 18:50:52.005544 4734 authenticator.hpp:398] Authentication requires more steps I0114 18:50:52.006116 4736 authenticatee.hpp:275] Received SASL authentication step I0114 18:50:52.006676 4734 authenticator.hpp:304] Received SASL authentication step I0114 18:50:52.007045 4734 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0114 18:50:52.007340 4734 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0114 18:50:52.007733 4734 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0114 18:50:52.008149 4734 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0114 18:50:52.008437 4734 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0114 18:50:52.008714 4734 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0114 18:50:52.009009 4734 authenticator.hpp:390] Authentication success I0114 18:50:52.009459 4741 authenticatee.hpp:315] Authentication 
success I0114 18:50:52.018327 4738 master.cpp:4188] Successfully authenticated principal 'test-principal' at slave(115)@192.168.122.135:57018 I0114 18:50:52.018959 4741 slave.cpp:747] Successfully authenticated with master master@192.168.122.135:57018 I0114 18:50:52.020071 4739 master.cpp:3453] Re-registering slave 20150114-185051-2272962752-57018-4720-S0 at slave(115)@192.168.122.135:57018 (fedora-19) I0114 18:50:52.021256 4739 registrar.cpp:445] Applied 1 operations in 109203ns; attempting to update the 'registry' I0114 18:50:52.023926 4737 log.cpp:684] Attempting to append 300 bytes to the log I0114 18:50:52.024710 4735 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 7 I0114 18:50:52.026480 4734 replica.cpp:511] Replica received write request for position 7 I0114 18:50:52.027065 4734 leveldb.cpp:343] Persisting action (319 bytes) to leveldb took 109150ns I0114 18:50:52.027524 4734 replica.cpp:679] Persisted action at 7 I0114 18:50:52.028818 4738 replica.cpp:658] Replica received learned notice for position 7 I0114 18:50:52.029525 4738 leveldb.cpp:343] Persisting action (321 bytes) to leveldb took 185197ns I0114 18:50:52.029930 4738 replica.cpp:679] Persisted action at 7 I0114 18:50:52.030205 4738 replica.cpp:664] Replica learned APPEND action at position 7 I0114 18:50:52.031692 4735 log.cpp:703] Attempting to truncate the log to 7 I0114 18:50:52.032083 4740 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 8 I0114 18:50:52.033411 4740 replica.cpp:511] Replica received write request for position 8 I0114 18:50:52.033768 4740 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 78202ns I0114 18:50:52.034054 4740 replica.cpp:679] Persisted action at 8 I0114 18:50:52.035274 4740 replica.cpp:658] Replica received learned notice for position 8 I0114 18:50:52.035912 4740 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 80144ns I0114 18:50:52.036262 4740 leveldb.cpp:401] Deleting ~2 keys from leveldb took 93273ns I0114 18:50:52.036558 4740 replica.cpp:679] Persisted action at 8 I0114 18:50:52.036883 4740 replica.cpp:664] Replica learned TRUNCATE action at position 8 I0114 18:50:52.038254 4741 slave.cpp:1075] Will retry registration in 5.240065ms if necessary I0114 18:50:52.044471 4739 registrar.cpp:490] Successfully updated the 'registry' in 22.825984ms I0114 18:50:52.045918 4740 master.hpp:766] Adding task 0 with resources cpus(*):1; mem(*):16 on slave 20150114-185051-2272962752-57018-4720-S0 (fedora-19) W0114 18:50:52.052153 4740 master.cpp:4697] Possibly orphaned task 0 of framework 20150114-185051-2272962752-57018-4720-0000 running on slave 20150114-185051-2272962752-57018-4720-S0 at slave(115)@192.168.122.135:57018 (fedora-19) I0114 18:50:52.053467 4738 hierarchical_allocator_process.hpp:453] Added slave 20150114-185051-2272962752-57018-4720-S0 (fedora-19) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (and cpus(*):1; mem(*):1008; disk(*):1024; ports(*):[31000-32000] available) I0114 18:50:52.054124 4738 hierarchical_allocator_process.hpp:839] No resources available to allocate! 
I0114 18:50:52.054733 4738 hierarchical_allocator_process.hpp:764] Performed allocation for slave 20150114-185051-2272962752-57018-4720-S0 in 795150ns I0114 18:50:52.055675 4736 slave.cpp:1075] Will retry registration in 4.9981ms if necessary I0114 18:50:52.056367 4736 slave.cpp:2588] Received ping from slave-observer(96)@192.168.122.135:57018 I0114 18:50:52.056958 4740 master.cpp:3521] Re-registered slave 20150114-185051-2272962752-57018-4720-S0 at slave(115)@192.168.122.135:57018 (fedora-19) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0114 18:50:52.057782 4740 master.cpp:3402] Re-registering slave 20150114-185051-2272962752-57018-4720-S0 at slave(115)@192.168.122.135:57018 (fedora-19) I0114 18:50:52.058290 4734 slave.cpp:849] Re-registered with master master@192.168.122.135:57018 I0114 18:50:52.061352 4734 slave.cpp:2948] Executor 'default' of framework 20150114-185051-2272962752-57018-4720-0000 exited with status 0 I0114 18:50:52.061640 4737 status_update_manager.cpp:178] Resuming sending status updates I0114 18:50:52.064230 4734 slave.cpp:2265] Handling status update TASK_LOST (UUID: 9b52bc70-76aa-4923-be0e-f14669185255) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 from @0.0.0.0:0 I0114 18:50:52.064846 4734 slave.cpp:4229] Terminating task 0 W0114 18:50:52.065830 4734 slave.cpp:856] Already re-registered with master master@192.168.122.135:57018 I0114 18:50:52.067150 4739 master.cpp:3705] Executor default of framework 20150114-185051-2272962752-57018-4720-0000 on slave 20150114-185051-2272962752-57018-4720-S0 at slave(115)@192.168.122.135:57018 (fedora-19) exited with status 0 I0114 18:50:52.070163 4737 status_update_manager.cpp:317] Received status update TASK_LOST (UUID: 9b52bc70-76aa-4923-be0e-f14669185255) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:52.070940 4737 status_update_manager.cpp:371] Forwarding update TASK_LOST (UUID: 9b52bc70-76aa-4923-be0e-f14669185255) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 to the slave I0114 18:50:52.071951 4736 sched.cpp:242] Scheduler::disconnected took 43823ns I0114 18:50:52.072419 4736 sched.cpp:248] New master detected at master@192.168.122.135:57018 I0114 18:50:52.072935 4736 sched.cpp:304] Authenticating with master master@192.168.122.135:57018 I0114 18:50:52.073321 4736 sched.cpp:311] Using default CRAM-MD5 authenticatee I0114 18:50:52.074064 4736 authenticatee.hpp:138] Creating new client SASL connection I0114 18:50:52.076202 4734 slave.cpp:2508] Forwarding the update TASK_LOST (UUID: 9b52bc70-76aa-4923-be0e-f14669185255) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 to master@192.168.122.135:57018 I0114 18:50:52.077155 4734 slave.cpp:2435] Status update manager successfully handled status update TASK_LOST (UUID: 9b52bc70-76aa-4923-be0e-f14669185255) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:52.076659 4739 master.cpp:5023] Removing executor 'default' with resources of framework 20150114-185051-2272962752-57018-4720-0000 on slave 20150114-185051-2272962752-57018-4720-S0 at slave(115)@192.168.122.135:57018 (fedora-19) I0114 18:50:52.080638 4739 master.cpp:4130] Authenticating scheduler-092fbbec-0938-4355-8187-fb92e5174c64@192.168.122.135:57018 I0114 18:50:52.081056 4739 master.cpp:4141] Using default CRAM-MD5 authenticator I0114 18:50:52.081892 4741 authenticator.hpp:170] Creating new server SASL connection I0114 18:50:52.083005 4741 authenticatee.hpp:229] Received SASL 
authentication mechanisms: CRAM-MD5 I0114 18:50:52.083470 4741 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0114 18:50:52.083953 4741 authenticator.hpp:276] Received SASL authentication start I0114 18:50:52.084355 4741 authenticator.hpp:398] Authentication requires more steps I0114 18:50:52.084794 4741 authenticatee.hpp:275] Received SASL authentication step I0114 18:50:52.085310 4737 authenticator.hpp:304] Received SASL authentication step I0114 18:50:52.085654 4737 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0114 18:50:52.085969 4737 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0114 18:50:52.086297 4737 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0114 18:50:52.086642 4737 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0114 18:50:52.086942 4737 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0114 18:50:52.087226 4737 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0114 18:50:52.087550 4737 authenticator.hpp:390] Authentication success I0114 18:50:52.087934 4741 authenticatee.hpp:315] Authentication success I0114 18:50:52.088513 4741 sched.cpp:392] Successfully authenticated with master master@192.168.122.135:57018 I0114 18:50:52.088899 4741 sched.cpp:515] Sending registration request to master@192.168.122.135:57018 I0114 18:50:52.089360 4741 sched.cpp:548] Will retry registration in 1.858884079secs if necessary I0114 18:50:52.090150 4739 master.cpp:1522] Queuing up re-registration request for framework 20150114-185051-2272962752-57018-4720-0000 (default) at scheduler-092fbbec-0938-4355-8187-fb92e5174c64@192.168.122.135:57018 because authentication is still in progress I0114 18:50:52.095142 4739 master.cpp:4188] Successfully authenticated principal 'test-principal' at scheduler-092fbbec-0938-4355-8187-fb92e5174c64@192.168.122.135:57018 I0114 18:50:52.108275 4739 master.cpp:1554] Received re-registration request from framework 20150114-185051-2272962752-57018-4720-0000 (default) at scheduler-092fbbec-0938-4355-8187-fb92e5174c64@192.168.122.135:57018 I0114 18:50:52.108742 4739 master.cpp:1298] Authorizing framework principal 'test-principal' to receive offers for role '*' I0114 18:50:52.109735 4739 master.cpp:1607] Re-registering framework 20150114-185051-2272962752-57018-4720-0000 (default) at scheduler-092fbbec-0938-4355-8187-fb92e5174c64@192.168.122.135:57018 I0114 18:50:52.110985 4735 hierarchical_allocator_process.hpp:319] Added framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:52.120640 4735 hierarchical_allocator_process.hpp:746] Performed allocation for 1 slaves in 9.254989ms I0114 18:50:52.121709 4734 slave.cpp:1762] Updating framework 20150114-185051-2272962752-57018-4720-0000 pid to scheduler-092fbbec-0938-4355-8187-fb92e5174c64@192.168.122.135:57018 I0114 18:50:52.122190 4734 status_update_manager.cpp:178] Resuming sending status updates W0114 18:50:52.122694 4734 status_update_manager.cpp:185] Resending status update TASK_LOST (UUID: 9b52bc70-76aa-4923-be0e-f14669185255) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 I0114 
18:50:52.123072 4734 status_update_manager.cpp:371] Forwarding update TASK_LOST (UUID: 9b52bc70-76aa-4923-be0e-f14669185255) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 to the slave I0114 18:50:52.123733 4734 slave.cpp:2508] Forwarding the update TASK_LOST (UUID: 9b52bc70-76aa-4923-be0e-f14669185255) for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 to master@192.168.122.135:57018 I0114 18:50:52.124590 4720 sched.cpp:1471] Asked to stop the driver I0114 18:50:52.125461 4739 master.cpp:4072] Sending 1 offers to framework 20150114-185051-2272962752-57018-4720-0000 (default) at scheduler-092fbbec-0938-4355-8187-fb92e5174c64@192.168.122.135:57018 I0114 18:50:52.126096 4739 master.cpp:654] Master terminating W0114 18:50:52.126626 4739 master.cpp:4980] Removing task 0 with resources cpus(*):1; mem(*):16 of framework 20150114-185051-2272962752-57018-4720-0000 on slave 20150114-185051-2272962752-57018-4720-S0 at slave(115)@192.168.122.135:57018 (fedora-19) in non-terminal state TASK_RUNNING I0114 18:50:52.125669 4735 sched.cpp:423] Ignoring framework registered message because the driver is not running! I0114 18:50:52.127410 4735 sched.cpp:808] Stopping framework '20150114-185051-2272962752-57018-4720-0000' I0114 18:50:52.128592 4735 hierarchical_allocator_process.hpp:653] Recovered cpus(*):1; mem(*):16 (total allocatable: cpus(*):1; mem(*):16) on slave 20150114-185051-2272962752-57018-4720-S0 from framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:52.132880 4740 slave.cpp:2673] master@192.168.122.135:57018 exited W0114 18:50:52.133318 4740 slave.cpp:2676] Master disconnected! Waiting for a new master to be elected I0114 18:50:52.173943 4720 slave.cpp:495] Slave terminating I0114 18:50:52.174928 4720 slave.cpp:1585] Asked to shut down framework 20150114-185051-2272962752-57018-4720-0000 by @0.0.0.0:0 I0114 18:50:52.175448 4720 slave.cpp:1610] Shutting down framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:52.175858 4720 slave.cpp:3057] Cleaning up executor 'default' of framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:52.176615 4740 gc.cpp:56] Scheduling '/tmp/FaultToleranceTest_ReregisterFrameworkExitedExecutor_ONrVug/slaves/20150114-185051-2272962752-57018-4720-S0/frameworks/20150114-185051-2272962752-57018-4720-0000/executors/default/runs/dd104b76-b838-431e-ada9-ff7a2b07e694' for gc 6.99999795726815days in the future I0114 18:50:52.177549 4734 gc.cpp:56] Scheduling '/tmp/FaultToleranceTest_ReregisterFrameworkExitedExecutor_ONrVug/slaves/20150114-185051-2272962752-57018-4720-S0/frameworks/20150114-185051-2272962752-57018-4720-0000/executors/default' for gc 6.99999794655111days in the future I0114 18:50:52.178169 4720 slave.cpp:3136] Cleaning up framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:52.178683 4741 status_update_manager.cpp:279] Closing status update streams for framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:52.179054 4741 status_update_manager.cpp:525] Cleaning up status update stream for task 0 of framework 20150114-185051-2272962752-57018-4720-0000 I0114 18:50:52.179730 4737 gc.cpp:56] Scheduling '/tmp/FaultToleranceTest_ReregisterFrameworkExitedExecutor_ONrVug/slaves/20150114-185051-2272962752-57018-4720-S0/frameworks/20150114-185051-2272962752-57018-4720-0000' for gc 6.9999979210637days in the future tests/fault_tolerance_tests.cpp:1213: Failure Actual function call count doesn't match EXPECT_CALL(sched, registered(&driver, _, _))... 
Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] FaultToleranceTest.ReregisterFrameworkExitedExecutor (776 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2226","01/15/2015 20:25:07",3,"HookTest.VerifySlaveLaunchExecutorHook is flaky ""Observed this on internal CI """," [ RUN ] HookTest.VerifySlaveLaunchExecutorHook Using temporary directory '/tmp/HookTest_VerifySlaveLaunchExecutorHook_GjBgME' I0114 18:51:34.659353 4720 leveldb.cpp:176] Opened db in 1.255951ms I0114 18:51:34.662112 4720 leveldb.cpp:183] Compacted db in 596090ns I0114 18:51:34.662364 4720 leveldb.cpp:198] Created db iterator in 177877ns I0114 18:51:34.662719 4720 leveldb.cpp:204] Seeked to beginning of db in 19709ns I0114 18:51:34.663010 4720 leveldb.cpp:273] Iterated through 0 keys in the db in 18208ns I0114 18:51:34.663312 4720 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0114 18:51:34.664266 4735 recover.cpp:449] Starting replica recovery I0114 18:51:34.664908 4735 recover.cpp:475] Replica is in EMPTY status I0114 18:51:34.667842 4734 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request I0114 18:51:34.669117 4735 recover.cpp:195] Received a recover response from a replica in EMPTY status I0114 18:51:34.677913 4735 recover.cpp:566] Updating replica status to STARTING I0114 18:51:34.683157 4735 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 137939ns I0114 18:51:34.683507 4735 replica.cpp:323] Persisted replica status to STARTING I0114 18:51:34.684013 4735 recover.cpp:475] Replica is in STARTING status I0114 18:51:34.685554 4738 replica.cpp:641] Replica in STARTING status received a broadcasted recover request I0114 18:51:34.696512 4736 recover.cpp:195] Received a recover response from a replica in STARTING status I0114 18:51:34.700552 4735 recover.cpp:566] Updating replica status to VOTING I0114 18:51:34.701128 4735 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 115624ns I0114 18:51:34.701478 4735 replica.cpp:323] Persisted replica status to VOTING I0114 18:51:34.701817 4735 recover.cpp:580] Successfully joined the Paxos group I0114 18:51:34.702569 4735 recover.cpp:464] Recover process terminated I0114 18:51:34.716439 4736 master.cpp:262] Master 20150114-185134-2272962752-57018-4720 (fedora-19) started on 192.168.122.135:57018 I0114 18:51:34.716913 4736 master.cpp:308] Master only allowing authenticated frameworks to register I0114 18:51:34.717136 4736 master.cpp:313] Master only allowing authenticated slaves to register I0114 18:51:34.717488 4736 credentials.hpp:36] Loading credentials for authentication from '/tmp/HookTest_VerifySlaveLaunchExecutorHook_GjBgME/credentials' I0114 18:51:34.718077 4736 master.cpp:357] Authorization enabled I0114 18:51:34.719238 4738 whitelist_watcher.cpp:65] No whitelist given I0114 18:51:34.719755 4737 hierarchical_allocator_process.hpp:285] Initialized hierarchical allocator process I0114 18:51:34.722584 4736 master.cpp:1219] The newly elected leader is master@192.168.122.135:57018 with id 20150114-185134-2272962752-57018-4720 I0114 18:51:34.722865 4736 master.cpp:1232] Elected as the leading master! 
I0114 18:51:34.723310 4736 master.cpp:1050] Recovering from registrar I0114 18:51:34.723760 4734 registrar.cpp:313] Recovering registrar I0114 18:51:34.725229 4740 log.cpp:660] Attempting to start the writer I0114 18:51:34.727893 4739 replica.cpp:477] Replica received implicit promise request with proposal 1 I0114 18:51:34.728425 4739 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 114781ns I0114 18:51:34.728662 4739 replica.cpp:345] Persisted promised to 1 I0114 18:51:34.731271 4741 coordinator.cpp:230] Coordinator attemping to fill missing position I0114 18:51:34.733223 4734 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 I0114 18:51:34.734076 4734 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 87441ns I0114 18:51:34.734441 4734 replica.cpp:679] Persisted action at 0 I0114 18:51:34.740272 4739 replica.cpp:511] Replica received write request for position 0 I0114 18:51:34.740910 4739 leveldb.cpp:438] Reading position from leveldb took 59846ns I0114 18:51:34.741672 4739 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 189259ns I0114 18:51:34.741919 4739 replica.cpp:679] Persisted action at 0 I0114 18:51:34.743000 4739 replica.cpp:658] Replica received learned notice for position 0 I0114 18:51:34.746844 4739 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 328487ns I0114 18:51:34.747118 4739 replica.cpp:679] Persisted action at 0 I0114 18:51:34.747553 4739 replica.cpp:664] Replica learned NOP action at position 0 I0114 18:51:34.751344 4737 log.cpp:676] Writer started with ending position 0 I0114 18:51:34.753504 4734 leveldb.cpp:438] Reading position from leveldb took 61183ns I0114 18:51:34.762962 4737 registrar.cpp:346] Successfully fetched the registry (0B) in 38.907904ms I0114 18:51:34.763610 4737 registrar.cpp:445] Applied 1 operations in 67206ns; attempting to update the 'registry' I0114 18:51:34.766079 4736 log.cpp:684] Attempting to append 130 bytes to the log I0114 18:51:34.766769 4736 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0114 18:51:34.768215 4741 replica.cpp:511] Replica received write request for position 1 I0114 18:51:34.768759 4741 leveldb.cpp:343] Persisting action (149 bytes) to leveldb took 87970ns I0114 18:51:34.768995 4741 replica.cpp:679] Persisted action at 1 I0114 18:51:34.770691 4736 replica.cpp:658] Replica received learned notice for position 1 I0114 18:51:34.771273 4736 leveldb.cpp:343] Persisting action (151 bytes) to leveldb took 83590ns I0114 18:51:34.771579 4736 replica.cpp:679] Persisted action at 1 I0114 18:51:34.771917 4736 replica.cpp:664] Replica learned APPEND action at position 1 I0114 18:51:34.773252 4738 log.cpp:703] Attempting to truncate the log to 1 I0114 18:51:34.773756 4735 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0114 18:51:34.775552 4736 replica.cpp:511] Replica received write request for position 2 I0114 18:51:34.775846 4736 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 71503ns I0114 18:51:34.776695 4736 replica.cpp:679] Persisted action at 2 I0114 18:51:34.785259 4739 replica.cpp:658] Replica received learned notice for position 2 I0114 18:51:34.786252 4737 registrar.cpp:490] Successfully updated the 'registry' in 22.340864ms I0114 18:51:34.787094 4737 registrar.cpp:376] Successfully recovered registrar I0114 18:51:34.787749 4737 master.cpp:1077] Recovered 0 slaves from the Registry (94B) ; allowing 10mins for slaves to re-register I0114 
18:51:34.787282 4739 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 707150ns I0114 18:51:34.788692 4739 leveldb.cpp:401] Deleting ~1 keys from leveldb took 60262ns I0114 18:51:34.789048 4739 replica.cpp:679] Persisted action at 2 I0114 18:51:34.789329 4739 replica.cpp:664] Replica learned TRUNCATE action at position 2 I0114 18:51:34.819548 4738 slave.cpp:173] Slave started on 171)@192.168.122.135:57018 I0114 18:51:34.820530 4738 credentials.hpp:84] Loading credential for authentication from '/tmp/HookTest_VerifySlaveLaunchExecutorHook_AYxNqe/credential' I0114 18:51:34.820952 4738 slave.cpp:282] Slave using credential for: test-principal I0114 18:51:34.821516 4738 slave.cpp:300] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0114 18:51:34.822217 4738 slave.cpp:329] Slave hostname: fedora-19 I0114 18:51:34.822502 4738 slave.cpp:330] Slave checkpoint: false W0114 18:51:34.822857 4738 slave.cpp:332] Disabling checkpointing is deprecated and the --checkpoint flag will be removed in a future release. Please avoid using this flag I0114 18:51:34.824998 4737 state.cpp:33] Recovering state from '/tmp/HookTest_VerifySlaveLaunchExecutorHook_AYxNqe/meta' I0114 18:51:34.834015 4738 status_update_manager.cpp:197] Recovering status update manager I0114 18:51:34.834810 4738 slave.cpp:3519] Finished recovery I0114 18:51:34.835906 4734 status_update_manager.cpp:171] Pausing sending status updates I0114 18:51:34.836423 4738 slave.cpp:613] New master detected at master@192.168.122.135:57018 I0114 18:51:34.836908 4738 slave.cpp:676] Authenticating with master master@192.168.122.135:57018 I0114 18:51:34.837190 4738 slave.cpp:681] Using default CRAM-MD5 authenticatee I0114 18:51:34.837820 4737 authenticatee.hpp:138] Creating new client SASL connection I0114 18:51:34.838784 4738 slave.cpp:649] Detecting new master I0114 18:51:34.839306 4740 master.cpp:4130] Authenticating slave(171)@192.168.122.135:57018 I0114 18:51:34.839957 4740 master.cpp:4141] Using default CRAM-MD5 authenticator I0114 18:51:34.841236 4740 authenticator.hpp:170] Creating new server SASL connection I0114 18:51:34.842681 4741 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0114 18:51:34.843118 4741 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0114 18:51:34.843581 4740 authenticator.hpp:276] Received SASL authentication start I0114 18:51:34.843962 4740 authenticator.hpp:398] Authentication requires more steps I0114 18:51:34.844357 4740 authenticatee.hpp:275] Received SASL authentication step I0114 18:51:34.844780 4740 authenticator.hpp:304] Received SASL authentication step I0114 18:51:34.845113 4740 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0114 18:51:34.845507 4740 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0114 18:51:34.845835 4740 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0114 18:51:34.846238 4740 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0114 18:51:34.846542 4740 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0114 18:51:34.846806 4740 auxprop.cpp:121] Skipping auxiliary property 
'*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0114 18:51:34.847110 4740 authenticator.hpp:390] Authentication success I0114 18:51:34.847808 4734 authenticatee.hpp:315] Authentication success I0114 18:51:34.851029 4734 slave.cpp:747] Successfully authenticated with master master@192.168.122.135:57018 I0114 18:51:34.851608 4737 master.cpp:4188] Successfully authenticated principal 'test-principal' at slave(171)@192.168.122.135:57018 I0114 18:51:34.854962 4720 sched.cpp:151] Version: 0.22.0 I0114 18:51:34.856674 4734 slave.cpp:1075] Will retry registration in 3.085482ms if necessary I0114 18:51:34.857434 4739 sched.cpp:248] New master detected at master@192.168.122.135:57018 I0114 18:51:34.861433 4739 sched.cpp:304] Authenticating with master master@192.168.122.135:57018 I0114 18:51:34.861693 4739 sched.cpp:311] Using default CRAM-MD5 authenticatee I0114 18:51:34.857795 4737 master.cpp:3276] Registering slave at slave(171)@192.168.122.135:57018 (fedora-19) with id 20150114-185134-2272962752-57018-4720-S0 I0114 18:51:34.862951 4737 authenticatee.hpp:138] Creating new client SASL connection I0114 18:51:34.863919 4735 registrar.cpp:445] Applied 1 operations in 120272ns; attempting to update the 'registry' I0114 18:51:34.864645 4738 master.cpp:4130] Authenticating scheduler-c45273e4-6eb5-44ee-bf45-71b353db648f@192.168.122.135:57018 I0114 18:51:34.865033 4738 master.cpp:4141] Using default CRAM-MD5 authenticator I0114 18:51:34.866904 4738 authenticator.hpp:170] Creating new server SASL connection I0114 18:51:34.868840 4737 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0114 18:51:34.869125 4737 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0114 18:51:34.869523 4737 authenticator.hpp:276] Received SASL authentication start I0114 18:51:34.869835 4737 authenticator.hpp:398] Authentication requires more steps I0114 18:51:34.870213 4737 authenticatee.hpp:275] Received SASL authentication step I0114 18:51:34.870622 4737 authenticator.hpp:304] Received SASL authentication step I0114 18:51:34.870946 4737 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0114 18:51:34.871219 4737 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0114 18:51:34.871554 4737 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0114 18:51:34.871968 4737 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0114 18:51:34.872297 4737 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0114 18:51:34.872655 4737 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0114 18:51:34.873024 4737 authenticator.hpp:390] Authentication success I0114 18:51:34.873428 4737 authenticatee.hpp:315] Authentication success I0114 18:51:34.873632 4739 master.cpp:4188] Successfully authenticated principal 'test-principal' at scheduler-c45273e4-6eb5-44ee-bf45-71b353db648f@192.168.122.135:57018 I0114 18:51:34.875006 4740 sched.cpp:392] Successfully authenticated with master master@192.168.122.135:57018 I0114 18:51:34.875319 4740 sched.cpp:515] Sending registration request to master@192.168.122.135:57018 I0114 18:51:34.876200 4740 
sched.cpp:548] Will retry registration in 1.952991346secs if necessary I0114 18:51:34.876729 4738 master.cpp:1417] Received registration request for framework 'default' at scheduler-c45273e4-6eb5-44ee-bf45-71b353db648f@192.168.122.135:57018 I0114 18:51:34.877040 4738 master.cpp:1298] Authorizing framework principal 'test-principal' to receive offers for role '*' I0114 18:51:34.878059 4738 master.cpp:1481] Registering framework 20150114-185134-2272962752-57018-4720-0000 (default) at scheduler-c45273e4-6eb5-44ee-bf45-71b353db648f@192.168.122.135:57018 I0114 18:51:34.878473 4739 log.cpp:684] Attempting to append 300 bytes to the log I0114 18:51:34.879464 4737 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I0114 18:51:34.880116 4734 hierarchical_allocator_process.hpp:319] Added framework 20150114-185134-2272962752-57018-4720-0000 I0114 18:51:34.880470 4734 hierarchical_allocator_process.hpp:839] No resources available to allocate! I0114 18:51:34.882331 4734 hierarchical_allocator_process.hpp:746] Performed allocation for 0 slaves in 1.901284ms I0114 18:51:34.884024 4741 sched.cpp:442] Framework registered with 20150114-185134-2272962752-57018-4720-0000 I0114 18:51:34.884454 4741 sched.cpp:456] Scheduler::registered took 44320ns I0114 18:51:34.881965 4737 replica.cpp:511] Replica received write request for position 3 I0114 18:51:34.885218 4737 leveldb.cpp:343] Persisting action (319 bytes) to leveldb took 134480ns I0114 18:51:34.885716 4737 replica.cpp:679] Persisted action at 3 I0114 18:51:34.886034 4739 slave.cpp:1075] Will retry registration in 22.947772ms if necessary I0114 18:51:34.886291 4740 master.cpp:3264] Ignoring register slave message from slave(171)@192.168.122.135:57018 (fedora-19) as admission is already in progress I0114 18:51:34.894690 4736 replica.cpp:658] Replica received learned notice for position 3 I0114 18:51:34.898638 4736 leveldb.cpp:343] Persisting action (321 bytes) to leveldb took 215501ns I0114 18:51:34.899055 4736 replica.cpp:679] Persisted action at 3 I0114 18:51:34.899416 4736 replica.cpp:664] Replica learned APPEND action at position 3 I0114 18:51:34.911782 4736 registrar.cpp:490] Successfully updated the 'registry' in 46.176768ms I0114 18:51:34.912286 4740 log.cpp:703] Attempting to truncate the log to 3 I0114 18:51:34.913108 4740 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I0114 18:51:34.915027 4736 master.cpp:3330] Registered slave 20150114-185134-2272962752-57018-4720-S0 at slave(171)@192.168.122.135:57018 (fedora-19) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0114 18:51:34.915642 4735 hierarchical_allocator_process.hpp:453] Added slave 20150114-185134-2272962752-57018-4720-S0 (fedora-19) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] available) I0114 18:51:34.917809 4735 hierarchical_allocator_process.hpp:764] Performed allocation for slave 20150114-185134-2272962752-57018-4720-S0 in 514027ns I0114 18:51:34.916689 4738 replica.cpp:511] Replica received write request for position 4 I0114 18:51:34.915784 4741 slave.cpp:781] Registered with master master@192.168.122.135:57018; given slave ID 20150114-185134-2272962752-57018-4720-S0 I0114 18:51:34.919293 4741 slave.cpp:2588] Received ping from slave-observer(156)@192.168.122.135:57018 I0114 18:51:34.919775 4740 status_update_manager.cpp:178] Resuming sending status updates I0114 18:51:34.920374 4736 master.cpp:4072] Sending 1 offers 
to framework 20150114-185134-2272962752-57018-4720-0000 (default) at scheduler-c45273e4-6eb5-44ee-bf45-71b353db648f@192.168.122.135:57018 I0114 18:51:34.920569 4738 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 1.540136ms I0114 18:51:34.921092 4738 replica.cpp:679] Persisted action at 4 I0114 18:51:34.927111 4735 replica.cpp:658] Replica received learned notice for position 4 I0114 18:51:34.927299 4734 sched.cpp:605] Scheduler::resourceOffers took 1.335524ms I0114 18:51:34.930418 4735 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 1.596377ms I0114 18:51:34.930882 4735 leveldb.cpp:401] Deleting ~2 keys from leveldb took 67578ns I0114 18:51:34.931115 4735 replica.cpp:679] Persisted action at 4 I0114 18:51:34.931529 4735 replica.cpp:664] Replica learned TRUNCATE action at position 4 I0114 18:51:34.930356 4734 master.cpp:2541] Processing reply for offers: [ 20150114-185134-2272962752-57018-4720-O0 ] on slave 20150114-185134-2272962752-57018-4720-S0 at slave(171)@192.168.122.135:57018 (fedora-19) for framework 20150114-185134-2272962752-57018-4720-0000 (default) at scheduler-c45273e4-6eb5-44ee-bf45-71b353db648f@192.168.122.135:57018 I0114 18:51:34.932834 4734 master.cpp:2647] Authorizing framework principal 'test-principal' to launch task 1 as user 'jenkins' W0114 18:51:34.934442 4736 master.cpp:2124] Executor default for task 1 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W0114 18:51:34.934960 4736 master.cpp:2136] Executor default for task 1 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. I0114 18:51:34.935878 4736 master.hpp:766] Adding task 1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20150114-185134-2272962752-57018-4720-S0 (fedora-19) I0114 18:51:34.939453 4738 hierarchical_allocator_process.hpp:610] Updated allocation of framework 20150114-185134-2272962752-57018-4720-0000 on slave 20150114-185134-2272962752-57018-4720-S0 from cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] to cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0114 18:51:34.939950 4736 master.cpp:2897] Launching task 1 of framework 20150114-185134-2272962752-57018-4720-0000 (default) at scheduler-c45273e4-6eb5-44ee-bf45-71b353db648f@192.168.122.135:57018 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20150114-185134-2272962752-57018-4720-S0 at slave(171)@192.168.122.135:57018 (fedora-19) I0114 18:51:34.940467 4736 test_hook_module.cpp:52] Executing 'masterLaunchTaskLabelDecorator' hook I0114 18:51:34.941490 4740 slave.cpp:1130] Got assigned task 1 for framework 20150114-185134-2272962752-57018-4720-0000 I0114 18:51:34.942873 4740 slave.cpp:1245] Launching task 1 for framework 20150114-185134-2272962752-57018-4720-0000 I0114 18:51:34.943469 4740 test_hook_module.cpp:71] Executing 'slaveLaunchExecutorEnvironmentDecorator' hook I0114 18:51:34.946705 4740 slave.cpp:3921] Launching executor default of framework 20150114-185134-2272962752-57018-4720-0000 in work directory '/tmp/HookTest_VerifySlaveLaunchExecutorHook_AYxNqe/slaves/20150114-185134-2272962752-57018-4720-S0/frameworks/20150114-185134-2272962752-57018-4720-0000/executors/default/runs/d73da0e7-3d52-4a0e-91d0-eaef735fd65d' I0114 18:51:34.956496 4740 exec.cpp:147] Version: 0.22.0 I0114 18:51:34.960752 4737 exec.cpp:197] Executor started at: 
executor(56)@192.168.122.135:57018 with pid 4720 I0114 18:51:34.964501 4740 slave.cpp:1368] Queuing task '1' for executor default of framework '20150114-185134-2272962752-57018-4720-0000 I0114 18:51:34.965133 4740 slave.cpp:566] Successfully attached file '/tmp/HookTest_VerifySlaveLaunchExecutorHook_AYxNqe/slaves/20150114-185134-2272962752-57018-4720-S0/frameworks/20150114-185134-2272962752-57018-4720-0000/executors/default/runs/d73da0e7-3d52-4a0e-91d0-eaef735fd65d' I0114 18:51:34.965605 4740 slave.cpp:1912] Got registration for executor 'default' of framework 20150114-185134-2272962752-57018-4720-0000 from executor(56)@192.168.122.135:57018 I0114 18:51:34.966933 4734 exec.cpp:221] Executor registered on slave 20150114-185134-2272962752-57018-4720-S0 I0114 18:51:34.968889 4740 slave.cpp:2031] Flushing queued task 1 for executor 'default' of framework 20150114-185134-2272962752-57018-4720-0000 I0114 18:51:34.969743 4740 slave.cpp:2890] Monitoring executor 'default' of framework '20150114-185134-2272962752-57018-4720-0000' in container 'd73da0e7-3d52-4a0e-91d0-eaef735fd65d' I0114 18:51:34.973484 4734 exec.cpp:233] Executor::registered took 4.814445ms I0114 18:51:34.974081 4734 exec.cpp:308] Executor asked to run task '1' I0114 18:51:34.974431 4734 exec.cpp:317] Executor::launchTask took 184910ns I0114 18:51:34.975292 4720 sched.cpp:1471] Asked to stop the driver I0114 18:51:34.975817 4738 sched.cpp:808] Stopping framework '20150114-185134-2272962752-57018-4720-0000' I0114 18:51:34.975697 4720 master.cpp:654] Master terminating W0114 18:51:34.976610 4720 master.cpp:4980] Removing task 1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 20150114-185134-2272962752-57018-4720-0000 on slave 20150114-185134-2272962752-57018-4720-S0 at slave(171)@192.168.122.135:57018 (fedora-19) in non-terminal state TASK_STAGING I0114 18:51:34.977880 4720 master.cpp:5023] Removing executor 'default' with resources of framework 20150114-185134-2272962752-57018-4720-0000 on slave 20150114-185134-2272962752-57018-4720-S0 at slave(171)@192.168.122.135:57018 (fedora-19) I0114 18:51:34.978196 4741 hierarchical_allocator_process.hpp:653] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) on slave 20150114-185134-2272962752-57018-4720-S0 from framework 20150114-185134-2272962752-57018-4720-0000 I0114 18:51:34.982658 4735 slave.cpp:2673] master@192.168.122.135:57018 exited W0114 18:51:34.983065 4735 slave.cpp:2676] Master disconnected! 
Waiting for a new master to be elected I0114 18:51:35.029485 4720 slave.cpp:495] Slave terminating I0114 18:51:35.034024 4720 slave.cpp:1585] Asked to shut down framework 20150114-185134-2272962752-57018-4720-0000 by @0.0.0.0:0 I0114 18:51:35.034335 4720 slave.cpp:1610] Shutting down framework 20150114-185134-2272962752-57018-4720-0000 I0114 18:51:35.034857 4720 slave.cpp:3198] Shutting down executor 'default' of framework 20150114-185134-2272962752-57018-4720-0000 tests/hook_tests.cpp:271: Failure Value of: os::isfile(path.get()) Actual: true Expected: false [ FAILED ] HookTest.VerifySlaveLaunchExecutorHook (412 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2228","01/15/2015 21:39:55",3,"SlaveTest.MesosExecutorGracefulShutdown is flaky ""Observed this on internal CI """," [ RUN ] SlaveTest.MesosExecutorGracefulShutdown Using temporary directory '/tmp/SlaveTest_MesosExecutorGracefulShutdown_AWdtVJ' I0124 08:14:04.399211 7926 leveldb.cpp:176] Opened db in 27.364056ms I0124 08:14:04.402632 7926 leveldb.cpp:183] Compacted db in 3.357646ms I0124 08:14:04.402691 7926 leveldb.cpp:198] Created db iterator in 23822ns I0124 08:14:04.402708 7926 leveldb.cpp:204] Seeked to beginning of db in 1913ns I0124 08:14:04.402716 7926 leveldb.cpp:273] Iterated through 0 keys in the db in 458ns I0124 08:14:04.402767 7926 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0124 08:14:04.403728 7951 recover.cpp:449] Starting replica recovery I0124 08:14:04.404011 7951 recover.cpp:475] Replica is in EMPTY status I0124 08:14:04.407765 7950 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request I0124 08:14:04.408710 7951 recover.cpp:195] Received a recover response from a replica in EMPTY status I0124 08:14:04.419666 7951 recover.cpp:566] Updating replica status to STARTING I0124 08:14:04.429719 7953 master.cpp:262] Master 20150124-081404-16842879-47787-7926 (utopic) started on 127.0.1.1:47787 I0124 08:14:04.429790 7953 master.cpp:308] Master only allowing authenticated frameworks to register I0124 08:14:04.429802 7953 master.cpp:313] Master only allowing authenticated slaves to register I0124 08:14:04.429826 7953 credentials.hpp:36] Loading credentials for authentication from '/tmp/SlaveTest_MesosExecutorGracefulShutdown_AWdtVJ/credentials' I0124 08:14:04.430277 7953 master.cpp:357] Authorization enabled I0124 08:14:04.432682 7953 master.cpp:1219] The newly elected leader is master@127.0.1.1:47787 with id 20150124-081404-16842879-47787-7926 I0124 08:14:04.432816 7953 master.cpp:1232] Elected as the leading master! 
I0124 08:14:04.432894 7953 master.cpp:1050] Recovering from registrar I0124 08:14:04.433212 7950 registrar.cpp:313] Recovering registrar I0124 08:14:04.434226 7951 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 14.323302ms I0124 08:14:04.434270 7951 replica.cpp:323] Persisted replica status to STARTING I0124 08:14:04.434489 7951 recover.cpp:475] Replica is in STARTING status I0124 08:14:04.436164 7951 replica.cpp:641] Replica in STARTING status received a broadcasted recover request I0124 08:14:04.439368 7947 recover.cpp:195] Received a recover response from a replica in STARTING status I0124 08:14:04.440626 7947 recover.cpp:566] Updating replica status to VOTING I0124 08:14:04.443667 7947 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 2.698664ms I0124 08:14:04.443759 7947 replica.cpp:323] Persisted replica status to VOTING I0124 08:14:04.443925 7947 recover.cpp:580] Successfully joined the Paxos group I0124 08:14:04.444160 7947 recover.cpp:464] Recover process terminated I0124 08:14:04.444543 7949 log.cpp:660] Attempting to start the writer I0124 08:14:04.446331 7949 replica.cpp:477] Replica received implicit promise request with proposal 1 I0124 08:14:04.449329 7949 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 2.690453ms I0124 08:14:04.449388 7949 replica.cpp:345] Persisted promised to 1 I0124 08:14:04.450637 7947 coordinator.cpp:230] Coordinator attemping to fill missing position I0124 08:14:04.452271 7949 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 I0124 08:14:04.455124 7949 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 2.593522ms I0124 08:14:04.455157 7949 replica.cpp:679] Persisted action at 0 I0124 08:14:04.456594 7951 replica.cpp:511] Replica received write request for position 0 I0124 08:14:04.456657 7951 leveldb.cpp:438] Reading position from leveldb took 30358ns I0124 08:14:04.464860 7951 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 8.164646ms I0124 08:14:04.464903 7951 replica.cpp:679] Persisted action at 0 I0124 08:14:04.465947 7949 replica.cpp:658] Replica received learned notice for position 0 I0124 08:14:04.471567 7949 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 5.587838ms I0124 08:14:04.471601 7949 replica.cpp:679] Persisted action at 0 I0124 08:14:04.471622 7949 replica.cpp:664] Replica learned NOP action at position 0 I0124 08:14:04.472682 7951 log.cpp:676] Writer started with ending position 0 I0124 08:14:04.473919 7951 leveldb.cpp:438] Reading position from leveldb took 28676ns I0124 08:14:04.491591 7951 registrar.cpp:346] Successfully fetched the registry (0B) in 58.337024ms I0124 08:14:04.491704 7951 registrar.cpp:445] Applied 1 operations in 28163ns; attempting to update the 'registry' I0124 08:14:04.493938 7953 log.cpp:684] Attempting to append 118 bytes to the log I0124 08:14:04.494122 7953 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0124 08:14:04.495069 7953 replica.cpp:511] Replica received write request for position 1 I0124 08:14:04.500089 7953 leveldb.cpp:343] Persisting action (135 bytes) to leveldb took 4.989356ms I0124 08:14:04.500123 7953 replica.cpp:679] Persisted action at 1 I0124 08:14:04.501271 7950 replica.cpp:658] Replica received learned notice for position 1 I0124 08:14:04.505698 7950 leveldb.cpp:343] Persisting action (137 bytes) to leveldb took 4.396221ms I0124 08:14:04.505734 7950 replica.cpp:679] Persisted action at 1 I0124 08:14:04.505755 7950 replica.cpp:664] 
Replica learned APPEND action at position 1 I0124 08:14:04.507313 7950 registrar.cpp:490] Successfully updated the 'registry' in 15.52896ms I0124 08:14:04.507478 7953 log.cpp:703] Attempting to truncate the log to 1 I0124 08:14:04.507848 7953 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0124 08:14:04.508743 7953 replica.cpp:511] Replica received write request for position 2 I0124 08:14:04.509214 7950 registrar.cpp:376] Successfully recovered registrar I0124 08:14:04.509682 7946 master.cpp:1077] Recovered 0 slaves from the Registry (82B) ; allowing 10mins for slaves to re-register I0124 08:14:04.514654 7953 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 5.880031ms I0124 08:14:04.514689 7953 replica.cpp:679] Persisted action at 2 I0124 08:14:04.515736 7953 replica.cpp:658] Replica received learned notice for position 2 I0124 08:14:04.522014 7953 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 6.245138ms I0124 08:14:04.522086 7953 leveldb.cpp:401] Deleting ~1 keys from leveldb took 37803ns I0124 08:14:04.522107 7953 replica.cpp:679] Persisted action at 2 I0124 08:14:04.522128 7953 replica.cpp:664] Replica learned TRUNCATE action at position 2 I0124 08:14:04.531460 7926 containerizer.cpp:103] Using isolation: posix/cpu,posix/mem I0124 08:14:04.547194 7951 slave.cpp:173] Slave started on 208)@127.0.1.1:47787 I0124 08:14:04.555682 7951 credentials.hpp:84] Loading credential for authentication from '/tmp/SlaveTest_MesosExecutorGracefulShutdown_kB74xo/credential' I0124 08:14:04.556622 7951 slave.cpp:282] Slave using credential for: test-principal I0124 08:14:04.557052 7951 slave.cpp:300] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0124 08:14:04.557842 7951 slave.cpp:329] Slave hostname: utopic I0124 08:14:04.558091 7951 slave.cpp:330] Slave checkpoint: false W0124 08:14:04.558352 7951 slave.cpp:332] Disabling checkpointing is deprecated and the --checkpoint flag will be removed in a future release. 
Please avoid using this flag I0124 08:14:04.566864 7948 state.cpp:33] Recovering state from '/tmp/SlaveTest_MesosExecutorGracefulShutdown_kB74xo/meta' I0124 08:14:04.575711 7951 status_update_manager.cpp:197] Recovering status update manager I0124 08:14:04.575904 7951 containerizer.cpp:300] Recovering containerizer I0124 08:14:04.577112 7951 slave.cpp:3519] Finished recovery I0124 08:14:04.577374 7926 sched.cpp:151] Version: 0.22.0 I0124 08:14:04.578663 7950 sched.cpp:248] New master detected at master@127.0.1.1:47787 I0124 08:14:04.578759 7950 sched.cpp:304] Authenticating with master master@127.0.1.1:47787 I0124 08:14:04.578781 7950 sched.cpp:311] Using default CRAM-MD5 authenticatee I0124 08:14:04.579071 7950 authenticatee.hpp:138] Creating new client SASL connection I0124 08:14:04.579550 7947 master.cpp:4129] Authenticating scheduler-4a6c5cde-c54a-455a-aaad-6fc4e8ee99ef@127.0.1.1:47787 I0124 08:14:04.579582 7947 master.cpp:4140] Using default CRAM-MD5 authenticator I0124 08:14:04.580031 7947 authenticator.hpp:170] Creating new server SASL connection I0124 08:14:04.580402 7947 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0124 08:14:04.580430 7947 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0124 08:14:04.580538 7947 authenticator.hpp:276] Received SASL authentication start I0124 08:14:04.580581 7947 authenticator.hpp:398] Authentication requires more steps I0124 08:14:04.580651 7947 authenticatee.hpp:275] Received SASL authentication step I0124 08:14:04.580746 7947 authenticator.hpp:304] Received SASL authentication step I0124 08:14:04.580837 7947 authenticator.hpp:390] Authentication success I0124 08:14:04.580940 7947 authenticatee.hpp:315] Authentication success I0124 08:14:04.581009 7947 master.cpp:4187] Successfully authenticated principal 'test-principal' at scheduler-4a6c5cde-c54a-455a-aaad-6fc4e8ee99ef@127.0.1.1:47787 I0124 08:14:04.581328 7947 sched.cpp:392] Successfully authenticated with master master@127.0.1.1:47787 I0124 08:14:04.581509 7947 master.cpp:1420] Received registration request for framework 'default' at scheduler-4a6c5cde-c54a-455a-aaad-6fc4e8ee99ef@127.0.1.1:47787 I0124 08:14:04.581585 7947 master.cpp:1298] Authorizing framework principal 'test-principal' to receive offers for role '*' I0124 08:14:04.582033 7947 master.cpp:1484] Registering framework 20150124-081404-16842879-47787-7926-0000 (default) at scheduler-4a6c5cde-c54a-455a-aaad-6fc4e8ee99ef@127.0.1.1:47787 I0124 08:14:04.582595 7947 hierarchical_allocator_process.hpp:319] Added framework 20150124-081404-16842879-47787-7926-0000 I0124 08:14:04.583051 7947 sched.cpp:442] Framework registered with 20150124-081404-16842879-47787-7926-0000 I0124 08:14:04.584087 7951 slave.cpp:613] New master detected at master@127.0.1.1:47787 I0124 08:14:04.584388 7951 slave.cpp:676] Authenticating with master master@127.0.1.1:47787 I0124 08:14:04.584564 7951 slave.cpp:681] Using default CRAM-MD5 authenticatee I0124 08:14:04.584951 7951 slave.cpp:649] Detecting new master I0124 08:14:04.585219 7951 status_update_manager.cpp:171] Pausing sending status updates I0124 08:14:04.585604 7951 authenticatee.hpp:138] Creating new client SASL connection I0124 08:14:04.587666 7953 master.cpp:4129] Authenticating slave(208)@127.0.1.1:47787 I0124 08:14:04.587702 7953 master.cpp:4140] Using default CRAM-MD5 authenticator I0124 08:14:04.588434 7953 authenticator.hpp:170] Creating new server SASL connection I0124 08:14:04.588764 7953 authenticatee.hpp:229] Received SASL 
authentication mechanisms: CRAM-MD5 I0124 08:14:04.588790 7953 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0124 08:14:04.588896 7953 authenticator.hpp:276] Received SASL authentication start I0124 08:14:04.588935 7953 authenticator.hpp:398] Authentication requires more steps I0124 08:14:04.589005 7953 authenticatee.hpp:275] Received SASL authentication step I0124 08:14:04.589082 7953 authenticator.hpp:304] Received SASL authentication step I0124 08:14:04.589140 7953 authenticator.hpp:390] Authentication success I0124 08:14:04.589232 7953 authenticatee.hpp:315] Authentication success I0124 08:14:04.589300 7953 master.cpp:4187] Successfully authenticated principal 'test-principal' at slave(208)@127.0.1.1:47787 I0124 08:14:04.589587 7953 slave.cpp:747] Successfully authenticated with master master@127.0.1.1:47787 I0124 08:14:04.589913 7953 master.cpp:3275] Registering slave at slave(208)@127.0.1.1:47787 (utopic) with id 20150124-081404-16842879-47787-7926-S0 I0124 08:14:04.590322 7953 registrar.cpp:445] Applied 1 operations in 60404ns; attempting to update the 'registry' I0124 08:14:04.595336 7948 log.cpp:684] Attempting to append 283 bytes to the log I0124 08:14:04.595552 7948 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I0124 08:14:04.596535 7948 replica.cpp:511] Replica received write request for position 3 I0124 08:14:04.597846 7951 master.cpp:3263] Ignoring register slave message from slave(208)@127.0.1.1:47787 (utopic) as admission is already in progress I0124 08:14:04.602326 7948 leveldb.cpp:343] Persisting action (302 bytes) to leveldb took 5.758211ms I0124 08:14:04.602363 7948 replica.cpp:679] Persisted action at 3 I0124 08:14:04.603492 7951 replica.cpp:658] Replica received learned notice for position 3 I0124 08:14:04.608952 7951 leveldb.cpp:343] Persisting action (304 bytes) to leveldb took 5.427195ms I0124 08:14:04.608985 7951 replica.cpp:679] Persisted action at 3 I0124 08:14:04.609007 7951 replica.cpp:664] Replica learned APPEND action at position 3 I0124 08:14:04.610643 7951 registrar.cpp:490] Successfully updated the 'registry' in 20.258048ms I0124 08:14:04.610800 7948 log.cpp:703] Attempting to truncate the log to 3 I0124 08:14:04.611184 7948 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I0124 08:14:04.612076 7948 replica.cpp:511] Replica received write request for position 4 I0124 08:14:04.613061 7946 master.cpp:3329] Registered slave 20150124-081404-16842879-47787-7926-S0 at slave(208)@127.0.1.1:47787 (utopic) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0124 08:14:04.613299 7946 hierarchical_allocator_process.hpp:453] Added slave 20150124-081404-16842879-47787-7926-S0 (utopic) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] available) I0124 08:14:04.613688 7946 slave.cpp:781] Registered with master master@127.0.1.1:47787; given slave ID 20150124-081404-16842879-47787-7926-S0 I0124 08:14:04.614112 7946 master.cpp:4071] Sending 1 offers to framework 20150124-081404-16842879-47787-7926-0000 (default) at scheduler-4a6c5cde-c54a-455a-aaad-6fc4e8ee99ef@127.0.1.1:47787 I0124 08:14:04.614228 7946 status_update_manager.cpp:178] Resuming sending status updates I0124 08:14:04.617481 7947 master.cpp:2677] Processing ACCEPT call for offers: [ 20150124-081404-16842879-47787-7926-O0 ] on slave 20150124-081404-16842879-47787-7926-S0 at slave(208)@127.0.1.1:47787 (utopic) for 
framework 20150124-081404-16842879-47787-7926-0000 (default) at scheduler-4a6c5cde-c54a-455a-aaad-6fc4e8ee99ef@127.0.1.1:47787 I0124 08:14:04.617535 7947 master.cpp:2513] Authorizing framework principal 'test-principal' to launch task 7c16772d-4aed-4719-81c4-658a2cc22543 as user 'jenkins' I0124 08:14:04.618736 7947 master.hpp:782] Adding task 7c16772d-4aed-4719-81c4-658a2cc22543 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20150124-081404-16842879-47787-7926-S0 (utopic) I0124 08:14:04.618854 7947 master.cpp:2885] Launching task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 (default) at scheduler-4a6c5cde-c54a-455a-aaad-6fc4e8ee99ef@127.0.1.1:47787 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20150124-081404-16842879-47787-7926-S0 at slave(208)@127.0.1.1:47787 (utopic) I0124 08:14:04.619209 7947 slave.cpp:1130] Got assigned task 7c16772d-4aed-4719-81c4-658a2cc22543 for framework 20150124-081404-16842879-47787-7926-0000 I0124 08:14:04.619472 7948 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 7.364828ms I0124 08:14:04.619941 7948 replica.cpp:679] Persisted action at 4 I0124 08:14:04.624851 7953 replica.cpp:658] Replica received learned notice for position 4 I0124 08:14:04.625757 7947 slave.cpp:1245] Launching task 7c16772d-4aed-4719-81c4-658a2cc22543 for framework 20150124-081404-16842879-47787-7926-0000 I0124 08:14:04.630590 7953 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 5.705336ms I0124 08:14:04.630805 7953 leveldb.cpp:401] Deleting ~2 keys from leveldb took 51263ns I0124 08:14:04.630828 7953 replica.cpp:679] Persisted action at 4 I0124 08:14:04.630851 7953 replica.cpp:664] Replica learned TRUNCATE action at position 4 I0124 08:14:04.633968 7947 slave.cpp:3921] Launching executor 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 in work directory '/tmp/SlaveTest_MesosExecutorGracefulShutdown_kB74xo/slaves/20150124-081404-16842879-47787-7926-S0/frameworks/20150124-081404-16842879-47787-7926-0000/executors/7c16772d-4aed-4719-81c4-658a2cc22543/runs/53887a08-f11d-4a2f-a659-a715d9fcf3d2' I0124 08:14:04.634963 7951 containerizer.cpp:445] Starting container '53887a08-f11d-4a2f-a659-a715d9fcf3d2' for executor '7c16772d-4aed-4719-81c4-658a2cc22543' of framework '20150124-081404-16842879-47787-7926-0000' W0124 08:14:04.636931 7951 containerizer.cpp:296] CommandInfo.grace_period flag is not set, using default value: 3secs I0124 08:14:04.655591 7947 slave.cpp:1368] Queuing task '7c16772d-4aed-4719-81c4-658a2cc22543' for executor 7c16772d-4aed-4719-81c4-658a2cc22543 of framework '20150124-081404-16842879-47787-7926-0000 I0124 08:14:04.656992 7951 launcher.cpp:137] Forked child with pid '11030' for container '53887a08-f11d-4a2f-a659-a715d9fcf3d2' I0124 08:14:04.673646 7951 slave.cpp:2890] Monitoring executor '7c16772d-4aed-4719-81c4-658a2cc22543' of framework '20150124-081404-16842879-47787-7926-0000' in container '53887a08-f11d-4a2f-a659-a715d9fcf3d2' I0124 08:14:04.964946 11044 exec.cpp:147] Version: 0.22.0 I0124 08:14:05.113059 7948 slave.cpp:1912] Got registration for executor '7c16772d-4aed-4719-81c4-658a2cc22543' of framework 20150124-081404-16842879-47787-7926-0000 from executor(1)@127.0.1.1:49174 I0124 08:14:05.121086 7948 slave.cpp:2031] Flushing queued task 7c16772d-4aed-4719-81c4-658a2cc22543 for executor '7c16772d-4aed-4719-81c4-658a2cc22543' of framework 
20150124-081404-16842879-47787-7926-0000 I0124 08:14:05.266849 11062 exec.cpp:221] Executor registered on slave 20150124-081404-16842879-47787-7926-S0 Shutdown timeout is set to 3secsRegistered executor on utopic Starting task 7c16772d-4aed-4719-81c4-658a2cc22543 Forked command at 11067 sh -c 'sleep 1000' I0124 08:14:05.492084 7953 slave.cpp:2265] Handling status update TASK_RUNNING (UUID: 54742a87-ef02-4e72-a19b-83b0eeb62568) for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 from executor(1)@127.0.1.1:49174 I0124 08:14:05.492805 7953 status_update_manager.cpp:317] Received status update TASK_RUNNING (UUID: 54742a87-ef02-4e72-a19b-83b0eeb62568) for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 I0124 08:14:05.493762 7953 slave.cpp:2508] Forwarding the update TASK_RUNNING (UUID: 54742a87-ef02-4e72-a19b-83b0eeb62568) for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 to master@127.0.1.1:47787 I0124 08:14:05.493948 7953 slave.cpp:2441] Sending acknowledgement for status update TASK_RUNNING (UUID: 54742a87-ef02-4e72-a19b-83b0eeb62568) for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 to executor(1)@127.0.1.1:49174 I0124 08:14:05.495378 7949 master.cpp:3652] Forwarding status update TASK_RUNNING (UUID: 54742a87-ef02-4e72-a19b-83b0eeb62568) for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 I0124 08:14:05.495584 7949 master.cpp:3624] Status update TASK_RUNNING (UUID: 54742a87-ef02-4e72-a19b-83b0eeb62568) for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 from slave 20150124-081404-16842879-47787-7926-S0 at slave(208)@127.0.1.1:47787 (utopic) I0124 08:14:05.495678 7949 master.cpp:4934] Updating the latest state of task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 to TASK_RUNNING I0124 08:14:05.496422 7949 master.cpp:3125] Forwarding status update acknowledgement 54742a87-ef02-4e72-a19b-83b0eeb62568 for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 (default) at scheduler-4a6c5cde-c54a-455a-aaad-6fc4e8ee99ef@127.0.1.1:47787 to slave 20150124-081404-16842879-47787-7926-S0 at slave(208)@127.0.1.1:47787 (utopic) I0124 08:14:05.497735 7946 master.cpp:2961] Asked to kill task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 I0124 08:14:05.497859 7946 master.cpp:3021] Telling slave 20150124-081404-16842879-47787-7926-S0 at slave(208)@127.0.1.1:47787 (utopic) to kill task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 (default) at scheduler-4a6c5cde-c54a-455a-aaad-6fc4e8ee99ef@127.0.1.1:47787 I0124 08:14:05.498589 7947 status_update_manager.cpp:389] Received status update acknowledgement (UUID: 54742a87-ef02-4e72-a19b-83b0eeb62568) for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 I0124 08:14:05.499006 7953 slave.cpp:1424] Asked to kill task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 Shutting down Sending SIGTERM to process tree at pid 11067 Killing the following process trees: [ -+- 11067 sh -c sleep 1000 \--- 11068 sleep 1000 ] 2015-01-24 08:14:07,295:7926(0x7f30b1b34700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:57753] zk 
retcode=-4, errno=111(Connection refused): server refused to accept the client Process 11067 did not terminate after 3secs, sending SIGKILL to process tree at 11067 Killed the following process trees: [ -+- 11067 sh -c sleep 1000 \--- 11068 sleep 1000 ] Command terminated with signal Killed (pid: 11067) I0124 08:14:09.063453 7953 slave.cpp:2265] Handling status update TASK_KILLED (UUID: 4bd05372-2705-46e5-8182-5cb6907fbab3) for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 from executor(1)@127.0.1.1:49174 I0124 08:14:09.069545 7953 status_update_manager.cpp:317] Received status update TASK_KILLED (UUID: 4bd05372-2705-46e5-8182-5cb6907fbab3) for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 I0124 08:14:09.070265 7953 slave.cpp:2508] Forwarding the update TASK_KILLED (UUID: 4bd05372-2705-46e5-8182-5cb6907fbab3) for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 to master@127.0.1.1:47787 I0124 08:14:09.070996 7947 master.cpp:3652] Forwarding status update TASK_KILLED (UUID: 4bd05372-2705-46e5-8182-5cb6907fbab3) for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 I0124 08:14:09.071182 7947 master.cpp:3624] Status update TASK_KILLED (UUID: 4bd05372-2705-46e5-8182-5cb6907fbab3) for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 from slave 20150124-081404-16842879-47787-7926-S0 at slave(208)@127.0.1.1:47787 (utopic) I0124 08:14:09.071260 7947 master.cpp:4934] Updating the latest state of task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 to TASK_KILLED I0124 08:14:09.072052 7947 hierarchical_allocator_process.hpp:653] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) on slave 20150124-081404-16842879-47787-7926-S0 from framework 20150124-081404-16842879-47787-7926-0000 I0124 08:14:09.072449 7947 master.cpp:4993] Removing task 7c16772d-4aed-4719-81c4-658a2cc22543 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 20150124-081404-16842879-47787-7926-0000 on slave 20150124-081404-16842879-47787-7926-S0 at slave(208)@127.0.1.1:47787 (utopic) I0124 08:14:09.072700 7947 master.cpp:3125] Forwarding status update acknowledgement 4bd05372-2705-46e5-8182-5cb6907fbab3 for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 (default) at scheduler-4a6c5cde-c54a-455a-aaad-6fc4e8ee99ef@127.0.1.1:47787 to slave 20150124-081404-16842879-47787-7926-S0 at slave(208)@127.0.1.1:47787 (utopic) ../../src/tests/slave_tests.cpp:1736: Failure Expected: (std::string::npos) != (statusKilled.get().message().find(""""Terminated"""")), actual: 18446744073709551615 vs 18446744073709551615 I0124 08:14:09.073422 7926 sched.cpp:1471] Asked to stop the driver I0124 08:14:09.073629 7926 master.cpp:654] Master terminating I0124 08:14:09.075768 7950 sched.cpp:808] Stopping framework '20150124-081404-16842879-47787-7926-0000' I0124 08:14:09.079352 7953 slave.cpp:2441] Sending acknowledgement for status update TASK_KILLED (UUID: 4bd05372-2705-46e5-8182-5cb6907fbab3) for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 to executor(1)@127.0.1.1:49174 I0124 08:14:09.085199 7953 slave.cpp:2673] master@127.0.1.1:47787 exited 
W0124 08:14:09.085232 7953 slave.cpp:2676] Master disconnected! Waiting for a new master to be elected I0124 08:14:09.085263 7953 status_update_manager.cpp:389] Received status update acknowledgement (UUID: 4bd05372-2705-46e5-8182-5cb6907fbab3) for task 7c16772d-4aed-4719-81c4-658a2cc22543 of framework 20150124-081404-16842879-47787-7926-0000 I0124 08:14:09.120879 7946 containerizer.cpp:879] Destroying container '53887a08-f11d-4a2f-a659-a715d9fcf3d2' I0124 08:14:09.216553 7952 containerizer.cpp:1084] Executor for container '53887a08-f11d-4a2f-a659-a715d9fcf3d2' has exited I0124 08:14:09.218641 7952 slave.cpp:2948] Executor '7c16772d-4aed-4719-81c4-658a2cc22543' of framework 20150124-081404-16842879-47787-7926-0000 terminated with signal Killed I0124 08:14:09.218855 7952 slave.cpp:3057] Cleaning up executor '7c16772d-4aed-4719-81c4-658a2cc22543' of framework 20150124-081404-16842879-47787-7926-0000 I0124 08:14:09.223268 7947 gc.cpp:56] Scheduling '/tmp/SlaveTest_MesosExecutorGracefulShutdown_kB74xo/slaves/20150124-081404-16842879-47787-7926-S0/frameworks/20150124-081404-16842879-47787-7926-0000/executors/7c16772d-4aed-4719-81c4-658a2cc22543/runs/53887a08-f11d-4a2f-a659-a715d9fcf3d2' for gc 6.99999746482667days in the future I0124 08:14:09.224205 7947 gc.cpp:56] Scheduling '/tmp/SlaveTest_MesosExecutorGracefulShutdown_kB74xo/slaves/20150124-081404-16842879-47787-7926-S0/frameworks/20150124-081404-16842879-47787-7926-0000/executors/7c16772d-4aed-4719-81c4-658a2cc22543' for gc 6.99999746293926days in the future I0124 08:14:09.227552 7952 slave.cpp:3136] Cleaning up framework 20150124-081404-16842879-47787-7926-0000 I0124 08:14:09.229786 7949 status_update_manager.cpp:279] Closing status update streams for framework 20150124-081404-16842879-47787-7926-0000 I0124 08:14:09.230849 7952 slave.cpp:495] Slave terminating I0124 08:14:09.230989 7952 gc.cpp:56] Scheduling '/tmp/SlaveTest_MesosExecutorGracefulShutdown_kB74xo/slaves/20150124-081404-16842879-47787-7926-S0/frameworks/20150124-081404-16842879-47787-7926-0000' for gc 6.99999732935407days in the future [ FAILED ] SlaveTest.MesosExecutorGracefulShutdown (4881 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2230","01/16/2015 01:00:53",3,"Update RateLimiter to allow the acquired future to be discarded ""Currently there is no way for the future returned by RateLimiter's acquire() to be discarded by the user of the limiter. This is useful in cases where the user is no longer interested in the permit. See MESOS-1148 for an example use case.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2232","01/16/2015 15:45:41",3,"Suppress MockAllocator::transformAllocation() warnings. ""After transforming allocated resources feature was added to allocator, a number of warnings are popping out for allocator tests. 
Commits leading to this behaviour: {{dacc88292cc13d4b08fe8cda4df71110a96cb12a}} {{5a02d5bdc75d3b1149dcda519016374be06ec6bd}} corresponding reviews: https://reviews.apache.org/r/29083 https://reviews.apache.org/r/29084 Here is an example: """," [ RUN ] MasterAllocatorTest/0.FrameworkReregistersFirst GMOCK WARNING: Uninteresting mock function call - taking default action specified at: ../../../src/tests/mesos.hpp:719: Function call: transformAllocation(@0x7fd3bb5274d8 20150115-185632-1677764800-59671-44186-0000, @0x7fd3bb5274f8 20150115-185632-1677764800-59671-44186-S0, @0x1119140e0 16-byte object ) Stack trace: [ OK ] MasterAllocatorTest/0.FrameworkReregistersFirst (204 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2241","01/21/2015 22:59:46",1,"DiskUsageCollectorTest.SymbolicLink test is flaky ""Observed this on a local machine running linux w/ sudo. """," [ RUN ] DiskUsageCollectorTest.SymbolicLink ../../src/tests/disk_quota_tests.cpp:138: Failure Expected: (usage1.get()) < (Kilobytes(16)), actual: 24KB vs 8-byte object <00-40 00-00 00-00 00-00> [ FAILED ] DiskUsageCollectorTest.SymbolicLink (201 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2273","01/26/2015 23:07:55",1,"Add ""tests"" target to Makefile for building-but-not-running tests. ""'make check' allows one to build and run the test suite. However, often we just want to build the tests. Currently, this is done by setting GTEST_FILTER to an empty string. It will be nice to have a dedicated target such as 'make tests' that allows one to build the test suite without running it.""","",0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2281","01/27/2015 22:33:03",3,"Deprecate plain text Credential format. ""Currently two formats of credentials are supported: JSON And a new line file: We should deprecate the new line format and remove support for the old format."""," """"credentials"""": [ { """"principal"""": """"sherman"""", """"secret"""": """"kitesurf"""" } principal1 secret1 pricipal2 secret2 ",0,1,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2283","01/28/2015 02:31:40",1,"SlaveRecoveryTest.ReconcileKillTask is flaky. 
""Saw this on an internal CI: """," [ RUN ] SlaveRecoveryTest/0.ReconcileKillTask Using temporary directory '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_D5wSwg' I0126 19:10:52.005317 13291 leveldb.cpp:176] Opened db in 978670ns I0126 19:10:52.006155 13291 leveldb.cpp:183] Compacted db in 541346ns I0126 19:10:52.006494 13291 leveldb.cpp:198] Created db iterator in 24562ns I0126 19:10:52.006798 13291 leveldb.cpp:204] Seeked to beginning of db in 3254ns I0126 19:10:52.007036 13291 leveldb.cpp:273] Iterated through 0 keys in the db in 949ns I0126 19:10:52.007369 13291 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0126 19:10:52.008362 13308 recover.cpp:449] Starting replica recovery I0126 19:10:52.009141 13308 recover.cpp:475] Replica is in EMPTY status I0126 19:10:52.016494 13308 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request I0126 19:10:52.017333 13309 recover.cpp:195] Received a recover response from a replica in EMPTY status I0126 19:10:52.018244 13309 recover.cpp:566] Updating replica status to STARTING I0126 19:10:52.019064 13305 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 113577ns I0126 19:10:52.019487 13305 replica.cpp:323] Persisted replica status to STARTING I0126 19:10:52.019937 13309 recover.cpp:475] Replica is in STARTING status I0126 19:10:52.021492 13307 replica.cpp:641] Replica in STARTING status received a broadcasted recover request I0126 19:10:52.022665 13309 recover.cpp:195] Received a recover response from a replica in STARTING status I0126 19:10:52.027971 13312 recover.cpp:566] Updating replica status to VOTING I0126 19:10:52.028590 13312 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 78452ns I0126 19:10:52.028869 13312 replica.cpp:323] Persisted replica status to VOTING I0126 19:10:52.029252 13312 recover.cpp:580] Successfully joined the Paxos group I0126 19:10:52.030828 13307 recover.cpp:464] Recover process terminated I0126 19:10:52.049947 13306 master.cpp:262] Master 20150126-191052-2272962752-35545-13291 (fedora-19) started on 192.168.122.135:35545 I0126 19:10:52.050499 13306 master.cpp:308] Master only allowing authenticated frameworks to register I0126 19:10:52.050765 13306 master.cpp:313] Master only allowing authenticated slaves to register I0126 19:10:52.051048 13306 credentials.hpp:36] Loading credentials for authentication from '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_D5wSwg/credentials' I0126 19:10:52.051589 13306 master.cpp:357] Authorization enabled I0126 19:10:52.052531 13305 hierarchical_allocator_process.hpp:285] Initialized hierarchical allocator process I0126 19:10:52.052881 13311 whitelist_watcher.cpp:65] No whitelist given I0126 19:10:52.055524 13306 master.cpp:1219] The newly elected leader is master@192.168.122.135:35545 with id 20150126-191052-2272962752-35545-13291 I0126 19:10:52.056226 13306 master.cpp:1232] Elected as the leading master! 
I0126 19:10:52.056639 13306 master.cpp:1050] Recovering from registrar I0126 19:10:52.057045 13307 registrar.cpp:313] Recovering registrar I0126 19:10:52.058554 13312 log.cpp:660] Attempting to start the writer I0126 19:10:52.060868 13309 replica.cpp:477] Replica received implicit promise request with proposal 1 I0126 19:10:52.061691 13309 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 91680ns I0126 19:10:52.062261 13309 replica.cpp:345] Persisted promised to 1 I0126 19:10:52.064559 13310 coordinator.cpp:230] Coordinator attemping to fill missing position I0126 19:10:52.069105 13311 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 I0126 19:10:52.069860 13311 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 94858ns I0126 19:10:52.070350 13311 replica.cpp:679] Persisted action at 0 I0126 19:10:52.080348 13305 replica.cpp:511] Replica received write request for position 0 I0126 19:10:52.081153 13305 leveldb.cpp:438] Reading position from leveldb took 62247ns I0126 19:10:52.081676 13305 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 81487ns I0126 19:10:52.082053 13305 replica.cpp:679] Persisted action at 0 I0126 19:10:52.083566 13309 replica.cpp:658] Replica received learned notice for position 0 I0126 19:10:52.085734 13309 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 283144ns I0126 19:10:52.086067 13309 replica.cpp:679] Persisted action at 0 I0126 19:10:52.086448 13309 replica.cpp:664] Replica learned NOP action at position 0 I0126 19:10:52.089784 13306 log.cpp:676] Writer started with ending position 0 I0126 19:10:52.093415 13309 leveldb.cpp:438] Reading position from leveldb took 66744ns I0126 19:10:52.104814 13306 registrar.cpp:346] Successfully fetched the registry (0B) in 47.451136ms I0126 19:10:52.105731 13306 registrar.cpp:445] Applied 1 operations in 42124ns; attempting to update the 'registry' I0126 19:10:52.111935 13305 log.cpp:684] Attempting to append 131 bytes to the log I0126 19:10:52.112754 13305 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0126 19:10:52.114297 13308 replica.cpp:511] Replica received write request for position 1 I0126 19:10:52.114908 13308 leveldb.cpp:343] Persisting action (150 bytes) to leveldb took 98332ns I0126 19:10:52.115387 13308 replica.cpp:679] Persisted action at 1 I0126 19:10:52.117277 13305 replica.cpp:658] Replica received learned notice for position 1 I0126 19:10:52.118142 13305 leveldb.cpp:343] Persisting action (152 bytes) to leveldb took 227799ns I0126 19:10:52.118621 13305 replica.cpp:679] Persisted action at 1 I0126 19:10:52.118979 13305 replica.cpp:664] Replica learned APPEND action at position 1 I0126 19:10:52.121311 13305 registrar.cpp:490] Successfully updated the 'registry' in 15.161088ms I0126 19:10:52.121548 13311 log.cpp:703] Attempting to truncate the log to 1 I0126 19:10:52.122697 13311 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0126 19:10:52.124316 13307 replica.cpp:511] Replica received write request for position 2 I0126 19:10:52.124913 13307 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 87281ns I0126 19:10:52.125334 13307 replica.cpp:679] Persisted action at 2 I0126 19:10:52.127018 13311 replica.cpp:658] Replica received learned notice for position 2 I0126 19:10:52.127835 13311 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 201050ns I0126 19:10:52.128232 13311 leveldb.cpp:401] Deleting ~1 keys from leveldb took 78012ns 
I0126 19:10:52.128835 13305 registrar.cpp:376] Successfully recovered registrar I0126 19:10:52.128551 13311 replica.cpp:679] Persisted action at 2 I0126 19:10:52.130105 13311 replica.cpp:664] Replica learned TRUNCATE action at position 2 I0126 19:10:52.131479 13312 master.cpp:1077] Recovered 0 slaves from the Registry (95B) ; allowing 10mins for slaves to re-register I0126 19:10:52.143465 13291 containerizer.cpp:103] Using isolation: posix/cpu,posix/mem I0126 19:10:52.170471 13309 slave.cpp:173] Slave started on 101)@192.168.122.135:35545 I0126 19:10:52.171723 13309 credentials.hpp:84] Loading credential for authentication from '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/credential' I0126 19:10:52.172286 13309 slave.cpp:282] Slave using credential for: test-principal I0126 19:10:52.172821 13309 slave.cpp:300] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0126 19:10:52.173982 13309 slave.cpp:329] Slave hostname: fedora-19 I0126 19:10:52.174505 13309 slave.cpp:330] Slave checkpoint: true I0126 19:10:52.179308 13309 state.cpp:33] Recovering state from '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/meta' I0126 19:10:52.180075 13308 status_update_manager.cpp:197] Recovering status update manager I0126 19:10:52.180611 13308 containerizer.cpp:300] Recovering containerizer I0126 19:10:52.182473 13309 slave.cpp:3519] Finished recovery I0126 19:10:52.184403 13312 slave.cpp:613] New master detected at master@192.168.122.135:35545 I0126 19:10:52.184916 13312 slave.cpp:676] Authenticating with master master@192.168.122.135:35545 I0126 19:10:52.185230 13312 slave.cpp:681] Using default CRAM-MD5 authenticatee I0126 19:10:52.185715 13312 slave.cpp:649] Detecting new master I0126 19:10:52.186420 13312 authenticatee.hpp:138] Creating new client SASL connection I0126 19:10:52.186002 13311 status_update_manager.cpp:171] Pausing sending status updates I0126 19:10:52.188293 13312 master.cpp:4129] Authenticating slave(101)@192.168.122.135:35545 I0126 19:10:52.188748 13312 master.cpp:4140] Using default CRAM-MD5 authenticator I0126 19:10:52.189525 13312 authenticator.hpp:170] Creating new server SASL connection I0126 19:10:52.191082 13305 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0126 19:10:52.191550 13305 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0126 19:10:52.191990 13312 authenticator.hpp:276] Received SASL authentication start I0126 19:10:52.192365 13312 authenticator.hpp:398] Authentication requires more steps I0126 19:10:52.192800 13311 authenticatee.hpp:275] Received SASL authentication step I0126 19:10:52.193244 13312 authenticator.hpp:304] Received SASL authentication step I0126 19:10:52.193565 13312 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0126 19:10:52.193902 13312 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0126 19:10:52.194301 13312 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0126 19:10:52.195669 13312 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0126 19:10:52.196048 13312 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0126 19:10:52.196395 13312 
auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0126 19:10:52.196723 13312 authenticator.hpp:390] Authentication success I0126 19:10:52.197206 13305 authenticatee.hpp:315] Authentication success I0126 19:10:52.204121 13305 slave.cpp:747] Successfully authenticated with master master@192.168.122.135:35545 I0126 19:10:52.204676 13310 master.cpp:4187] Successfully authenticated principal 'test-principal' at slave(101)@192.168.122.135:35545 I0126 19:10:52.205729 13305 slave.cpp:1075] Will retry registration in 5.608661ms if necessary I0126 19:10:52.206451 13310 master.cpp:3275] Registering slave at slave(101)@192.168.122.135:35545 (fedora-19) with id 20150126-191052-2272962752-35545-13291-S0 I0126 19:10:52.210019 13310 registrar.cpp:445] Applied 1 operations in 235087ns; attempting to update the 'registry' I0126 19:10:52.220736 13308 slave.cpp:1075] Will retry registration in 9.28397ms if necessary I0126 19:10:52.221309 13311 master.cpp:3263] Ignoring register slave message from slave(101)@192.168.122.135:35545 (fedora-19) as admission is already in progress I0126 19:10:52.224818 13307 log.cpp:684] Attempting to append 302 bytes to the log I0126 19:10:52.225554 13307 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I0126 19:10:52.227422 13305 replica.cpp:511] Replica received write request for position 3 I0126 19:10:52.227969 13305 leveldb.cpp:343] Persisting action (321 bytes) to leveldb took 100350ns I0126 19:10:52.228276 13305 replica.cpp:679] Persisted action at 3 I0126 19:10:52.232475 13312 replica.cpp:658] Replica received learned notice for position 3 I0126 19:10:52.233280 13312 leveldb.cpp:343] Persisting action (323 bytes) to leveldb took 546567ns I0126 19:10:52.233726 13312 replica.cpp:679] Persisted action at 3 I0126 19:10:52.234035 13312 replica.cpp:664] Replica learned APPEND action at position 3 I0126 19:10:52.236556 13310 registrar.cpp:490] Successfully updated the 'registry' in 26.040064ms I0126 19:10:52.237330 13305 log.cpp:703] Attempting to truncate the log to 3 I0126 19:10:52.238056 13311 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I0126 19:10:52.239594 13311 replica.cpp:511] Replica received write request for position 4 I0126 19:10:52.240129 13311 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 92868ns I0126 19:10:52.240458 13311 replica.cpp:679] Persisted action at 4 I0126 19:10:52.241976 13308 replica.cpp:658] Replica received learned notice for position 4 I0126 19:10:52.242645 13308 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 95635ns I0126 19:10:52.242990 13308 leveldb.cpp:401] Deleting ~2 keys from leveldb took 58066ns I0126 19:10:52.243337 13308 replica.cpp:679] Persisted action at 4 I0126 19:10:52.243695 13308 replica.cpp:664] Replica learned TRUNCATE action at position 4 I0126 19:10:52.245657 13291 sched.cpp:151] Version: 0.22.0 I0126 19:10:52.247625 13305 master.cpp:3329] Registered slave 20150126-191052-2272962752-35545-13291-S0 at slave(101)@192.168.122.135:35545 (fedora-19) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0126 19:10:52.248942 13307 slave.cpp:781] Registered with master master@192.168.122.135:35545; given slave ID 20150126-191052-2272962752-35545-13291-S0 I0126 19:10:52.250396 13307 slave.cpp:797] Checkpointing SlaveInfo to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/meta/slaves/20150126-191052-2272962752-35545-13291-S0/slave.info' I0126 19:10:52.250731 13309 
status_update_manager.cpp:178] Resuming sending status updates I0126 19:10:52.251765 13307 slave.cpp:2588] Received ping from slave-observer(99)@192.168.122.135:35545 I0126 19:10:52.247951 13310 hierarchical_allocator_process.hpp:453] Added slave 20150126-191052-2272962752-35545-13291-S0 (fedora-19) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] available) I0126 19:10:52.252810 13310 hierarchical_allocator_process.hpp:831] No resources available to allocate! I0126 19:10:52.254365 13310 hierarchical_allocator_process.hpp:756] Performed allocation for slave 20150126-191052-2272962752-35545-13291-S0 in 1.732701ms I0126 19:10:52.254137 13307 sched.cpp:248] New master detected at master@192.168.122.135:35545 I0126 19:10:52.257863 13307 sched.cpp:304] Authenticating with master master@192.168.122.135:35545 I0126 19:10:52.258249 13307 sched.cpp:311] Using default CRAM-MD5 authenticatee I0126 19:10:52.258908 13306 authenticatee.hpp:138] Creating new client SASL connection I0126 19:10:52.261397 13309 master.cpp:4129] Authenticating scheduler-6da85b48-b57f-4202-b630-c45f8f652321@192.168.122.135:35545 I0126 19:10:52.261776 13309 master.cpp:4140] Using default CRAM-MD5 authenticator I0126 19:10:52.264528 13309 authenticator.hpp:170] Creating new server SASL connection I0126 19:10:52.266248 13312 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0126 19:10:52.266749 13312 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0126 19:10:52.267143 13312 authenticator.hpp:276] Received SASL authentication start I0126 19:10:52.267525 13312 authenticator.hpp:398] Authentication requires more steps I0126 19:10:52.267917 13312 authenticatee.hpp:275] Received SASL authentication step I0126 19:10:52.268404 13312 authenticator.hpp:304] Received SASL authentication step I0126 19:10:52.268725 13312 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0126 19:10:52.269078 13312 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0126 19:10:52.269498 13312 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0126 19:10:52.269881 13312 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0126 19:10:52.270385 13312 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0126 19:10:52.271015 13312 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0126 19:10:52.271599 13312 authenticator.hpp:390] Authentication success I0126 19:10:52.272126 13312 authenticatee.hpp:315] Authentication success I0126 19:10:52.272415 13305 master.cpp:4187] Successfully authenticated principal 'test-principal' at scheduler-6da85b48-b57f-4202-b630-c45f8f652321@192.168.122.135:35545 I0126 19:10:52.273998 13307 sched.cpp:392] Successfully authenticated with master master@192.168.122.135:35545 I0126 19:10:52.274415 13307 sched.cpp:515] Sending registration request to master@192.168.122.135:35545 I0126 19:10:52.274842 13307 sched.cpp:548] Will retry registration in 674.656506ms if necessary I0126 19:10:52.275235 13305 master.cpp:1420] Received registration request for 
framework 'default' at scheduler-6da85b48-b57f-4202-b630-c45f8f652321@192.168.122.135:35545 I0126 19:10:52.276017 13305 master.cpp:1298] Authorizing framework principal 'test-principal' to receive offers for role '*' I0126 19:10:52.277027 13305 master.cpp:1484] Registering framework 20150126-191052-2272962752-35545-13291-0000 (default) at scheduler-6da85b48-b57f-4202-b630-c45f8f652321@192.168.122.135:35545 I0126 19:10:52.278285 13308 hierarchical_allocator_process.hpp:319] Added framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.279575 13308 hierarchical_allocator_process.hpp:738] Performed allocation for 1 slaves in 697902ns I0126 19:10:52.287966 13305 master.cpp:4071] Sending 1 offers to framework 20150126-191052-2272962752-35545-13291-0000 (default) at scheduler-6da85b48-b57f-4202-b630-c45f8f652321@192.168.122.135:35545 I0126 19:10:52.288776 13307 sched.cpp:442] Framework registered with 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.289373 13307 sched.cpp:456] Scheduler::registered took 21674ns I0126 19:10:52.289932 13307 sched.cpp:605] Scheduler::resourceOffers took 76147ns I0126 19:10:52.293220 13311 master.cpp:2677] Processing ACCEPT call for offers: [ 20150126-191052-2272962752-35545-13291-O0 ] on slave 20150126-191052-2272962752-35545-13291-S0 at slave(101)@192.168.122.135:35545 (fedora-19) for framework 20150126-191052-2272962752-35545-13291-0000 (default) at scheduler-6da85b48-b57f-4202-b630-c45f8f652321@192.168.122.135:35545 I0126 19:10:52.293586 13311 master.cpp:2513] Authorizing framework principal 'test-principal' to launch task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e as user 'jenkins' I0126 19:10:52.295825 13311 master.hpp:782] Adding task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20150126-191052-2272962752-35545-13291-S0 (fedora-19) I0126 19:10:52.296272 13311 master.cpp:2885] Launching task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 (default) at scheduler-6da85b48-b57f-4202-b630-c45f8f652321@192.168.122.135:35545 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20150126-191052-2272962752-35545-13291-S0 at slave(101)@192.168.122.135:35545 (fedora-19) I0126 19:10:52.296886 13309 slave.cpp:1130] Got assigned task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e for framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.297324 13309 slave.cpp:3846] Checkpointing FrameworkInfo to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/meta/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000/framework.info' I0126 19:10:52.297919 13309 slave.cpp:3853] Checkpointing framework pid 'scheduler-6da85b48-b57f-4202-b630-c45f8f652321@192.168.122.135:35545' to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/meta/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000/framework.pid' I0126 19:10:52.299072 13309 slave.cpp:1245] Launching task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e for framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.308050 13309 slave.cpp:4289] Checkpointing ExecutorInfo to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/meta/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000/executors/61eaeec3-e8ca-4e15-82d6-284c05c3bb6e/executor.info' I0126 19:10:52.310894 13309 slave.cpp:3921] Launching executor 
61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 in work directory '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000/executors/61eaeec3-e8ca-4e15-82d6-284c05c3bb6e/runs/960eca2c-9e2c-415a-b6a5-159efca1f1b0' I0126 19:10:52.311957 13308 containerizer.cpp:445] Starting container '960eca2c-9e2c-415a-b6a5-159efca1f1b0' for executor '61eaeec3-e8ca-4e15-82d6-284c05c3bb6e' of framework '20150126-191052-2272962752-35545-13291-0000' W0126 19:10:52.313951 13307 containerizer.cpp:296] CommandInfo.grace_period flag is not set, using default value: 3secs I0126 19:10:52.330166 13309 slave.cpp:4312] Checkpointing TaskInfo to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/meta/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000/executors/61eaeec3-e8ca-4e15-82d6-284c05c3bb6e/runs/960eca2c-9e2c-415a-b6a5-159efca1f1b0/tasks/61eaeec3-e8ca-4e15-82d6-284c05c3bb6e/task.info' I0126 19:10:52.333307 13309 slave.cpp:1368] Queuing task '61eaeec3-e8ca-4e15-82d6-284c05c3bb6e' for executor 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework '20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.332506 13307 launcher.cpp:137] Forked child with pid '15795' for container '960eca2c-9e2c-415a-b6a5-159efca1f1b0' I0126 19:10:52.334852 13307 containerizer.cpp:655] Checkpointing executor's forked pid 15795 to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/meta/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000/executors/61eaeec3-e8ca-4e15-82d6-284c05c3bb6e/runs/960eca2c-9e2c-415a-b6a5-159efca1f1b0/pids/forked.pid' I0126 19:10:52.339607 13309 slave.cpp:566] Successfully attached file '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000/executors/61eaeec3-e8ca-4e15-82d6-284c05c3bb6e/runs/960eca2c-9e2c-415a-b6a5-159efca1f1b0' I0126 19:10:52.341423 13309 slave.cpp:2890] Monitoring executor '61eaeec3-e8ca-4e15-82d6-284c05c3bb6e' of framework '20150126-191052-2272962752-35545-13291-0000' in container '960eca2c-9e2c-415a-b6a5-159efca1f1b0' WARNING: Logging before InitGoogleLogging() is written to STDERR I0126 19:10:52.584766 15795 process.cpp:958] libprocess is initialized on 192.168.122.135:41245 for 8 cpus I0126 19:10:52.597306 15795 logging.cpp:177] Logging to STDERR I0126 19:10:52.606741 15795 exec.cpp:147] Version: 0.22.0 I0126 19:10:52.617653 15825 exec.cpp:197] Executor started at: executor(1)@192.168.122.135:41245 with pid 15795 I0126 19:10:52.643771 13309 slave.cpp:1912] Got registration for executor '61eaeec3-e8ca-4e15-82d6-284c05c3bb6e' of framework 20150126-191052-2272962752-35545-13291-0000 from executor(1)@192.168.122.135:41245 I0126 19:10:52.644484 13309 slave.cpp:1998] Checkpointing executor pid 'executor(1)@192.168.122.135:41245' to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/meta/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000/executors/61eaeec3-e8ca-4e15-82d6-284c05c3bb6e/runs/960eca2c-9e2c-415a-b6a5-159efca1f1b0/pids/libprocess.pid' I0126 19:10:52.648509 13309 slave.cpp:2031] Flushing queued task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e for executor '61eaeec3-e8ca-4e15-82d6-284c05c3bb6e' of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.701879 15830 exec.cpp:221] Executor 
registered on slave 20150126-191052-2272962752-35545-13291-S0 Shutdown timeout is set to 3secsRegistered executor on fedora-19 I0126 19:10:52.706497 15830 exec.cpp:233] Executor::registered took 2.369798ms I0126 19:10:52.710708 15830 exec.cpp:308] Executor asked to run task '61eaeec3-e8ca-4e15-82d6-284c05c3bb6e' Starting task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e I0126 19:10:52.713075 15830 exec.cpp:317] Executor::launchTask took 1.248631ms sh -c 'sleep 1000' Forked command at 15832 I0126 19:10:52.720675 15824 exec.cpp:540] Executor sending status update TASK_RUNNING (UUID: 9577ef79-7e59-4be6-a892-5a20baa13647) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.722925 13308 slave.cpp:2265] Handling status update TASK_RUNNING (UUID: 9577ef79-7e59-4be6-a892-5a20baa13647) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 from executor(1)@192.168.122.135:41245 I0126 19:10:52.723328 13308 status_update_manager.cpp:317] Received status update TASK_RUNNING (UUID: 9577ef79-7e59-4be6-a892-5a20baa13647) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.723371 13308 status_update_manager.cpp:494] Creating StatusUpdate stream for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.723803 13308 status_update_manager.hpp:346] Checkpointing UPDATE for status update TASK_RUNNING (UUID: 9577ef79-7e59-4be6-a892-5a20baa13647) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.723963 13308 status_update_manager.cpp:371] Forwarding update TASK_RUNNING (UUID: 9577ef79-7e59-4be6-a892-5a20baa13647) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 to the slave I0126 19:10:52.724717 13312 slave.cpp:2508] Forwarding the update TASK_RUNNING (UUID: 9577ef79-7e59-4be6-a892-5a20baa13647) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 to master@192.168.122.135:35545 I0126 19:10:52.725385 13305 master.cpp:3652] Forwarding status update TASK_RUNNING (UUID: 9577ef79-7e59-4be6-a892-5a20baa13647) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.725857 13305 master.cpp:3624] Status update TASK_RUNNING (UUID: 9577ef79-7e59-4be6-a892-5a20baa13647) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 from slave 20150126-191052-2272962752-35545-13291-S0 at slave(101)@192.168.122.135:35545 (fedora-19) I0126 19:10:52.726471 13305 master.cpp:4934] Updating the latest state of task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 to TASK_RUNNING I0126 19:10:52.726269 13311 sched.cpp:696] Scheduler::statusUpdate took 22534ns I0126 19:10:52.727679 13311 master.cpp:3125] Forwarding status update acknowledgement 9577ef79-7e59-4be6-a892-5a20baa13647 for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 (default) at scheduler-6da85b48-b57f-4202-b630-c45f8f652321@192.168.122.135:35545 to slave 20150126-191052-2272962752-35545-13291-S0 at slave(101)@192.168.122.135:35545 (fedora-19) I0126 19:10:52.728380 13308 status_update_manager.cpp:389] Received status update acknowledgement (UUID: 
9577ef79-7e59-4be6-a892-5a20baa13647) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.728579 13311 slave.cpp:2435] Status update manager successfully handled status update TASK_RUNNING (UUID: 9577ef79-7e59-4be6-a892-5a20baa13647) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.729403 13311 slave.cpp:2441] Sending acknowledgement for status update TASK_RUNNING (UUID: 9577ef79-7e59-4be6-a892-5a20baa13647) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 to executor(1)@192.168.122.135:41245 I0126 19:10:52.728869 13308 status_update_manager.hpp:346] Checkpointing ACK for status update TASK_RUNNING (UUID: 9577ef79-7e59-4be6-a892-5a20baa13647) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.731828 13307 slave.cpp:1852] Status update manager successfully handled status update acknowledgement (UUID: 9577ef79-7e59-4be6-a892-5a20baa13647) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.732923 13307 slave.cpp:495] Slave terminating I0126 19:10:52.739572 15827 exec.cpp:354] Executor received status update acknowledgement 9577ef79-7e59-4be6-a892-5a20baa13647 for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.743466 13306 master.cpp:795] Slave 20150126-191052-2272962752-35545-13291-S0 at slave(101)@192.168.122.135:35545 (fedora-19) disconnected I0126 19:10:52.743948 13306 master.cpp:1826] Disconnecting slave 20150126-191052-2272962752-35545-13291-S0 at slave(101)@192.168.122.135:35545 (fedora-19) I0126 19:10:52.744940 13306 master.cpp:1845] Deactivating slave 20150126-191052-2272962752-35545-13291-S0 at slave(101)@192.168.122.135:35545 (fedora-19) I0126 19:10:52.752821 13306 hierarchical_allocator_process.hpp:512] Slave 20150126-191052-2272962752-35545-13291-S0 deactivated I0126 19:10:52.765900 13291 containerizer.cpp:103] Using isolation: posix/cpu,posix/mem I0126 19:10:52.766723 13309 master.cpp:2961] Asked to kill task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 W0126 19:10:52.767549 13309 master.cpp:3030] Cannot kill task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 (default) at scheduler-6da85b48-b57f-4202-b630-c45f8f652321@192.168.122.135:35545 because the slave 20150126-191052-2272962752-35545-13291-S0 at slave(101)@192.168.122.135:35545 (fedora-19) is disconnected. 
Kill will be retried if the slave re-registers I0126 19:10:52.789048 13307 slave.cpp:173] Slave started on 102)@192.168.122.135:35545 I0126 19:10:52.790671 13307 credentials.hpp:84] Loading credential for authentication from '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/credential' I0126 19:10:52.791497 13307 slave.cpp:282] Slave using credential for: test-principal I0126 19:10:52.792064 13307 slave.cpp:300] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0126 19:10:52.793090 13307 slave.cpp:329] Slave hostname: fedora-19 I0126 19:10:52.793556 13307 slave.cpp:330] Slave checkpoint: true I0126 19:10:52.795727 13311 state.cpp:33] Recovering state from '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/meta' I0126 19:10:52.796282 13311 state.cpp:668] Failed to find resources file '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/meta/resources/resources.info' I0126 19:10:52.804524 13309 slave.cpp:3601] Recovering framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.805106 13309 slave.cpp:4040] Recovering executor '61eaeec3-e8ca-4e15-82d6-284c05c3bb6e' of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.807494 13309 slave.cpp:566] Successfully attached file '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000/executors/61eaeec3-e8ca-4e15-82d6-284c05c3bb6e/runs/960eca2c-9e2c-415a-b6a5-159efca1f1b0' I0126 19:10:52.807888 13310 status_update_manager.cpp:197] Recovering status update manager I0126 19:10:52.808390 13310 status_update_manager.cpp:205] Recovering executor '61eaeec3-e8ca-4e15-82d6-284c05c3bb6e' of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.808830 13310 status_update_manager.cpp:494] Creating StatusUpdate stream for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.809484 13310 status_update_manager.hpp:310] Replaying status update stream for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e I0126 19:10:52.810966 13308 containerizer.cpp:300] Recovering containerizer I0126 19:10:52.811550 13308 containerizer.cpp:342] Recovering container '960eca2c-9e2c-415a-b6a5-159efca1f1b0' for executor '61eaeec3-e8ca-4e15-82d6-284c05c3bb6e' of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:52.816074 13305 slave.cpp:3460] Sending reconnect request to executor 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 at executor(1)@192.168.122.135:41245 I0126 19:10:52.929554 15827 exec.cpp:267] Received reconnect request from slave 20150126-191052-2272962752-35545-13291-S0 I0126 19:10:52.946156 13305 slave.cpp:2089] Re-registering executor 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:53.010731 15829 exec.cpp:244] Executor re-registered on slave 20150126-191052-2272962752-35545-13291-S0 Re-registered executor on fedora-19 I0126 19:10:53.012980 15829 exec.cpp:256] Executor::reregistered took 313096ns I0126 19:10:53.054590 13309 hierarchical_allocator_process.hpp:831] No resources available to allocate! I0126 19:10:53.054930 13309 hierarchical_allocator_process.hpp:738] Performed allocation for 1 slaves in 388184ns I0126 19:10:54.055598 13312 hierarchical_allocator_process.hpp:831] No resources available to allocate! 
I0126 19:10:54.058614 13312 hierarchical_allocator_process.hpp:738] Performed allocation for 1 slaves in 3.086403ms I0126 19:10:54.818248 13310 slave.cpp:2214] Cleaning up un-reregistered executors I0126 19:10:54.821072 13310 slave.cpp:3519] Finished recovery I0126 19:10:54.823081 13312 status_update_manager.cpp:171] Pausing sending status updates I0126 19:10:54.823719 13310 slave.cpp:613] New master detected at master@192.168.122.135:35545 I0126 19:10:54.824260 13310 slave.cpp:676] Authenticating with master master@192.168.122.135:35545 I0126 19:10:54.824583 13310 slave.cpp:681] Using default CRAM-MD5 authenticatee I0126 19:10:54.825479 13307 authenticatee.hpp:138] Creating new client SASL connection I0126 19:10:54.826686 13310 slave.cpp:649] Detecting new master I0126 19:10:54.827214 13307 master.cpp:4129] Authenticating slave(102)@192.168.122.135:35545 I0126 19:10:54.827747 13307 master.cpp:4140] Using default CRAM-MD5 authenticator I0126 19:10:54.828635 13307 authenticator.hpp:170] Creating new server SASL connection I0126 19:10:54.830049 13306 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0126 19:10:54.830447 13306 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0126 19:10:54.830934 13307 authenticator.hpp:276] Received SASL authentication start I0126 19:10:54.831362 13307 authenticator.hpp:398] Authentication requires more steps I0126 19:10:54.831837 13309 authenticatee.hpp:275] Received SASL authentication step I0126 19:10:54.832283 13307 authenticator.hpp:304] Received SASL authentication step I0126 19:10:54.832615 13307 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0126 19:10:54.833143 13307 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0126 19:10:54.833549 13307 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0126 19:10:54.833904 13307 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0126 19:10:54.834241 13307 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0126 19:10:54.834539 13307 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0126 19:10:54.834869 13307 authenticator.hpp:390] Authentication success I0126 19:10:54.836004 13311 authenticatee.hpp:315] Authentication success I0126 19:10:54.842200 13311 slave.cpp:747] Successfully authenticated with master master@192.168.122.135:35545 I0126 19:10:54.842851 13308 master.cpp:4187] Successfully authenticated principal 'test-principal' at slave(102)@192.168.122.135:35545 I0126 19:10:54.844679 13309 master.cpp:3401] Re-registering slave 20150126-191052-2272962752-35545-13291-S0 at slave(101)@192.168.122.135:35545 (fedora-19) W0126 19:10:54.845654 13309 master.cpp:4347] Slave 20150126-191052-2272962752-35545-13291-S0 at slave(102)@192.168.122.135:35545 (fedora-19) has non-terminal task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e that is supposed to be killed. Killing it now! 
I0126 19:10:54.846365 13308 hierarchical_allocator_process.hpp:498] Slave 20150126-191052-2272962752-35545-13291-S0 reactivated I0126 19:10:54.846976 13311 slave.cpp:1075] Will retry registration in 10.72364ms if necessary I0126 19:10:54.847618 13311 slave.cpp:849] Re-registered with master master@192.168.122.135:35545 I0126 19:10:54.848054 13309 status_update_manager.cpp:178] Resuming sending status updates I0126 19:10:54.848565 13311 slave.cpp:1424] Asked to kill task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:54.850329 13311 slave.cpp:1762] Updating framework 20150126-191052-2272962752-35545-13291-0000 pid to scheduler-6da85b48-b57f-4202-b630-c45f8f652321@192.168.122.135:35545 I0126 19:10:54.853868 13311 slave.cpp:1770] Checkpointing framework pid 'scheduler-6da85b48-b57f-4202-b630-c45f8f652321@192.168.122.135:35545' to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/meta/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000/framework.pid' I0126 19:10:54.854627 13312 status_update_manager.cpp:178] Resuming sending status updates I0126 19:10:54.920938 15824 exec.cpp:328] Executor asked to kill task '61eaeec3-e8ca-4e15-82d6-284c05c3bb6e' I0126 19:10:54.921044 15824 exec.cpp:337] Executor::killTask took 56676ns Shutting down Sending SIGTERM to process tree at pid 15832 Killing the following process trees: [ --- 15832 sleep 1000 ] Command terminated with signal Terminated (pid: 15832) I0126 19:10:55.045547 15825 exec.cpp:540] Executor sending status update TASK_KILLED (UUID: 2c2ef52e-8c0d-4a83-be36-e6433316989e) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:55.059789 13312 hierarchical_allocator_process.hpp:831] No resources available to allocate! 
I0126 19:10:55.060405 13312 hierarchical_allocator_process.hpp:738] Performed allocation for 1 slaves in 918365ns I0126 19:10:55.115810 13309 slave.cpp:2265] Handling status update TASK_KILLED (UUID: 2c2ef52e-8c0d-4a83-be36-e6433316989e) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 from executor(1)@192.168.122.135:41245 I0126 19:10:55.116387 13309 slave.cpp:4229] Terminating task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e I0126 19:10:55.119729 13305 status_update_manager.cpp:317] Received status update TASK_KILLED (UUID: 2c2ef52e-8c0d-4a83-be36-e6433316989e) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:55.120384 13305 status_update_manager.hpp:346] Checkpointing UPDATE for status update TASK_KILLED (UUID: 2c2ef52e-8c0d-4a83-be36-e6433316989e) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:55.120900 13305 status_update_manager.cpp:371] Forwarding update TASK_KILLED (UUID: 2c2ef52e-8c0d-4a83-be36-e6433316989e) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 to the slave I0126 19:10:55.121579 13305 slave.cpp:2508] Forwarding the update TASK_KILLED (UUID: 2c2ef52e-8c0d-4a83-be36-e6433316989e) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 to master@192.168.122.135:35545 I0126 19:10:55.122256 13310 master.cpp:3652] Forwarding status update TASK_KILLED (UUID: 2c2ef52e-8c0d-4a83-be36-e6433316989e) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:55.122685 13310 master.cpp:3624] Status update TASK_KILLED (UUID: 2c2ef52e-8c0d-4a83-be36-e6433316989e) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 from slave 20150126-191052-2272962752-35545-13291-S0 at slave(102)@192.168.122.135:35545 (fedora-19) I0126 19:10:55.123086 13308 sched.cpp:696] Scheduler::statusUpdate took 79719ns I0126 19:10:55.124562 13310 master.cpp:4934] Updating the latest state of task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 to TASK_KILLED I0126 19:10:55.125345 13310 master.cpp:4993] Removing task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 20150126-191052-2272962752-35545-13291-0000 on slave 20150126-191052-2272962752-35545-13291-S0 at slave(102)@192.168.122.135:35545 (fedora-19) I0126 19:10:55.125810 13306 hierarchical_allocator_process.hpp:645] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) on slave 20150126-191052-2272962752-35545-13291-S0 from framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:55.126552 13310 master.cpp:3125] Forwarding status update acknowledgement 2c2ef52e-8c0d-4a83-be36-e6433316989e for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 (default) at scheduler-6da85b48-b57f-4202-b630-c45f8f652321@192.168.122.135:35545 to slave 20150126-191052-2272962752-35545-13291-S0 at slave(102)@192.168.122.135:35545 (fedora-19) I0126 19:10:55.126843 13305 slave.cpp:2435] Status update manager successfully handled status update TASK_KILLED (UUID: 2c2ef52e-8c0d-4a83-be36-e6433316989e) for task 
61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:55.127396 13305 slave.cpp:2441] Sending acknowledgement for status update TASK_KILLED (UUID: 2c2ef52e-8c0d-4a83-be36-e6433316989e) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 to executor(1)@192.168.122.135:41245 I0126 19:10:55.129451 13306 status_update_manager.cpp:389] Received status update acknowledgement (UUID: 2c2ef52e-8c0d-4a83-be36-e6433316989e) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:55.129976 13306 status_update_manager.hpp:346] Checkpointing ACK for status update TASK_KILLED (UUID: 2c2ef52e-8c0d-4a83-be36-e6433316989e) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:55.130367 13306 status_update_manager.cpp:525] Cleaning up status update stream for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:55.130980 13305 slave.cpp:1852] Status update manager successfully handled status update acknowledgement (UUID: 2c2ef52e-8c0d-4a83-be36-e6433316989e) for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:55.131376 13305 slave.cpp:4268] Completing task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e I0126 19:10:55.148888 15823 exec.cpp:354] Executor received status update acknowledgement 2c2ef52e-8c0d-4a83-be36-e6433316989e for task 61eaeec3-e8ca-4e15-82d6-284c05c3bb6e of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:56.061642 13305 hierarchical_allocator_process.hpp:738] Performed allocation for 1 slaves in 612945ns I0126 19:10:56.065135 13310 master.cpp:4071] Sending 1 offers to framework 20150126-191052-2272962752-35545-13291-0000 (default) at scheduler-6da85b48-b57f-4202-b630-c45f8f652321@192.168.122.135:35545 I0126 19:10:56.068106 13310 sched.cpp:605] Scheduler::resourceOffers took 98788ns I0126 19:10:56.068989 13291 sched.cpp:1471] Asked to stop the driver I0126 19:10:56.069831 13291 master.cpp:654] Master terminating I0126 19:10:56.070969 13310 sched.cpp:808] Stopping framework '20150126-191052-2272962752-35545-13291-0000' I0126 19:10:56.072089 13312 slave.cpp:2673] master@192.168.122.135:35545 exited W0126 19:10:56.072654 13312 slave.cpp:2676] Master disconnected! 
Waiting for a new master to be elected I0126 19:10:56.110337 13310 containerizer.cpp:1084] Executor for container '960eca2c-9e2c-415a-b6a5-159efca1f1b0' has exited I0126 19:10:56.110785 13310 containerizer.cpp:879] Destroying container '960eca2c-9e2c-415a-b6a5-159efca1f1b0' I./tests/cluster.hpp:451: Failure (wait).failure(): Unknown container: 960eca2c-9e2c-415a-b6a5-159efca1f1b0 0126 19:10:56.146338 13307 slave.cpp:2948] Executor '61eaeec3-e8ca-4e15-82d6-284c05c3bb6e' of framework 20150126-191052-2272962752-35545-13291-0000 exited with status 0 W0126 19:10:56.147302 13309 containerizer.cpp:868] Ignoring destroy of unknown container: 960eca2c-9e2c-415a-b6a5-159efca1f1b0 *** Aborted at 1422328256 (unix time) try """"date -d @1422328256"""" if you are using GNU date *** I0126 19:10:56.151959 13307 slave.cpp:3057] Cleaning up executor '61eaeec3-e8ca-4e15-82d6-284c05c3bb6e' of framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:56.153216 13309 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000/executors/61eaeec3-e8ca-4e15-82d6-284c05c3bb6e/runs/960eca2c-9e2c-415a-b6a5-159efca1f1b0' for gc 6.99999822829926days in the future I0126 19:10:56.154017 13305 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000/executors/61eaeec3-e8ca-4e15-82d6-284c05c3bb6e' for gc 6.99999821866963days in the future I0126 19:10:56.154710 13312 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/meta/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000/executors/61eaeec3-e8ca-4e15-82d6-284c05c3bb6e/runs/960eca2c-9e2c-415a-b6a5-159efca1f1b0' for gc 6.99999821037037days in the future IPC: @ 0x8f9d48 mesos::internal::tests::Cluster::Slaves::shutdown() 0126 19:10:56.155350 13307 slave.cpp:3136] Cleaning up framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:56.155609 13310 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/meta/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000/executors/61eaeec3-e8ca-4e15-82d6-284c05c3bb6e' for gc 6.99999820308148days in the future I0126 19:10:56.158103 13310 status_update_manager.cpp:279] Closing status update streams for framework 20150126-191052-2272962752-35545-13291-0000 I0126 19:10:56.163135 13310 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000' for gc 6.99999817088days in the future I0126 19:10:56.168100 13307 gc.cpp:56] Scheduling '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_qbguuM/meta/slaves/20150126-191052-2272962752-35545-13291-S0/frameworks/20150126-191052-2272962752-35545-13291-0000' for gc 6.99999805755852days in the future *** SIGSEGV (@0x0) received by PID 13291 (TID 0x7fc0fdb22880) from PID 0; stack trace: *** @ 0x7fc0da188cbb (unknown) @ 0x7fc0da18d1a1 (unknown) @ 0x3aa2a0efa0 (unknown) @ 0x8f9d48 mesos::internal::tests::Cluster::Slaves::shutdown() @ 0xe0bfba mesos::internal::tests::MesosTest::ShutdownSlaves() @ 0xe0bf7e mesos::internal::tests::MesosTest::Shutdown() @ 0xe0981f mesos::internal::tests::MesosTest::TearDown() @ 0xe0f2b6 mesos::internal::tests::ContainerizerTest<>::TearDown() @ 0x10d8180 
testing::internal::HandleSehExceptionsInMethodIfSupported<>() @ 0x10d3356 testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x10bb718 testing::Test::Run() @ 0x10bbdf2 testing::TestInfo::Run() @ 0x10bc37a testing::TestCase::Run() @ 0x10c10f6 testing::internal::UnitTestImpl::RunAllTests() @ 0x10d8ff1 testing::internal::HandleSehExceptionsInMethodIfSupported<>() @ 0x10d4047 testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x10bffa6 testing::UnitTest::Run() @ 0xce850d main @ 0x3aa2221b45 (unknown) @ 0x8d59f9 (unknown) make[3]: *** [check-local] Segmentation fault (core dumped) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2302","01/29/2015 19:40:42",1,"FaultToleranceTest.SchedulerFailoverFrameworkMessage is flaky. ""Bad Run: Good Run: """," [ RUN ] FaultToleranceTest.SchedulerFailoverFrameworkMessage Using temporary directory '/tmp/FaultToleranceTest_SchedulerFailoverFrameworkMessage_f3jYkr' I0123 18:50:11.669674 15688 leveldb.cpp:176] Opened db in 31.920683ms I0123 18:50:11.678328 15688 leveldb.cpp:183] Compacted db in 8.580569ms I0123 18:50:11.678455 15688 leveldb.cpp:198] Created db iterator in 38478ns I0123 18:50:11.678478 15688 leveldb.cpp:204] Seeked to beginning of db in 3057ns I0123 18:50:11.678489 15688 leveldb.cpp:273] Iterated through 0 keys in the db in 427ns I0123 18:50:11.678539 15688 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0123 18:50:11.682271 15705 recover.cpp:449] Starting replica recovery I0123 18:50:11.682634 15705 recover.cpp:475] Replica is in EMPTY status I0123 18:50:11.684389 15708 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request I0123 18:50:11.685132 15708 recover.cpp:195] Received a recover response from a replica in EMPTY status I0123 18:50:11.689842 15708 recover.cpp:566] Updating replica status to STARTING I0123 18:50:11.702548 15708 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 12.484558ms I0123 18:50:11.702615 15708 replica.cpp:323] Persisted replica status to STARTING I0123 18:50:11.703531 15708 recover.cpp:475] Replica is in STARTING status I0123 18:50:11.705080 15704 replica.cpp:641] Replica in STARTING status received a broadcasted recover request I0123 18:50:11.712587 15708 recover.cpp:195] Received a recover response from a replica in STARTING status I0123 18:50:11.722898 15708 recover.cpp:566] Updating replica status to VOTING I0123 18:50:11.725427 15703 master.cpp:262] Master 20150123-185011-16777343-37526-15688 (localhost.localdomain) started on 127.0.0.1:37526 W0123 18:50:11.725464 15703 master.cpp:266] ************************************************** Master bound to loopback interface! Cannot communicate with remote schedulers or slaves. You might want to set '--ip' flag to a routable IP address. 
************************************************** I0123 18:50:11.725502 15703 master.cpp:308] Master only allowing authenticated frameworks to register I0123 18:50:11.725513 15703 master.cpp:313] Master only allowing authenticated slaves to register I0123 18:50:11.725543 15703 credentials.hpp:36] Loading credentials for authentication from '/tmp/FaultToleranceTest_SchedulerFailoverFrameworkMessage_f3jYkr/credentials' I0123 18:50:11.725774 15703 master.cpp:357] Authorization enabled I0123 18:50:11.728428 15707 whitelist_watcher.cpp:65] No whitelist given I0123 18:50:11.729169 15707 master.cpp:1219] The newly elected leader is master@127.0.0.1:37526 with id 20150123-185011-16777343-37526-15688 I0123 18:50:11.729200 15707 master.cpp:1232] Elected as the leading master! I0123 18:50:11.729223 15707 master.cpp:1050] Recovering from registrar I0123 18:50:11.729595 15706 registrar.cpp:313] Recovering registrar I0123 18:50:11.730715 15703 hierarchical_allocator_process.hpp:285] Initialized hierarchical allocator process I0123 18:50:11.737431 15708 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 14.259597ms I0123 18:50:11.737511 15708 replica.cpp:323] Persisted replica status to VOTING I0123 18:50:11.737768 15708 recover.cpp:580] Successfully joined the Paxos group I0123 18:50:11.737977 15708 recover.cpp:464] Recover process terminated I0123 18:50:11.739083 15706 log.cpp:660] Attempting to start the writer I0123 18:50:11.741236 15706 replica.cpp:477] Replica received implicit promise request with proposal 1 I0123 18:50:11.750435 15706 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 8.813783ms I0123 18:50:11.750514 15706 replica.cpp:345] Persisted promised to 1 I0123 18:50:11.752239 15708 coordinator.cpp:230] Coordinator attemping to fill missing position I0123 18:50:11.754176 15706 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 I0123 18:50:11.763464 15706 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 8.799822ms I0123 18:50:11.763535 15706 replica.cpp:679] Persisted action at 0 I0123 18:50:11.765697 15709 replica.cpp:511] Replica received write request for position 0 I0123 18:50:11.766293 15709 leveldb.cpp:438] Reading position from leveldb took 54028ns I0123 18:50:11.776468 15709 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 9.789169ms I0123 18:50:11.776561 15709 replica.cpp:679] Persisted action at 0 I0123 18:50:11.777515 15709 replica.cpp:658] Replica received learned notice for position 0 I0123 18:50:11.785459 15709 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 7.897242ms I0123 18:50:11.785531 15709 replica.cpp:679] Persisted action at 0 I0123 18:50:11.785565 15709 replica.cpp:664] Replica learned NOP action at position 0 I0123 18:50:11.786633 15709 log.cpp:676] Writer started with ending position 0 I0123 18:50:11.788460 15709 leveldb.cpp:438] Reading position from leveldb took 266087ns I0123 18:50:11.801141 15709 registrar.cpp:346] Successfully fetched the registry (0B) in 71.491072ms I0123 18:50:11.801300 15709 registrar.cpp:445] Applied 1 operations in 41795ns; attempting to update the 'registry' I0123 18:50:11.805186 15707 log.cpp:684] Attempting to append 136 bytes to the log I0123 18:50:11.805454 15707 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0123 18:50:11.806677 15703 replica.cpp:511] Replica received write request for position 1 I0123 18:50:11.815621 15703 leveldb.cpp:343] Persisting action (155 bytes) to leveldb took 8.89177ms 
I0123 18:50:11.815692 15703 replica.cpp:679] Persisted action at 1 I0123 18:50:11.817358 15704 replica.cpp:658] Replica received learned notice for position 1 I0123 18:50:11.825014 15704 leveldb.cpp:343] Persisting action (157 bytes) to leveldb took 7.578558ms I0123 18:50:11.825088 15704 replica.cpp:679] Persisted action at 1 I0123 18:50:11.825124 15704 replica.cpp:664] Replica learned APPEND action at position 1 I0123 18:50:11.827008 15705 registrar.cpp:490] Successfully updated the 'registry' in 25.629952ms I0123 18:50:11.827143 15705 registrar.cpp:376] Successfully recovered registrar I0123 18:50:11.827517 15705 master.cpp:1077] Recovered 0 slaves from the Registry (98B) ; allowing 10mins for slaves to re-register I0123 18:50:11.828515 15704 log.cpp:703] Attempting to truncate the log to 1 I0123 18:50:11.829074 15704 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0123 18:50:11.830546 15709 replica.cpp:511] Replica received write request for position 2 I0123 18:50:11.837752 15709 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 7.142431ms I0123 18:50:11.837826 15709 replica.cpp:679] Persisted action at 2 I0123 18:50:11.839334 15709 replica.cpp:658] Replica received learned notice for position 2 I0123 18:50:11.847069 15709 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 7.116607ms I0123 18:50:11.847214 15709 leveldb.cpp:401] Deleting ~1 keys from leveldb took 74008ns I0123 18:50:11.847241 15709 replica.cpp:679] Persisted action at 2 I0123 18:50:11.847295 15709 replica.cpp:664] Replica learned TRUNCATE action at position 2 I0123 18:50:11.870337 15710 slave.cpp:173] Slave started on 94)@127.0.0.1:37526 W0123 18:50:11.870980 15710 slave.cpp:176] ************************************************** Slave bound to loopback interface! Cannot communicate with remote master(s). You might want to set '--ip' flag to a routable IP address. ************************************************** I0123 18:50:11.871412 15710 credentials.hpp:84] Loading credential for authentication from '/tmp/FaultToleranceTest_SchedulerFailoverFrameworkMessage_TB8Rh3/credential' I0123 18:50:11.871819 15710 slave.cpp:282] Slave using credential for: test-principal I0123 18:50:11.873178 15710 slave.cpp:300] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0123 18:50:11.873620 15710 slave.cpp:329] Slave hostname: localhost.localdomain I0123 18:50:11.873837 15710 slave.cpp:330] Slave checkpoint: false W0123 18:50:11.874068 15710 slave.cpp:332] Disabling checkpointing is deprecated and the --checkpoint flag will be removed in a future release. Please avoid using this flag I0123 18:50:11.879103 15705 state.cpp:33] Recovering state from '/tmp/FaultToleranceTest_SchedulerFailoverFrameworkMessage_TB8Rh3/meta' W0123 18:50:11.882972 15688 sched.cpp:1246] ************************************************** Scheduler driver bound to loopback interface! Cannot communicate with remote master(s). You might want to set 'LIBPROCESS_IP' environment variable to use a routable IP address. 
************************************************** I0123 18:50:11.884106 15709 status_update_manager.cpp:197] Recovering status update manager I0123 18:50:11.884703 15710 slave.cpp:3519] Finished recovery I0123 18:50:11.892076 15704 status_update_manager.cpp:171] Pausing sending status updates I0123 18:50:11.892590 15710 slave.cpp:613] New master detected at master@127.0.0.1:37526 I0123 18:50:11.892937 15710 slave.cpp:676] Authenticating with master master@127.0.0.1:37526 I0123 18:50:11.893165 15710 slave.cpp:681] Using default CRAM-MD5 authenticatee I0123 18:50:11.893754 15708 authenticatee.hpp:138] Creating new client SASL connection I0123 18:50:11.894120 15708 master.cpp:4129] Authenticating slave(94)@127.0.0.1:37526 I0123 18:50:11.894153 15708 master.cpp:4140] Using default CRAM-MD5 authenticator I0123 18:50:11.894628 15708 authenticator.hpp:170] Creating new server SASL connection I0123 18:50:11.894913 15708 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0123 18:50:11.894942 15708 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0123 18:50:11.895043 15708 authenticator.hpp:276] Received SASL authentication start I0123 18:50:11.895095 15708 authenticator.hpp:398] Authentication requires more steps I0123 18:50:11.895165 15708 authenticatee.hpp:275] Received SASL authentication step I0123 18:50:11.895261 15708 authenticator.hpp:304] Received SASL authentication step I0123 18:50:11.895292 15708 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'localhost.localdomain' server FQDN: 'localhost.localdomain' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0123 18:50:11.895305 15708 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0123 18:50:11.895354 15708 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0123 18:50:11.895881 15710 slave.cpp:649] Detecting new master I0123 18:50:11.898449 15708 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'localhost.localdomain' server FQDN: 'localhost.localdomain' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0123 18:50:11.899024 15708 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0123 18:50:11.899106 15708 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0123 18:50:11.899190 15708 authenticator.hpp:390] Authentication success I0123 18:50:11.899569 15706 authenticatee.hpp:315] Authentication success I0123 18:50:11.902299 15706 slave.cpp:747] Successfully authenticated with master master@127.0.0.1:37526 I0123 18:50:11.902847 15706 slave.cpp:1075] Will retry registration in 19.809649ms if necessary I0123 18:50:11.903264 15705 master.cpp:3214] Queuing up registration request from slave(94)@127.0.0.1:37526 because authentication is still in progress I0123 18:50:11.903497 15705 master.cpp:4187] Successfully authenticated principal 'test-principal' at slave(94)@127.0.0.1:37526 I0123 18:50:11.903940 15705 master.cpp:3275] Registering slave at slave(94)@127.0.0.1:37526 (localhost.localdomain) with id 20150123-185011-16777343-37526-15688-S0 I0123 18:50:11.904398 15705 registrar.cpp:445] Applied 1 operations in 63679ns; attempting to update the 'registry' I0123 18:50:11.917883 15688 sched.cpp:151] Version: 0.22.0 I0123 18:50:11.919347 15703 log.cpp:684] Attempting to append 315 bytes to the log I0123 18:50:11.921039 15703 coordinator.cpp:340] Coordinator attempting to write APPEND action 
at position 3 I0123 18:50:11.919992 15706 sched.cpp:248] New master detected at master@127.0.0.1:37526 I0123 18:50:11.921352 15706 sched.cpp:304] Authenticating with master master@127.0.0.1:37526 I0123 18:50:11.921408 15706 sched.cpp:311] Using default CRAM-MD5 authenticatee I0123 18:50:11.921773 15706 authenticatee.hpp:138] Creating new client SASL connection I0123 18:50:11.922266 15706 master.cpp:4129] Authenticating scheduler-2cecb105-ca23-4048-9707-12b1e4422e11@127.0.0.1:37526 I0123 18:50:11.922301 15706 master.cpp:4140] Using default CRAM-MD5 authenticator I0123 18:50:11.923928 15703 replica.cpp:511] Replica received write request for position 3 I0123 18:50:11.924285 15707 authenticator.hpp:170] Creating new server SASL connection I0123 18:50:11.925091 15707 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0123 18:50:11.925122 15707 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0123 18:50:11.925194 15707 authenticator.hpp:276] Received SASL authentication start I0123 18:50:11.925257 15707 authenticator.hpp:398] Authentication requires more steps I0123 18:50:11.925325 15707 authenticatee.hpp:275] Received SASL authentication step I0123 18:50:11.925442 15707 authenticator.hpp:304] Received SASL authentication step I0123 18:50:11.925473 15707 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'localhost.localdomain' server FQDN: 'localhost.localdomain' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0123 18:50:11.925487 15707 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0123 18:50:11.925532 15707 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0123 18:50:11.925559 15707 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'localhost.localdomain' server FQDN: 'localhost.localdomain' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0123 18:50:11.925571 15707 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0123 18:50:11.925580 15707 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0123 18:50:11.925595 15707 authenticator.hpp:390] Authentication success I0123 18:50:11.925695 15707 authenticatee.hpp:315] Authentication success I0123 18:50:11.925792 15707 master.cpp:4187] Successfully authenticated principal 'test-principal' at scheduler-2cecb105-ca23-4048-9707-12b1e4422e11@127.0.0.1:37526 I0123 18:50:11.926127 15707 sched.cpp:392] Successfully authenticated with master master@127.0.0.1:37526 I0123 18:50:11.926154 15707 sched.cpp:515] Sending registration request to master@127.0.0.1:37526 I0123 18:50:11.926215 15707 sched.cpp:548] Will retry registration in 866.81063ms if necessary I0123 18:50:11.926640 15707 master.cpp:1420] Received registration request for framework 'default' at scheduler-2cecb105-ca23-4048-9707-12b1e4422e11@127.0.0.1:37526 I0123 18:50:11.926960 15707 master.cpp:1298] Authorizing framework principal 'test-principal' to receive offers for role '*' I0123 18:50:11.927691 15707 master.cpp:1484] Registering framework 20150123-185011-16777343-37526-15688-0000 (default) at scheduler-2cecb105-ca23-4048-9707-12b1e4422e11@127.0.0.1:37526 I0123 18:50:11.928292 15708 hierarchical_allocator_process.hpp:319] Added framework 20150123-185011-16777343-37526-15688-0000 I0123 18:50:11.928326 15708 hierarchical_allocator_process.hpp:839] No resources available to allocate! 
I0123 18:50:11.928340 15708 hierarchical_allocator_process.hpp:746] Performed allocation for 0 slaves in 21080ns I0123 18:50:11.934458 15707 sched.cpp:442] Framework registered with 20150123-185011-16777343-37526-15688-0000 I0123 18:50:11.934927 15707 sched.cpp:456] Scheduler::registered took 112885ns I0123 18:50:11.935747 15709 slave.cpp:1075] Will retry registration in 19.609252ms if necessary I0123 18:50:11.935981 15709 master.cpp:3263] Ignoring register slave message from slave(94)@127.0.0.1:37526 (localhost.localdomain) as admission is already in progress I0123 18:50:11.938997 15703 leveldb.cpp:343] Persisting action (334 bytes) to leveldb took 10.171709ms I0123 18:50:11.939049 15703 replica.cpp:679] Persisted action at 3 I0123 18:50:11.940630 15709 replica.cpp:658] Replica received learned notice for position 3 I0123 18:50:11.945473 15709 leveldb.cpp:343] Persisting action (336 bytes) to leveldb took 4.804742ms I0123 18:50:11.945521 15709 replica.cpp:679] Persisted action at 3 I0123 18:50:11.945550 15709 replica.cpp:664] Replica learned APPEND action at position 3 I0123 18:50:11.947105 15709 registrar.cpp:490] Successfully updated the 'registry' in 42.637056ms I0123 18:50:11.948020 15703 master.cpp:3329] Registered slave 20150123-185011-16777343-37526-15688-S0 at slave(94)@127.0.0.1:37526 (localhost.localdomain) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0123 18:50:11.948318 15703 hierarchical_allocator_process.hpp:453] Added slave 20150123-185011-16777343-37526-15688-S0 (localhost.localdomain) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] available) I0123 18:50:11.948719 15703 hierarchical_allocator_process.hpp:764] Performed allocation for slave 20150123-185011-16777343-37526-15688-S0 in 355831ns I0123 18:50:11.948813 15703 slave.cpp:781] Registered with master master@127.0.0.1:37526; given slave ID 20150123-185011-16777343-37526-15688-S0 I0123 18:50:11.948969 15703 slave.cpp:2588] Received ping from slave-observer(92)@127.0.0.1:37526 I0123 18:50:11.949324 15703 master.cpp:4071] Sending 1 offers to framework 20150123-185011-16777343-37526-15688-0000 (default) at scheduler-2cecb105-ca23-4048-9707-12b1e4422e11@127.0.0.1:37526 I0123 18:50:11.949571 15706 status_update_manager.cpp:178] Resuming sending status updates I0123 18:50:11.950023 15709 log.cpp:703] Attempting to truncate the log to 3 I0123 18:50:11.950810 15705 sched.cpp:605] Scheduler::resourceOffers took 135580ns I0123 18:50:11.952793 15708 master.cpp:2677] Processing ACCEPT call for offers: [ 20150123-185011-16777343-37526-15688-O0 ] on slave 20150123-185011-16777343-37526-15688-S0 at slave(94)@127.0.0.1:37526 (localhost.localdomain) for framework 20150123-185011-16777343-37526-15688-0000 (default) at scheduler-2cecb105-ca23-4048-9707-12b1e4422e11@127.0.0.1:37526 I0123 18:50:11.952852 15708 master.cpp:2513] Authorizing framework principal 'test-principal' to launch task 1 as user 'jenkins' W0123 18:50:11.954649 15708 master.cpp:2130] Executor default for task 1 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W0123 18:50:11.954988 15708 master.cpp:2142] Executor default for task 1 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. 
I0123 18:50:11.955579 15708 master.hpp:782] Adding task 1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20150123-185011-16777343-37526-15688-S0 (localhost.localdomain) I0123 18:50:11.956035 15703 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I0123 18:50:11.957592 15704 replica.cpp:511] Replica received write request for position 4 I0123 18:50:11.958485 15708 master.cpp:2885] Launching task 1 of framework 20150123-185011-16777343-37526-15688-0000 (default) at scheduler-2cecb105-ca23-4048-9707-12b1e4422e11@127.0.0.1:37526 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20150123-185011-16777343-37526-15688-S0 at slave(94)@127.0.0.1:37526 (localhost.localdomain) I0123 18:50:11.960578 15706 slave.cpp:1130] Got assigned task 1 for framework 20150123-185011-16777343-37526-15688-0000 I0123 18:50:11.961293 15706 slave.cpp:1245] Launching task 1 for framework 20150123-185011-16777343-37526-15688-0000 I0123 18:50:11.964450 15704 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 6.81421ms I0123 18:50:11.964496 15704 replica.cpp:679] Persisted action at 4 I0123 18:50:11.966328 15705 replica.cpp:658] Replica received learned notice for position 4 I0123 18:50:11.969648 15706 slave.cpp:3921] Launching executor default of framework 20150123-185011-16777343-37526-15688-0000 in work directory '/tmp/FaultToleranceTest_SchedulerFailoverFrameworkMessage_TB8Rh3/slaves/20150123-185011-16777343-37526-15688-S0/frameworks/20150123-185011-16777343-37526-15688-0000/executors/default/runs/02536e4f-fb59-4b75-99aa-611fd7fffcb1' I0123 18:50:11.976954 15705 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 10.584003ms I0123 18:50:11.977078 15705 leveldb.cpp:401] Deleting ~2 keys from leveldb took 72466ns I0123 18:50:11.977104 15705 replica.cpp:679] Persisted action at 4 I0123 18:50:11.977138 15705 replica.cpp:664] Replica learned TRUNCATE action at position 4 I0123 18:50:11.978016 15706 exec.cpp:147] Version: 0.22.0 I0123 18:50:11.978646 15710 exec.cpp:197] Executor started at: executor(50)@127.0.0.1:37526 with pid 15688 I0123 18:50:11.982480 15706 slave.cpp:1368] Queuing task '1' for executor default of framework '20150123-185011-16777343-37526-15688-0000 I0123 18:50:11.982676 15706 slave.cpp:566] Successfully attached file '/tmp/FaultToleranceTest_SchedulerFailoverFrameworkMessage_TB8Rh3/slaves/20150123-185011-16777343-37526-15688-S0/frameworks/20150123-185011-16777343-37526-15688-0000/executors/default/runs/02536e4f-fb59-4b75-99aa-611fd7fffcb1' I0123 18:50:11.982770 15706 slave.cpp:1912] Got registration for executor 'default' of framework 20150123-185011-16777343-37526-15688-0000 from executor(50)@127.0.0.1:37526 I0123 18:50:11.983203 15706 slave.cpp:2031] Flushing queued task 1 for executor 'default' of framework 20150123-185011-16777343-37526-15688-0000 I0123 18:50:11.983505 15706 slave.cpp:2890] Monitoring executor 'default' of framework '20150123-185011-16777343-37526-15688-0000' in container '02536e4f-fb59-4b75-99aa-611fd7fffcb1' I0123 18:50:11.983749 15706 exec.cpp:221] Executor registered on slave 20150123-185011-16777343-37526-15688-S0 I0123 18:50:11.986131 15706 exec.cpp:233] Executor::registered took 30292ns I0123 18:50:11.989857 15706 exec.cpp:308] Executor asked to run task '1' I0123 18:50:11.990216 15706 exec.cpp:317] Executor::launchTask took 83992ns I0123 18:50:11.992413 15706 exec.cpp:540] Executor sending status update TASK_RUNNING (UUID: 
3887f5e3-3349-4d1d-8e18-299be6c3c294) for task 1 of framework 20150123-185011-16777343-37526-15688-0000 I0123 18:50:11.996598 15703 slave.cpp:2265] Handling status update TASK_RUNNING (UUID: 3887f5e3-3349-4d1d-8e18-299be6c3c294) for task 1 of framework 20150123-185011-16777343-37526-15688-0000 from executor(50)@127.0.0.1:37526 I0123 18:50:11.996922 15703 status_update_manager.cpp:317] Received status update TASK_RUNNING (UUID: 3887f5e3-3349-4d1d-8e18-299be6c3c294) for task 1 of framework 20150123-185011-16777343-37526-15688-0000 I0123 18:50:11.996960 15703 status_update_manager.cpp:494] Creating StatusUpdate stream for task 1 of framework 20150123-185011-16777343-37526-15688-0000 I0123 18:50:11.997187 15703 status_update_manager.cpp:371] Forwarding update TASK_RUNNING (UUID: 3887f5e3-3349-4d1d-8e18-299be6c3c294) for task 1 of framework 20150123-185011-16777343-37526-15688-0000 to the slave I0123 18:50:11.997541 15703 slave.cpp:2508] Forwarding the update TASK_RUNNING (UUID: 3887f5e3-3349-4d1d-8e18-299be6c3c294) for task 1 of framework 20150123-185011-16777343-37526-15688-0000 to master@127.0.0.1:37526 I0123 18:50:11.997678 15703 slave.cpp:2435] Status update manager successfully handled status update TASK_RUNNING (UUID: 3887f5e3-3349-4d1d-8e18-299be6c3c294) for task 1 of framework 20150123-185011-16777343-37526-15688-0000 I0123 18:50:11.997707 15703 slave.cpp:2441] Sending acknowledgement for status update TASK_RUNNING (UUID: 3887f5e3-3349-4d1d-8e18-299be6c3c294) for task 1 of framework 20150123-185011-16777343-37526-15688-0000 to executor(50)@127.0.0.1:37526 I0123 18:50:11.997936 15703 master.cpp:3652] Forwarding status update TASK_RUNNING (UUID: 3887f5e3-3349-4d1d-8e18-299be6c3c294) for task 1 of framework 20150123-185011-16777343-37526-15688-0000 I0123 18:50:11.998054 15703 master.cpp:3624] Status update TASK_RUNNING (UUID: 3887f5e3-3349-4d1d-8e18-299be6c3c294) for task 1 of framework 20150123-185011-16777343-37526-15688-0000 from slave 20150123-185011-16777343-37526-15688-S0 at slave(94)@127.0.0.1:37526 (localhost.localdomain) I0123 18:50:11.998106 15703 master.cpp:4934] Updating the latest state of task 1 of framework 20150123-185011-16777343-37526-15688-0000 to TASK_RUNNING I0123 18:50:11.998301 15703 sched.cpp:696] Scheduler::statusUpdate took 54363ns I0123 18:50:11.998615 15707 master.cpp:3125] Forwarding status update acknowledgement 3887f5e3-3349-4d1d-8e18-299be6c3c294 for task 1 of framework 20150123-185011-16777343-37526-15688-0000 (default) at scheduler-2cecb105-ca23-4048-9707-12b1e4422e11@127.0.0.1:37526 to slave 20150123-185011-16777343-37526-15688-S0 at slave(94)@127.0.0.1:37526 (localhost.localdomain) I0123 18:50:11.998867 15707 status_update_manager.cpp:389] Received status update acknowledgement (UUID: 3887f5e3-3349-4d1d-8e18-299be6c3c294) for task 1 of framework 20150123-185011-16777343-37526-15688-0000 I0123 18:50:11.999047 15707 slave.cpp:1852] Status update manager successfully handled status update acknowledgement (UUID: 3887f5e3-3349-4d1d-8e18-299be6c3c294) for task 1 of framework 20150123-185011-16777343-37526-15688-0000 W0123 18:50:12.001930 15688 sched.cpp:1246] ************************************************** Scheduler driver bound to loopback interface! Cannot communicate with remote master(s). You might want to set 'LIBPROCESS_IP' environment variable to use a routable IP address. 
************************************************** I0123 18:50:12.006674 15706 exec.cpp:354] Executor received status update acknowledgement 3887f5e3-3349-4d1d-8e18-299be6c3c294 for task 1 of framework 20150123-185011-16777343-37526-15688-0000 I0123 18:50:12.015889 15688 sched.cpp:151] Version: 0.22.0 I0123 18:50:12.017143 15706 sched.cpp:248] New master detected at master@127.0.0.1:37526 I0123 18:50:12.017241 15706 sched.cpp:304] Authenticating with master master@127.0.0.1:37526 I0123 18:50:12.017264 15706 sched.cpp:311] Using default CRAM-MD5 authenticatee I0123 18:50:12.017680 15710 authenticatee.hpp:138] Creating new client SASL connection I0123 18:50:12.018093 15710 master.cpp:4129] Authenticating scheduler-9b22c538-3b80-4309-80bd-e4c06956dd3e@127.0.0.1:37526 I0123 18:50:12.018129 15710 master.cpp:4140] Using default CRAM-MD5 authenticator I0123 18:50:12.018590 15710 authenticator.hpp:170] Creating new server SASL connection I0123 18:50:12.018904 15710 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0123 18:50:12.018934 15710 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0123 18:50:12.019039 15710 authenticator.hpp:276] Received SASL authentication start I0123 18:50:12.019101 15710 authenticator.hpp:398] Authentication requires more steps I0123 18:50:12.019172 15710 authenticatee.hpp:275] Received SASL authentication step I0123 18:50:12.019273 15710 authenticator.hpp:304] Received SASL authentication step I0123 18:50:12.019304 15710 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'localhost.localdomain' server FQDN: 'localhost.localdomain' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0123 18:50:12.019316 15710 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0123 18:50:12.019364 15710 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0123 18:50:12.020604 15710 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'localhost.localdomain' server FQDN: 'localhost.localdomain' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0123 18:50:12.020859 15710 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0123 18:50:12.021114 15710 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0123 18:50:12.021402 15710 authenticator.hpp:390] Authentication success I0123 18:50:12.021790 15705 authenticatee.hpp:315] Authentication success I0123 18:50:12.029628 15705 sched.cpp:392] Successfully authenticated with master master@127.0.0.1:37526 I0123 18:50:12.029682 15705 sched.cpp:515] Sending registration request to master@127.0.0.1:37526 I0123 18:50:12.029784 15705 sched.cpp:548] Will retry registration in 371.903559ms if necessary I0123 18:50:12.030015 15705 master.cpp:1525] Queuing up re-registration request for framework 20150123-185011-16777343-37526-15688-0000 (default) at scheduler-9b22c538-3b80-4309-80bd-e4c06956dd3e@127.0.0.1:37526 because authentication is still in progress I0123 18:50:12.030215 15710 master.cpp:4187] Successfully authenticated principal 'test-principal' at scheduler-9b22c538-3b80-4309-80bd-e4c06956dd3e@127.0.0.1:37526 I0123 18:50:12.030539 15710 master.cpp:1557] Received re-registration request from framework 20150123-185011-16777343-37526-15688-0000 (default) at scheduler-9b22c538-3b80-4309-80bd-e4c06956dd3e@127.0.0.1:37526 I0123 18:50:12.030618 15710 master.cpp:1298] Authorizing framework principal 
'test-principal' to receive offers for role '*' I0123 18:50:12.031014 15710 master.cpp:1610] Re-registering framework 20150123-185011-16777343-37526-15688-0000 (default) at scheduler-9b22c538-3b80-4309-80bd-e4c06956dd3e@127.0.0.1:37526 I0123 18:50:12.031060 15710 master.cpp:1639] Framework 20150123-185011-16777343-37526-15688-0000 (default) at scheduler-2cecb105-ca23-4048-9707-12b1e4422e11@127.0.0.1:37526 failed over I0123 18:50:12.031723 15703 sched.cpp:442] Framework registered with 20150123-185011-16777343-37526-15688-0000 I0123 18:50:12.031841 15703 sched.cpp:456] Scheduler::registered took 54566ns I0123 18:50:12.032662 15709 slave.cpp:1762] Updating framework 20150123-185011-16777343-37526-15688-0000 pid to scheduler-9b22c538-3b80-4309-80bd-e4c06956dd3e@127.0.0.1:37526 I0123 18:50:12.032924 15709 status_update_manager.cpp:178] Resuming sending status updates I0123 18:50:12.034113 15703 slave.cpp:2571] Sending message for framework 20150123-185011-16777343-37526-15688-0000 to scheduler-9b22c538-3b80-4309-80bd-e4c06956dd3e@127.0.0.1:37526 I0123 18:50:12.034302 15703 sched.cpp:782] Scheduler::frameworkMessage took 53684ns I0123 18:50:12.034771 15688 sched.cpp:1471] Asked to stop the driver I0123 18:50:12.034864 15688 sched.cpp:1471] Asked to stop the driver I0123 18:50:12.034942 15688 master.cpp:654] Master terminating W0123 18:50:12.035094 15688 master.cpp:4979] Removing task 1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 20150123-185011-16777343-37526-15688-0000 on slave 20150123-185011-16777343-37526-15688-S0 at slave(94)@127.0.0.1:37526 (localhost.localdomain) in non-terminal state TASK_RUNNING I0123 18:50:12.035724 15688 master.cpp:5022] Removing executor 'default' with resources of framework 20150123-185011-16777343-37526-15688-0000 on slave 20150123-185011-16777343-37526-15688-S0 at slave(94)@127.0.0.1:37526 (localhost.localdomain) I0123 18:50:12.036705 15709 sched.cpp:808] Stopping framework '20150123-185011-16777343-37526-15688-0000' I0123 18:50:12.036960 15709 hierarchical_allocator_process.hpp:653] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) on slave 20150123-185011-16777343-37526-15688-S0 from framework 20150123-185011-16777343-37526-15688-0000 I0123 18:50:12.037048 15709 slave.cpp:2673] master@127.0.0.1:37526 exited W0123 18:50:12.037071 15709 slave.cpp:2676] Master disconnected! Waiting for a new master to be elected I0123 18:50:12.037359 15710 sched.cpp:788] Ignoring error message because the driver is not running! I0123 18:50:12.037513 15710 sched.cpp:808] Stopping framework '20150123-185011-16777343-37526-15688-0000' I0123 18:50:12.076481 15688 slave.cpp:495] Slave terminating I0123 18:50:12.080759 15688 slave.cpp:1585] Asked to shut down framework 20150123-185011-16777343-37526-15688-0000 by @0.0.0.0:0 I0123 18:50:12.081023 15688 slave.cpp:1610] Shutting down framework 20150123-185011-16777343-37526-15688-0000 I0123 18:50:12.081351 15688 slave.cpp:3198] Shutting down executor 'default' of framework 20150123-185011-16777343-37526-15688-0000 tests/fault_tolerance_tests.cpp:1383: Failure Actual function call count doesn't match EXPECT_CALL(sched1, error(&driver1, """"Framework failed over""""))... 
Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] FaultToleranceTest.SchedulerFailoverFrameworkMessage (481 ms) [ RUN ] FaultToleranceTest.SchedulerFailoverFrameworkMessage Using temporary directory '/tmp/FaultToleranceTest_SchedulerFailoverFrameworkMessage_hEc3n7' I0122 19:15:01.356081 3518 leveldb.cpp:176] Opened db in 19.797885ms I0122 19:15:01.362119 3518 leveldb.cpp:183] Compacted db in 5.953605ms I0122 19:15:01.362191 3518 leveldb.cpp:198] Created db iterator in 30691ns I0122 19:15:01.362210 3518 leveldb.cpp:204] Seeked to beginning of db in 2240ns I0122 19:15:01.362221 3518 leveldb.cpp:273] Iterated through 0 keys in the db in 517ns I0122 19:15:01.362295 3518 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0122 19:15:01.364575 3534 recover.cpp:449] Starting replica recovery I0122 19:15:01.365314 3534 recover.cpp:475] Replica is in EMPTY status I0122 19:15:01.389731 3534 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request I0122 19:15:01.390005 3534 recover.cpp:195] Received a recover response from a replica in EMPTY status I0122 19:15:01.391346 3538 recover.cpp:566] Updating replica status to STARTING I0122 19:15:01.403445 3538 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 11.806565ms I0122 19:15:01.403795 3538 replica.cpp:323] Persisted replica status to STARTING I0122 19:15:01.406898 3538 recover.cpp:475] Replica is in STARTING status I0122 19:15:01.408671 3537 replica.cpp:641] Replica in STARTING status received a broadcasted recover request I0122 19:15:01.413719 3537 recover.cpp:195] Received a recover response from a replica in STARTING status I0122 19:15:01.419553 3538 recover.cpp:566] Updating replica status to VOTING I0122 19:15:01.426426 3536 master.cpp:262] Master 20150122-191501-16777343-50172-3518 (localhost.localdomain) started on 127.0.0.1:50172 W0122 19:15:01.426473 3536 master.cpp:266] ************************************************** Master bound to loopback interface! Cannot communicate with remote schedulers or slaves. You might want to set '--ip' flag to a routable IP address. ************************************************** I0122 19:15:01.426519 3536 master.cpp:308] Master only allowing authenticated frameworks to register I0122 19:15:01.426532 3536 master.cpp:313] Master only allowing authenticated slaves to register I0122 19:15:01.426564 3536 credentials.hpp:36] Loading credentials for authentication from '/tmp/FaultToleranceTest_SchedulerFailoverFrameworkMessage_hEc3n7/credentials' I0122 19:15:01.426841 3536 master.cpp:357] Authorization enabled I0122 19:15:01.428205 3533 hierarchical_allocator_process.hpp:285] Initialized hierarchical allocator process I0122 19:15:01.428627 3534 whitelist_watcher.cpp:65] No whitelist given I0122 19:15:01.429839 3538 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 6.340373ms I0122 19:15:01.429879 3538 replica.cpp:323] Persisted replica status to VOTING I0122 19:15:01.430024 3538 recover.cpp:580] Successfully joined the Paxos group I0122 19:15:01.430233 3538 recover.cpp:464] Recover process terminated I0122 19:15:01.432348 3536 master.cpp:1219] The newly elected leader is master@127.0.0.1:50172 with id 20150122-191501-16777343-50172-3518 I0122 19:15:01.436343 3536 master.cpp:1232] Elected as the leading master! 
I0122 19:15:01.436738 3536 master.cpp:1050] Recovering from registrar I0122 19:15:01.437191 3535 registrar.cpp:313] Recovering registrar I0122 19:15:01.438340 3535 log.cpp:660] Attempting to start the writer I0122 19:15:01.440163 3533 replica.cpp:477] Replica received implicit promise request with proposal 1 I0122 19:15:01.445287 3533 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 4.550707ms I0122 19:15:01.445327 3533 replica.cpp:345] Persisted promised to 1 I0122 19:15:01.446691 3537 coordinator.cpp:230] Coordinator attemping to fill missing position I0122 19:15:01.448724 3537 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 I0122 19:15:01.453824 3537 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 4.787009ms I0122 19:15:01.453860 3537 replica.cpp:679] Persisted action at 0 I0122 19:15:01.455684 3533 replica.cpp:511] Replica received write request for position 0 I0122 19:15:01.456087 3533 leveldb.cpp:438] Reading position from leveldb took 37133ns I0122 19:15:01.460862 3533 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 4.458448ms I0122 19:15:01.460897 3533 replica.cpp:679] Persisted action at 0 I0122 19:15:01.461601 3533 replica.cpp:658] Replica received learned notice for position 0 I0122 19:15:01.466660 3533 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 5.028194ms I0122 19:15:01.466696 3533 replica.cpp:679] Persisted action at 0 I0122 19:15:01.466718 3533 replica.cpp:664] Replica learned NOP action at position 0 I0122 19:15:01.467931 3537 log.cpp:676] Writer started with ending position 0 I0122 19:15:01.469182 3537 leveldb.cpp:438] Reading position from leveldb took 32199ns I0122 19:15:01.479857 3537 registrar.cpp:346] Successfully fetched the registry (0B) in 42.6048ms I0122 19:15:01.480340 3537 registrar.cpp:445] Applied 1 operations in 42179ns; attempting to update the 'registry' I0122 19:15:01.484465 3535 log.cpp:684] Attempting to append 134 bytes to the log I0122 19:15:01.484661 3535 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0122 19:15:01.486250 3535 replica.cpp:511] Replica received write request for position 1 I0122 19:15:01.491243 3535 leveldb.cpp:343] Persisting action (153 bytes) to leveldb took 4.582496ms I0122 19:15:01.491296 3535 replica.cpp:679] Persisted action at 1 I0122 19:15:01.492647 3539 replica.cpp:658] Replica received learned notice for position 1 I0122 19:15:01.497707 3539 leveldb.cpp:343] Persisting action (155 bytes) to leveldb took 5.027112ms I0122 19:15:01.497743 3539 replica.cpp:679] Persisted action at 1 I0122 19:15:01.497767 3539 replica.cpp:664] Replica learned APPEND action at position 1 I0122 19:15:01.499428 3539 registrar.cpp:490] Successfully updated the 'registry' in 18.743808ms I0122 19:15:01.499609 3535 log.cpp:703] Attempting to truncate the log to 1 I0122 19:15:01.500036 3535 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0122 19:15:01.501029 3535 replica.cpp:511] Replica received write request for position 2 I0122 19:15:01.501694 3539 registrar.cpp:376] Successfully recovered registrar I0122 19:15:01.502358 3536 master.cpp:1077] Recovered 0 slaves from the Registry (97B) ; allowing 10mins for slaves to re-register I0122 19:15:01.514142 3535 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 13.074759ms I0122 19:15:01.514189 3535 replica.cpp:679] Persisted action at 2 I0122 19:15:01.515473 3535 replica.cpp:658] Replica received learned notice for position 2 
I0122 19:15:01.527171 3535 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 11.064363ms I0122 19:15:01.527456 3535 leveldb.cpp:401] Deleting ~1 keys from leveldb took 118227ns I0122 19:15:01.527495 3535 replica.cpp:679] Persisted action at 2 I0122 19:15:01.527537 3535 replica.cpp:664] Replica learned TRUNCATE action at position 2 I0122 19:15:01.563684 3538 slave.cpp:173] Slave started on 76)@127.0.0.1:50172 W0122 19:15:01.563736 3538 slave.cpp:176] ************************************************** Slave bound to loopback interface! Cannot communicate with remote master(s). You might want to set '--ip' flag to a routable IP address. ************************************************** I0122 19:15:01.563751 3538 credentials.hpp:84] Loading credential for authentication from '/tmp/FaultToleranceTest_SchedulerFailoverFrameworkMessage_XZCq6R/credential' I0122 19:15:01.563921 3538 slave.cpp:282] Slave using credential for: test-principal I0122 19:15:01.564209 3538 slave.cpp:300] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0122 19:15:01.564327 3538 slave.cpp:329] Slave hostname: localhost.localdomain I0122 19:15:01.564348 3538 slave.cpp:330] Slave checkpoint: false W0122 19:15:01.564357 3538 slave.cpp:332] Disabling checkpointing is deprecated and the --checkpoint flag will be removed in a future release. Please avoid using this flag I0122 19:15:01.566193 3537 state.cpp:33] Recovering state from '/tmp/FaultToleranceTest_SchedulerFailoverFrameworkMessage_XZCq6R/meta' I0122 19:15:01.566629 3537 status_update_manager.cpp:197] Recovering status update manager I0122 19:15:01.566962 3537 slave.cpp:3519] Finished recovery I0122 19:15:01.571022 3533 status_update_manager.cpp:171] Pausing sending status updates I0122 19:15:01.571466 3537 slave.cpp:613] New master detected at master@127.0.0.1:50172 I0122 19:15:01.573503 3537 slave.cpp:676] Authenticating with master master@127.0.0.1:50172 I0122 19:15:01.573771 3537 slave.cpp:681] Using default CRAM-MD5 authenticatee I0122 19:15:01.574427 3540 authenticatee.hpp:138] Creating new client SASL connection I0122 19:15:01.574857 3535 master.cpp:4129] Authenticating slave(76)@127.0.0.1:50172 I0122 19:15:01.574893 3535 master.cpp:4140] Using default CRAM-MD5 authenticator W0122 19:15:01.575266 3518 sched.cpp:1246] ************************************************** Scheduler driver bound to loopback interface! Cannot communicate with remote master(s). You might want to set 'LIBPROCESS_IP' environment variable to use a routable IP address. 
************************************************** I0122 19:15:01.576125 3535 authenticator.hpp:170] Creating new server SASL connection I0122 19:15:01.576526 3535 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0122 19:15:01.576563 3535 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0122 19:15:01.576671 3535 authenticator.hpp:276] Received SASL authentication start I0122 19:15:01.576740 3535 authenticator.hpp:398] Authentication requires more steps I0122 19:15:01.576812 3535 authenticatee.hpp:275] Received SASL authentication step I0122 19:15:01.576915 3535 authenticator.hpp:304] Received SASL authentication step I0122 19:15:01.576943 3535 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'localhost.localdomain' server FQDN: 'localhost.localdomain' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0122 19:15:01.576967 3535 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0122 19:15:01.577026 3535 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0122 19:15:01.577061 3535 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'localhost.localdomain' server FQDN: 'localhost.localdomain' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0122 19:15:01.577076 3535 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0122 19:15:01.577085 3535 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0122 19:15:01.577101 3535 authenticator.hpp:390] Authentication success I0122 19:15:01.577209 3535 authenticatee.hpp:315] Authentication success I0122 19:15:01.577304 3535 master.cpp:4187] Successfully authenticated principal 'test-principal' at slave(76)@127.0.0.1:50172 I0122 19:15:01.577615 3537 slave.cpp:649] Detecting new master I0122 19:15:01.580585 3537 slave.cpp:747] Successfully authenticated with master master@127.0.0.1:50172 I0122 19:15:01.585831 3537 slave.cpp:1075] Will retry registration in 18.23486ms if necessary I0122 19:15:01.588697 3535 master.cpp:3275] Registering slave at slave(76)@127.0.0.1:50172 (localhost.localdomain) with id 20150122-191501-16777343-50172-3518-S0 I0122 19:15:01.589609 3539 registrar.cpp:445] Applied 1 operations in 117629ns; attempting to update the 'registry' I0122 19:15:01.592538 3536 log.cpp:684] Attempting to append 313 bytes to the log I0122 19:15:01.592766 3536 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I0122 19:15:01.594276 3536 replica.cpp:511] Replica received write request for position 3 I0122 19:15:01.599052 3518 sched.cpp:151] Version: 0.22.0 I0122 19:15:01.600783 3533 sched.cpp:248] New master detected at master@127.0.0.1:50172 I0122 19:15:01.600873 3533 sched.cpp:304] Authenticating with master master@127.0.0.1:50172 I0122 19:15:01.600896 3533 sched.cpp:311] Using default CRAM-MD5 authenticatee I0122 19:15:01.601238 3533 authenticatee.hpp:138] Creating new client SASL connection I0122 19:15:01.601773 3534 master.cpp:4129] Authenticating scheduler-1491b8db-aae5-47fb-b234-d44c2f509ec0@127.0.0.1:50172 I0122 19:15:01.601809 3534 master.cpp:4140] Using default CRAM-MD5 authenticator I0122 19:15:01.602197 3534 authenticator.hpp:170] Creating new server SASL connection I0122 19:15:01.602519 3534 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0122 19:15:01.602548 3534 authenticatee.hpp:255] Attempting to authenticate with mechanism 
'CRAM-MD5' I0122 19:15:01.602651 3534 authenticator.hpp:276] Received SASL authentication start I0122 19:15:01.602705 3534 authenticator.hpp:398] Authentication requires more steps I0122 19:15:01.602774 3534 authenticatee.hpp:275] Received SASL authentication step I0122 19:15:01.602854 3534 authenticator.hpp:304] Received SASL authentication step I0122 19:15:01.602881 3534 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'localhost.localdomain' server FQDN: 'localhost.localdomain' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0122 19:15:01.602895 3534 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0122 19:15:01.602936 3534 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0122 19:15:01.602960 3534 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'localhost.localdomain' server FQDN: 'localhost.localdomain' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0122 19:15:01.602973 3534 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0122 19:15:01.602982 3534 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0122 19:15:01.602996 3534 authenticator.hpp:390] Authentication success I0122 19:15:01.603091 3534 authenticatee.hpp:315] Authentication success I0122 19:15:01.603174 3534 master.cpp:4187] Successfully authenticated principal 'test-principal' at scheduler-1491b8db-aae5-47fb-b234-d44c2f509ec0@127.0.0.1:50172 I0122 19:15:01.603533 3535 sched.cpp:392] Successfully authenticated with master master@127.0.0.1:50172 I0122 19:15:01.603559 3535 sched.cpp:515] Sending registration request to master@127.0.0.1:50172 I0122 19:15:01.603626 3535 sched.cpp:548] Will retry registration in 64.334563ms if necessary I0122 19:15:01.604531 3534 master.cpp:1420] Received registration request for framework 'default' at scheduler-1491b8db-aae5-47fb-b234-d44c2f509ec0@127.0.0.1:50172 I0122 19:15:01.604869 3536 leveldb.cpp:343] Persisting action (332 bytes) to leveldb took 10.363251ms I0122 19:15:01.605108 3536 replica.cpp:679] Persisted action at 3 I0122 19:15:01.606084 3536 replica.cpp:658] Replica received learned notice for position 3 I0122 19:15:01.606972 3534 master.cpp:1298] Authorizing framework principal 'test-principal' to receive offers for role '*' I0122 19:15:01.607828 3534 master.cpp:1484] Registering framework 20150122-191501-16777343-50172-3518-0000 (default) at scheduler-1491b8db-aae5-47fb-b234-d44c2f509ec0@127.0.0.1:50172 I0122 19:15:01.608331 3540 slave.cpp:1075] Will retry registration in 28.084371ms if necessary I0122 19:15:01.610283 3539 hierarchical_allocator_process.hpp:319] Added framework 20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.610349 3539 hierarchical_allocator_process.hpp:839] No resources available to allocate! 
I0122 19:15:01.610391 3539 hierarchical_allocator_process.hpp:746] Performed allocation for 0 slaves in 54938ns I0122 19:15:01.614012 3536 leveldb.cpp:343] Persisting action (334 bytes) to leveldb took 7.895962ms I0122 19:15:01.614048 3536 replica.cpp:679] Persisted action at 3 I0122 19:15:01.614073 3536 replica.cpp:664] Replica learned APPEND action at position 3 I0122 19:15:01.615972 3536 registrar.cpp:490] Successfully updated the 'registry' in 26.294016ms I0122 19:15:01.616164 3533 log.cpp:703] Attempting to truncate the log to 3 I0122 19:15:01.616703 3533 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I0122 19:15:01.617676 3533 replica.cpp:511] Replica received write request for position 4 I0122 19:15:01.625704 3537 sched.cpp:442] Framework registered with 20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.625782 3537 sched.cpp:456] Scheduler::registered took 45146ns I0122 19:15:01.626240 3534 master.cpp:3263] Ignoring register slave message from slave(76)@127.0.0.1:50172 (localhost.localdomain) as admission is already in progress I0122 19:15:01.627259 3534 master.cpp:3329] Registered slave 20150122-191501-16777343-50172-3518-S0 at slave(76)@127.0.0.1:50172 (localhost.localdomain) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0122 19:15:01.627607 3534 hierarchical_allocator_process.hpp:453] Added slave 20150122-191501-16777343-50172-3518-S0 (localhost.localdomain) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] available) I0122 19:15:01.628016 3534 hierarchical_allocator_process.hpp:764] Performed allocation for slave 20150122-191501-16777343-50172-3518-S0 in 361785ns I0122 19:15:01.628105 3534 slave.cpp:781] Registered with master master@127.0.0.1:50172; given slave ID 20150122-191501-16777343-50172-3518-S0 I0122 19:15:01.628268 3534 slave.cpp:2588] Received ping from slave-observer(62)@127.0.0.1:50172 I0122 19:15:01.628720 3535 master.cpp:4071] Sending 1 offers to framework 20150122-191501-16777343-50172-3518-0000 (default) at scheduler-1491b8db-aae5-47fb-b234-d44c2f509ec0@127.0.0.1:50172 I0122 19:15:01.628861 3535 status_update_manager.cpp:178] Resuming sending status updates I0122 19:15:01.629256 3535 sched.cpp:605] Scheduler::resourceOffers took 76294ns I0122 19:15:01.629585 3533 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 11.873298ms I0122 19:15:01.629623 3533 replica.cpp:679] Persisted action at 4 I0122 19:15:01.631567 3533 replica.cpp:658] Replica received learned notice for position 4 I0122 19:15:01.633208 3540 master.cpp:2677] Processing ACCEPT call for offers: [ 20150122-191501-16777343-50172-3518-O0 ] on slave 20150122-191501-16777343-50172-3518-S0 at slave(76)@127.0.0.1:50172 (localhost.localdomain) for framework 20150122-191501-16777343-50172-3518-0000 (default) at scheduler-1491b8db-aae5-47fb-b234-d44c2f509ec0@127.0.0.1:50172 I0122 19:15:01.633386 3540 master.cpp:2513] Authorizing framework principal 'test-principal' to launch task 1 as user 'jenkins' W0122 19:15:01.635479 3540 master.cpp:2130] Executor default for task 1 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W0122 19:15:01.636101 3540 master.cpp:2142] Executor default for task 1 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. 
I0122 19:15:01.636804 3540 master.hpp:782] Adding task 1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20150122-191501-16777343-50172-3518-S0 (localhost.localdomain) I0122 19:15:01.638121 3540 master.cpp:2885] Launching task 1 of framework 20150122-191501-16777343-50172-3518-0000 (default) at scheduler-1491b8db-aae5-47fb-b234-d44c2f509ec0@127.0.0.1:50172 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20150122-191501-16777343-50172-3518-S0 at slave(76)@127.0.0.1:50172 (localhost.localdomain) I0122 19:15:01.642609 3536 slave.cpp:1130] Got assigned task 1 for framework 20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.643496 3536 slave.cpp:1245] Launching task 1 for framework 20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.644604 3533 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 13.001968ms I0122 19:15:01.644726 3533 leveldb.cpp:401] Deleting ~2 keys from leveldb took 61434ns I0122 19:15:01.644752 3533 replica.cpp:679] Persisted action at 4 I0122 19:15:01.644778 3533 replica.cpp:664] Replica learned TRUNCATE action at position 4 I0122 19:15:01.648011 3536 slave.cpp:3921] Launching executor default of framework 20150122-191501-16777343-50172-3518-0000 in work directory '/tmp/FaultToleranceTest_SchedulerFailoverFrameworkMessage_XZCq6R/slaves/20150122-191501-16777343-50172-3518-S0/frameworks/20150122-191501-16777343-50172-3518-0000/executors/default/runs/c386f98d-3df4-4aee-b3ad-9e9c1ec7cc15' I0122 19:15:01.657420 3536 exec.cpp:147] Version: 0.22.0 I0122 19:15:01.661609 3534 exec.cpp:197] Executor started at: executor(14)@127.0.0.1:50172 with pid 3518 I0122 19:15:01.662360 3536 slave.cpp:1368] Queuing task '1' for executor default of framework '20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.663007 3536 slave.cpp:566] Successfully attached file '/tmp/FaultToleranceTest_SchedulerFailoverFrameworkMessage_XZCq6R/slaves/20150122-191501-16777343-50172-3518-S0/frameworks/20150122-191501-16777343-50172-3518-0000/executors/default/runs/c386f98d-3df4-4aee-b3ad-9e9c1ec7cc15' I0122 19:15:01.665674 3536 slave.cpp:1912] Got registration for executor 'default' of framework 20150122-191501-16777343-50172-3518-0000 from executor(14)@127.0.0.1:50172 I0122 19:15:01.666738 3539 exec.cpp:221] Executor registered on slave 20150122-191501-16777343-50172-3518-S0 I0122 19:15:01.668758 3539 exec.cpp:233] Executor::registered took 76393ns I0122 19:15:01.669208 3536 slave.cpp:2031] Flushing queued task 1 for executor 'default' of framework 20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.670045 3533 exec.cpp:308] Executor asked to run task '1' I0122 19:15:01.670194 3533 exec.cpp:317] Executor::launchTask took 118431ns I0122 19:15:01.670605 3536 slave.cpp:2890] Monitoring executor 'default' of framework '20150122-191501-16777343-50172-3518-0000' in container 'c386f98d-3df4-4aee-b3ad-9e9c1ec7cc15' I0122 19:15:01.673183 3533 exec.cpp:540] Executor sending status update TASK_RUNNING (UUID: 2150029d-e89c-40a6-998f-c0295d72d964) for task 1 of framework 20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.675230 3534 slave.cpp:2265] Handling status update TASK_RUNNING (UUID: 2150029d-e89c-40a6-998f-c0295d72d964) for task 1 of framework 20150122-191501-16777343-50172-3518-0000 from executor(14)@127.0.0.1:50172 I0122 19:15:01.675647 3534 status_update_manager.cpp:317] Received status update TASK_RUNNING (UUID: 2150029d-e89c-40a6-998f-c0295d72d964) for task 1 of framework 
20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.675689 3534 status_update_manager.cpp:494] Creating StatusUpdate stream for task 1 of framework 20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.675981 3534 status_update_manager.cpp:371] Forwarding update TASK_RUNNING (UUID: 2150029d-e89c-40a6-998f-c0295d72d964) for task 1 of framework 20150122-191501-16777343-50172-3518-0000 to the slave I0122 19:15:01.676338 3534 slave.cpp:2508] Forwarding the update TASK_RUNNING (UUID: 2150029d-e89c-40a6-998f-c0295d72d964) for task 1 of framework 20150122-191501-16777343-50172-3518-0000 to master@127.0.0.1:50172 I0122 19:15:01.676910 3538 master.cpp:3652] Forwarding status update TASK_RUNNING (UUID: 2150029d-e89c-40a6-998f-c0295d72d964) for task 1 of framework 20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.677034 3538 master.cpp:3624] Status update TASK_RUNNING (UUID: 2150029d-e89c-40a6-998f-c0295d72d964) for task 1 of framework 20150122-191501-16777343-50172-3518-0000 from slave 20150122-191501-16777343-50172-3518-S0 at slave(76)@127.0.0.1:50172 (localhost.localdomain) I0122 19:15:01.677098 3538 master.cpp:4934] Updating the latest state of task 1 of framework 20150122-191501-16777343-50172-3518-0000 to TASK_RUNNING I0122 19:15:01.677338 3538 sched.cpp:696] Scheduler::statusUpdate took 71579ns I0122 19:15:01.677701 3538 master.cpp:3125] Forwarding status update acknowledgement 2150029d-e89c-40a6-998f-c0295d72d964 for task 1 of framework 20150122-191501-16777343-50172-3518-0000 (default) at scheduler-1491b8db-aae5-47fb-b234-d44c2f509ec0@127.0.0.1:50172 to slave 20150122-191501-16777343-50172-3518-S0 at slave(76)@127.0.0.1:50172 (localhost.localdomain) W0122 19:15:01.680450 3518 sched.cpp:1246] ************************************************** Scheduler driver bound to loopback interface! Cannot communicate with remote master(s). You might want to set 'LIBPROCESS_IP' environment variable to use a routable IP address. 
************************************************** I0122 19:15:01.684923 3534 slave.cpp:2435] Status update manager successfully handled status update TASK_RUNNING (UUID: 2150029d-e89c-40a6-998f-c0295d72d964) for task 1 of framework 20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.685042 3534 slave.cpp:2441] Sending acknowledgement for status update TASK_RUNNING (UUID: 2150029d-e89c-40a6-998f-c0295d72d964) for task 1 of framework 20150122-191501-16777343-50172-3518-0000 to executor(14)@127.0.0.1:50172 I0122 19:15:01.687777 3534 exec.cpp:354] Executor received status update acknowledgement 2150029d-e89c-40a6-998f-c0295d72d964 for task 1 of framework 20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.687896 3537 status_update_manager.cpp:389] Received status update acknowledgement (UUID: 2150029d-e89c-40a6-998f-c0295d72d964) for task 1 of framework 20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.688174 3537 slave.cpp:1852] Status update manager successfully handled status update acknowledgement (UUID: 2150029d-e89c-40a6-998f-c0295d72d964) for task 1 of framework 20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.707738 3518 sched.cpp:151] Version: 0.22.0 I0122 19:15:01.708892 3540 sched.cpp:248] New master detected at master@127.0.0.1:50172 I0122 19:15:01.708973 3540 sched.cpp:304] Authenticating with master master@127.0.0.1:50172 I0122 19:15:01.708997 3540 sched.cpp:311] Using default CRAM-MD5 authenticatee I0122 19:15:01.709345 3540 authenticatee.hpp:138] Creating new client SASL connection I0122 19:15:01.710389 3534 master.cpp:4129] Authenticating scheduler-ece17f18-6f5c-4807-8204-35771496dd9f@127.0.0.1:50172 I0122 19:15:01.710440 3534 master.cpp:4140] Using default CRAM-MD5 authenticator I0122 19:15:01.710844 3534 authenticator.hpp:170] Creating new server SASL connection I0122 19:15:01.711359 3540 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0122 19:15:01.711762 3540 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0122 19:15:01.712133 3540 authenticator.hpp:276] Received SASL authentication start I0122 19:15:01.712434 3540 authenticator.hpp:398] Authentication requires more steps I0122 19:15:01.712754 3540 authenticatee.hpp:275] Received SASL authentication step I0122 19:15:01.713156 3536 authenticator.hpp:304] Received SASL authentication step I0122 19:15:01.713191 3536 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'localhost.localdomain' server FQDN: 'localhost.localdomain' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0122 19:15:01.713204 3536 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0122 19:15:01.713263 3536 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0122 19:15:01.713290 3536 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'localhost.localdomain' server FQDN: 'localhost.localdomain' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0122 19:15:01.713304 3536 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0122 19:15:01.713311 3536 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0122 19:15:01.713326 3536 authenticator.hpp:390] Authentication success I0122 19:15:01.713470 3536 master.cpp:4187] Successfully authenticated principal 'test-principal' at scheduler-ece17f18-6f5c-4807-8204-35771496dd9f@127.0.0.1:50172 I0122 19:15:01.716526 3540 authenticatee.hpp:315] 
Authentication success I0122 19:15:01.720747 3540 sched.cpp:392] Successfully authenticated with master master@127.0.0.1:50172 I0122 19:15:01.720780 3540 sched.cpp:515] Sending registration request to master@127.0.0.1:50172 I0122 19:15:01.720852 3540 sched.cpp:548] Will retry registration in 1.20284193secs if necessary I0122 19:15:01.721050 3540 master.cpp:1557] Received re-registration request from framework 20150122-191501-16777343-50172-3518-0000 (default) at scheduler-ece17f18-6f5c-4807-8204-35771496dd9f@127.0.0.1:50172 I0122 19:15:01.721143 3540 master.cpp:1298] Authorizing framework principal 'test-principal' to receive offers for role '*' I0122 19:15:01.721583 3540 master.cpp:1610] Re-registering framework 20150122-191501-16777343-50172-3518-0000 (default) at scheduler-ece17f18-6f5c-4807-8204-35771496dd9f@127.0.0.1:50172 I0122 19:15:01.721629 3540 master.cpp:1639] Framework 20150122-191501-16777343-50172-3518-0000 (default) at scheduler-1491b8db-aae5-47fb-b234-d44c2f509ec0@127.0.0.1:50172 failed over I0122 19:15:01.721943 3540 sched.cpp:792] Got error 'Framework failed over' I0122 19:15:01.721972 3540 sched.cpp:1505] Asked to abort the driver I0122 19:15:01.722039 3540 sched.cpp:803] Scheduler::error took 26469ns I0122 19:15:01.722084 3540 sched.cpp:833] Aborting framework '20150122-191501-16777343-50172-3518-0000' I0122 19:15:01.722196 3540 sched.cpp:442] Framework registered with 20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.722262 3540 sched.cpp:456] Scheduler::registered took 40517ns W0122 19:15:01.722734 3538 master.cpp:1775] Ignoring deactivate framework message for framework 20150122-191501-16777343-50172-3518-0000 (default) at scheduler-ece17f18-6f5c-4807-8204-35771496dd9f@127.0.0.1:50172 because it is not expected from scheduler-1491b8db-aae5-47fb-b234-d44c2f509ec0@127.0.0.1:50172 I0122 19:15:01.724819 3540 slave.cpp:1762] Updating framework 20150122-191501-16777343-50172-3518-0000 pid to scheduler-ece17f18-6f5c-4807-8204-35771496dd9f@127.0.0.1:50172 I0122 19:15:01.725316 3533 status_update_manager.cpp:178] Resuming sending status updates I0122 19:15:01.726033 3540 slave.cpp:2571] Sending message for framework 20150122-191501-16777343-50172-3518-0000 to scheduler-ece17f18-6f5c-4807-8204-35771496dd9f@127.0.0.1:50172 I0122 19:15:01.727016 3534 sched.cpp:782] Scheduler::frameworkMessage took 57233ns I0122 19:15:01.727601 3518 sched.cpp:1471] Asked to stop the driver I0122 19:15:01.727665 3518 sched.cpp:1471] Asked to stop the driver I0122 19:15:01.727743 3518 master.cpp:654] Master terminating W0122 19:15:01.727893 3518 master.cpp:4979] Removing task 1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 20150122-191501-16777343-50172-3518-0000 on slave 20150122-191501-16777343-50172-3518-S0 at slave(76)@127.0.0.1:50172 (localhost.localdomain) in non-terminal state TASK_RUNNING I0122 19:15:01.728521 3518 master.cpp:5022] Removing executor 'default' with resources of framework 20150122-191501-16777343-50172-3518-0000 on slave 20150122-191501-16777343-50172-3518-S0 at slave(76)@127.0.0.1:50172 (localhost.localdomain) I0122 19:15:01.729651 3534 sched.cpp:808] Stopping framework '20150122-191501-16777343-50172-3518-0000' I0122 19:15:01.729786 3534 sched.cpp:808] Stopping framework '20150122-191501-16777343-50172-3518-0000' I0122 19:15:01.730036 3534 hierarchical_allocator_process.hpp:653] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1024; disk(*):1024; 
ports(*):[31000-32000]) on slave 20150122-191501-16777343-50172-3518-S0 from framework 20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.730134 3534 slave.cpp:2673] master@127.0.0.1:50172 exited W0122 19:15:01.730156 3534 slave.cpp:2676] Master disconnected! Waiting for a new master to be elected I0122 19:15:01.782312 3518 slave.cpp:495] Slave terminating I0122 19:15:01.786846 3518 slave.cpp:1585] Asked to shut down framework 20150122-191501-16777343-50172-3518-0000 by @0.0.0.0:0 I0122 19:15:01.787127 3518 slave.cpp:1610] Shutting down framework 20150122-191501-16777343-50172-3518-0000 I0122 19:15:01.787394 3518 slave.cpp:3198] Shutting down executor 'default' of framework 20150122-191501-16777343-50172-3518-0000 [ OK ] FaultToleranceTest.SchedulerFailoverFrameworkMessage (495 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2306","01/29/2015 22:58:04",1,"MasterAuthorizationTest.FrameworkRemovedBeforeReregistration is flaky. ""Good run: Bad run: """," [ RUN ] MasterAuthorizationTest.FrameworkRemovedBeforeReregistration Using temporary directory '/tmp/MasterAuthorizationTest_FrameworkRemovedBeforeReregistration_ZU7oaD' I0122 19:23:06.481690 17483 leveldb.cpp:176] Opened db in 21.058723ms I0122 19:23:06.488590 17483 leveldb.cpp:183] Compacted db in 6.6715ms I0122 19:23:06.488816 17483 leveldb.cpp:198] Created db iterator in 30034ns I0122 19:23:06.489053 17483 leveldb.cpp:204] Seeked to beginning of db in 2908ns I0122 19:23:06.489073 17483 leveldb.cpp:273] Iterated through 0 keys in the db in 492ns I0122 19:23:06.489148 17483 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0122 19:23:06.490272 17504 recover.cpp:449] Starting replica recovery I0122 19:23:06.490900 17504 recover.cpp:475] Replica is in EMPTY status I0122 19:23:06.492422 17504 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request I0122 19:23:06.492694 17504 recover.cpp:195] Received a recover response from a replica in EMPTY status I0122 19:23:06.493185 17504 recover.cpp:566] Updating replica status to STARTING I0122 19:23:06.514881 17504 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 21.459963ms I0122 19:23:06.514920 17504 replica.cpp:323] Persisted replica status to STARTING I0122 19:23:06.515861 17501 master.cpp:262] Master 20150122-192306-16842879-46283-17483 (lucid) started on 127.0.1.1:46283 I0122 19:23:06.515910 17501 master.cpp:308] Master only allowing authenticated frameworks to register I0122 19:23:06.515923 17501 master.cpp:313] Master only allowing authenticated slaves to register I0122 19:23:06.515946 17501 credentials.hpp:36] Loading credentials for authentication from '/tmp/MasterAuthorizationTest_FrameworkRemovedBeforeReregistration_ZU7oaD/credentials' I0122 19:23:06.516150 17501 master.cpp:357] Authorization enabled I0122 19:23:06.517511 17501 hierarchical_allocator_process.hpp:285] Initialized hierarchical allocator process I0122 19:23:06.517607 17501 whitelist_watcher.cpp:65] No whitelist given I0122 19:23:06.518066 17498 master.cpp:1219] The newly elected leader is master@127.0.1.1:46283 with id 20150122-192306-16842879-46283-17483 I0122 19:23:06.518095 17498 master.cpp:1232] Elected as the leading master! 
I0122 19:23:06.518121 17498 master.cpp:1050] Recovering from registrar I0122 19:23:06.518333 17498 registrar.cpp:313] Recovering registrar I0122 19:23:06.523987 17504 recover.cpp:475] Replica is in STARTING status I0122 19:23:06.525090 17504 replica.cpp:641] Replica in STARTING status received a broadcasted recover request I0122 19:23:06.525337 17504 recover.cpp:195] Received a recover response from a replica in STARTING status I0122 19:23:06.525693 17504 recover.cpp:566] Updating replica status to VOTING I0122 19:23:06.532680 17504 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 6.810884ms I0122 19:23:06.532714 17504 replica.cpp:323] Persisted replica status to VOTING I0122 19:23:06.532835 17504 recover.cpp:580] Successfully joined the Paxos group I0122 19:23:06.533004 17504 recover.cpp:464] Recover process terminated I0122 19:23:06.533833 17500 log.cpp:660] Attempting to start the writer I0122 19:23:06.535225 17500 replica.cpp:477] Replica received implicit promise request with proposal 1 I0122 19:23:06.540340 17500 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 5.086139ms I0122 19:23:06.540371 17500 replica.cpp:345] Persisted promised to 1 I0122 19:23:06.541502 17504 coordinator.cpp:230] Coordinator attemping to fill missing position I0122 19:23:06.543021 17504 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 I0122 19:23:06.548140 17504 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 5.083443ms I0122 19:23:06.548171 17504 replica.cpp:679] Persisted action at 0 I0122 19:23:06.549746 17500 replica.cpp:511] Replica received write request for position 0 I0122 19:23:06.549926 17500 leveldb.cpp:438] Reading position from leveldb took 31962ns I0122 19:23:06.555033 17500 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 5.065823ms I0122 19:23:06.555064 17500 replica.cpp:679] Persisted action at 0 I0122 19:23:06.556094 17504 replica.cpp:658] Replica received learned notice for position 0 I0122 19:23:06.558815 17504 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 2.688382ms I0122 19:23:06.558847 17504 replica.cpp:679] Persisted action at 0 I0122 19:23:06.558868 17504 replica.cpp:664] Replica learned NOP action at position 0 I0122 19:23:06.559917 17500 log.cpp:676] Writer started with ending position 0 I0122 19:23:06.560995 17500 leveldb.cpp:438] Reading position from leveldb took 27742ns I0122 19:23:06.563467 17500 registrar.cpp:346] Successfully fetched the registry (0B) in 45.095936ms I0122 19:23:06.563551 17500 registrar.cpp:445] Applied 1 operations in 19686ns; attempting to update the 'registry' I0122 19:23:06.566107 17500 log.cpp:684] Attempting to append 118 bytes to the log I0122 19:23:06.566267 17500 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0122 19:23:06.567126 17500 replica.cpp:511] Replica received write request for position 1 I0122 19:23:06.582588 17500 leveldb.cpp:343] Persisting action (135 bytes) to leveldb took 15.425511ms I0122 19:23:06.582631 17500 replica.cpp:679] Persisted action at 1 I0122 19:23:06.583425 17500 replica.cpp:658] Replica received learned notice for position 1 I0122 19:23:06.589001 17500 leveldb.cpp:343] Persisting action (137 bytes) to leveldb took 5.549486ms I0122 19:23:06.589200 17500 replica.cpp:679] Persisted action at 1 I0122 19:23:06.589416 17500 replica.cpp:664] Replica learned APPEND action at position 1 I0122 19:23:06.596420 17500 registrar.cpp:490] Successfully updated the 'registry' in 32.815104ms 
I0122 19:23:06.596551 17500 registrar.cpp:376] Successfully recovered registrar I0122 19:23:06.596923 17500 master.cpp:1077] Recovered 0 slaves from the Registry (82B) ; allowing 10mins for slaves to re-register I0122 19:23:06.597007 17500 log.cpp:703] Attempting to truncate the log to 1 I0122 19:23:06.597239 17500 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0122 19:23:06.598464 17501 replica.cpp:511] Replica received write request for position 2 I0122 19:23:06.604038 17501 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 5.536264ms I0122 19:23:06.604084 17501 replica.cpp:679] Persisted action at 2 I0122 19:23:06.608747 17503 replica.cpp:658] Replica received learned notice for position 2 I0122 19:23:06.614094 17503 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 5.315347ms I0122 19:23:06.614171 17503 leveldb.cpp:401] Deleting ~1 keys from leveldb took 33021ns I0122 19:23:06.614188 17503 replica.cpp:679] Persisted action at 2 I0122 19:23:06.614208 17503 replica.cpp:664] Replica learned TRUNCATE action at position 2 I0122 19:23:06.628820 17483 sched.cpp:151] Version: 0.22.0 I0122 19:23:06.629879 17505 sched.cpp:248] New master detected at master@127.0.1.1:46283 I0122 19:23:06.629973 17505 sched.cpp:304] Authenticating with master master@127.0.1.1:46283 I0122 19:23:06.629995 17505 sched.cpp:311] Using default CRAM-MD5 authenticatee I0122 19:23:06.630314 17505 authenticatee.hpp:138] Creating new client SASL connection I0122 19:23:06.630722 17505 master.cpp:4129] Authenticating scheduler-4156eae6-8d7f-423a-920a-02b11b7bd1ba@127.0.1.1:46283 I0122 19:23:06.630750 17505 master.cpp:4140] Using default CRAM-MD5 authenticator I0122 19:23:06.631115 17505 authenticator.hpp:170] Creating new server SASL connection I0122 19:23:06.631423 17505 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0122 19:23:06.631459 17505 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0122 19:23:06.631563 17505 authenticator.hpp:276] Received SASL authentication start I0122 19:23:06.631605 17505 authenticator.hpp:398] Authentication requires more steps I0122 19:23:06.631671 17505 authenticatee.hpp:275] Received SASL authentication step I0122 19:23:06.631748 17505 authenticator.hpp:304] Received SASL authentication step I0122 19:23:06.631774 17505 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'lucid' server FQDN: 'lucid' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0122 19:23:06.631784 17505 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0122 19:23:06.631822 17505 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0122 19:23:06.631856 17505 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'lucid' server FQDN: 'lucid' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0122 19:23:06.631870 17505 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0122 19:23:06.631877 17505 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0122 19:23:06.631892 17505 authenticator.hpp:390] Authentication success I0122 19:23:06.631988 17505 authenticatee.hpp:315] Authentication success I0122 19:23:06.632066 17505 master.cpp:4187] Successfully authenticated principal 'test-principal' at scheduler-4156eae6-8d7f-423a-920a-02b11b7bd1ba@127.0.1.1:46283 I0122 19:23:06.632359 17505 sched.cpp:392] Successfully 
authenticated with master master@127.0.1.1:46283 I0122 19:23:06.632382 17505 sched.cpp:515] Sending registration request to master@127.0.1.1:46283 I0122 19:23:06.632432 17505 sched.cpp:548] Will retry registration in 598.155756ms if necessary I0122 19:23:06.632575 17505 master.cpp:1420] Received registration request for framework 'default' at scheduler-4156eae6-8d7f-423a-920a-02b11b7bd1ba@127.0.1.1:46283 I0122 19:23:06.632639 17505 master.cpp:1298] Authorizing framework principal 'test-principal' to receive offers for role '*' I0122 19:23:06.632912 17505 master.cpp:1484] Registering framework 20150122-192306-16842879-46283-17483-0000 (default) at scheduler-4156eae6-8d7f-423a-920a-02b11b7bd1ba@127.0.1.1:46283 I0122 19:23:06.633421 17505 hierarchical_allocator_process.hpp:319] Added framework 20150122-192306-16842879-46283-17483-0000 I0122 19:23:06.633448 17505 hierarchical_allocator_process.hpp:839] No resources available to allocate! I0122 19:23:06.633458 17505 hierarchical_allocator_process.hpp:746] Performed allocation for 0 slaves in 17704ns I0122 19:23:06.633919 17505 sched.cpp:442] Framework registered with 20150122-192306-16842879-46283-17483-0000 I0122 19:23:06.633980 17505 sched.cpp:456] Scheduler::registered took 37063ns I0122 19:23:06.636554 17500 sched.cpp:242] Scheduler::disconnected took 14843ns I0122 19:23:06.636579 17500 sched.cpp:248] New master detected at master@127.0.1.1:46283 I0122 19:23:06.636625 17500 sched.cpp:304] Authenticating with master master@127.0.1.1:46283 I0122 19:23:06.636641 17500 sched.cpp:311] Using default CRAM-MD5 authenticatee I0122 19:23:06.636914 17500 authenticatee.hpp:138] Creating new client SASL connection I0122 19:23:06.637313 17500 master.cpp:4129] Authenticating scheduler-4156eae6-8d7f-423a-920a-02b11b7bd1ba@127.0.1.1:46283 I0122 19:23:06.637341 17500 master.cpp:4140] Using default CRAM-MD5 authenticator I0122 19:23:06.637675 17500 authenticator.hpp:170] Creating new server SASL connection I0122 19:23:06.638056 17501 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0122 19:23:06.638083 17501 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0122 19:23:06.638182 17501 authenticator.hpp:276] Received SASL authentication start I0122 19:23:06.638221 17501 authenticator.hpp:398] Authentication requires more steps I0122 19:23:06.638286 17501 authenticatee.hpp:275] Received SASL authentication step I0122 19:23:06.638360 17501 authenticator.hpp:304] Received SASL authentication step I0122 19:23:06.638383 17501 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'lucid' server FQDN: 'lucid' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0122 19:23:06.638393 17501 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0122 19:23:06.638422 17501 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0122 19:23:06.638447 17501 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'lucid' server FQDN: 'lucid' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0122 19:23:06.638458 17501 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0122 19:23:06.638464 17501 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0122 19:23:06.638478 17501 authenticator.hpp:390] Authentication success I0122 19:23:06.638566 17501 authenticatee.hpp:315] Authentication success I0122 19:23:06.638643 17501 master.cpp:4187] 
Successfully authenticated principal 'test-principal' at scheduler-4156eae6-8d7f-423a-920a-02b11b7bd1ba@127.0.1.1:46283 I0122 19:23:06.638919 17501 sched.cpp:392] Successfully authenticated with master master@127.0.1.1:46283 I0122 19:23:06.638942 17501 sched.cpp:515] Sending registration request to master@127.0.1.1:46283 I0122 19:23:06.638994 17501 sched.cpp:548] Will retry registration in 489.304713ms if necessary I0122 19:23:06.639169 17501 master.cpp:1557] Received re-registration request from framework 20150122-192306-16842879-46283-17483-0000 (default) at scheduler-4156eae6-8d7f-423a-920a-02b11b7bd1ba@127.0.1.1:46283 I0122 19:23:06.639242 17501 master.cpp:1298] Authorizing framework principal 'test-principal' to receive offers for role '*' I0122 19:23:06.639839 17483 sched.cpp:1471] Asked to stop the driver I0122 19:23:06.640379 17499 sched.cpp:808] Stopping framework '20150122-192306-16842879-46283-17483-0000' I0122 19:23:06.640697 17499 master.cpp:745] Framework 20150122-192306-16842879-46283-17483-0000 (default) at scheduler-4156eae6-8d7f-423a-920a-02b11b7bd1ba@127.0.1.1:46283 disconnected I0122 19:23:06.640723 17499 master.cpp:1789] Disconnecting framework 20150122-192306-16842879-46283-17483-0000 (default) at scheduler-4156eae6-8d7f-423a-920a-02b11b7bd1ba@127.0.1.1:46283 I0122 19:23:06.640744 17499 master.cpp:1805] Deactivating framework 20150122-192306-16842879-46283-17483-0000 (default) at scheduler-4156eae6-8d7f-423a-920a-02b11b7bd1ba@127.0.1.1:46283 I0122 19:23:06.640806 17499 master.cpp:767] Giving framework 20150122-192306-16842879-46283-17483-0000 (default) at scheduler-4156eae6-8d7f-423a-920a-02b11b7bd1ba@127.0.1.1:46283 0ns to failover I0122 19:23:06.640951 17499 hierarchical_allocator_process.hpp:398] Deactivated framework 20150122-192306-16842879-46283-17483-0000 I0122 19:23:06.646342 17498 master.cpp:1604] Dropping re-registration request of framework 20150122-192306-16842879-46283-17483-0000 (default) at scheduler-4156eae6-8d7f-423a-920a-02b11b7bd1ba@127.0.1.1:46283 because it is not authenticated I0122 19:23:06.648844 17498 master.cpp:3941] Framework failover timeout, removing framework 20150122-192306-16842879-46283-17483-0000 (default) at scheduler-4156eae6-8d7f-423a-920a-02b11b7bd1ba@127.0.1.1:46283 I0122 19:23:06.648871 17498 master.cpp:4499] Removing framework 20150122-192306-16842879-46283-17483-0000 (default) at scheduler-4156eae6-8d7f-423a-920a-02b11b7bd1ba@127.0.1.1:46283 I0122 19:23:06.649624 17498 hierarchical_allocator_process.hpp:352] Removed framework 20150122-192306-16842879-46283-17483-0000 I0122 19:23:06.656532 17483 master.cpp:654] Master terminating [ OK ] MasterAuthorizationTest.FrameworkRemovedBeforeReregistration (216 ms) [ RUN ] MasterAuthorizationTest.FrameworkRemovedBeforeReregistration Using temporary directory '/tmp/MasterAuthorizationTest_FrameworkRemovedBeforeReregistration_JDM2sm' I0126 19:19:55.517570 2381 leveldb.cpp:176] Opened db in 34.341401ms I0126 19:19:55.529630 2381 leveldb.cpp:183] Compacted db in 11.824435ms I0126 19:19:55.529878 2381 leveldb.cpp:198] Created db iterator in 26176ns I0126 19:19:55.530200 2381 leveldb.cpp:204] Seeked to beginning of db in 3457ns I0126 19:19:55.530455 2381 leveldb.cpp:273] Iterated through 0 keys in the db in 902ns I0126 19:19:55.530658 2381 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0126 19:19:55.531492 2397 recover.cpp:449] Starting replica recovery I0126 19:19:55.531793 2397 recover.cpp:475] Replica is in EMPTY status I0126 19:19:55.533327 2397 
replica.cpp:641] Replica in EMPTY status received a broadcasted recover request I0126 19:19:55.533608 2397 recover.cpp:195] Received a recover response from a replica in EMPTY status I0126 19:19:55.534101 2397 recover.cpp:566] Updating replica status to STARTING I0126 19:19:55.550417 2397 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 16.106821ms I0126 19:19:55.550472 2397 replica.cpp:323] Persisted replica status to STARTING I0126 19:19:55.551434 2397 recover.cpp:475] Replica is in STARTING status I0126 19:19:55.552846 2397 replica.cpp:641] Replica in STARTING status received a broadcasted recover request I0126 19:19:55.553099 2397 recover.cpp:195] Received a recover response from a replica in STARTING status I0126 19:19:55.553565 2397 recover.cpp:566] Updating replica status to VOTING I0126 19:19:55.564590 2397 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 10.719218ms I0126 19:19:55.564919 2397 replica.cpp:323] Persisted replica status to VOTING I0126 19:19:55.565982 2397 recover.cpp:580] Successfully joined the Paxos group I0126 19:19:55.566231 2397 recover.cpp:464] Recover process terminated I0126 19:19:55.567878 2401 master.cpp:262] Master 20150126-191955-16842879-51862-2381 (lucid) started on 127.0.1.1:51862 I0126 19:19:55.567927 2401 master.cpp:308] Master only allowing authenticated frameworks to register I0126 19:19:55.567950 2401 master.cpp:313] Master only allowing authenticated slaves to register I0126 19:19:55.567978 2401 credentials.hpp:36] Loading credentials for authentication from '/tmp/MasterAuthorizationTest_FrameworkRemovedBeforeReregistration_JDM2sm/credentials' I0126 19:19:55.568220 2401 master.cpp:357] Authorization enabled I0126 19:19:55.569890 2401 hierarchical_allocator_process.hpp:285] Initialized hierarchical allocator process I0126 19:19:55.569999 2401 whitelist_watcher.cpp:65] No whitelist given I0126 19:19:55.570694 2401 master.cpp:1219] The newly elected leader is master@127.0.1.1:51862 with id 20150126-191955-16842879-51862-2381 I0126 19:19:55.570721 2401 master.cpp:1232] Elected as the leading master! 
I0126 19:19:55.570742 2401 master.cpp:1050] Recovering from registrar I0126 19:19:55.570977 2401 registrar.cpp:313] Recovering registrar I0126 19:19:55.571959 2401 log.cpp:660] Attempting to start the writer I0126 19:19:55.573441 2401 replica.cpp:477] Replica received implicit promise request with proposal 1 I0126 19:19:55.590724 2401 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 17.243964ms I0126 19:19:55.590785 2401 replica.cpp:345] Persisted promised to 1 I0126 19:19:55.592140 2396 coordinator.cpp:230] Coordinator attemping to fill missing position I0126 19:19:55.593834 2396 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 I0126 19:19:55.603837 2396 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 9.955824ms I0126 19:19:55.603902 2396 replica.cpp:679] Persisted action at 0 I0126 19:19:55.606082 2401 replica.cpp:511] Replica received write request for position 0 I0126 19:19:55.606331 2401 leveldb.cpp:438] Reading position from leveldb took 44524ns I0126 19:19:55.612546 2401 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 5.870411ms I0126 19:19:55.612597 2401 replica.cpp:679] Persisted action at 0 I0126 19:19:55.613416 2401 replica.cpp:658] Replica received learned notice for position 0 I0126 19:19:55.616269 2401 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 2.82145ms I0126 19:19:55.616305 2401 replica.cpp:679] Persisted action at 0 I0126 19:19:55.616328 2401 replica.cpp:664] Replica learned NOP action at position 0 I0126 19:19:55.628062 2399 log.cpp:676] Writer started with ending position 0 I0126 19:19:55.629328 2399 leveldb.cpp:438] Reading position from leveldb took 57003ns I0126 19:19:55.631995 2399 registrar.cpp:346] Successfully fetched the registry (0B) in 60.973824ms I0126 19:19:55.632109 2399 registrar.cpp:445] Applied 1 operations in 35531ns; attempting to update the 'registry' I0126 19:19:55.634799 2399 log.cpp:684] Attempting to append 117 bytes to the log I0126 19:19:55.634996 2399 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0126 19:19:55.636651 2397 replica.cpp:511] Replica received write request for position 1 I0126 19:19:55.642165 2397 leveldb.cpp:343] Persisting action (134 bytes) to leveldb took 5.474306ms I0126 19:19:55.642215 2397 replica.cpp:679] Persisted action at 1 I0126 19:19:55.643226 2397 replica.cpp:658] Replica received learned notice for position 1 I0126 19:19:55.648574 2397 leveldb.cpp:343] Persisting action (136 bytes) to leveldb took 5.317891ms I0126 19:19:55.648808 2397 replica.cpp:679] Persisted action at 1 I0126 19:19:55.649158 2397 replica.cpp:664] Replica learned APPEND action at position 1 I0126 19:19:55.663101 2397 registrar.cpp:490] Successfully updated the 'registry' in 30.918144ms I0126 19:19:55.663267 2397 registrar.cpp:376] Successfully recovered registrar I0126 19:19:55.663699 2397 master.cpp:1077] Recovered 0 slaves from the Registry (81B) ; allowing 10mins for slaves to re-register I0126 19:19:55.663795 2397 log.cpp:703] Attempting to truncate the log to 1 I0126 19:19:55.664083 2397 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0126 19:19:55.665573 2403 replica.cpp:511] Replica received write request for position 2 I0126 19:19:55.671500 2403 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 5.883759ms I0126 19:19:55.671547 2403 replica.cpp:679] Persisted action at 2 I0126 19:19:55.672780 2403 replica.cpp:658] Replica received learned notice for position 2 
I0126 19:19:55.685999 2403 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 12.808643ms I0126 19:19:55.686099 2403 leveldb.cpp:401] Deleting ~1 keys from leveldb took 49867ns I0126 19:19:55.686121 2403 replica.cpp:679] Persisted action at 2 I0126 19:19:55.686149 2403 replica.cpp:664] Replica learned TRUNCATE action at position 2 I0126 19:19:55.722545 2381 sched.cpp:151] Version: 0.22.0 I0126 19:19:55.723795 2401 sched.cpp:248] New master detected at master@127.0.1.1:51862 I0126 19:19:55.723891 2401 sched.cpp:304] Authenticating with master master@127.0.1.1:51862 I0126 19:19:55.723914 2401 sched.cpp:311] Using default CRAM-MD5 authenticatee I0126 19:19:55.724244 2401 authenticatee.hpp:138] Creating new client SASL connection I0126 19:19:55.724694 2401 master.cpp:4129] Authenticating scheduler-80465e3f-73a3-4bd0-ba66-4dca62e9cdee@127.0.1.1:51862 I0126 19:19:55.724725 2401 master.cpp:4140] Using default CRAM-MD5 authenticator I0126 19:19:55.725108 2401 authenticator.hpp:170] Creating new server SASL connection I0126 19:19:55.725390 2401 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0126 19:19:55.725415 2401 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0126 19:19:55.725515 2401 authenticator.hpp:276] Received SASL authentication start I0126 19:19:55.725566 2401 authenticator.hpp:398] Authentication requires more steps I0126 19:19:55.725632 2401 authenticatee.hpp:275] Received SASL authentication step I0126 19:19:55.725710 2401 authenticator.hpp:304] Received SASL authentication step I0126 19:19:55.725744 2401 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'lucid' server FQDN: 'lucid' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0126 19:19:55.725757 2401 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0126 19:19:55.725808 2401 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0126 19:19:55.725834 2401 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'lucid' server FQDN: 'lucid' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0126 19:19:55.725847 2401 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0126 19:19:55.725853 2401 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0126 19:19:55.725867 2401 authenticator.hpp:390] Authentication success I0126 19:19:55.728629 2399 authenticatee.hpp:315] Authentication success I0126 19:19:55.729228 2399 sched.cpp:392] Successfully authenticated with master master@127.0.1.1:51862 I0126 19:19:55.729277 2399 sched.cpp:515] Sending registration request to master@127.0.1.1:51862 I0126 19:19:55.729365 2399 sched.cpp:548] Will retry registration in 3.855403ms if necessary I0126 19:19:55.729671 2399 master.cpp:1411] Queuing up registration request for framework 'default' at scheduler-80465e3f-73a3-4bd0-ba66-4dca62e9cdee@127.0.1.1:51862 because authentication is still in progress I0126 19:19:55.733487 2400 master.cpp:4187] Successfully authenticated principal 'test-principal' at scheduler-80465e3f-73a3-4bd0-ba66-4dca62e9cdee@127.0.1.1:51862 I0126 19:19:55.734094 2400 master.cpp:1420] Received registration request for framework 'default' at scheduler-80465e3f-73a3-4bd0-ba66-4dca62e9cdee@127.0.1.1:51862 I0126 19:19:55.734177 2400 master.cpp:1298] Authorizing framework principal 'test-principal' to receive offers for role '*' I0126 19:19:55.734724 2400 master.cpp:1484] 
Registering framework 20150126-191955-16842879-51862-2381-0000 (default) at scheduler-80465e3f-73a3-4bd0-ba66-4dca62e9cdee@127.0.1.1:51862 I0126 19:19:55.735335 2402 hierarchical_allocator_process.hpp:319] Added framework 20150126-191955-16842879-51862-2381-0000 I0126 19:19:55.735376 2402 hierarchical_allocator_process.hpp:831] No resources available to allocate! I0126 19:19:55.735389 2402 hierarchical_allocator_process.hpp:738] Performed allocation for 0 slaves in 22978ns I0126 19:19:55.741891 2398 sched.cpp:515] Sending registration request to master@127.0.1.1:51862 I0126 19:19:55.744575 2398 sched.cpp:548] Will retry registration in 3.86742709secs if necessary I0126 19:19:55.744742 2398 sched.cpp:442] Framework registered with 20150126-191955-16842879-51862-2381-0000 I0126 19:19:55.744827 2398 sched.cpp:456] Scheduler::registered took 60111ns I0126 19:19:55.744956 2398 master.cpp:1420] Received registration request for framework 'default' at scheduler-80465e3f-73a3-4bd0-ba66-4dca62e9cdee@127.0.1.1:51862 I0126 19:19:55.745020 2398 master.cpp:1298] Authorizing framework principal 'test-principal' to receive offers for role '*' I0126 19:19:55.749315 2401 sched.cpp:242] Scheduler::disconnected took 19450ns I0126 19:19:55.749343 2401 sched.cpp:248] New master detected at master@127.0.1.1:51862 I0126 19:19:55.749394 2401 sched.cpp:304] Authenticating with master master@127.0.1.1:51862 I0126 19:19:55.749411 2401 sched.cpp:311] Using default CRAM-MD5 authenticatee I0126 19:19:55.749743 2401 authenticatee.hpp:138] Creating new client SASL connection I0126 19:19:55.750208 2401 master.cpp:4129] Authenticating scheduler-80465e3f-73a3-4bd0-ba66-4dca62e9cdee@127.0.1.1:51862 I0126 19:19:55.750238 2401 master.cpp:4140] Using default CRAM-MD5 authenticator I0126 19:19:55.750629 2401 authenticator.hpp:170] Creating new server SASL connection I0126 19:19:55.750938 2401 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0126 19:19:55.750963 2401 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0126 19:19:55.751063 2401 authenticator.hpp:276] Received SASL authentication start I0126 19:19:55.751109 2401 authenticator.hpp:398] Authentication requires more steps I0126 19:19:55.751175 2401 authenticatee.hpp:275] Received SASL authentication step I0126 19:19:55.751269 2401 authenticator.hpp:304] Received SASL authentication step I0126 19:19:55.751296 2401 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'lucid' server FQDN: 'lucid' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0126 19:19:55.751307 2401 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0126 19:19:55.751358 2401 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0126 19:19:55.751392 2401 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'lucid' server FQDN: 'lucid' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0126 19:19:55.751405 2401 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0126 19:19:55.751413 2401 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0126 19:19:55.751427 2401 authenticator.hpp:390] Authentication success I0126 19:19:55.751524 2401 authenticatee.hpp:315] Authentication success I0126 19:19:55.751605 2401 master.cpp:4187] Successfully authenticated principal 'test-principal' at scheduler-80465e3f-73a3-4bd0-ba66-4dca62e9cdee@127.0.1.1:51862 I0126 
19:19:55.751898 2401 sched.cpp:392] Successfully authenticated with master master@127.0.1.1:51862 I0126 19:19:55.751922 2401 sched.cpp:515] Sending registration request to master@127.0.1.1:51862 I0126 19:19:55.751996 2401 sched.cpp:548] Will retry registration in 1.511226315secs if necessary I0126 19:19:55.752174 2401 master.cpp:1557] Received re-registration request from framework 20150126-191955-16842879-51862-2381-0000 (default) at scheduler-80465e3f-73a3-4bd0-ba66-4dca62e9cdee@127.0.1.1:51862 I0126 19:19:55.752256 2401 master.cpp:1298] Authorizing framework principal 'test-principal' to receive offers for role '*' I0126 19:19:55.752485 2401 master.cpp:1610] Re-registering framework 20150126-191955-16842879-51862-2381-0000 (default) at scheduler-80465e3f-73a3-4bd0-ba66-4dca62e9cdee@127.0.1.1:51862 I0126 19:19:55.752527 2401 master.cpp:1650] Allowing framework 20150126-191955-16842879-51862-2381-0000 (default) at scheduler-80465e3f-73a3-4bd0-ba66-4dca62e9cdee@127.0.1.1:51862 to re-register with an already used id I0126 19:19:55.752689 2401 sched.cpp:484] Framework re-registered with 20150126-191955-16842879-51862-2381-0000 tests/master_authorization_tests.cpp:980: Failure Mock function called more times than expected - returning directly. Function call: reregistered(0x7fff5cef57e0, @0x56077d0 id: """"20150126-191955-16842879-51862-2381"""" ip: 16842879 port: 51862 pid: """"master@127.0.1.1:51862"""" hostname: """"lucid"""" ) Expected: to be never called Actual: called once - over-saturated and active I0126 19:19:55.753191 2401 sched.cpp:498] Scheduler::reregistered took 478798ns I0126 19:19:55.753600 2381 sched.cpp:1471] Asked to stop the driver I0126 19:19:55.754518 2402 sched.cpp:808] Stopping framework '20150126-191955-16842879-51862-2381-0000' I0126 19:19:55.755089 2402 master.cpp:1744] Asked to unregister framework 20150126-191955-16842879-51862-2381-0000 I0126 19:19:55.755302 2402 master.cpp:4499] Removing framework 20150126-191955-16842879-51862-2381-0000 (default) at scheduler-80465e3f-73a3-4bd0-ba66-4dca62e9cdee@127.0.1.1:51862 I0126 19:19:55.759419 2402 hierarchical_allocator_process.hpp:398] Deactivated framework 20150126-191955-16842879-51862-2381-0000 I0126 19:19:55.759850 2402 hierarchical_allocator_process.hpp:352] Removed framework 20150126-191955-16842879-51862-2381-0000 I0126 19:19:55.761160 2400 master.cpp:1462] Dropping registration request for framework 'default' at scheduler-80465e3f-73a3-4bd0-ba66-4dca62e9cdee@127.0.1.1:51862 because it is not authenticated I0126 19:19:55.771309 2381 master.cpp:654] Master terminating [ FAILED ] MasterAuthorizationTest.FrameworkRemovedBeforeReregistration (312 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2319","02/04/2015 18:35:01",2,"Unable to set --work_dir to a non /tmp device ""When starting mesos-slave with --work_dir set to a directory which is not the same device as /tmp results in mesos-slave throwing a core dump: Removing the --work_dir option results in the slave starting successfully."""," mesos # GLOG_v=1 sbin/mesos-slave --master=zk://10.171.59.83:2181/mesos --work_dir=/var/lib/mesos/ WARNING: Logging before InitGoogleLogging() is written to STDERR I0204 18:24:49.274619 22922 process.cpp:958] libprocess is initialized on 10.169.146.67:5051 for 8 cpus I0204 18:24:49.274978 22922 logging.cpp:177] Logging to STDERR I0204 18:24:49.275111 22922 main.cpp:152] Build: 2015-02-03 22:59:30 by I0204 18:24:49.275233 22922 main.cpp:154] Version: 0.22.0 I0204 
18:24:49.275485 22922 containerizer.cpp:103] Using isolation: posix/cpu,posix/mem 2015-02-04 18:24:49,275:22922(0x7ffdd4d5c700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2015-02-04 18:24:49,275:22922(0x7ffdd4d5c700):ZOO_INFO@log_env@716: Client environment:host.name=ip-10-169-146-67.ec2.internal 2015-02-04 18:24:49,276:22922(0x7ffdd4d5c700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2015-02-04 18:24:49,276:22922(0x7ffdd4d5c700):ZOO_INFO@log_env@724: Client environment:os.arch=3.18.2 2015-02-04 18:24:49,276:22922(0x7ffdd4d5c700):ZOO_INFO@log_env@725: Client environment:os.version=#2 SMP Tue Jan 27 23:34:36 UTC 2015 2015-02-04 18:24:49,276:22922(0x7ffdd4d5c700):ZOO_INFO@log_env@733: Client environment:user.name=core 2015-02-04 18:24:49,276:22922(0x7ffdd4d5c700):ZOO_INFO@log_env@741: Client environment:user.home=/root 2015-02-04 18:24:49,276:22922(0x7ffdd4d5c700):ZOO_INFO@log_env@753: Client environment:user.dir=/opt/mesosphere/dcos/0.0.1-0.1.20150203225612/mesos 2015-02-04 18:24:49,276:22922(0x7ffdd4d5c700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=10.171.59.83:2181 sessionTimeout=10000 watcher=0x7ffdd97bccf0 sessionId=0 sessionPasswd= context=0x7ffdc8000ba0 flags=0 I0204 18:24:49.276793 22922 main.cpp:180] Starting Mesos slave 2015-02-04 18:24:49,307:22922(0x7ffdd151f700):ZOO_INFO@check_events@1703: initiated connection to server [10.171.59.83:2181] I0204 18:24:49.307548 22922 slave.cpp:173] Slave started on 1)@10.169.146.67:5051 I0204 18:24:49.307955 22922 slave.cpp:300] Slave resources: cpus(*):1; mem(*):2728; disk(*):24736; ports(*):[31000-32000] I0204 18:24:49.308404 22922 slave.cpp:329] Slave hostname: ip-10-169-146-67.ec2.internal I0204 18:24:49.308459 22922 slave.cpp:330] Slave checkpoint: true I0204 18:24:49.310431 22924 state.cpp:33] Recovering state from '/var/lib/mesos/meta' I0204 18:24:49.310583 22924 state.cpp:668] Failed to find resources file '/var/lib/mesos/meta/resources/resources.info' I0204 18:24:49.310670 22924 state.cpp:74] Failed to find the latest slave from '/var/lib/mesos/meta' I0204 18:24:49.310803 22924 status_update_manager.cpp:197] Recovering status update manager I0204 18:24:49.310916 22924 containerizer.cpp:300] Recovering containerizer I0204 18:24:49.311110 22924 slave.cpp:3527] Finished recovery F0204 18:24:49.311312 22924 slave.cpp:3537] CHECK_SOME(state::checkpoint(path, bootId.get())): Failed to rename '/tmp/PSHLqV' to '/var/lib/mesos/meta/boot_id': Invalid cross-device link 2015-02-04 18:24:49,310:22922(0x7ffdd151f700):ZOO_INFO@check_events@1750: session establishment complete on server [10.171.59.83:2181], sessionId=0x14b51bc8506039a, negotiated timeout=10000 *** Check failure stack trace: *** @ 0x7ffdd9a6596d google::LogMessage::Fail() I0204 18:24:49.313356 22930 group.cpp:313] Group process (group(1)@10.169.146.67:5051) connected to ZooKeeper @ 0x7ffdd9a677ad google::LogMessage::SendToLog() I0204 18:24:49.313786 22930 group.cpp:790] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0) I0204 18:24:49.314487 22930 group.cpp:385] Trying to create path '/mesos' in ZooKeeper I0204 18:24:49.323668 22930 group.cpp:717] Found non-sequence node 'log_replicas' at '/mesos' in ZooKeeper I0204 18:24:49.323806 22930 detector.cpp:138] Detected a new leader: (id='1') I0204 18:24:49.323958 22930 group.cpp:659] Trying to get '/mesos/info_0000000001' in ZooKeeper I0204 18:24:49.324595 22930 detector.cpp:433] A new leading master (UPID=master@10.171.59.83:5050) is detected @ 
0x7ffdd9a6555c google::LogMessage::Flush() @ 0x7ffdd9a680a9 google::LogMessageFatal::~LogMessageFatal() @ 0x7ffdd94b7179 _CheckFatal::~_CheckFatal() @ 0x7ffdd96718e2 mesos::internal::slave::Slave::__recover() @ 0x7ffdd9a1524a process::ProcessManager::resume() @ 0x7ffdd9a1550c process::schedule() @ 0x7ffdd83832ad (unknown) @ 0x7ffdd80b834d (unknown) Aborted (core dumped) ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2324","02/06/2015 18:59:21",1,"MasterAllocatorTest/0.OutOfOrderDispatch is flaky "" {noformat:title=} [ RUN ] MasterAllocatorTest/0.OutOfOrderDispatch Using temporary directory '/tmp/MasterAllocatorTest_0_OutOfOrderDispatch_kjLb9b' I0206 07:55:44.084333 15065 leveldb.cpp:175] Opened db in 25.006293ms I0206 07:55:44.089635 15065 leveldb.cpp:182] Compacted db in 5.256332ms I0206 07:55:44.089695 15065 leveldb.cpp:197] Created db iterator in 23534ns I0206 07:55:44.089710 15065 leveldb.cpp:203] Seeked to beginning of db in 2175ns I0206 07:55:44.089720 15065 leveldb.cpp:272] Iterated through 0 keys in the db in 417ns I0206 07:55:44.089781 15065 replica.cpp:743] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0206 07:55:44.093750 15086 recover.cpp:448] Starting replica recovery I0206 07:55:44.094044 15086 recover.cpp:474] Replica is in EMPTY status I0206 07:55:44.095473 15086 replica.cpp:640] Replica in EMPTY status received a broadcasted recover request I0206 07:55:44.095724 15086 recover.cpp:194] Received a recover response from a replica in EMPTY status I0206 07:55:44.096097 15086 recover.cpp:565] Updating replica status to STARTING I0206 07:55:44.106575 15086 leveldb.cpp:305] Persisting metadata (8 bytes) to leveldb took 10.289939ms I0206 07:55:44.106613 15086 replica.cpp:322] Persisted replica status to STARTING I0206 07:55:44.108144 15086 recover.cpp:474] Replica is in STARTING status I0206 07:55:44.109122 15086 replica.cpp:640] Replica in STARTING status received a broadcasted recover request I0206 07:55:44.110879 15091 recover.cpp:194] Received a recover response from a replica in STARTING status I0206 07:55:44.117267 15087 recover.cpp:565] Updating replica status to VOTING I0206 07:55:44.124771 15087 leveldb.cpp:305] Persisting metadata (8 bytes) to leveldb took 2.66794ms I0206 07:55:44.124814 15087 replica.cpp:322] Persisted replica status to VOTING I0206 07:55:44.124948 15087 recover.cpp:579] Successfully joined the Paxos group I0206 07:55:44.125095 15087 recover.cpp:463] Recover process terminated I0206 07:55:44.126204 15087 master.cpp:344] Master 20150206-075544-16842879-38895-15065 (utopic) started on 127.0.1.1:38895 I0206 07:55:44.126268 15087 master.cpp:390] Master only allowing authenticated frameworks to register I0206 07:55:44.126281 15087 master.cpp:395] Master only allowing authenticated slaves to register I0206 07:55:44.126307 15087 credentials.hpp:35] Loading credentials for authentication from '/tmp/MasterAllocatorTest_0_OutOfOrderDispatch_kjLb9b/credentials' I0206 07:55:44.126683 15087 master.cpp:439] Authorization enabled I0206 07:55:44.129329 15086 master.cpp:1350] The newly elected leader is master@127.0.1.1:38895 with id 20150206-075544-16842879-38895-15065 I0206 07:55:44.129361 15086 master.cpp:1363] Elected as the leading master! 
I0206 07:55:44.129389 15086 master.cpp:1181] Recovering from registrar I0206 07:55:44.129653 15088 registrar.cpp:312] Recovering registrar I0206 07:55:44.130859 15088 log.cpp:659] Attempting to start the writer I0206 07:55:44.132334 15088 replica.cpp:476] Replica received implicit promise request with proposal 1 I0206 07:55:44.135187 15088 leveldb.cpp:305] Persisting metadata (8 bytes) to leveldb took 2.825465ms I0206 07:55:44.135390 15088 replica.cpp:344] Persisted promised to 1 I0206 07:55:44.138062 15091 coordinator.cpp:229] Coordinator attemping to fill missing position I0206 07:55:44.139576 15091 replica.cpp:377] Replica received explicit promise request for position 0 with proposal 2 I0206 07:55:44.142156 15091 leveldb.cpp:342] Persisting action (8 bytes) to leveldb took 2.545543ms I0206 07:55:44.142189 15091 replica.cpp:678] Persisted action at 0 I0206 07:55:44.143414 15091 replica.cpp:510] Replica received write request for position 0 I0206 07:55:44.143468 15091 leveldb.cpp:437] Reading position from leveldb took 28872ns I0206 07:55:44.145982 15091 leveldb.cpp:342] Persisting action (14 bytes) to leveldb took 2.480277ms I0206 07:55:44.146015 15091 replica.cpp:678] Persisted action at 0 I0206 07:55:44.147050 15089 replica.cpp:657] Replica received learned notice for position 0 I0206 07:55:44.154364 15089 leveldb.cpp:342] Persisting action (16 bytes) to leveldb took 7.281644ms I0206 07:55:44.154400 15089 replica.cpp:678] Persisted action at 0 I0206 07:55:44.154422 15089 replica.cpp:663] Replica learned NOP action at position 0 I0206 07:55:44.155506 15091 log.cpp:675] Writer started with ending position 0 I0206 07:55:44.156746 15091 leveldb.cpp:437] Reading position from leveldb took 30248ns I0206 07:55:44.173681 15091 registrar.cpp:345] Successfully fetched the registry (0B) in 43.977984ms I0206 07:55:44.173821 15091 registrar.cpp:444] Applied 1 operations in 30768ns; attempting to update the 'registry' I0206 07:55:44.176213 15086 log.cpp:683] Attempting to append 119 bytes to the log I0206 07:55:44.176426 15086 coordinator.cpp:339] Coordinator attempting to write APPEND action at position 1 I0206 07:55:44.177608 15088 replica.cpp:510] Replica received write request for position 1 I0206 07:55:44.180059 15088 leveldb.cpp:342] Persisting action (136 bytes) to leveldb took 2.415145ms I0206 07:55:44.180094 15088 replica.cpp:678] Persisted action at 1 I0206 07:55:44.181324 15084 replica.cpp:657] Replica received learned notice for position 1 I0206 07:55:44.183831 15084 leveldb.cpp:342] Persisting action (138 bytes) to leveldb took 2.473724ms I0206 07:55:44.183866 15084 replica.cpp:678] Persisted action at 1 I0206 07:55:44.183887 15084 replica.cpp:663] Replica learned APPEND action at position 1 I0206 07:55:44.185510 15084 registrar.cpp:489] Successfully updated the 'registry' in 11.619072ms I0206 07:55:44.185678 15086 log.cpp:702] Attempting to truncate the log to 1 I0206 07:55:44.186111 15086 coordinator.cpp:339] Coordinator attempting to write TRUNCATE action at position 2 I0206 07:55:44.186944 15086 replica.cpp:510] Replica received write request for position 2 I0206 07:55:44.187492 15084 registrar.cpp:375] Successfully recovered registrar I0206 07:55:44.188016 15087 master.cpp:1208] Recovered 0 slaves from the Registry (83B) ; allowing 10mins for slaves to re-register I0206 07:55:44.189678 15086 leveldb.cpp:342] Persisting action (16 bytes) to leveldb took 2.702559ms I0206 07:55:44.189713 15086 replica.cpp:678] Persisted action at 2 I0206 07:55:44.190620 15086 replica.cpp:657] Replica 
received learned notice for position 2 I0206 07:55:44.193383 15086 leveldb.cpp:342] Persisting action (18 bytes) to leveldb took 2.737088ms I0206 07:55:44.193455 15086 leveldb.cpp:400] Deleting ~1 keys from leveldb took 37762ns I0206 07:55:44.193475 15086 replica.cpp:678] Persisted action at 2 I0206 07:55:44.193496 15086 replica.cpp:663] Replica learned TRUNCATE action at position 2 I0206 07:55:44.200028 15065 containerizer.cpp:102] Using isolation: posix/cpu,posix/mem I0206 07:55:44.212924 15088 slave.cpp:172] Slave started on 46)@127.0.1.1:38895 I0206 07:55:44.213762 15088 credentials.hpp:83] Loading credential for authentication from '/tmp/MasterAllocatorTest_0_OutOfOrderDispatch_RuNyVQ/credential' I0206 07:55:44.214251 15088 slave.cpp:281] Slave using credential for: test-principal I0206 07:55:44.214653 15088 slave.cpp:299] Slave resources: cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000] I0206 07:55:44.214918 15088 slave.cpp:328] Slave hostname: utopic I0206 07:55:44.215116 15088 slave.cpp:329] Slave checkpoint: false W0206 07:55:44.215332 15088 slave.cpp:331] Disabling checkpointing is deprecated and the --checkpoint flag will be removed in a future release. Please avoid using this flag I0206 07:55:44.217061 15090 state.cpp:32] Recovering state from '/tmp/MasterAllocatorTest_0_OutOfOrderDispatch_RuNyVQ/meta' I0206 07:55:44.235409 15088 status_update_manager.cpp:196] Recovering status update manager I0206 07:55:44.235601 15088 containerizer.cpp:299] Recovering containerizer I0206 07:55:44.236486 15088 slave.cpp:3526] Finished recovery I0206 07:55:44.237709 15087 status_update_manager.cpp:170] Pausing sending status updates I0206 07:55:44.237890 15088 slave.cpp:620] New master detected at master@127.0.1.1:38895 I0206 07:55:44.241575 15088 slave.cpp:683] Authenticating with master master@127.0.1.1:38895 I0206 07:55:44.247459 15088 slave.cpp:688] Using default CRAM-MD5 authenticatee I0206 07:55:44.248617 15089 authenticatee.hpp:137] Creating new client SASL connection I0206 07:55:44.249099 15089 master.cpp:3788] Authenticating slave(46)@127.0.1.1:38895 I0206 07:55:44.249137 15089 master.cpp:3799] Using default CRAM-MD5 authenticator I0206 07:55:44.249728 15089 authenticator.hpp:169] Creating new server SASL connection I0206 07:55:44.250285 15089 authenticatee.hpp:228] Received SASL authentication mechanisms: CRAM-MD5 I0206 07:55:44.250496 15089 authenticatee.hpp:254] Attempting to authenticate with mechanism 'CRAM-MD5' I0206 07:55:44.250452 15088 slave.cpp:656] Detecting new master I0206 07:55:44.251063 15091 authenticator.hpp:275] Received SASL authentication start I0206 07:55:44.251124 15091 authenticator.hpp:397] Authentication requires more steps I0206 07:55:44.251256 15089 authenticatee.hpp:274] Received SASL authentication step I0206 07:55:44.251451 15090 authenticator.hpp:303] Received SASL authentication step I0206 07:55:44.251575 15090 authenticator.hpp:389] Authentication success I0206 07:55:44.251687 15090 master.cpp:3846] Successfully authenticated principal 'test-principal' at slave(46)@127.0.1.1:38895 I0206 07:55:44.253306 15089 authenticatee.hpp:314] Authentication success I0206 07:55:44.258015 15089 slave.cpp:754] Successfully authenticated with master master@127.0.1.1:38895 I0206 07:55:44.258468 15089 master.cpp:2913] Registering slave at slave(46)@127.0.1.1:38895 (utopic) with id 20150206-075544-16842879-38895-15065-S0 I0206 07:55:44.259028 15089 registrar.cpp:444] Applied 1 operations in 88902ns; attempting to update the 'registry' I0206 07:55:44.269492 
15065 sched.cpp:149] Version: 0.22.0 I0206 07:55:44.270539 15090 sched.cpp:246] New master detected at master@127.0.1.1:38895 I0206 07:55:44.270614 15090 sched.cpp:302] Authenticating with master master@127.0.1.1:38895 I0206 07:55:44.270634 15090 sched.cpp:309] Using default CRAM-MD5 authenticatee I0206 07:55:44.270900 15090 authenticatee.hpp:137] Creating new client SASL connection I0206 07:55:44.272300 15089 log.cpp:683] Attempting to append 285 bytes to the log I0206 07:55:44.272552 15089 coordinator.cpp:339] Coordinator attempting to write APPEND action at position 3 I0206 07:55:44.273609 15086 master.cpp:3788] Authenticating scheduler-d6cac0a1-d461-4a05-b19d-5cbdae239eb0@127.0.1.1:38895 I0206 07:55:44.273643 15086 master.cpp:3799] Using default CRAM-MD5 authenticator I0206 07:55:44.273955 15086 authenticator.hpp:169] Creating new server SASL connection I0206 07:55:44.274617 15090 authenticatee.hpp:228] Received SASL authentication mechanisms: CRAM-MD5 I0206 07:55:44.274813 15090 authenticatee.hpp:254] Attempting to authenticate with mechanism 'CRAM-MD5' I0206 07:55:44.275171 15088 authenticator.hpp:275] Received SASL authentication start I0206 07:55:44.275215 15088 authenticator.hpp:397] Authentication requires more steps I0206 07:55:44.275408 15090 authenticatee.hpp:274] Received SASL authentication step I0206 07:55:44.275696 15084 authenticator.hpp:303] Received SASL authentication step I0206 07:55:44.275774 15084 authenticator.hpp:389] Authentication success I0206 07:55:44.275876 15084 master.cpp:3846] Successfully authenticated principal 'test-principal' at scheduler-d6cac0a1-d461-4a05-b19d-5cbdae239eb0@127.0.1.1:38895 I0206 07:55:44.277593 15090 authenticatee.hpp:314] Authentication success I0206 07:55:44.278201 15086 sched.cpp:390] Successfully authenticated with master master@127.0.1.1:38895 I0206 07:55:44.278548 15086 master.cpp:1568] Received registration request for framework 'framework1' at scheduler-d6cac0a1-d461-4a05-b19d-5cbdae239eb0@127.0.1.1:38895 I0206 07:55:44.278642 15086 master.cpp:1429] Authorizing framework principal 'test-principal' to receive offers for role '*' I0206 07:55:44.279157 15086 master.cpp:1632] Registering framework 20150206-075544-16842879-38895-15065-0000 (framework1) at scheduler-d6cac0a1-d461-4a05-b19d-5cbdae239eb0@127.0.1.1:38895 I0206 07:55:44.280081 15086 sched.cpp:440] Framework registered with 20150206-075544-16842879-38895-15065-0000 I0206 07:55:44.280320 15086 hierarchical_allocator_process.hpp:318] Added framework 20150206-075544-16842879-38895-15065-0000 I0206 07:55:44.281411 15089 replica.cpp:510] Replica received write request for position 3 I0206 07:55:44.282289 15085 master.cpp:2901] Ignoring register slave message from slave(46)@127.0.1.1:38895 (utopic) as admission is already in progress I0206 07:55:44.284984 15089 leveldb.cpp:342] Persisting action (304 bytes) to leveldb took 3.368213ms I0206 07:55:44.285020 15089 replica.cpp:678] Persisted action at 3 I0206 07:55:44.285893 15089 replica.cpp:657] Replica received learned notice for position 3 I0206 07:55:44.288350 15089 leveldb.cpp:342] Persisting action (306 bytes) to leveldb took 2.430449ms I0206 07:55:44.288384 15089 replica.cpp:678] Persisted action at 3 I0206 07:55:44.288405 15089 replica.cpp:663] Replica learned APPEND action at position 3 I0206 07:55:44.290154 15089 registrar.cpp:489] Successfully updated the 'registry' in 31.046912ms I0206 07:55:44.290307 15085 log.cpp:702] Attempting to truncate the log to 3 I0206 07:55:44.290671 15085 coordinator.cpp:339] Coordinator 
attempting to write TRUNCATE action at position 4 I0206 07:55:44.291482 15085 replica.cpp:510] Replica received write request for position 4 I0206 07:55:44.292559 15087 master.cpp:2970] Registered slave 20150206-075544-16842879-38895-15065-S0 at slave(46)@127.0.1.1:38895 (utopic) with cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000] I0206 07:55:44.292940 15087 slave.cpp:788] Registered with master master@127.0.1.1:38895; given slave ID 20150206-075544-16842879-38895-15065-S0 I0206 07:55:44.293298 15087 hierarchical_allocator_process.hpp:450] Added slave 20150206-075544-16842879-38895-15065-S0 (utopic) with cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000] available) I0206 07:55:44.293684 15087 status_update_manager.cpp:177] Resuming sending status updates I0206 07:55:44.294085 15087 master.cpp:3730] Sending 1 offers to framework 20150206-075544-16842879-38895-15065-0000 (framework1) at scheduler-d6cac0a1-d461-4a05-b19d-5cbdae239eb0@127.0.1.1:38895 I0206 07:55:44.299957 15085 leveldb.cpp:342] Persisting action (16 bytes) to leveldb took 8.442691ms I0206 07:55:44.300165 15085 replica.cpp:678] Persisted action at 4 I0206 07:55:44.300698 15065 sched.cpp:1468] Asked to stop the driver I0206 07:55:44.301127 15090 sched.cpp:806] Stopping framework '20150206-075544-16842879-38895-15065-0000' I0206 07:55:44.301503 15090 master.cpp:1892] Asked to unregister framework 20150206-075544-16842879-38895-15065-0000 I0206 07:55:44.301535 15090 master.cpp:4158] Removing framework 20150206-075544-16842879-38895-15065-0000 (framework1) at scheduler-d6cac0a1-d461-4a05-b19d-5cbdae239eb0@127.0.1.1:38895 I0206 07:55:44.302376 15090 slave.cpp:1592] Asked to shut down framework 20150206-075544-16842879-38895-15065-0000 by master@127.0.1.1:38895 W0206 07:55:44.302407 15090 slave.cpp:1607] Cannot shut down unknown framework 20150206-075544-16842879-38895-15065-0000 I0206 07:55:44.302814 15090 hierarchical_allocator_process.hpp:397] Deactivated framework 20150206-075544-16842879-38895-15065-0000 I0206 07:55:44.302947 15090 hierarchical_allocator_process.hpp:351] Removed framework 20150206-075544-16842879-38895-15065-0000 I0206 07:55:44.309281 15086 hierarchical_allocator_process.hpp:642] Recovered cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000]) on slave 20150206-075544-16842879-38895-15065-S0 from framework 20150206-075544-16842879-38895-15065-0000 I0206 07:55:44.310158 15084 replica.cpp:657] Replica received learned notice for position 4 I0206 07:55:44.313246 15084 leveldb.cpp:342] Persisting action (18 bytes) to leveldb took 3.055049ms I0206 07:55:44.313328 15084 leveldb.cpp:400] Deleting ~2 keys from leveldb took 45270ns I0206 07:55:44.313349 15084 replica.cpp:678] Persisted action at 4 I0206 07:55:44.313374 15084 replica.cpp:663] Replica learned TRUNCATE action at position 4 I0206 07:55:44.329591 15065 sched.cpp:149] Version: 0.22.0 I0206 07:55:44.330258 15088 sched.cpp:246] New master detected at master@127.0.1.1:38895 I0206 07:55:44.330346 15088 sched.cpp:302] Authenticating with master master@127.0.1.1:38895 I0206 07:55:44.330368 15088 sched.cpp:309] Using default CRAM-MD5 authenticatee I0206 07:55:44.330652 15088 authenticatee.hpp:137] Creating new client SASL connection I0206 07:55:44.331403 15088 master.cpp:3788] Authenticating scheduler-7bdaa90b-eb9f-4009-bd5a-d07fd3f24cec@127.0.1.1:38895 I0206 07:55:44.331717 15088 
master.cpp:3799] Using default CRAM-MD5 authenticator I0206 07:55:44.332293 15088 authenticator.hpp:169] Creating new server SASL connection I0206 07:55:44.332655 15088 authenticatee.hpp:228] Received SASL authentication mechanisms: CRAM-MD5 I0206 07:55:44.332684 15088 authenticatee.hpp:254] Attempting to authenticate with mechanism 'CRAM-MD5' I0206 07:55:44.332792 15088 authenticator.hpp:275] Received SASL authentication start I0206 07:55:44.332835 15088 authenticator.hpp:397] Authentication requires more steps I0206 07:55:44.332903 15088 authenticatee.hpp:274] Received SASL authentication step I0206 07:55:44.332983 15088 authenticator.hpp:303] Received SASL authentication step I0206 07:55:44.333056 15088 authenticator.hpp:389] Authentication success I0206 07:55:44.333153 15088 authenticatee.hpp:314] Authentication success I0206 07:55:44.333297 15091 master.cpp:3846] Successfully authenticated principal 'test-principal' at scheduler-7bdaa90b-eb9f-4009-bd5a-d07fd3f24cec@127.0.1.1:38895 I0206 07:55:44.334326 15087 sched.cpp:390] Successfully authenticated with master master@127.0.1.1:38895 I0206 07:55:44.334645 15087 master.cpp:1568] Received registration request for framework 'framework2' at scheduler-7bdaa90b-eb9f-4009-bd5a-d07fd3f24cec@127.0.1.1:38895 I0206 07:55:44.334722 15087 master.cpp:1429] Authorizing framework principal 'test-principal' to receive offers for role '*' I0206 07:55:44.335153 15087 master.cpp:1632] Registering framework 20150206-075544-16842879-38895-15065-0001 (framework2) at scheduler-7bdaa90b-eb9f-4009-bd5a-d07fd3f24cec@127.0.1.1:38895 I0206 07:55:44.336019 15087 sched.cpp:440] Framework registered with 20150206-075544-16842879-38895-15065-0001 I0206 07:55:44.336156 15087 hierarchical_allocator_process.hpp:318] Added framework 20150206-075544-16842879-38895-15065-0001 I0206 07:55:44.336796 15087 master.cpp:3730] Sending 1 offers to framework 20150206-075544-16842879-38895-15065-0001 (framework2) at scheduler-7bdaa90b-eb9f-4009-bd5a-d07fd3f24cec@127.0.1.1:38895 I0206 07:55:44.337725 15065 sched.cpp:1468] Asked to stop the driver I0206 07:55:44.338002 15086 sched.cpp:806] Stopping framework '20150206-075544-16842879-38895-15065-0001' I0206 07:55:44.338297 15090 master.cpp:1892] Asked to unregister framework 20150206-075544-16842879-38895-15065-0001 I0206 07:55:44.338353 15090 master.cpp:4158] Removing framework 20150206-075544-16842879-38895-15065-0001 (framework2) at scheduler-7bdaa90b-eb9f-4009-bd5a-d07fd3f24cec@127.0.1.1:38895 ../../src/tests/master_allocator_tests.cpp:300: Failure Mock function called more times than expected - taking default action specified at: ../../src/tests/mesos.hpp:713: Function call: deactivateFramework(@0x7fdb74008d70 20150206-075544-16842879-38895-15065-0001) Expected: to be called once Actual: called twice - over-saturated and active ../../src/tests/master_allocator_tests.cpp:312: Failure Mock function called more times than expected - taking default action specified at: ../../src/tests/mesos.hpp:753: Function call: recoverResources(@0x7fdb74013040 20150206-075544-16842879-38895-15065-0001, @0x7fdb74013060 20150206-075544-16842879-38895-15065-S0, @0x7fdb74013080 { cpus(*):2, mem(*):1024, disk(*):24988, ports(*):[31000-32000] }, @0x7fdb74013098 16-byte object <01-00 00-00 DB-7F 00-00 00-00 00-00 00-00 00-00>) Expected: to be called once Actual: called twice - over-saturated and active I0206 07:55:44.339527 15090 slave.cpp:1592] Asked to shut down framework 20150206-075544-16842879-38895-15065-0001 by master@127.0.1.1:38895 W0206 
07:55:44.339558 15090 slave.cpp:1607] Cannot shut down unknown framework 20150206-075544-16842879-38895-15065-0001 I0206 07:55:44.339954 15090 hierarchical_allocator_process.hpp:397] Deactivated framework 20150206-075544-16842879-38895-15065-0001 I0206 07:55:44.340095 15090 hierarchical_allocator_process.hpp:642] Recovered cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000]) on slave 20150206-075544-16842879-38895-15065-S0 from framework 20150206-075544-16842879-38895-15065-0001 I0206 07:55:44.340181 15090 hierarchical_allocator_process.hpp:351] Removed framework 20150206-075544-16842879-38895-15065-0001 I0206 07:55:44.340852 15085 master.cpp:781] Master terminating I0206 07:55:44.345564 15086 slave.cpp:2680] master@127.0.1.1:38895 exited W0206 07:55:44.345593 15086 slave.cpp:2683] Master disconnected! Waiting for a new master to be elected I0206 07:55:44.393707 15065 slave.cpp:502] Slave terminating [ FAILED ] MasterAllocatorTest/0.OutOfOrderDispatch, where TypeParam = mesos::master::allocator::HierarchicalAllocatorProcess (360 ms) {noformat}"""," [ RUN ] MasterAllocatorTest/0.OutOfOrderDispatch Using temporary directory '/tmp/MasterAllocatorTest_0_OutOfOrderDispatch_kjLb9b' I0206 07:55:44.084333 15065 leveldb.cpp:175] Opened db in 25.006293ms I0206 07:55:44.089635 15065 leveldb.cpp:182] Compacted db in 5.256332ms I0206 07:55:44.089695 15065 leveldb.cpp:197] Created db iterator in 23534ns I0206 07:55:44.089710 15065 leveldb.cpp:203] Seeked to beginning of db in 2175ns I0206 07:55:44.089720 15065 leveldb.cpp:272] Iterated through 0 keys in the db in 417ns I0206 07:55:44.089781 15065 replica.cpp:743] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0206 07:55:44.093750 15086 recover.cpp:448] Starting replica recovery I0206 07:55:44.094044 15086 recover.cpp:474] Replica is in EMPTY status I0206 07:55:44.095473 15086 replica.cpp:640] Replica in EMPTY status received a broadcasted recover request I0206 07:55:44.095724 15086 recover.cpp:194] Received a recover response from a replica in EMPTY status I0206 07:55:44.096097 15086 recover.cpp:565] Updating replica status to STARTING I0206 07:55:44.106575 15086 leveldb.cpp:305] Persisting metadata (8 bytes) to leveldb took 10.289939ms I0206 07:55:44.106613 15086 replica.cpp:322] Persisted replica status to STARTING I0206 07:55:44.108144 15086 recover.cpp:474] Replica is in STARTING status I0206 07:55:44.109122 15086 replica.cpp:640] Replica in STARTING status received a broadcasted recover request I0206 07:55:44.110879 15091 recover.cpp:194] Received a recover response from a replica in STARTING status I0206 07:55:44.117267 15087 recover.cpp:565] Updating replica status to VOTING I0206 07:55:44.124771 15087 leveldb.cpp:305] Persisting metadata (8 bytes) to leveldb took 2.66794ms I0206 07:55:44.124814 15087 replica.cpp:322] Persisted replica status to VOTING I0206 07:55:44.124948 15087 recover.cpp:579] Successfully joined the Paxos group I0206 07:55:44.125095 15087 recover.cpp:463] Recover process terminated I0206 07:55:44.126204 15087 master.cpp:344] Master 20150206-075544-16842879-38895-15065 (utopic) started on 127.0.1.1:38895 I0206 07:55:44.126268 15087 master.cpp:390] Master only allowing authenticated frameworks to register I0206 07:55:44.126281 15087 master.cpp:395] Master only allowing authenticated slaves to register I0206 07:55:44.126307 15087 credentials.hpp:35] Loading credentials for authentication from 
'/tmp/MasterAllocatorTest_0_OutOfOrderDispatch_kjLb9b/credentials' I0206 07:55:44.126683 15087 master.cpp:439] Authorization enabled I0206 07:55:44.129329 15086 master.cpp:1350] The newly elected leader is master@127.0.1.1:38895 with id 20150206-075544-16842879-38895-15065 I0206 07:55:44.129361 15086 master.cpp:1363] Elected as the leading master! I0206 07:55:44.129389 15086 master.cpp:1181] Recovering from registrar I0206 07:55:44.129653 15088 registrar.cpp:312] Recovering registrar I0206 07:55:44.130859 15088 log.cpp:659] Attempting to start the writer I0206 07:55:44.132334 15088 replica.cpp:476] Replica received implicit promise request with proposal 1 I0206 07:55:44.135187 15088 leveldb.cpp:305] Persisting metadata (8 bytes) to leveldb took 2.825465ms I0206 07:55:44.135390 15088 replica.cpp:344] Persisted promised to 1 I0206 07:55:44.138062 15091 coordinator.cpp:229] Coordinator attemping to fill missing position I0206 07:55:44.139576 15091 replica.cpp:377] Replica received explicit promise request for position 0 with proposal 2 I0206 07:55:44.142156 15091 leveldb.cpp:342] Persisting action (8 bytes) to leveldb took 2.545543ms I0206 07:55:44.142189 15091 replica.cpp:678] Persisted action at 0 I0206 07:55:44.143414 15091 replica.cpp:510] Replica received write request for position 0 I0206 07:55:44.143468 15091 leveldb.cpp:437] Reading position from leveldb took 28872ns I0206 07:55:44.145982 15091 leveldb.cpp:342] Persisting action (14 bytes) to leveldb took 2.480277ms I0206 07:55:44.146015 15091 replica.cpp:678] Persisted action at 0 I0206 07:55:44.147050 15089 replica.cpp:657] Replica received learned notice for position 0 I0206 07:55:44.154364 15089 leveldb.cpp:342] Persisting action (16 bytes) to leveldb took 7.281644ms I0206 07:55:44.154400 15089 replica.cpp:678] Persisted action at 0 I0206 07:55:44.154422 15089 replica.cpp:663] Replica learned NOP action at position 0 I0206 07:55:44.155506 15091 log.cpp:675] Writer started with ending position 0 I0206 07:55:44.156746 15091 leveldb.cpp:437] Reading position from leveldb took 30248ns I0206 07:55:44.173681 15091 registrar.cpp:345] Successfully fetched the registry (0B) in 43.977984ms I0206 07:55:44.173821 15091 registrar.cpp:444] Applied 1 operations in 30768ns; attempting to update the 'registry' I0206 07:55:44.176213 15086 log.cpp:683] Attempting to append 119 bytes to the log I0206 07:55:44.176426 15086 coordinator.cpp:339] Coordinator attempting to write APPEND action at position 1 I0206 07:55:44.177608 15088 replica.cpp:510] Replica received write request for position 1 I0206 07:55:44.180059 15088 leveldb.cpp:342] Persisting action (136 bytes) to leveldb took 2.415145ms I0206 07:55:44.180094 15088 replica.cpp:678] Persisted action at 1 I0206 07:55:44.181324 15084 replica.cpp:657] Replica received learned notice for position 1 I0206 07:55:44.183831 15084 leveldb.cpp:342] Persisting action (138 bytes) to leveldb took 2.473724ms I0206 07:55:44.183866 15084 replica.cpp:678] Persisted action at 1 I0206 07:55:44.183887 15084 replica.cpp:663] Replica learned APPEND action at position 1 I0206 07:55:44.185510 15084 registrar.cpp:489] Successfully updated the 'registry' in 11.619072ms I0206 07:55:44.185678 15086 log.cpp:702] Attempting to truncate the log to 1 I0206 07:55:44.186111 15086 coordinator.cpp:339] Coordinator attempting to write TRUNCATE action at position 2 I0206 07:55:44.186944 15086 replica.cpp:510] Replica received write request for position 2 I0206 07:55:44.187492 15084 registrar.cpp:375] Successfully recovered registrar 
I0206 07:55:44.188016 15087 master.cpp:1208] Recovered 0 slaves from the Registry (83B) ; allowing 10mins for slaves to re-register I0206 07:55:44.189678 15086 leveldb.cpp:342] Persisting action (16 bytes) to leveldb took 2.702559ms I0206 07:55:44.189713 15086 replica.cpp:678] Persisted action at 2 I0206 07:55:44.190620 15086 replica.cpp:657] Replica received learned notice for position 2 I0206 07:55:44.193383 15086 leveldb.cpp:342] Persisting action (18 bytes) to leveldb took 2.737088ms I0206 07:55:44.193455 15086 leveldb.cpp:400] Deleting ~1 keys from leveldb took 37762ns I0206 07:55:44.193475 15086 replica.cpp:678] Persisted action at 2 I0206 07:55:44.193496 15086 replica.cpp:663] Replica learned TRUNCATE action at position 2 I0206 07:55:44.200028 15065 containerizer.cpp:102] Using isolation: posix/cpu,posix/mem I0206 07:55:44.212924 15088 slave.cpp:172] Slave started on 46)@127.0.1.1:38895 I0206 07:55:44.213762 15088 credentials.hpp:83] Loading credential for authentication from '/tmp/MasterAllocatorTest_0_OutOfOrderDispatch_RuNyVQ/credential' I0206 07:55:44.214251 15088 slave.cpp:281] Slave using credential for: test-principal I0206 07:55:44.214653 15088 slave.cpp:299] Slave resources: cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000] I0206 07:55:44.214918 15088 slave.cpp:328] Slave hostname: utopic I0206 07:55:44.215116 15088 slave.cpp:329] Slave checkpoint: false W0206 07:55:44.215332 15088 slave.cpp:331] Disabling checkpointing is deprecated and the --checkpoint flag will be removed in a future release. Please avoid using this flag I0206 07:55:44.217061 15090 state.cpp:32] Recovering state from '/tmp/MasterAllocatorTest_0_OutOfOrderDispatch_RuNyVQ/meta' I0206 07:55:44.235409 15088 status_update_manager.cpp:196] Recovering status update manager I0206 07:55:44.235601 15088 containerizer.cpp:299] Recovering containerizer I0206 07:55:44.236486 15088 slave.cpp:3526] Finished recovery I0206 07:55:44.237709 15087 status_update_manager.cpp:170] Pausing sending status updates I0206 07:55:44.237890 15088 slave.cpp:620] New master detected at master@127.0.1.1:38895 I0206 07:55:44.241575 15088 slave.cpp:683] Authenticating with master master@127.0.1.1:38895 I0206 07:55:44.247459 15088 slave.cpp:688] Using default CRAM-MD5 authenticatee I0206 07:55:44.248617 15089 authenticatee.hpp:137] Creating new client SASL connection I0206 07:55:44.249099 15089 master.cpp:3788] Authenticating slave(46)@127.0.1.1:38895 I0206 07:55:44.249137 15089 master.cpp:3799] Using default CRAM-MD5 authenticator I0206 07:55:44.249728 15089 authenticator.hpp:169] Creating new server SASL connection I0206 07:55:44.250285 15089 authenticatee.hpp:228] Received SASL authentication mechanisms: CRAM-MD5 I0206 07:55:44.250496 15089 authenticatee.hpp:254] Attempting to authenticate with mechanism 'CRAM-MD5' I0206 07:55:44.250452 15088 slave.cpp:656] Detecting new master I0206 07:55:44.251063 15091 authenticator.hpp:275] Received SASL authentication start I0206 07:55:44.251124 15091 authenticator.hpp:397] Authentication requires more steps I0206 07:55:44.251256 15089 authenticatee.hpp:274] Received SASL authentication step I0206 07:55:44.251451 15090 authenticator.hpp:303] Received SASL authentication step I0206 07:55:44.251575 15090 authenticator.hpp:389] Authentication success I0206 07:55:44.251687 15090 master.cpp:3846] Successfully authenticated principal 'test-principal' at slave(46)@127.0.1.1:38895 I0206 07:55:44.253306 15089 authenticatee.hpp:314] Authentication success I0206 07:55:44.258015 15089 
slave.cpp:754] Successfully authenticated with master master@127.0.1.1:38895 I0206 07:55:44.258468 15089 master.cpp:2913] Registering slave at slave(46)@127.0.1.1:38895 (utopic) with id 20150206-075544-16842879-38895-15065-S0 I0206 07:55:44.259028 15089 registrar.cpp:444] Applied 1 operations in 88902ns; attempting to update the 'registry' I0206 07:55:44.269492 15065 sched.cpp:149] Version: 0.22.0 I0206 07:55:44.270539 15090 sched.cpp:246] New master detected at master@127.0.1.1:38895 I0206 07:55:44.270614 15090 sched.cpp:302] Authenticating with master master@127.0.1.1:38895 I0206 07:55:44.270634 15090 sched.cpp:309] Using default CRAM-MD5 authenticatee I0206 07:55:44.270900 15090 authenticatee.hpp:137] Creating new client SASL connection I0206 07:55:44.272300 15089 log.cpp:683] Attempting to append 285 bytes to the log I0206 07:55:44.272552 15089 coordinator.cpp:339] Coordinator attempting to write APPEND action at position 3 I0206 07:55:44.273609 15086 master.cpp:3788] Authenticating scheduler-d6cac0a1-d461-4a05-b19d-5cbdae239eb0@127.0.1.1:38895 I0206 07:55:44.273643 15086 master.cpp:3799] Using default CRAM-MD5 authenticator I0206 07:55:44.273955 15086 authenticator.hpp:169] Creating new server SASL connection I0206 07:55:44.274617 15090 authenticatee.hpp:228] Received SASL authentication mechanisms: CRAM-MD5 I0206 07:55:44.274813 15090 authenticatee.hpp:254] Attempting to authenticate with mechanism 'CRAM-MD5' I0206 07:55:44.275171 15088 authenticator.hpp:275] Received SASL authentication start I0206 07:55:44.275215 15088 authenticator.hpp:397] Authentication requires more steps I0206 07:55:44.275408 15090 authenticatee.hpp:274] Received SASL authentication step I0206 07:55:44.275696 15084 authenticator.hpp:303] Received SASL authentication step I0206 07:55:44.275774 15084 authenticator.hpp:389] Authentication success I0206 07:55:44.275876 15084 master.cpp:3846] Successfully authenticated principal 'test-principal' at scheduler-d6cac0a1-d461-4a05-b19d-5cbdae239eb0@127.0.1.1:38895 I0206 07:55:44.277593 15090 authenticatee.hpp:314] Authentication success I0206 07:55:44.278201 15086 sched.cpp:390] Successfully authenticated with master master@127.0.1.1:38895 I0206 07:55:44.278548 15086 master.cpp:1568] Received registration request for framework 'framework1' at scheduler-d6cac0a1-d461-4a05-b19d-5cbdae239eb0@127.0.1.1:38895 I0206 07:55:44.278642 15086 master.cpp:1429] Authorizing framework principal 'test-principal' to receive offers for role '*' I0206 07:55:44.279157 15086 master.cpp:1632] Registering framework 20150206-075544-16842879-38895-15065-0000 (framework1) at scheduler-d6cac0a1-d461-4a05-b19d-5cbdae239eb0@127.0.1.1:38895 I0206 07:55:44.280081 15086 sched.cpp:440] Framework registered with 20150206-075544-16842879-38895-15065-0000 I0206 07:55:44.280320 15086 hierarchical_allocator_process.hpp:318] Added framework 20150206-075544-16842879-38895-15065-0000 I0206 07:55:44.281411 15089 replica.cpp:510] Replica received write request for position 3 I0206 07:55:44.282289 15085 master.cpp:2901] Ignoring register slave message from slave(46)@127.0.1.1:38895 (utopic) as admission is already in progress I0206 07:55:44.284984 15089 leveldb.cpp:342] Persisting action (304 bytes) to leveldb took 3.368213ms I0206 07:55:44.285020 15089 replica.cpp:678] Persisted action at 3 I0206 07:55:44.285893 15089 replica.cpp:657] Replica received learned notice for position 3 I0206 07:55:44.288350 15089 leveldb.cpp:342] Persisting action (306 bytes) to leveldb took 2.430449ms I0206 07:55:44.288384 15089 
replica.cpp:678] Persisted action at 3 I0206 07:55:44.288405 15089 replica.cpp:663] Replica learned APPEND action at position 3 I0206 07:55:44.290154 15089 registrar.cpp:489] Successfully updated the 'registry' in 31.046912ms I0206 07:55:44.290307 15085 log.cpp:702] Attempting to truncate the log to 3 I0206 07:55:44.290671 15085 coordinator.cpp:339] Coordinator attempting to write TRUNCATE action at position 4 I0206 07:55:44.291482 15085 replica.cpp:510] Replica received write request for position 4 I0206 07:55:44.292559 15087 master.cpp:2970] Registered slave 20150206-075544-16842879-38895-15065-S0 at slave(46)@127.0.1.1:38895 (utopic) with cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000] I0206 07:55:44.292940 15087 slave.cpp:788] Registered with master master@127.0.1.1:38895; given slave ID 20150206-075544-16842879-38895-15065-S0 I0206 07:55:44.293298 15087 hierarchical_allocator_process.hpp:450] Added slave 20150206-075544-16842879-38895-15065-S0 (utopic) with cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000] available) I0206 07:55:44.293684 15087 status_update_manager.cpp:177] Resuming sending status updates I0206 07:55:44.294085 15087 master.cpp:3730] Sending 1 offers to framework 20150206-075544-16842879-38895-15065-0000 (framework1) at scheduler-d6cac0a1-d461-4a05-b19d-5cbdae239eb0@127.0.1.1:38895 I0206 07:55:44.299957 15085 leveldb.cpp:342] Persisting action (16 bytes) to leveldb took 8.442691ms I0206 07:55:44.300165 15085 replica.cpp:678] Persisted action at 4 I0206 07:55:44.300698 15065 sched.cpp:1468] Asked to stop the driver I0206 07:55:44.301127 15090 sched.cpp:806] Stopping framework '20150206-075544-16842879-38895-15065-0000' I0206 07:55:44.301503 15090 master.cpp:1892] Asked to unregister framework 20150206-075544-16842879-38895-15065-0000 I0206 07:55:44.301535 15090 master.cpp:4158] Removing framework 20150206-075544-16842879-38895-15065-0000 (framework1) at scheduler-d6cac0a1-d461-4a05-b19d-5cbdae239eb0@127.0.1.1:38895 I0206 07:55:44.302376 15090 slave.cpp:1592] Asked to shut down framework 20150206-075544-16842879-38895-15065-0000 by master@127.0.1.1:38895 W0206 07:55:44.302407 15090 slave.cpp:1607] Cannot shut down unknown framework 20150206-075544-16842879-38895-15065-0000 I0206 07:55:44.302814 15090 hierarchical_allocator_process.hpp:397] Deactivated framework 20150206-075544-16842879-38895-15065-0000 I0206 07:55:44.302947 15090 hierarchical_allocator_process.hpp:351] Removed framework 20150206-075544-16842879-38895-15065-0000 I0206 07:55:44.309281 15086 hierarchical_allocator_process.hpp:642] Recovered cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000]) on slave 20150206-075544-16842879-38895-15065-S0 from framework 20150206-075544-16842879-38895-15065-0000 I0206 07:55:44.310158 15084 replica.cpp:657] Replica received learned notice for position 4 I0206 07:55:44.313246 15084 leveldb.cpp:342] Persisting action (18 bytes) to leveldb took 3.055049ms I0206 07:55:44.313328 15084 leveldb.cpp:400] Deleting ~2 keys from leveldb took 45270ns I0206 07:55:44.313349 15084 replica.cpp:678] Persisted action at 4 I0206 07:55:44.313374 15084 replica.cpp:663] Replica learned TRUNCATE action at position 4 I0206 07:55:44.329591 15065 sched.cpp:149] Version: 0.22.0 I0206 07:55:44.330258 15088 sched.cpp:246] New master detected at master@127.0.1.1:38895 I0206 07:55:44.330346 15088 sched.cpp:302] 
Authenticating with master master@127.0.1.1:38895 I0206 07:55:44.330368 15088 sched.cpp:309] Using default CRAM-MD5 authenticatee I0206 07:55:44.330652 15088 authenticatee.hpp:137] Creating new client SASL connection I0206 07:55:44.331403 15088 master.cpp:3788] Authenticating scheduler-7bdaa90b-eb9f-4009-bd5a-d07fd3f24cec@127.0.1.1:38895 I0206 07:55:44.331717 15088 master.cpp:3799] Using default CRAM-MD5 authenticator I0206 07:55:44.332293 15088 authenticator.hpp:169] Creating new server SASL connection I0206 07:55:44.332655 15088 authenticatee.hpp:228] Received SASL authentication mechanisms: CRAM-MD5 I0206 07:55:44.332684 15088 authenticatee.hpp:254] Attempting to authenticate with mechanism 'CRAM-MD5' I0206 07:55:44.332792 15088 authenticator.hpp:275] Received SASL authentication start I0206 07:55:44.332835 15088 authenticator.hpp:397] Authentication requires more steps I0206 07:55:44.332903 15088 authenticatee.hpp:274] Received SASL authentication step I0206 07:55:44.332983 15088 authenticator.hpp:303] Received SASL authentication step I0206 07:55:44.333056 15088 authenticator.hpp:389] Authentication success I0206 07:55:44.333153 15088 authenticatee.hpp:314] Authentication success I0206 07:55:44.333297 15091 master.cpp:3846] Successfully authenticated principal 'test-principal' at scheduler-7bdaa90b-eb9f-4009-bd5a-d07fd3f24cec@127.0.1.1:38895 I0206 07:55:44.334326 15087 sched.cpp:390] Successfully authenticated with master master@127.0.1.1:38895 I0206 07:55:44.334645 15087 master.cpp:1568] Received registration request for framework 'framework2' at scheduler-7bdaa90b-eb9f-4009-bd5a-d07fd3f24cec@127.0.1.1:38895 I0206 07:55:44.334722 15087 master.cpp:1429] Authorizing framework principal 'test-principal' to receive offers for role '*' I0206 07:55:44.335153 15087 master.cpp:1632] Registering framework 20150206-075544-16842879-38895-15065-0001 (framework2) at scheduler-7bdaa90b-eb9f-4009-bd5a-d07fd3f24cec@127.0.1.1:38895 I0206 07:55:44.336019 15087 sched.cpp:440] Framework registered with 20150206-075544-16842879-38895-15065-0001 I0206 07:55:44.336156 15087 hierarchical_allocator_process.hpp:318] Added framework 20150206-075544-16842879-38895-15065-0001 I0206 07:55:44.336796 15087 master.cpp:3730] Sending 1 offers to framework 20150206-075544-16842879-38895-15065-0001 (framework2) at scheduler-7bdaa90b-eb9f-4009-bd5a-d07fd3f24cec@127.0.1.1:38895 I0206 07:55:44.337725 15065 sched.cpp:1468] Asked to stop the driver I0206 07:55:44.338002 15086 sched.cpp:806] Stopping framework '20150206-075544-16842879-38895-15065-0001' I0206 07:55:44.338297 15090 master.cpp:1892] Asked to unregister framework 20150206-075544-16842879-38895-15065-0001 I0206 07:55:44.338353 15090 master.cpp:4158] Removing framework 20150206-075544-16842879-38895-15065-0001 (framework2) at scheduler-7bdaa90b-eb9f-4009-bd5a-d07fd3f24cec@127.0.1.1:38895 ../../src/tests/master_allocator_tests.cpp:300: Failure Mock function called more times than expected - taking default action specified at: ../../src/tests/mesos.hpp:713: Function call: deactivateFramework(@0x7fdb74008d70 20150206-075544-16842879-38895-15065-0001) Expected: to be called once Actual: called twice - over-saturated and active ../../src/tests/master_allocator_tests.cpp:312: Failure Mock function called more times than expected - taking default action specified at: ../../src/tests/mesos.hpp:753: Function call: recoverResources(@0x7fdb74013040 20150206-075544-16842879-38895-15065-0001, @0x7fdb74013060 20150206-075544-16842879-38895-15065-S0, @0x7fdb74013080 { cpus(*):2, 
mem(*):1024, disk(*):24988, ports(*):[31000-32000] }, @0x7fdb74013098 16-byte object <01-00 00-00 DB-7F 00-00 00-00 00-00 00-00 00-00>) Expected: to be called once Actual: called twice - over-saturated and active I0206 07:55:44.339527 15090 slave.cpp:1592] Asked to shut down framework 20150206-075544-16842879-38895-15065-0001 by master@127.0.1.1:38895 W0206 07:55:44.339558 15090 slave.cpp:1607] Cannot shut down unknown framework 20150206-075544-16842879-38895-15065-0001 I0206 07:55:44.339954 15090 hierarchical_allocator_process.hpp:397] Deactivated framework 20150206-075544-16842879-38895-15065-0001 I0206 07:55:44.340095 15090 hierarchical_allocator_process.hpp:642] Recovered cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1024; disk(*):24988; ports(*):[31000-32000]) on slave 20150206-075544-16842879-38895-15065-S0 from framework 20150206-075544-16842879-38895-15065-0001 I0206 07:55:44.340181 15090 hierarchical_allocator_process.hpp:351] Removed framework 20150206-075544-16842879-38895-15065-0001 I0206 07:55:44.340852 15085 master.cpp:781] Master terminating I0206 07:55:44.345564 15086 slave.cpp:2680] master@127.0.1.1:38895 exited W0206 07:55:44.345593 15086 slave.cpp:2683] Master disconnected! Waiting for a new master to be elected I0206 07:55:44.393707 15065 slave.cpp:502] Slave terminating [ FAILED ] MasterAllocatorTest/0.OutOfOrderDispatch, where TypeParam = mesos::master::allocator::HierarchicalAllocatorProcess (360 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2332","02/10/2015 01:53:38",5,"Report per-container metrics for network bandwidth throttling ""Export metrics from the network isolation to identify scope and duration of container throttling. Packet loss can be identified from the overlimits and requeues fields of the htb qdisc report for the virtual interface, e.g. Note that since a packet can be examined multiple times before transmission, overlimits can exceed total packets sent. Add to the port_mapping isolator usage() and the container statistics protobuf. Carefully consider the naming (esp tx/rx) + commenting of the protobuf fields so it's clear what these represent and how they are different to the existing dropped packet counts from the network stack."""," $ tc -s -d qdisc show dev mesos19223 qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 Sent 158213287452 bytes 1030876393 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 qdisc ingress ffff: parent ffff:fff1 ---------------- Sent 119381747824 bytes 1144549901 pkt (dropped 2044879, overlimits 0 requeues 0) backlog 0b 0p requeues 0 ",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2335","02/10/2015 16:57:45",0.5,"Mesos Lifecycle Modules ""A new kind of module that receives callbacks at significant life cycle events of its host libprocess process. Typically the latter is a Mesos slave or master and the life time of the libprocess process coincides with the underlying OS process. h4. Motivation and Use Cases We want to add customized and experimental capabilities that concern the life time of Mesos components without protruding into Mesos source code and without creating new build process dependencies for everybody. Example use cases: 1. A slave or master life cycle module that gathers fail-over incidents and reports summaries thereof to a remote data sink. 2. 
A slave module that observes host computer metrics and correlates these with task activity. This can be used to find resources leaks and to prevent, respectively guide, oversubscription. 3. Upgrades and provisioning that require shutdown and restart. h4. Specifics The specific life cycle events that we want to get notified about and want to be able to act upon are: - Process is spawning/initializing - Process is terminating/finalizing In all these cases, a reference to the process is passed as a parameter, giving the module access for inspection and reaction. h4. Module Classification Unlike other named modules, a life cycle module does not directly replace or provide essential Mesos functionality (such as an Isolator module does). Unlike a decorator module it does not directly add or inject data into Mesos core either.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2337","02/10/2015 18:25:37",2,"__init__.py not getting installed in $PREFIX/lib/pythonX.Y/site-packages/mesos ""When doing a {{make install}}, the src/python/native/src/mesos/__init__.py file is not getting installed in {{$PREFIX/lib/pythonX.Y/site-packages/mesos/}}. This makes it impossible to do the following import when {{PYTHONPATH}} is set to the {{site-packages}} directory. The directories {{$PREFIX/lib/pythonX.Y/site-packages/mesos/interface, native}} do have their corresponding {{__init__.py}} files. Reproducing the bug: """," import mesos.interface.mesos_pb2 ../configure --prefix=$HOME/test-install && make install ",0,0,1,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2340","02/10/2015 21:57:42",5,"Add ability to decode JSON serialized MasterInfo from ZK ""Currently to discover the master a client needs the ZK node location and access to the MasterInfo protobuf so it can deserialize the binary blob in the node. I think it would be nice to publish JSON (like Twitter's ServerSets) so clients are not tied to protobuf to do service discovery. This ticket is an intermediate (compatibility) step: we add in {{0.23}} the ability for the {{Detector}} to """"understand"""" JSON **alongside** Protobuf serialized format; this makes it compatible with both earlier versions, as well a future one (most likely, {{0.24}}) that will write the {{MasterInfo}} information in JSON format.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2347","02/12/2015 00:34:25",8,"Add ability for schedulers to explicitly acknowledge status updates on the driver. ""In order for schedulers to be able to handle status updates in a scalable manner, they need the ability to send acknowledgements through the driver. This enables optimizations in schedulers (e.g. process status updates asynchronously w/o backing up the driver, processing/acking updates in batch). Without this, an implicit reconciliation can overload a scheduler (hence the motivation for MESOS-2308).""","",0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2349","02/12/2015 05:40:17",5,"Provide a way to execute an arbitrary process in a MesosContainerizer container context ""Include a separate binary that when provided with a container_id, path to an executable, and optional arguments will find the container context, enter it, and exec the executable. 
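As a minimal, illustrative sketch only (not the proposed implementation), and assuming the executor's pid has already been resolved from the given container_id, entering a running container's context on Linux could look roughly like the following; a real tool would also join the remaining namespaces (net, ipc, uts, user) and the container's cgroups:
{code}
// Hypothetical sketch: join a process's mount and PID namespaces, then exec.
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <sys/wait.h>
#include <unistd.h>

#include <cstdio>
#include <string>

int main(int argc, char** argv)
{
  if (argc < 3) {
    fprintf(stderr, "Usage: %s <executor-pid> <command> [args...]\n", argv[0]);
    return 1;
  }

  const std::string pid = argv[1];

  // Join the executor's mount and PID namespaces via /proc/<pid>/ns/*.
  for (const char* ns : {"mnt", "pid"}) {
    std::string path = "/proc/" + pid + "/ns/" + ns;
    int fd = open(path.c_str(), O_RDONLY);
    if (fd < 0 || setns(fd, 0) != 0) {
      perror(path.c_str());
      return 1;
    }
    close(fd);
  }

  // Fork so the child is actually created inside the joined PID namespace,
  // then exec the requested command there.
  pid_t child = fork();
  if (child == 0) {
    execvp(argv[2], &argv[2]);
    perror("execvp");
    _exit(127);
  }

  int status = 0;
  waitpid(child, &status, 0);
  return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}
{code}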
e.g., This need only support (initially) containers created with the MesosContainerizer and will support all isolators shipped with Mesos, i.e., it should find and enter the cgroups and namespaces for the running executor of the specified container."""," mesos-container-exec --container_id=abc123 [--] /path/to/executable [arg1 ...] ",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2350","02/12/2015 05:44:17",5,"Add support for MesosContainerizerLaunch to chroot to a specified path ""In preparation for the MesosContainerizer to support a filesystem isolator the MesosContainerizerLauncher must support chrooting. Optionally, it should also configure the chroot environment by (re-)mounting special filesystems such as /proc and /sys and making device nodes such as /dev/zero, etc., such that the chroot environment is functional.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2353","02/14/2015 01:20:40",5,"Improve performance of the state.json endpoint for large clusters. ""The master's state.json endpoint consistently takes a long time to compute the JSON result, for large clusters: This can cause the master to get backlogged if there are many state.json requests in flight. Looking at {{perf}} data, it seems most of the time is spent doing memory allocation / de-allocation. This ticket will try to capture any low hanging fruit to speed this up. Possibly we can leverage moves if they are not already being used by the compiler."""," $ time curl -s -o /dev/null localhost:5050/master/state.json Mon Jan 26 22:38:50 UTC 2015 real 0m13.174s user 0m0.003s sys 0m0.022s ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2367","02/18/2015 02:35:38",5,"Improve slave resiliency in the face of orphan containers ""Right now there's a case where a misbehaving executor can cause a slave process to flap: {panel:title=Quote From [~jieyu]} {quote} 1) User tries to kill an instance 2) Slave sends {{KillTaskMessage}} to executor 3) Executor sends kill signals to task processes 4) Executor sends {{TASK_KILLED}} to slave 5) Slave updates container cpu limit to be 0.01 cpus 6) A user-process is still processing the kill signal 7) the task process cannot exit since it has too little cpu share and is throttled 8) Executor itself terminates 9) Slave tries to destroy the container, but cannot because the user-process is stuck in the exit path. 10) Slave restarts, and is constantly flapping because it cannot kill orphan containers {quote} {panel} The slave's orphan container handling should be improved to deal with this case despite ill-behaved users (framework writers).""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2373","02/18/2015 23:14:00",2,"DRFSorter needs to distinguish resources from different slaves. ""Currently the {{DRFSorter}} aggregates total and allocated resources across multiple slaves, which only works for scalar resources. We need to distinguish resources from different slaves. Suppose we have 2 slaves and 1 framework. The framework is allocated all resources from both slaves. To provide some context, this issue came up while trying to reserve all unreserved resources from every offer. 
Suppose the slave resources are the same as above: {quote} Slave1: {{cpus(\*):2; mem(\*):512; ports(\*):\[31000-32000\]}} Slave2: {{cpus(\*):2; mem(\*):512; ports(\*):\[31000-32000\]}} {quote} Initial (incorrect) total resources in the DRFSorter is: {quote} {{cpus(\*):4; mem(\*):1024; ports(\*):\[31000-32000\]}} {quote} We receive 2 offers, 1 from each slave: {quote} Offer1: {{cpus(\*):2; mem(\*):512; ports(\*):\[31000-32000\]}} Offer2: {{cpus(\*):2; mem(\*):512; ports(\*):\[31000-32000\]}} {quote} At this point, the resources allocated for the framework is: {quote} {{cpus(\*):4; mem(\*):1024; ports(\*):\[31000-32000\]}} {quote} After first {{RESERVE}} operation with Offer1: The allocated resources for the framework becomes: {quote} {{cpus(\*):2; mem(\*):512; cpus(role):2; mem(role):512; ports(role):\[31000-32000\]}} {quote} During second {{RESERVE}} operation with Offer2: {code:title=HierarchicalAllocatorProcess::updateAllocation} // ... FrameworkSorter* frameworkSorter = frameworkSorters[frameworks\[frameworkId\].role]; Resources allocation = frameworkSorter->allocation(frameworkId.value()); // Update the allocated resources. Try updatedAllocation = allocation.apply(operations); CHECK_SOME(updatedAllocation); // ... {code} {{allocation}} in the above code is: {quote} {{cpus(\*):2; mem(\*):512; cpus(role):2; mem(role):512; ports(role):\[31000-32000\]}} {quote} We try to {{apply}} a {{RESERVE}} operation and we fail to find {{ports(\*):\[31000-32000\]}} which leads to the {{CHECK}} fail at {{CHECK_SOME(updatedAllocation);}}"""," Resources slaveResources = Resources::parse(""""cpus:2;mem:512;ports:[31000-32000]"""").get(); DRFSorter sorter; sorter.add(slaveResources); // Add slave1 resources sorter.add(slaveResources); // Add slave2 resources // Total resources in sorter at this point is // cpus(*):4; mem(*):1024; ports(*):[31000-32000]. // The scalar resources get aggregated correctly but ports do not. sorter.add(""""F""""); // The 2 calls to allocated only works because we simply do: // allocation[name] += resources; // without checking that the 'resources' is available in the total. sorter.allocated(""""F"""", slaveResources); sorter.allocated(""""F"""", slaveResources); // At this point, sorter.allocation(""""F"""") is: // cpus(*):4; mem(*):1024; ports(*):[31000-32000]. for (const Offer& offer : offers) { Resources unreserved = offer.resources().unreserved(); Resources reserved = unreserved.flatten(role, Resource::FRAMEWORK); Offer::Operation reserve; reserve.set_type(Offer::Operation::RESERVE); reserve.mutable_reserve()->mutable_resources()->CopyFrom(reserved); driver->acceptOffers({offer.id()}, {reserve}); } // ... FrameworkSorter* frameworkSorter = frameworkSorters[frameworks\[frameworkId\].role]; Resources allocation = frameworkSorter->allocation(frameworkId.value()); // Update the allocated resources. Try updatedAllocation = allocation.apply(operations); CHECK_SOME(updatedAllocation); // ... ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2382","02/23/2015 10:51:10",1,"replace unsafe ""find | xargs"" with ""find -exec"" ""The problem exists in 1194:src/Makefile.am 47:src/tests/balloon_framework_test.sh The current """"find | xargs rm -rf"""" in the Makefile could potentially destroy data if mesos source was in a folder with a space in the name. E.g. 
if you for some reason checkout mesos to """"/ mesos"""" the command in src/Makefile.am would turn into a rm -rf / """"find | xargs"""" should be NUL delimited with """"find -print0 | xargs -0"""" for safer execution or can just be replaced with the find build-in option """"find -exec '{}' \+"""" which behaves similar to xargs. There was a second occurrence of this in a test script, though in that case it would only rmdir empty folders, so is less critical. I submitted a PR here: https://github.com/apache/mesos/pull/36 ""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2387","02/23/2015 20:22:21",1,"SlaveTest.TaskLaunchContainerizerUpdateFails is flaky ""Observed on internal CI """," [ RUN ] SlaveTest.TaskLaunchContainerizerUpdateFails Using temporary directory '/tmp/SlaveTest_TaskLaunchContainerizerUpdateFails_tUjtcI' I0222 04:59:56.568491 21813 process.cpp:2117] Dropped / Lost event for PID: slave(52)@192.168.122.68:39461 I0222 04:59:56.595433 21791 leveldb.cpp:175] Opened db in 27.59732ms I0222 04:59:56.603965 21791 leveldb.cpp:182] Compacted db in 8.49192ms I0222 04:59:56.604019 21791 leveldb.cpp:197] Created db iterator in 19206ns I0222 04:59:56.604037 21791 leveldb.cpp:203] Seeked to beginning of db in 1802ns I0222 04:59:56.604046 21791 leveldb.cpp:272] Iterated through 0 keys in the db in 467ns I0222 04:59:56.604081 21791 replica.cpp:743] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0222 04:59:56.607413 21809 recover.cpp:448] Starting replica recovery I0222 04:59:56.607687 21809 recover.cpp:474] Replica is in 4 status I0222 04:59:56.609011 21809 replica.cpp:640] Replica in 4 status received a broadcasted recover request I0222 04:59:56.609262 21809 recover.cpp:194] Received a recover response from a replica in 4 status I0222 04:59:56.609709 21809 recover.cpp:565] Updating replica status to 3 I0222 04:59:56.610749 21811 master.cpp:347] Master 20150222-045956-1148889280-39461-21791 (centos-7) started on 192.168.122.68:39461 I0222 04:59:56.610791 21811 master.cpp:393] Master only allowing authenticated frameworks to register I0222 04:59:56.610802 21811 master.cpp:398] Master only allowing authenticated slaves to register I0222 04:59:56.610821 21811 credentials.hpp:36] Loading credentials for authentication from '/tmp/SlaveTest_TaskLaunchContainerizerUpdateFails_tUjtcI/credentials' I0222 04:59:56.611042 21811 master.cpp:440] Authorization enabled I0222 04:59:56.612329 21811 hierarchical.hpp:286] Initialized hierarchical allocator process I0222 04:59:56.612416 21811 whitelist_watcher.cpp:78] No whitelist given I0222 04:59:56.613005 21811 master.cpp:1354] The newly elected leader is master@192.168.122.68:39461 with id 20150222-045956-1148889280-39461-21791 I0222 04:59:56.613034 21811 master.cpp:1367] Elected as the leading master! 
I0222 04:59:56.613050 21811 master.cpp:1185] Recovering from registrar I0222 04:59:56.613229 21811 registrar.cpp:312] Recovering registrar I0222 04:59:56.622866 21809 leveldb.cpp:305] Persisting metadata (8 bytes) to leveldb took 12.988429ms I0222 04:59:56.622913 21809 replica.cpp:322] Persisted replica status to 3 I0222 04:59:56.623118 21809 recover.cpp:474] Replica is in 3 status I0222 04:59:56.624419 21809 replica.cpp:640] Replica in 3 status received a broadcasted recover request I0222 04:59:56.624685 21809 recover.cpp:194] Received a recover response from a replica in 3 status I0222 04:59:56.625200 21809 recover.cpp:565] Updating replica status to 1 I0222 04:59:56.635154 21809 leveldb.cpp:305] Persisting metadata (8 bytes) to leveldb took 9.799671ms I0222 04:59:56.635197 21809 replica.cpp:322] Persisted replica status to 1 I0222 04:59:56.635296 21809 recover.cpp:579] Successfully joined the Paxos group I0222 04:59:56.635426 21809 recover.cpp:463] Recover process terminated I0222 04:59:56.635812 21809 log.cpp:659] Attempting to start the writer I0222 04:59:56.637075 21809 replica.cpp:476] Replica received implicit promise request with proposal 1 I0222 04:59:56.648674 21809 leveldb.cpp:305] Persisting metadata (8 bytes) to leveldb took 11.566146ms I0222 04:59:56.648717 21809 replica.cpp:344] Persisted promised to 1 I0222 04:59:56.649456 21809 coordinator.cpp:229] Coordinator attemping to fill missing position I0222 04:59:56.650800 21809 replica.cpp:377] Replica received explicit promise request for position 0 with proposal 2 I0222 04:59:56.659916 21809 leveldb.cpp:342] Persisting action (8 bytes) to leveldb took 9.078258ms I0222 04:59:56.659981 21809 replica.cpp:678] Persisted action at 0 I0222 04:59:56.661075 21809 replica.cpp:510] Replica received write request for position 0 I0222 04:59:56.661129 21809 leveldb.cpp:437] Reading position from leveldb took 26387ns I0222 04:59:56.671227 21809 leveldb.cpp:342] Persisting action (14 bytes) to leveldb took 10.064302ms I0222 04:59:56.671262 21809 replica.cpp:678] Persisted action at 0 I0222 04:59:56.671821 21809 replica.cpp:657] Replica received learned notice for position 0 I0222 04:59:56.684200 21809 leveldb.cpp:342] Persisting action (16 bytes) to leveldb took 12.346897ms I0222 04:59:56.684242 21809 replica.cpp:678] Persisted action at 0 I0222 04:59:56.684262 21809 replica.cpp:663] Replica learned 1 action at position 0 I0222 04:59:56.684875 21809 log.cpp:675] Writer started with ending position 0 I0222 04:59:56.685932 21809 leveldb.cpp:437] Reading position from leveldb took 27308ns I0222 04:59:56.688256 21809 registrar.cpp:345] Successfully fetched the registry (0B) in 74.992128ms I0222 04:59:56.688344 21809 registrar.cpp:444] Applied 1 operations in 19566ns; attempting to update the 'registry' I0222 04:59:56.690690 21809 log.cpp:683] Attempting to append 129 bytes to the log I0222 04:59:56.690848 21809 coordinator.cpp:339] Coordinator attempting to write 2 action at position 1 I0222 04:59:56.691661 21809 replica.cpp:510] Replica received write request for position 1 I0222 04:59:56.701247 21809 leveldb.cpp:342] Persisting action (148 bytes) to leveldb took 9.550768ms I0222 04:59:56.701292 21809 replica.cpp:678] Persisted action at 1 I0222 04:59:56.702066 21809 replica.cpp:657] Replica received learned notice for position 1 I0222 04:59:56.712136 21809 leveldb.cpp:342] Persisting action (150 bytes) to leveldb took 10.041696ms I0222 04:59:56.712175 21809 replica.cpp:678] Persisted action at 1 I0222 04:59:56.712198 21809 replica.cpp:663] 
Replica learned 2 action at position 1 I0222 04:59:56.713289 21809 registrar.cpp:489] Successfully updated the 'registry' in 24.890112ms I0222 04:59:56.713397 21809 registrar.cpp:375] Successfully recovered registrar I0222 04:59:56.713537 21809 log.cpp:702] Attempting to truncate the log to 1 I0222 04:59:56.713795 21809 master.cpp:1212] Recovered 0 slaves from the Registry (93B) ; allowing 10mins for slaves to re-register I0222 04:59:56.713871 21809 coordinator.cpp:339] Coordinator attempting to write 3 action at position 2 I0222 04:59:56.714879 21809 replica.cpp:510] Replica received write request for position 2 I0222 04:59:56.725225 21809 leveldb.cpp:342] Persisting action (16 bytes) to leveldb took 10.311704ms I0222 04:59:56.725270 21809 replica.cpp:678] Persisted action at 2 I0222 04:59:56.726066 21809 replica.cpp:657] Replica received learned notice for position 2 I0222 04:59:56.734110 21809 leveldb.cpp:342] Persisting action (18 bytes) to leveldb took 8.012327ms I0222 04:59:56.734180 21809 leveldb.cpp:400] Deleting ~1 keys from leveldb took 36578ns I0222 04:59:56.734201 21809 replica.cpp:678] Persisted action at 2 I0222 04:59:56.734221 21809 replica.cpp:663] Replica learned 3 action at position 2 I0222 04:59:56.747556 21809 slave.cpp:173] Slave started on 53)@192.168.122.68:39461 I0222 04:59:56.747601 21809 credentials.hpp:84] Loading credential for authentication from '/tmp/SlaveTest_TaskLaunchContainerizerUpdateFails_qkhaJP/credential' I0222 04:59:56.747774 21809 slave.cpp:280] Slave using credential for: test-principal I0222 04:59:56.748021 21809 slave.cpp:298] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0222 04:59:56.748682 21809 slave.cpp:327] Slave hostname: centos-7 I0222 04:59:56.748705 21809 slave.cpp:328] Slave checkpoint: false W0222 04:59:56.748714 21809 slave.cpp:330] Disabling checkpointing is deprecated and the --checkpoint flag will be removed in a future release. 
Please avoid using this flag I0222 04:59:56.749826 21809 state.cpp:34] Recovering state from '/tmp/SlaveTest_TaskLaunchContainerizerUpdateFails_qkhaJP/meta' I0222 04:59:56.750191 21809 status_update_manager.cpp:196] Recovering status update manager I0222 04:59:56.750465 21809 slave.cpp:3775] Finished recovery I0222 04:59:56.751260 21809 slave.cpp:623] New master detected at master@192.168.122.68:39461 I0222 04:59:56.751349 21809 slave.cpp:686] Authenticating with master master@192.168.122.68:39461 I0222 04:59:56.751369 21809 slave.cpp:691] Using default CRAM-MD5 authenticatee I0222 04:59:56.751502 21809 slave.cpp:659] Detecting new master I0222 04:59:56.751596 21809 status_update_manager.cpp:170] Pausing sending status updates I0222 04:59:56.751668 21809 authenticatee.hpp:138] Creating new client SASL connection I0222 04:59:56.752781 21809 master.cpp:3811] Authenticating slave(53)@192.168.122.68:39461 I0222 04:59:56.752820 21809 master.cpp:3822] Using default CRAM-MD5 authenticator I0222 04:59:56.753124 21809 authenticator.hpp:169] Creating new server SASL connection I0222 04:59:56.755609 21809 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0222 04:59:56.755641 21809 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0222 04:59:56.755708 21809 authenticator.hpp:275] Received SASL authentication start I0222 04:59:56.755751 21809 authenticator.hpp:397] Authentication requires more steps I0222 04:59:56.755813 21809 authenticatee.hpp:275] Received SASL authentication step I0222 04:59:56.755887 21809 authenticator.hpp:303] Received SASL authentication step I0222 04:59:56.755920 21809 auxprop.cpp:98] Request to lookup properties for user: 'test-principal' realm: 'centos-7' server FQDN: 'centos-7' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0222 04:59:56.755934 21809 auxprop.cpp:170] Looking up auxiliary property '*userPassword' I0222 04:59:56.756005 21809 auxprop.cpp:170] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0222 04:59:56.756036 21809 auxprop.cpp:98] Request to lookup properties for user: 'test-principal' realm: 'centos-7' server FQDN: 'centos-7' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0222 04:59:56.756047 21809 auxprop.cpp:120] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0222 04:59:56.756054 21809 auxprop.cpp:120] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0222 04:59:56.756068 21809 authenticator.hpp:389] Authentication success I0222 04:59:56.756155 21809 authenticatee.hpp:315] Authentication success I0222 04:59:56.756219 21809 master.cpp:3869] Successfully authenticated principal 'test-principal' at slave(53)@192.168.122.68:39461 I0222 04:59:56.756503 21809 slave.cpp:757] Successfully authenticated with master master@192.168.122.68:39461 I0222 04:59:56.756611 21809 slave.cpp:1089] Will retry registration in 11.221976ms if necessary I0222 04:59:56.756876 21809 master.cpp:2936] Registering slave at slave(53)@192.168.122.68:39461 (centos-7) with id 20150222-045956-1148889280-39461-21791-S0 I0222 04:59:56.757323 21809 registrar.cpp:444] Applied 1 operations in 70787ns; attempting to update the 'registry' I0222 04:59:56.759790 21809 log.cpp:683] Attempting to append 299 bytes to the log I0222 04:59:56.760000 21809 coordinator.cpp:339] Coordinator attempting to write 2 action at position 3 I0222 04:59:56.760920 21809 replica.cpp:510] 
Replica received write request for position 3 I0222 04:59:56.762037 21791 sched.cpp:154] Version: 0.22.0 I0222 04:59:56.762763 21806 sched.cpp:251] New master detected at master@192.168.122.68:39461 I0222 04:59:56.762835 21806 sched.cpp:307] Authenticating with master master@192.168.122.68:39461 I0222 04:59:56.762856 21806 sched.cpp:314] Using default CRAM-MD5 authenticatee I0222 04:59:56.763082 21806 authenticatee.hpp:138] Creating new client SASL connection I0222 04:59:56.763753 21806 master.cpp:3811] Authenticating scheduler-d9c22c4e-8dec-42a6-a350-a98472642891@192.168.122.68:39461 I0222 04:59:56.763784 21806 master.cpp:3822] Using default CRAM-MD5 authenticator I0222 04:59:56.764040 21806 authenticator.hpp:169] Creating new server SASL connection I0222 04:59:56.764624 21806 authenticatee.hpp:229] Received SASL authentication mechanisms: CRAM-MD5 I0222 04:59:56.764653 21806 authenticatee.hpp:255] Attempting to authenticate with mechanism 'CRAM-MD5' I0222 04:59:56.764719 21806 authenticator.hpp:275] Received SASL authentication start I0222 04:59:56.764758 21806 authenticator.hpp:397] Authentication requires more steps I0222 04:59:56.764819 21806 authenticatee.hpp:275] Received SASL authentication step I0222 04:59:56.764889 21806 authenticator.hpp:303] Received SASL authentication step I0222 04:59:56.764911 21806 auxprop.cpp:98] Request to lookup properties for user: 'test-principal' realm: 'centos-7' server FQDN: 'centos-7' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0222 04:59:56.764922 21806 auxprop.cpp:170] Looking up auxiliary property '*userPassword' I0222 04:59:56.764974 21806 auxprop.cpp:170] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0222 04:59:56.765005 21806 auxprop.cpp:98] Request to lookup properties for user: 'test-principal' realm: 'centos-7' server FQDN: 'centos-7' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0222 04:59:56.765017 21806 auxprop.cpp:120] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0222 04:59:56.765023 21806 auxprop.cpp:120] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0222 04:59:56.765036 21806 authenticator.hpp:389] Authentication success I0222 04:59:56.765120 21806 authenticatee.hpp:315] Authentication success I0222 04:59:56.765182 21806 master.cpp:3869] Successfully authenticated principal 'test-principal' at scheduler-d9c22c4e-8dec-42a6-a350-a98472642891@192.168.122.68:39461 I0222 04:59:56.765442 21806 sched.cpp:395] Successfully authenticated with master master@192.168.122.68:39461 I0222 04:59:56.765465 21806 sched.cpp:518] Sending registration request to master@192.168.122.68:39461 I0222 04:59:56.765522 21806 sched.cpp:551] Will retry registration in 1.283564292secs if necessary I0222 04:59:56.765637 21806 master.cpp:1572] Received registration request for framework 'default' at scheduler-d9c22c4e-8dec-42a6-a350-a98472642891@192.168.122.68:39461 I0222 04:59:56.765699 21806 master.cpp:1433] Authorizing framework principal 'test-principal' to receive offers for role '*' I0222 04:59:56.766120 21806 master.cpp:1636] Registering framework 20150222-045956-1148889280-39461-21791-0000 (default) at scheduler-d9c22c4e-8dec-42a6-a350-a98472642891@192.168.122.68:39461 I0222 04:59:56.766572 21806 hierarchical.hpp:320] Added framework 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.766598 21806 hierarchical.hpp:831] No resources available to allocate! 
I0222 04:59:56.766609 21806 hierarchical.hpp:738] Performed allocation for 0 slaves in 15902ns I0222 04:59:56.766753 21806 sched.cpp:445] Framework registered with 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.766790 21806 sched.cpp:459] Scheduler::registered took 15076ns I0222 04:59:56.773710 21806 slave.cpp:1089] Will retry registration in 3.454005ms if necessary I0222 04:59:56.773900 21806 master.cpp:2924] Ignoring register slave message from slave(53)@192.168.122.68:39461 (centos-7) as admission is already in progress I0222 04:59:56.775297 21809 leveldb.cpp:342] Persisting action (318 bytes) to leveldb took 14.319807ms I0222 04:59:56.775344 21809 replica.cpp:678] Persisted action at 3 I0222 04:59:56.776139 21809 replica.cpp:657] Replica received learned notice for position 3 I0222 04:59:56.778630 21806 slave.cpp:1089] Will retry registration in 32.764468ms if necessary I0222 04:59:56.778779 21806 master.cpp:2924] Ignoring register slave message from slave(53)@192.168.122.68:39461 (centos-7) as admission is already in progress I0222 04:59:56.783778 21809 leveldb.cpp:342] Persisting action (320 bytes) to leveldb took 7.609533ms I0222 04:59:56.783828 21809 replica.cpp:678] Persisted action at 3 I0222 04:59:56.783849 21809 replica.cpp:663] Replica learned 2 action at position 3 I0222 04:59:56.785058 21809 registrar.cpp:489] Successfully updated the 'registry' in 27.669248ms I0222 04:59:56.785274 21809 log.cpp:702] Attempting to truncate the log to 3 I0222 04:59:56.785815 21809 master.cpp:2993] Registered slave 20150222-045956-1148889280-39461-21791-S0 at slave(53)@192.168.122.68:39461 (centos-7) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0222 04:59:56.785913 21809 coordinator.cpp:339] Coordinator attempting to write 3 action at position 4 I0222 04:59:56.786267 21809 hierarchical.hpp:452] Added slave 20150222-045956-1148889280-39461-21791-S0 (centos-7) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] available) I0222 04:59:56.786600 21809 hierarchical.hpp:756] Performed allocation for slave 20150222-045956-1148889280-39461-21791-S0 in 292298ns I0222 04:59:56.786684 21809 slave.cpp:791] Registered with master master@192.168.122.68:39461; given slave ID 20150222-045956-1148889280-39461-21791-S0 I0222 04:59:56.786792 21809 slave.cpp:2830] Received ping from slave-observer(52)@192.168.122.68:39461 I0222 04:59:56.787230 21809 master.cpp:3753] Sending 1 offers to framework 20150222-045956-1148889280-39461-21791-0000 (default) at scheduler-d9c22c4e-8dec-42a6-a350-a98472642891@192.168.122.68:39461 I0222 04:59:56.787334 21809 status_update_manager.cpp:177] Resuming sending status updates I0222 04:59:56.788156 21809 sched.cpp:608] Scheduler::resourceOffers took 557128ns I0222 04:59:56.788936 21809 master.cpp:2266] Processing ACCEPT call for offers: [ 20150222-045956-1148889280-39461-21791-O0 ] on slave 20150222-045956-1148889280-39461-21791-S0 at slave(53)@192.168.122.68:39461 (centos-7) for framework 20150222-045956-1148889280-39461-21791-0000 (default) at scheduler-d9c22c4e-8dec-42a6-a350-a98472642891@192.168.122.68:39461 I0222 04:59:56.789000 21809 master.cpp:2110] Authorizing framework principal 'test-principal' to launch task 0 as user 'jenkins' W0222 04:59:56.790506 21809 validation.cpp:327] Executor default for task 0 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. 
W0222 04:59:56.790546 21809 validation.cpp:339] Executor default for task 0 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. I0222 04:59:56.790808 21809 master.hpp:821] Adding task 0 with resources cpus(*):1; mem(*):128 on slave 20150222-045956-1148889280-39461-21791-S0 (centos-7) I0222 04:59:56.790885 21809 master.cpp:2543] Launching task 0 of framework 20150222-045956-1148889280-39461-21791-0000 (default) at scheduler-d9c22c4e-8dec-42a6-a350-a98472642891@192.168.122.68:39461 with resources cpus(*):1; mem(*):128 on slave 20150222-045956-1148889280-39461-21791-S0 at slave(53)@192.168.122.68:39461 (centos-7) I0222 04:59:56.791201 21809 replica.cpp:510] Replica received write request for position 4 I0222 04:59:56.791610 21806 slave.cpp:1120] Got assigned task 0 for framework 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.792140 21806 slave.cpp:1230] Launching task 0 for framework 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.794872 21806 slave.cpp:4177] Launching executor default of framework 20150222-045956-1148889280-39461-21791-0000 in work directory '/tmp/SlaveTest_TaskLaunchContainerizerUpdateFails_qkhaJP/slaves/20150222-045956-1148889280-39461-21791-S0/frameworks/20150222-045956-1148889280-39461-21791-0000/executors/default/runs/753232b5-43ff-4fbf-b29a-0f76161132ab' I0222 04:59:56.796846 21806 exec.cpp:130] Version: 0.22.0 I0222 04:59:56.797173 21806 slave.cpp:1377] Queuing task '0' for executor default of framework '20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.797355 21806 slave.cpp:3132] Monitoring executor 'default' of framework '20150222-045956-1148889280-39461-21791-0000' in container '753232b5-43ff-4fbf-b29a-0f76161132ab' I0222 04:59:56.797570 21806 hierarchical.hpp:645] Recovered cpus(*):1; mem(*):896; disk(*):1024; ports(*):[31000-32000] (total allocatable: cpus(*):1; mem(*):896; disk(*):1024; ports(*):[31000-32000]) on slave 20150222-045956-1148889280-39461-21791-S0 from framework 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.797613 21806 hierarchical.hpp:681] Framework 20150222-045956-1148889280-39461-21791-0000 filtered slave 20150222-045956-1148889280-39461-21791-S0 for 5secs I0222 04:59:56.797796 21806 exec.cpp:180] Executor started at: executor(24)@192.168.122.68:39461 with pid 21791 I0222 04:59:56.798068 21806 slave.cpp:576] Successfully attached file '/tmp/SlaveTest_TaskLaunchContainerizerUpdateFails_qkhaJP/slaves/20150222-045956-1148889280-39461-21791-S0/frameworks/20150222-045956-1148889280-39461-21791-0000/executors/default/runs/753232b5-43ff-4fbf-b29a-0f76161132ab' I0222 04:59:56.798136 21806 slave.cpp:2140] Got registration for executor 'default' of framework 20150222-045956-1148889280-39461-21791-0000 from executor(24)@192.168.122.68:39461 E0222 04:59:56.798573 21806 slave.cpp:1445] Failed to update resources for container 753232b5-43ff-4fbf-b29a-0f76161132ab of executor 'default' of framework 20150222-045956-1148889280-39461-21791-0000, destroying container: update() failed I0222 04:59:56.798811 21806 slave.cpp:3190] Executor 'default' of framework 20150222-045956-1148889280-39461-21791-0000 exited with status 0 I0222 04:59:56.800436 21806 slave.cpp:2507] Handling status update TASK_LOST (UUID: 45243922-bcad-4e11-9a9f-db9213111a2a) for task 0 of framework 20150222-045956-1148889280-39461-21791-0000 from @0.0.0.0:0 I0222 04:59:56.800520 21806 slave.cpp:4485] Terminating task 0 I0222 04:59:56.801142 21806 master.cpp:3386] Executor 
default of framework 20150222-045956-1148889280-39461-21791-0000 on slave 20150222-045956-1148889280-39461-21791-S0 at slave(53)@192.168.122.68:39461 (centos-7) exited with status 0 I0222 04:59:56.801211 21806 master.cpp:4712] Removing executor 'default' with resources of framework 20150222-045956-1148889280-39461-21791-0000 on slave 20150222-045956-1148889280-39461-21791-S0 at slave(53)@192.168.122.68:39461 (centos-7) I0222 04:59:56.801378 21806 status_update_manager.cpp:316] Received status update TASK_LOST (UUID: 45243922-bcad-4e11-9a9f-db9213111a2a) for task 0 of framework 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.801412 21806 status_update_manager.cpp:493] Creating StatusUpdate stream for task 0 of framework 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.801574 21806 status_update_manager.cpp:370] Forwarding update TASK_LOST (UUID: 45243922-bcad-4e11-9a9f-db9213111a2a) for task 0 of framework 20150222-045956-1148889280-39461-21791-0000 to the slave I0222 04:59:56.801831 21806 slave.cpp:2750] Forwarding the update TASK_LOST (UUID: 45243922-bcad-4e11-9a9f-db9213111a2a) for task 0 of framework 20150222-045956-1148889280-39461-21791-0000 to master@192.168.122.68:39461 I0222 04:59:56.802109 21805 master.cpp:3293] Status update TASK_LOST (UUID: 45243922-bcad-4e11-9a9f-db9213111a2a) for task 0 of framework 20150222-045956-1148889280-39461-21791-0000 from slave 20150222-045956-1148889280-39461-21791-S0 at slave(53)@192.168.122.68:39461 (centos-7) I0222 04:59:56.802145 21805 master.cpp:3334] Forwarding status update TASK_LOST (UUID: 45243922-bcad-4e11-9a9f-db9213111a2a) for task 0 of framework 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.802266 21805 master.cpp:4616] Updating the latest state of task 0 of framework 20150222-045956-1148889280-39461-21791-0000 to TASK_LOST I0222 04:59:56.802685 21805 sched.cpp:714] Scheduler::statusUpdate took 40465ns I0222 04:59:56.802821 21805 hierarchical.hpp:645] Recovered cpus(*):1; mem(*):128 (total allocatable: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) on slave 20150222-045956-1148889280-39461-21791-S0 from framework 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.803130 21805 master.cpp:4683] Removing task 0 with resources cpus(*):1; mem(*):128 of framework 20150222-045956-1148889280-39461-21791-0000 on slave 20150222-045956-1148889280-39461-21791-S0 at slave(53)@192.168.122.68:39461 (centos-7) I0222 04:59:56.803268 21805 master.cpp:2780] Forwarding status update acknowledgement 45243922-bcad-4e11-9a9f-db9213111a2a for task 0 of framework 20150222-045956-1148889280-39461-21791-0000 (default) at scheduler-d9c22c4e-8dec-42a6-a350-a98472642891@192.168.122.68:39461 to slave 20150222-045956-1148889280-39461-21791-S0 at slave(53)@192.168.122.68:39461 (centos-7) I0222 04:59:56.803473 21791 sched.cpp:1585] Asked to stop the driver I0222 04:59:56.803547 21791 master.cpp:785] Master terminating I0222 04:59:56.804844 21791 process.cpp:2117] Dropped / Lost event for PID: master@192.168.122.68:39461 I0222 04:59:56.804921 21791 process.cpp:2117] Dropped / Lost event for PID: master@192.168.122.68:39461 I0222 04:59:56.805624 21812 sched.cpp:828] Stopping framework '20150222-045956-1148889280-39461-21791-0000' I0222 04:59:56.805675 21812 process.cpp:2117] Dropped / Lost event for PID: master@192.168.122.68:39461 I0222 04:59:56.807793 21812 process.cpp:2117] Dropped / Lost event for PID: log-coordinator(89)@192.168.122.68:39461 I0222 04:59:56.809552 21806 slave.cpp:2677] Status update manager 
successfully handled status update TASK_LOST (UUID: 45243922-bcad-4e11-9a9f-db9213111a2a) for task 0 of framework 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.809736 21806 slave.cpp:2915] master@192.168.122.68:39461 exited W0222 04:59:56.809759 21806 slave.cpp:2918] Master disconnected! Waiting for a new master to be elected I0222 04:59:56.809788 21806 status_update_manager.cpp:388] Received status update acknowledgement (UUID: 45243922-bcad-4e11-9a9f-db9213111a2a) for task 0 of framework 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.809855 21806 status_update_manager.cpp:524] Cleaning up status update stream for task 0 of framework 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.810042 21806 slave.cpp:2080] Status update manager successfully handled status update acknowledgement (UUID: 45243922-bcad-4e11-9a9f-db9213111a2a) for task 0 of framework 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.810088 21806 slave.cpp:4526] Completing task 0 I0222 04:59:56.810117 21806 slave.cpp:3299] Cleaning up executor 'default' of framework 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.810361 21806 slave.cpp:3378] Cleaning up framework 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.810509 21806 gc.cpp:55] Scheduling '/tmp/SlaveTest_TaskLaunchContainerizerUpdateFails_qkhaJP/slaves/20150222-045956-1148889280-39461-21791-S0/frameworks/20150222-045956-1148889280-39461-21791-0000/executors/default/runs/753232b5-43ff-4fbf-b29a-0f76161132ab' for gc 6.99999062248889days in the future I0222 04:59:56.810673 21806 gc.cpp:55] Scheduling '/tmp/SlaveTest_TaskLaunchContainerizerUpdateFails_qkhaJP/slaves/20150222-045956-1148889280-39461-21791-S0/frameworks/20150222-045956-1148889280-39461-21791-0000/executors/default' for gc 6.99999062130963days in the future I0222 04:59:56.810768 21806 gc.cpp:55] Scheduling '/tmp/SlaveTest_TaskLaunchContainerizerUpdateFails_qkhaJP/slaves/20150222-045956-1148889280-39461-21791-S0/frameworks/20150222-045956-1148889280-39461-21791-0000' for gc 6.9999906199763days in the future I0222 04:59:56.810861 21806 status_update_manager.cpp:278] Closing status update streams for framework 20150222-045956-1148889280-39461-21791-0000 I0222 04:59:56.817010 21809 leveldb.cpp:342] Persisting action (16 bytes) to leveldb took 25.747365ms I0222 04:59:56.817047 21809 replica.cpp:678] Persisted action at 4 I0222 04:59:56.817087 21809 process.cpp:2117] Dropped / Lost event for PID: (1371)@192.168.122.68:39461 I0222 04:59:56.817679 21791 slave.cpp:505] Slave terminating I0222 04:59:56.818411 21791 process.cpp:2117] Dropped / Lost event for PID: slave(53)@192.168.122.68:39461 I0222 04:59:56.818869 21791 process.cpp:2117] Dropped / Lost event for PID: scheduler-d9c22c4e-8dec-42a6-a350-a98472642891@192.168.122.68:39461 tests/slave_tests.cpp:1183: Failure Actual function call count doesn't match EXPECT_CALL(exec, registered(_, _, _, _))... Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] SlaveTest.TaskLaunchContainerizerUpdateFails (253 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2392","02/24/2015 02:22:48",3,"Rate limit slaves removals during master recovery. ""Much like we rate limit slave removals in the common path (MESOS-1148), we need to rate limit slave removals that occur during master recovery. When a master recovers and is using a strict registry, slaves that do not re-register within a timeout will be removed. 
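As a rough, generic illustration only (not the Mesos master code, which would do this asynchronously inside libprocess), capping removals to N per time window can be sketched like this:
{code}
// Hypothetical sketch: allow at most `permits` removals per `window`,
// deferring any excess until older permits age out of the window.
#include <chrono>
#include <deque>
#include <functional>
#include <iostream>
#include <queue>
#include <thread>

class RemovalLimiter
{
public:
  RemovalLimiter(int permits, std::chrono::milliseconds window)
    : permits_(permits), window_(window) {}

  // Queue a removal; it runs once a permit is available. For simplicity this
  // sketch blocks the caller while draining.
  void schedule(std::function<void()> removal)
  {
    pending_.push(std::move(removal));
    drain();
  }

private:
  void drain()
  {
    using clock = std::chrono::steady_clock;
    while (!pending_.empty()) {
      auto now = clock::now();
      // Forget permits that have aged out of the window.
      while (!issued_.empty() && now - issued_.front() >= window_) {
        issued_.pop_front();
      }
      if (static_cast<int>(issued_.size()) >= permits_) {
        // Over the limit: wait until the oldest permit expires.
        std::this_thread::sleep_until(issued_.front() + window_);
        continue;
      }
      issued_.push_back(clock::now());
      pending_.front()();
      pending_.pop();
    }
  }

  const int permits_;
  const std::chrono::milliseconds window_;
  std::deque<std::chrono::steady_clock::time_point> issued_;
  std::queue<std::function<void()>> pending_;
};

int main()
{
  RemovalLimiter limiter(1, std::chrono::milliseconds(500)); // 1 removal / 500ms
  for (int i = 0; i < 3; i++) {
    limiter.schedule([i] { std::cout << "removing slave " << i << "\n"; });
  }
}
{code}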
Currently there is a safeguard in place to abort when too many slaves have not re-registered. However, in the case of a transient partition, we don't want to remove large sections of slaves without rate limiting.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2401","02/25/2015 06:57:44",1,"MasterTest.ShutdownFrameworkWhileTaskRunning is flaky ""Looks like the executorShutdownTimeout() was called immediately after executorShutdown() was called! """," [ RUN ] MasterTest.ShutdownFrameworkWhileTaskRunning Using temporary directory '/tmp/MasterTest_ShutdownFrameworkWhileTaskRunning_sBd6vK' I0224 18:51:17.385068 30213 leveldb.cpp:176] Opened db in 1.262442ms I0224 18:51:17.386360 30213 leveldb.cpp:183] Compacted db in 985102ns I0224 18:51:17.387025 30213 leveldb.cpp:198] Created db iterator in 78043ns I0224 18:51:17.387420 30213 leveldb.cpp:204] Seeked to beginning of db in 25814ns I0224 18:51:17.387804 30213 leveldb.cpp:273] Iterated through 0 keys in the db in 25025ns I0224 18:51:17.388270 30213 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0224 18:51:17.389760 30227 recover.cpp:449] Starting replica recovery I0224 18:51:17.395699 30227 recover.cpp:475] Replica is in 4 status I0224 18:51:17.398294 30227 replica.cpp:641] Replica in 4 status received a broadcasted recover request I0224 18:51:17.398816 30227 recover.cpp:195] Received a recover response from a replica in 4 status I0224 18:51:17.402415 30230 recover.cpp:566] Updating replica status to 3 I0224 18:51:17.403473 30229 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 273857ns I0224 18:51:17.404093 30229 replica.cpp:323] Persisted replica status to 3 I0224 18:51:17.404930 30229 recover.cpp:475] Replica is in 3 status I0224 18:51:17.407995 30233 replica.cpp:641] Replica in 3 status received a broadcasted recover request I0224 18:51:17.410697 30231 recover.cpp:195] Received a recover response from a replica in 3 status I0224 18:51:17.415710 30230 recover.cpp:566] Updating replica status to 1 I0224 18:51:17.416987 30227 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 221966ns I0224 18:51:17.417579 30227 replica.cpp:323] Persisted replica status to 1 I0224 18:51:17.418803 30234 recover.cpp:580] Successfully joined the Paxos group I0224 18:51:17.419699 30227 recover.cpp:464] Recover process terminated I0224 18:51:17.430594 30234 master.cpp:349] Master 20150224-185117-2272962752-44950-30213 (fedora-19) started on 192.168.122.135:44950 I0224 18:51:17.431082 30234 master.cpp:395] Master only allowing authenticated frameworks to register I0224 18:51:17.431453 30234 master.cpp:400] Master only allowing authenticated slaves to register I0224 18:51:17.431828 30234 credentials.hpp:37] Loading credentials for authentication from '/tmp/MasterTest_ShutdownFrameworkWhileTaskRunning_sBd6vK/credentials' I0224 18:51:17.432740 30234 master.cpp:442] Authorization enabled I0224 18:51:17.434224 30229 hierarchical.hpp:287] Initialized hierarchical allocator process I0224 18:51:17.434994 30233 whitelist_watcher.cpp:79] No whitelist given I0224 18:51:17.440687 30234 master.cpp:1356] The newly elected leader is master@192.168.122.135:44950 with id 20150224-185117-2272962752-44950-30213 I0224 18:51:17.441764 30234 master.cpp:1369] Elected as the leading master! 
I0224 18:51:17.442430 30234 master.cpp:1187] Recovering from registrar I0224 18:51:17.443053 30229 registrar.cpp:313] Recovering registrar I0224 18:51:17.445468 30228 log.cpp:660] Attempting to start the writer I0224 18:51:17.449970 30233 replica.cpp:477] Replica received implicit promise request with proposal 1 I0224 18:51:17.451359 30233 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 339488ns I0224 18:51:17.451949 30233 replica.cpp:345] Persisted promised to 1 I0224 18:51:17.456845 30235 process.cpp:2117] Dropped / Lost event for PID: hierarchical-allocator(154)@192.168.122.135:44950 I0224 18:51:17.461741 30231 coordinator.cpp:230] Coordinator attemping to fill missing position I0224 18:51:17.464686 30228 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 I0224 18:51:17.465515 30228 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 170261ns I0224 18:51:17.465991 30228 replica.cpp:679] Persisted action at 0 I0224 18:51:17.470512 30229 replica.cpp:511] Replica received write request for position 0 I0224 18:51:17.471437 30229 leveldb.cpp:438] Reading position from leveldb took 139178ns I0224 18:51:17.472129 30229 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 141560ns I0224 18:51:17.472705 30229 replica.cpp:679] Persisted action at 0 I0224 18:51:17.476305 30228 replica.cpp:658] Replica received learned notice for position 0 I0224 18:51:17.477991 30228 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 208112ns I0224 18:51:17.478574 30228 replica.cpp:679] Persisted action at 0 I0224 18:51:17.479044 30228 replica.cpp:664] Replica learned 1 action at position 0 I0224 18:51:17.484371 30233 log.cpp:676] Writer started with ending position 0 I0224 18:51:17.487396 30233 leveldb.cpp:438] Reading position from leveldb took 96498ns I0224 18:51:17.498906 30233 registrar.cpp:346] Successfully fetched the registry (0B) in 55.234048ms I0224 18:51:17.499781 30233 registrar.cpp:445] Applied 1 operations in 97308ns; attempting to update the 'registry' I0224 18:51:17.503955 30231 log.cpp:684] Attempting to append 131 bytes to the log I0224 18:51:17.505009 30231 coordinator.cpp:340] Coordinator attempting to write 2 action at position 1 I0224 18:51:17.507428 30228 replica.cpp:511] Replica received write request for position 1 I0224 18:51:17.508517 30228 leveldb.cpp:343] Persisting action (150 bytes) to leveldb took 316570ns I0224 18:51:17.508985 30228 replica.cpp:679] Persisted action at 1 I0224 18:51:17.512902 30229 replica.cpp:658] Replica received learned notice for position 1 I0224 18:51:17.517261 30229 leveldb.cpp:343] Persisting action (152 bytes) to leveldb took 427860ns I0224 18:51:17.517470 30229 replica.cpp:679] Persisted action at 1 I0224 18:51:17.517796 30229 replica.cpp:664] Replica learned 2 action at position 1 I0224 18:51:17.532624 30232 registrar.cpp:490] Successfully updated the 'registry' in 32.31104ms I0224 18:51:17.533957 30228 log.cpp:703] Attempting to truncate the log to 1 I0224 18:51:17.534366 30228 coordinator.cpp:340] Coordinator attempting to write 3 action at position 2 I0224 18:51:17.536684 30227 replica.cpp:511] Replica received write request for position 2 I0224 18:51:17.537406 30227 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 196455ns I0224 18:51:17.537946 30227 replica.cpp:679] Persisted action at 2 I0224 18:51:17.537695 30232 registrar.cpp:376] Successfully recovered registrar I0224 18:51:17.544136 30231 master.cpp:1214] Recovered 0 slaves from the Registry (95B) ; 
allowing 10mins for slaves to re-register I0224 18:51:17.546041 30227 replica.cpp:658] Replica received learned notice for position 2 I0224 18:51:17.546728 30227 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 192442ns I0224 18:51:17.547058 30227 leveldb.cpp:401] Deleting ~1 keys from leveldb took 61064ns I0224 18:51:17.547363 30227 replica.cpp:679] Persisted action at 2 I0224 18:51:17.547669 30227 replica.cpp:664] Replica learned 3 action at position 2 I0224 18:51:17.565460 30234 slave.cpp:174] Slave started on 138)@192.168.122.135:44950 I0224 18:51:17.566038 30234 credentials.hpp:85] Loading credential for authentication from '/tmp/MasterTest_ShutdownFrameworkWhileTaskRunning_lRugms/credential' I0224 18:51:17.566584 30234 slave.cpp:281] Slave using credential for: test-principal I0224 18:51:17.567198 30234 slave.cpp:299] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0224 18:51:17.567930 30234 slave.cpp:328] Slave hostname: fedora-19 I0224 18:51:17.568172 30234 slave.cpp:329] Slave checkpoint: false W0224 18:51:17.568435 30234 slave.cpp:331] Disabling checkpointing is deprecated and the --checkpoint flag will be removed in a future release. Please avoid using this flag I0224 18:51:17.570539 30227 state.cpp:35] Recovering state from '/tmp/MasterTest_ShutdownFrameworkWhileTaskRunning_lRugms/meta' I0224 18:51:17.573499 30232 status_update_manager.cpp:197] Recovering status update manager I0224 18:51:17.574209 30234 slave.cpp:3775] Finished recovery I0224 18:51:17.576277 30229 status_update_manager.cpp:171] Pausing sending status updates I0224 18:51:17.576680 30234 slave.cpp:624] New master detected at master@192.168.122.135:44950 I0224 18:51:17.577131 30234 slave.cpp:687] Authenticating with master master@192.168.122.135:44950 I0224 18:51:17.577385 30234 slave.cpp:692] Using default CRAM-MD5 authenticatee I0224 18:51:17.577945 30228 authenticatee.hpp:139] Creating new client SASL connection I0224 18:51:17.578837 30234 slave.cpp:660] Detecting new master I0224 18:51:17.579270 30228 master.cpp:3813] Authenticating slave(138)@192.168.122.135:44950 I0224 18:51:17.579900 30228 master.cpp:3824] Using default CRAM-MD5 authenticator I0224 18:51:17.580572 30228 authenticator.hpp:170] Creating new server SASL connection I0224 18:51:17.581501 30231 authenticatee.hpp:230] Received SASL authentication mechanisms: CRAM-MD5 I0224 18:51:17.581805 30231 authenticatee.hpp:256] Attempting to authenticate with mechanism 'CRAM-MD5' I0224 18:51:17.582222 30228 authenticator.hpp:276] Received SASL authentication start I0224 18:51:17.582531 30228 authenticator.hpp:398] Authentication requires more steps I0224 18:51:17.582945 30230 authenticatee.hpp:276] Received SASL authentication step I0224 18:51:17.583351 30228 authenticator.hpp:304] Received SASL authentication step I0224 18:51:17.583643 30228 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0224 18:51:17.583911 30228 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0224 18:51:17.584241 30228 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0224 18:51:17.584517 30228 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0224 18:51:17.584787 30228 
auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0224 18:51:17.585075 30228 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0224 18:51:17.585358 30228 authenticator.hpp:390] Authentication success I0224 18:51:17.585750 30233 authenticatee.hpp:316] Authentication success I0224 18:51:17.586354 30232 master.cpp:3871] Successfully authenticated principal 'test-principal' at slave(138)@192.168.122.135:44950 I0224 18:51:17.590953 30234 slave.cpp:758] Successfully authenticated with master master@192.168.122.135:44950 I0224 18:51:17.591686 30233 master.cpp:2938] Registering slave at slave(138)@192.168.122.135:44950 (fedora-19) with id 20150224-185117-2272962752-44950-30213-S0 I0224 18:51:17.592718 30233 registrar.cpp:445] Applied 1 operations in 100358ns; attempting to update the 'registry' I0224 18:51:17.595989 30227 log.cpp:684] Attempting to append 302 bytes to the log I0224 18:51:17.596757 30227 coordinator.cpp:340] Coordinator attempting to write 2 action at position 3 I0224 18:51:17.599280 30227 replica.cpp:511] Replica received write request for position 3 I0224 18:51:17.599481 30234 slave.cpp:1090] Will retry registration in 12.331173ms if necessary I0224 18:51:17.601940 30227 leveldb.cpp:343] Persisting action (321 bytes) to leveldb took 999045ns I0224 18:51:17.602339 30227 replica.cpp:679] Persisted action at 3 I0224 18:51:17.612349 30229 replica.cpp:658] Replica received learned notice for position 3 I0224 18:51:17.612934 30229 leveldb.cpp:343] Persisting action (323 bytes) to leveldb took 152139ns I0224 18:51:17.613471 30229 replica.cpp:679] Persisted action at 3 I0224 18:51:17.613796 30229 replica.cpp:664] Replica learned 2 action at position 3 I0224 18:51:17.615980 30229 master.cpp:2926] Ignoring register slave message from slave(138)@192.168.122.135:44950 (fedora-19) as admission is already in progress I0224 18:51:17.614302 30233 slave.cpp:1090] Will retry registration in 11.014835ms if necessary I0224 18:51:17.617490 30234 registrar.cpp:490] Successfully updated the 'registry' in 24.179968ms I0224 18:51:17.618989 30234 master.cpp:2995] Registered slave 20150224-185117-2272962752-44950-30213-S0 at slave(138)@192.168.122.135:44950 (fedora-19) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0224 18:51:17.619567 30233 hierarchical.hpp:455] Added slave 20150224-185117-2272962752-44950-30213-S0 (fedora-19) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] available) I0224 18:51:17.621080 30233 hierarchical.hpp:834] No resources available to allocate! 
I0224 18:51:17.621441 30233 hierarchical.hpp:759] Performed allocation for slave 20150224-185117-2272962752-44950-30213-S0 in 544608ns I0224 18:51:17.619704 30229 slave.cpp:792] Registered with master master@192.168.122.135:44950; given slave ID 20150224-185117-2272962752-44950-30213-S0 I0224 18:51:17.622195 30229 slave.cpp:2830] Received ping from slave-observer(125)@192.168.122.135:44950 I0224 18:51:17.622385 30227 status_update_manager.cpp:178] Resuming sending status updates I0224 18:51:17.620266 30232 log.cpp:703] Attempting to truncate the log to 3 I0224 18:51:17.623522 30232 coordinator.cpp:340] Coordinator attempting to write 3 action at position 4 I0224 18:51:17.624835 30229 replica.cpp:511] Replica received write request for position 4 I0224 18:51:17.625727 30229 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 259831ns I0224 18:51:17.626122 30229 replica.cpp:679] Persisted action at 4 I0224 18:51:17.627686 30227 replica.cpp:658] Replica received learned notice for position 4 I0224 18:51:17.628228 30227 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 93777ns I0224 18:51:17.628785 30227 leveldb.cpp:401] Deleting ~2 keys from leveldb took 57660ns I0224 18:51:17.629176 30227 replica.cpp:679] Persisted action at 4 I0224 18:51:17.629443 30227 replica.cpp:664] Replica learned 3 action at position 4 I0224 18:51:17.636715 30213 sched.cpp:157] Version: 0.23.0 I0224 18:51:17.638003 30229 sched.cpp:254] New master detected at master@192.168.122.135:44950 I0224 18:51:17.638602 30229 sched.cpp:310] Authenticating with master master@192.168.122.135:44950 I0224 18:51:17.639024 30229 sched.cpp:317] Using default CRAM-MD5 authenticatee I0224 18:51:17.639580 30228 authenticatee.hpp:139] Creating new client SASL connection I0224 18:51:17.640455 30235 process.cpp:2117] Dropped / Lost event for PID: scheduler-11bb6bcb-cd51-4927-a28b-dbca9d63772f@192.168.122.135:44950 I0224 18:51:17.641150 30228 master.cpp:3813] Authenticating scheduler-fc72e828-0783-41b6-9892-ffc961e8567e@192.168.122.135:44950 I0224 18:51:17.641597 30228 master.cpp:3824] Using default CRAM-MD5 authenticator I0224 18:51:17.642643 30228 authenticator.hpp:170] Creating new server SASL connection I0224 18:51:17.643698 30234 authenticatee.hpp:230] Received SASL authentication mechanisms: CRAM-MD5 I0224 18:51:17.644296 30234 authenticatee.hpp:256] Attempting to authenticate with mechanism 'CRAM-MD5' I0224 18:51:17.644739 30228 authenticator.hpp:276] Received SASL authentication start I0224 18:51:17.645143 30228 authenticator.hpp:398] Authentication requires more steps I0224 18:51:17.645654 30230 authenticatee.hpp:276] Received SASL authentication step I0224 18:51:17.646122 30228 authenticator.hpp:304] Received SASL authentication step I0224 18:51:17.646421 30228 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0224 18:51:17.646746 30228 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0224 18:51:17.647203 30228 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0224 18:51:17.647644 30228 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'fedora-19' server FQDN: 'fedora-19' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0224 18:51:17.648454 30228 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == 
true I0224 18:51:17.648788 30228 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0224 18:51:17.649210 30228 authenticator.hpp:390] Authentication success I0224 18:51:17.649705 30231 authenticatee.hpp:316] Authentication success I0224 18:51:17.653314 30231 sched.cpp:398] Successfully authenticated with master master@192.168.122.135:44950 I0224 18:51:17.653766 30232 master.cpp:3871] Successfully authenticated principal 'test-principal' at scheduler-fc72e828-0783-41b6-9892-ffc961e8567e@192.168.122.135:44950 I0224 18:51:17.654683 30231 sched.cpp:521] Sending registration request to master@192.168.122.135:44950 I0224 18:51:17.655138 30231 sched.cpp:554] Will retry registration in 1.028970132secs if necessary I0224 18:51:17.657112 30232 master.cpp:1574] Received registration request for framework 'default' at scheduler-fc72e828-0783-41b6-9892-ffc961e8567e@192.168.122.135:44950 I0224 18:51:17.658509 30232 master.cpp:1435] Authorizing framework principal 'test-principal' to receive offers for role '*' I0224 18:51:17.659765 30232 master.cpp:1638] Registering framework 20150224-185117-2272962752-44950-30213-0000 (default) at scheduler-fc72e828-0783-41b6-9892-ffc961e8567e@192.168.122.135:44950 I0224 18:51:17.660727 30233 hierarchical.hpp:321] Added framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.661730 30233 hierarchical.hpp:741] Performed allocation for 1 slaves in 529369ns I0224 18:51:17.662911 30229 sched.cpp:448] Framework registered with 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.663374 30229 sched.cpp:462] Scheduler::registered took 35637ns I0224 18:51:17.664552 30232 master.cpp:3755] Sending 1 offers to framework 20150224-185117-2272962752-44950-30213-0000 (default) at scheduler-fc72e828-0783-41b6-9892-ffc961e8567e@192.168.122.135:44950 I0224 18:51:17.668009 30234 sched.cpp:611] Scheduler::resourceOffers took 2.574292ms I0224 18:51:17.671038 30232 master.cpp:2268] Processing ACCEPT call for offers: [ 20150224-185117-2272962752-44950-30213-O0 ] on slave 20150224-185117-2272962752-44950-30213-S0 at slave(138)@192.168.122.135:44950 (fedora-19) for framework 20150224-185117-2272962752-44950-30213-0000 (default) at scheduler-fc72e828-0783-41b6-9892-ffc961e8567e@192.168.122.135:44950 I0224 18:51:17.672071 30232 master.cpp:2112] Authorizing framework principal 'test-principal' to launch task 1 as user 'jenkins' W0224 18:51:17.674675 30232 validation.cpp:326] Executor default for task 1 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W0224 18:51:17.675395 30232 validation.cpp:338] Executor default for task 1 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. 
I0224 18:51:17.676460 30232 master.hpp:822] Adding task 1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20150224-185117-2272962752-44950-30213-S0 (fedora-19) I0224 18:51:17.677078 30232 master.cpp:2545] Launching task 1 of framework 20150224-185117-2272962752-44950-30213-0000 (default) at scheduler-fc72e828-0783-41b6-9892-ffc961e8567e@192.168.122.135:44950 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20150224-185117-2272962752-44950-30213-S0 at slave(138)@192.168.122.135:44950 (fedora-19) I0224 18:51:17.678084 30230 slave.cpp:1121] Got assigned task 1 for framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.680057 30230 slave.cpp:1231] Launching task 1 for framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.684798 30230 slave.cpp:4177] Launching executor default of framework 20150224-185117-2272962752-44950-30213-0000 in work directory '/tmp/MasterTest_ShutdownFrameworkWhileTaskRunning_lRugms/slaves/20150224-185117-2272962752-44950-30213-S0/frameworks/20150224-185117-2272962752-44950-30213-0000/executors/default/runs/675638b4-5214-449d-96d8-c50551496152' I0224 18:51:17.688701 30230 exec.cpp:132] Version: 0.23.0 I0224 18:51:17.689615 30234 exec.cpp:182] Executor started at: executor(41)@192.168.122.135:44950 with pid 30213 I0224 18:51:17.690659 30230 slave.cpp:1379] Queuing task '1' for executor default of framework '20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.691382 30230 slave.cpp:577] Successfully attached file '/tmp/MasterTest_ShutdownFrameworkWhileTaskRunning_lRugms/slaves/20150224-185117-2272962752-44950-30213-S0/frameworks/20150224-185117-2272962752-44950-30213-0000/executors/default/runs/675638b4-5214-449d-96d8-c50551496152' I0224 18:51:17.691813 30230 slave.cpp:2140] Got registration for executor 'default' of framework 20150224-185117-2272962752-44950-30213-0000 from executor(41)@192.168.122.135:44950 I0224 18:51:17.692772 30231 exec.cpp:206] Executor registered on slave 20150224-185117-2272962752-44950-30213-S0 I0224 18:51:17.695121 30231 exec.cpp:218] Executor::registered took 80811ns I0224 18:51:17.697582 30230 slave.cpp:3132] Monitoring executor 'default' of framework '20150224-185117-2272962752-44950-30213-0000' in container '675638b4-5214-449d-96d8-c50551496152' I0224 18:51:17.699354 30230 slave.cpp:1533] Sending queued task '1' to executor 'default' of framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.700932 30227 exec.cpp:293] Executor asked to run task '1' I0224 18:51:17.701679 30227 exec.cpp:302] Executor::launchTask took 140355ns I0224 18:51:17.705504 30227 exec.cpp:525] Executor sending status update TASK_RUNNING (UUID: 57877913-d602-445c-a0d9-87fc71e8eca5) for task 1 of framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.707149 30228 slave.cpp:2507] Handling status update TASK_RUNNING (UUID: 57877913-d602-445c-a0d9-87fc71e8eca5) for task 1 of framework 20150224-185117-2272962752-44950-30213-0000 from executor(41)@192.168.122.135:44950 I0224 18:51:17.708539 30228 status_update_manager.cpp:317] Received status update TASK_RUNNING (UUID: 57877913-d602-445c-a0d9-87fc71e8eca5) for task 1 of framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.709377 30228 status_update_manager.cpp:494] Creating StatusUpdate stream for task 1 of framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.710360 30228 status_update_manager.cpp:371] Forwarding update TASK_RUNNING (UUID: 
57877913-d602-445c-a0d9-87fc71e8eca5) for task 1 of framework 20150224-185117-2272962752-44950-30213-0000 to the slave I0224 18:51:17.711405 30233 slave.cpp:2750] Forwarding the update TASK_RUNNING (UUID: 57877913-d602-445c-a0d9-87fc71e8eca5) for task 1 of framework 20150224-185117-2272962752-44950-30213-0000 to master@192.168.122.135:44950 I0224 18:51:17.712425 30233 master.cpp:3295] Status update TASK_RUNNING (UUID: 57877913-d602-445c-a0d9-87fc71e8eca5) for task 1 of framework 20150224-185117-2272962752-44950-30213-0000 from slave 20150224-185117-2272962752-44950-30213-S0 at slave(138)@192.168.122.135:44950 (fedora-19) I0224 18:51:17.713047 30233 master.cpp:3336] Forwarding status update TASK_RUNNING (UUID: 57877913-d602-445c-a0d9-87fc71e8eca5) for task 1 of framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.713930 30233 master.cpp:4615] Updating the latest state of task 1 of framework 20150224-185117-2272962752-44950-30213-0000 to TASK_RUNNING I0224 18:51:17.714588 30232 sched.cpp:717] Scheduler::statusUpdate took 118286ns I0224 18:51:17.715512 30213 sched.cpp:1589] Asked to stop the driver I0224 18:51:17.718159 30232 master.cpp:2782] Forwarding status update acknowledgement 57877913-d602-445c-a0d9-87fc71e8eca5 for task 1 of framework 20150224-185117-2272962752-44950-30213-0000 (default) at scheduler-fc72e828-0783-41b6-9892-ffc961e8567e@192.168.122.135:44950 to slave 20150224-185117-2272962752-44950-30213-S0 at slave(138)@192.168.122.135:44950 (fedora-19) I0224 18:51:17.718691 30234 sched.cpp:831] Stopping framework '20150224-185117-2272962752-44950-30213-0000' I0224 18:51:17.721380 30232 master.cpp:1898] Asked to unregister framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.722920 30232 master.cpp:4183] Removing framework 20150224-185117-2272962752-44950-30213-0000 (default) at scheduler-fc72e828-0783-41b6-9892-ffc961e8567e@192.168.122.135:44950 I0224 18:51:17.725231 30231 hierarchical.hpp:400] Deactivated framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.725734 30229 slave.cpp:1746] Asked to shut down framework 20150224-185117-2272962752-44950-30213-0000 by master@192.168.122.135:44950 I0224 18:51:17.726658 30229 slave.cpp:1771] Shutting down framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.727322 30229 slave.cpp:3440] Shutting down executor 'default' of framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.728742 30228 status_update_manager.cpp:389] Received status update acknowledgement (UUID: 57877913-d602-445c-a0d9-87fc71e8eca5) for task 1 of framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.729042 30231 slave.cpp:2677] Status update manager successfully handled status update TASK_RUNNING (UUID: 57877913-d602-445c-a0d9-87fc71e8eca5) for task 1 of framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.730578 30231 slave.cpp:2683] Sending acknowledgement for status update TASK_RUNNING (UUID: 57877913-d602-445c-a0d9-87fc71e8eca5) for task 1 of framework 20150224-185117-2272962752-44950-30213-0000 to executor(41)@192.168.122.135:44950 I0224 18:51:17.731300 30231 slave.cpp:3510] Killing executor 'default' of framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.733461 30232 master.cpp:4615] Updating the latest state of task 1 of framework 20150224-185117-2272962752-44950-30213-0000 to TASK_KILLED I0224 18:51:17.734503 30231 slave.cpp:3190] Executor 'default' of framework 20150224-185117-2272962752-44950-30213-0000 exited with status 0 I0224 
18:51:17.736177 30231 slave.cpp:3299] Cleaning up executor 'default' of framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.737277 30233 gc.cpp:56] Scheduling '/tmp/MasterTest_ShutdownFrameworkWhileTaskRunning_lRugms/slaves/20150224-185117-2272962752-44950-30213-S0/frameworks/20150224-185117-2272962752-44950-30213-0000/executors/default/runs/675638b4-5214-449d-96d8-c50551496152' for gc 6.99999146853037days in the future I0224 18:51:17.738636 30231 slave.cpp:3378] Cleaning up framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.739148 30228 gc.cpp:56] Scheduling '/tmp/MasterTest_ShutdownFrameworkWhileTaskRunning_lRugms/slaves/20150224-185117-2272962752-44950-30213-S0/frameworks/20150224-185117-2272962752-44950-30213-0000/executors/default' for gc 6.99999145243259days in the future I0224 18:51:17.740373 30228 status_update_manager.cpp:279] Closing status update streams for framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.741170 30228 status_update_manager.cpp:525] Cleaning up status update stream for task 1 of framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.742255 30229 gc.cpp:56] Scheduling '/tmp/MasterTest_ShutdownFrameworkWhileTaskRunning_lRugms/slaves/20150224-185117-2272962752-44950-30213-S0/frameworks/20150224-185117-2272962752-44950-30213-0000' for gc 6.99999141055704days in the future I0224 18:51:17.743207 30231 slave.cpp:2080] Status update manager successfully handled status update acknowledgement (UUID: 57877913-d602-445c-a0d9-87fc71e8eca5) for task 1 of framework 20150224-185117-2272962752-44950-30213-0000 E0224 18:51:17.743799 30231 slave.cpp:2091] Status update acknowledgement (UUID: 57877913-d602-445c-a0d9-87fc71e8eca5) for task 1 of unknown framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.744699 30233 hierarchical.hpp:648] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total allocatable: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) on slave 20150224-185117-2272962752-44950-30213-S0 from framework 20150224-185117-2272962752-44950-30213-0000 I0224 18:51:17.746369 30232 master.cpp:4682] Removing task 1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 20150224-185117-2272962752-44950-30213-0000 on slave 20150224-185117-2272962752-44950-30213-S0 at slave(138)@192.168.122.135:44950 (fedora-19) I0224 18:51:17.747835 30232 master.cpp:4711] Removing executor 'default' with resources of framework 20150224-185117-2272962752-44950-30213-0000 on slave 20150224-185117-2272962752-44950-30213-S0 at slave(138)@192.168.122.135:44950 (fedora-19) I0224 18:51:17.749958 30230 hierarchical.hpp:354] Removed framework 20150224-185117-2272962752-44950-30213-0000 W0224 18:51:17.754004 30232 master.cpp:3382] Ignoring unknown exited executor 'default' of framework 20150224-185117-2272962752-44950-30213-0000 on slave 20150224-185117-2272962752-44950-30213-S0 at slave(138)@192.168.122.135:44950 (fedora-19) I0224 18:51:17.806908 30235 process.cpp:2117] Dropped / Lost event for PID: hierarchical-allocator(155)@192.168.122.135:44950 I0224 18:51:18.169684 30235 process.cpp:2117] Dropped / Lost event for PID: hierarchical-allocator(156)@192.168.122.135:44950 I0224 18:51:18.277674 30235 process.cpp:2117] Dropped / Lost event for PID: scheduler-279c11f0-4f87-4922-b09c-e75af333d93e@192.168.122.135:44950 I0224 18:51:18.435956 30229 hierarchical.hpp:834] No resources available to allocate! 
I0224 18:51:18.438434 30229 hierarchical.hpp:741] Performed allocation for 1 slaves in 2.913091ms I0224 18:51:18.687819 30235 process.cpp:2117] Dropped / Lost event for PID: scheduler-fc72e828-0783-41b6-9892-ffc961e8567e@192.168.122.135:44950 I0224 18:51:18.840509 30235 process.cpp:2117] Dropped / Lost event for PID: scheduler-11bb6bcb-cd51-4927-a28b-dbca9d63772f@192.168.122.135:44950 I0224 18:51:19.440609 30233 hierarchical.hpp:834] No resources available to allocate! I0224 18:51:19.444022 30233 hierarchical.hpp:741] Performed allocation for 1 slaves in 3.667907ms I0224 18:51:20.445341 30229 hierarchical.hpp:834] No resources available to allocate! I0224 18:51:20.448892 30229 hierarchical.hpp:741] Performed allocation for 1 slaves in 3.787334ms I0224 18:51:20.840729 30235 process.cpp:2117] Dropped / Lost event for PID: slave(133)@192.168.122.135:44950 I0224 18:51:20.895016 30235 process.cpp:2117] Dropped / Lost event for PID: scheduler-deb787a0-9e87-42c9-813e-fc35f354582f@192.168.122.135:44950 I0224 18:51:21.016639 30235 process.cpp:2117] Dropped / Lost event for PID: slave(133)@192.168.122.135:44950 I0224 18:51:21.258066 30235 process.cpp:2117] Dropped / Lost event for PID: slave(134)@192.168.122.135:44950 I0224 18:51:21.312721 30235 process.cpp:2117] Dropped / Lost event for PID: scheduler-7a62b2f4-6959-49de-9fd8-72ffd048f4e3@192.168.122.135:44950 I0224 18:51:21.450574 30230 hierarchical.hpp:834] No resources available to allocate! I0224 18:51:21.451004 30230 hierarchical.hpp:741] Performed allocation for 1 slaves in 761280ns I0224 18:51:21.557883 30235 process.cpp:2117] Dropped / Lost event for PID: slave(135)@192.168.122.135:44950 I0224 18:51:21.611552 30235 process.cpp:2117] Dropped / Lost event for PID: scheduler-279c11f0-4f87-4922-b09c-e75af333d93e@192.168.122.135:44950 I0224 18:51:21.709940 30235 process.cpp:2117] Dropped / Lost event for PID: slave(135)@192.168.122.135:44950 I0224 18:51:21.915220 30235 process.cpp:2117] Dropped / Lost event for PID: slave(136)@192.168.122.135:44950 I0224 18:51:21.997714 30235 process.cpp:2117] Dropped / Lost event for PID: scheduler-11bb6bcb-cd51-4927-a28b-dbca9d63772f@192.168.122.135:44950 I0224 18:51:22.107311 30235 process.cpp:2117] Dropped / Lost event for PID: slave(136)@192.168.122.135:44950 I0224 18:51:22.219341 30235 process.cpp:2117] Dropped / Lost event for PID: scheduler-11bb6bcb-cd51-4927-a28b-dbca9d63772f@192.168.122.135:44950 I0224 18:51:22.269714 30235 process.cpp:2117] Dropped / Lost event for PID: slave(137)@192.168.122.135:44950 I0224 18:51:22.453269 30229 hierarchical.hpp:834] No resources available to allocate! I0224 18:51:22.457568 30229 hierarchical.hpp:741] Performed allocation for 1 slaves in 4.67818ms I0224 18:51:22.644316 30235 process.cpp:2117] Dropped / Lost event for PID: scheduler-fc72e828-0783-41b6-9892-ffc961e8567e@192.168.122.135:44950 I0224 18:51:23.459383 30231 hierarchical.hpp:834] No resources available to allocate! I0224 18:51:23.462417 30231 hierarchical.hpp:741] Performed allocation for 1 slaves in 3.417923ms I0224 18:51:24.464651 30228 hierarchical.hpp:834] No resources available to allocate! I0224 18:51:24.468094 30228 hierarchical.hpp:741] Performed allocation for 1 slaves in 3.698248ms I0224 18:51:25.469254 30232 hierarchical.hpp:834] No resources available to allocate! 
I0224 18:51:25.472430 30232 hierarchical.hpp:741] Performed allocation for 1 slaves in 3.477698ms I0224 18:51:25.971513 30235 process.cpp:2117] Dropped / Lost event for PID: (2965)@192.168.122.135:44950 I0224 18:51:26.474663 30234 hierarchical.hpp:834] No resources available to allocate! I0224 18:51:26.475232 30234 hierarchical.hpp:741] Performed allocation for 1 slaves in 942399ns I0224 18:51:26.672420 30235 process.cpp:2117] Dropped / Lost event for PID: (2996)@192.168.122.135:44950 I0224 18:51:27.069792 30235 process.cpp:2117] Dropped / Lost event for PID: (3010)@192.168.122.135:44950 I0224 18:51:27.476572 30228 hierarchical.hpp:834] No resources available to allocate! I0224 18:51:27.479708 30228 hierarchical.hpp:741] Performed allocation for 1 slaves in 3.419391ms I0224 18:51:28.481403 30228 hierarchical.hpp:834] No resources available to allocate! I0224 18:51:28.484798 30228 hierarchical.hpp:741] Performed allocation for 1 slaves in 3.709639ms I0224 18:51:29.487264 30228 hierarchical.hpp:834] No resources available to allocate! I0224 18:51:29.491909 30228 hierarchical.hpp:741] Performed allocation for 1 slaves in 5.065187ms I0224 18:51:29.623121 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(739)@192.168.122.135:44950 I0224 18:51:29.641655 30235 process.cpp:2117] Dropped / Lost event for PID: slave-observer(120)@192.168.122.135:44950 I0224 18:51:29.647222 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(740)@192.168.122.135:44950 I0224 18:51:30.493526 30232 hierarchical.hpp:834] No resources available to allocate! I0224 18:51:30.496922 30232 hierarchical.hpp:741] Performed allocation for 1 slaves in 3.714894ms I0224 18:51:30.670241 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(741)@192.168.122.135:44950 I0224 18:51:30.670737 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(742)@192.168.122.135:44950 I0224 18:51:30.882052 30235 process.cpp:2117] Dropped / Lost event for PID: slave-observer(121)@192.168.122.135:44950 I0224 18:51:30.922494 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(744)@192.168.122.135:44950 I0224 18:51:30.985663 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(745)@192.168.122.135:44950 I0224 18:51:31.293728 30235 process.cpp:2117] Dropped / Lost event for PID: slave-observer(122)@192.168.122.135:44950 I0224 18:51:31.328766 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(747)@192.168.122.135:44950 I0224 18:51:31.497968 30234 hierarchical.hpp:834] No resources available to allocate! 
I0224 18:51:31.501803 30234 hierarchical.hpp:741] Performed allocation for 1 slaves in 4.13526ms I0224 18:51:31.604324 30235 process.cpp:2117] Dropped / Lost event for PID: slave-observer(123)@192.168.122.135:44950 I0224 18:51:31.645259 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(749)@192.168.122.135:44950 I0224 18:51:31.676254 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(750)@192.168.122.135:44950 I0224 18:51:31.913120 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(752)@192.168.122.135:44950 I0224 18:51:31.957001 30235 process.cpp:2117] Dropped / Lost event for PID: slave-observer(124)@192.168.122.135:44950 I0224 18:51:31.996151 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(753)@192.168.122.135:44950 I0224 18:51:32.033216 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(754)@192.168.122.135:44950 I0224 18:51:32.231158 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(756)@192.168.122.135:44950 I0224 18:51:32.267324 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(757)@192.168.122.135:44950 I0224 18:51:32.503401 30227 hierarchical.hpp:834] No resources available to allocate! I0224 18:51:32.503978 30227 hierarchical.hpp:741] Performed allocation for 1 slaves in 1.062132ms I0224 18:51:32.621012 30232 slave.cpp:2830] Received ping from slave-observer(125)@192.168.122.135:44950 I0224 18:51:32.658608 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(759)@192.168.122.135:44950 I0224 18:51:32.671058 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(760)@192.168.122.135:44950 I0224 18:51:32.719501 30235 process.cpp:2117] Dropped / Lost event for PID: __waiter__(761)@192.168.122.135:44950 tests/master_tests.cpp:259: Failure Failed to wait 15secs for shutdown I0224 18:51:32.737337 30213 process.cpp:2117] Dropped / Lost event for PID: scheduler-fc72e828-0783-41b6-9892-ffc961e8567e@192.168.122.135:44950 tests/master_tests.cpp:249: Failure Actual function call count doesn't match EXPECT_CALL(exec, shutdown(_))... Expected: to be called once Actual: never called - unsatisfied and active I0224 18:51:32.741822 30229 master.cpp:787] Master terminating I0224 18:51:32.750730 30234 slave.cpp:2915] master@192.168.122.135:44950 exited W0224 18:51:32.751667 30234 slave.cpp:2918] Master disconnected! 
Waiting for a new master to be elected I0224 18:51:32.769243 30213 process.cpp:2117] Dropped / Lost event for PID: master@192.168.122.135:44950 I0224 18:51:32.770519 30213 process.cpp:2117] Dropped / Lost event for PID: master@192.168.122.135:44950 *** Aborted at 1424832692 (unix time) try """"date -d @1424832692"""" if you are using GNU date *** PC: @ 0x4612540 (unknown) *** SIGSEGV (@0x4612540) received by PID 30213 (TID 0x7fc7f2fa6880) from PID 73475392; stack trace: *** @ 0x3aa2a0efa0 (unknown) @ 0x4612540 (unknown) make[3]: *** [check-local] Segmentation fault (core dumped) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2403","02/25/2015 20:06:36",2,"MasterAllocatorTest/0.FrameworkReregistersFirst is flaky """""," [ RUN ] MasterAllocatorTest/0.FrameworkReregistersFirst Using temporary directory '/tmp/MasterAllocatorTest_0_FrameworkReregistersFirst_Vy5Nml' I0224 23:22:31.681670 30589 leveldb.cpp:176] Opened db in 2.943518ms I0224 23:22:31.682152 30619 process.cpp:2117] Dropped / Lost event for PID: slave(65)@67.195.81.187:38391 I0224 23:22:31.682732 30589 leveldb.cpp:183] Compacted db in 1.029469ms I0224 23:22:31.682777 30589 leveldb.cpp:198] Created db iterator in 15460ns I0224 23:22:31.682792 30589 leveldb.cpp:204] Seeked to beginning of db in 1832ns I0224 23:22:31.682802 30589 leveldb.cpp:273] Iterated through 0 keys in the db in 319ns I0224 23:22:31.682833 30589 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0224 23:22:31.683228 30605 recover.cpp:449] Starting replica recovery I0224 23:22:31.683537 30605 recover.cpp:475] Replica is in 4 status I0224 23:22:31.684624 30615 replica.cpp:641] Replica in 4 status received a broadcasted recover request I0224 23:22:31.684978 30616 recover.cpp:195] Received a recover response from a replica in 4 status I0224 23:22:31.685405 30610 recover.cpp:566] Updating replica status to 3 I0224 23:22:31.686249 30609 master.cpp:349] Master 20150224-232231-3142697795-38391-30589 (pomona.apache.org) started on 67.195.81.187:38391 I0224 23:22:31.686265 30617 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 717897ns I0224 23:22:31.686319 30617 replica.cpp:323] Persisted replica status to 3 I0224 23:22:31.686336 30609 master.cpp:395] Master only allowing authenticated frameworks to register I0224 23:22:31.686357 30609 master.cpp:400] Master only allowing authenticated slaves to register I0224 23:22:31.686390 30609 credentials.hpp:37] Loading credentials for authentication from '/tmp/MasterAllocatorTest_0_FrameworkReregistersFirst_Vy5Nml/credentials' I0224 23:22:31.686511 30606 recover.cpp:475] Replica is in 3 status I0224 23:22:31.686563 30609 master.cpp:442] Authorization enabled I0224 23:22:31.686929 30607 whitelist_watcher.cpp:79] No whitelist given I0224 23:22:31.686954 30603 hierarchical.hpp:287] Initialized hierarchical allocator process I0224 23:22:31.687134 30605 replica.cpp:641] Replica in 3 status received a broadcasted recover request I0224 23:22:31.687731 30609 master.cpp:1356] The newly elected leader is master@67.195.81.187:38391 with id 20150224-232231-3142697795-38391-30589 I0224 23:22:31.839818 30609 master.cpp:1369] Elected as the leading master! 
I0224 23:22:31.839834 30609 master.cpp:1187] Recovering from registrar I0224 23:22:31.839926 30605 registrar.cpp:313] Recovering registrar I0224 23:22:31.840000 30613 recover.cpp:195] Received a recover response from a replica in 3 status I0224 23:22:31.840504 30606 recover.cpp:566] Updating replica status to 1 I0224 23:22:31.841599 30611 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 990330ns I0224 23:22:31.841627 30611 replica.cpp:323] Persisted replica status to 1 I0224 23:22:31.841743 30611 recover.cpp:580] Successfully joined the Paxos group I0224 23:22:31.841904 30611 recover.cpp:464] Recover process terminated I0224 23:22:31.842366 30608 log.cpp:660] Attempting to start the writer I0224 23:22:31.843557 30607 replica.cpp:477] Replica received implicit promise request with proposal 1 I0224 23:22:31.844312 30607 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 722368ns I0224 23:22:31.844337 30607 replica.cpp:345] Persisted promised to 1 I0224 23:22:31.844889 30615 coordinator.cpp:230] Coordinator attemping to fill missing position I0224 23:22:31.846043 30614 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 I0224 23:22:31.846729 30614 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 660024ns I0224 23:22:31.846746 30614 replica.cpp:679] Persisted action at 0 I0224 23:22:31.847671 30611 replica.cpp:511] Replica received write request for position 0 I0224 23:22:31.847723 30611 leveldb.cpp:438] Reading position from leveldb took 27349ns I0224 23:22:31.848429 30611 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 671461ns I0224 23:22:31.848454 30611 replica.cpp:679] Persisted action at 0 I0224 23:22:31.849041 30615 replica.cpp:658] Replica received learned notice for position 0 I0224 23:22:31.849762 30615 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 690386ns I0224 23:22:31.849787 30615 replica.cpp:679] Persisted action at 0 I0224 23:22:31.849808 30615 replica.cpp:664] Replica learned 1 action at position 0 I0224 23:22:31.850416 30612 log.cpp:676] Writer started with ending position 0 I0224 23:22:31.851490 30615 leveldb.cpp:438] Reading position from leveldb took 30659ns I0224 23:22:31.854452 30610 registrar.cpp:346] Successfully fetched the registry (0B) in 14.491136ms I0224 23:22:31.854543 30610 registrar.cpp:445] Applied 1 operations in 18024ns; attempting to update the 'registry' I0224 23:22:31.857095 30604 log.cpp:684] Attempting to append 139 bytes to the log I0224 23:22:31.857208 30608 coordinator.cpp:340] Coordinator attempting to write 2 action at position 1 I0224 23:22:31.858073 30609 replica.cpp:511] Replica received write request for position 1 I0224 23:22:31.858808 30609 leveldb.cpp:343] Persisting action (158 bytes) to leveldb took 701708ns I0224 23:22:31.858835 30609 replica.cpp:679] Persisted action at 1 I0224 23:22:31.859508 30618 replica.cpp:658] Replica received learned notice for position 1 I0224 23:22:31.860267 30618 leveldb.cpp:343] Persisting action (160 bytes) to leveldb took 731035ns I0224 23:22:31.860309 30618 replica.cpp:679] Persisted action at 1 I0224 23:22:31.860332 30618 replica.cpp:664] Replica learned 2 action at position 1 I0224 23:22:31.860983 30609 registrar.cpp:490] Successfully updated the 'registry' in 6.39616ms I0224 23:22:31.861071 30609 registrar.cpp:376] Successfully recovered registrar I0224 23:22:31.861126 30608 log.cpp:703] Attempting to truncate the log to 1 I0224 23:22:31.861249 30603 coordinator.cpp:340] Coordinator attempting to 
write 3 action at position 2 I0224 23:22:31.861248 30617 master.cpp:1214] Recovered 0 slaves from the Registry (101B) ; allowing 10mins for slaves to re-register I0224 23:22:31.861831 30613 replica.cpp:511] Replica received write request for position 2 I0224 23:22:31.862504 30613 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 648125ns I0224 23:22:31.862531 30613 replica.cpp:679] Persisted action at 2 I0224 23:22:31.863067 30603 replica.cpp:658] Replica received learned notice for position 2 I0224 23:22:31.863689 30603 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 602784ns I0224 23:22:31.863737 30603 leveldb.cpp:401] Deleting ~1 keys from leveldb took 28697ns I0224 23:22:31.863751 30603 replica.cpp:679] Persisted action at 2 I0224 23:22:31.863767 30603 replica.cpp:664] Replica learned 3 action at position 2 I0224 23:22:31.875962 30610 slave.cpp:174] Slave started on 66)@67.195.81.187:38391 I0224 23:22:31.876008 30610 credentials.hpp:85] Loading credential for authentication from '/tmp/MasterAllocatorTest_0_FrameworkReregistersFirst_ikVXQM/credential' I0224 23:22:31.876144 30610 slave.cpp:281] Slave using credential for: test-principal I0224 23:22:31.876404 30610 slave.cpp:299] Slave resources: cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] I0224 23:22:31.876489 30610 slave.cpp:328] Slave hostname: pomona.apache.org I0224 23:22:31.876502 30610 slave.cpp:329] Slave checkpoint: false W0224 23:22:31.876507 30610 slave.cpp:331] Disabling checkpointing is deprecated and the --checkpoint flag will be removed in a future release. Please avoid using this flag I0224 23:22:31.877014 30603 state.cpp:35] Recovering state from '/tmp/MasterAllocatorTest_0_FrameworkReregistersFirst_ikVXQM/meta' I0224 23:22:31.877230 30610 status_update_manager.cpp:197] Recovering status update manager I0224 23:22:31.877495 30609 slave.cpp:3776] Finished recovery I0224 23:22:31.877879 30607 status_update_manager.cpp:171] Pausing sending status updates I0224 23:22:31.877879 30604 slave.cpp:624] New master detected at master@67.195.81.187:38391 I0224 23:22:31.877959 30604 slave.cpp:687] Authenticating with master master@67.195.81.187:38391 I0224 23:22:31.877975 30604 slave.cpp:692] Using default CRAM-MD5 authenticatee I0224 23:22:31.878069 30604 slave.cpp:660] Detecting new master I0224 23:22:31.878093 30608 authenticatee.hpp:139] Creating new client SASL connection I0224 23:22:31.878223 30604 master.cpp:3813] Authenticating slave(66)@67.195.81.187:38391 I0224 23:22:31.878244 30604 master.cpp:3824] Using default CRAM-MD5 authenticator I0224 23:22:31.878412 30613 authenticator.hpp:170] Creating new server SASL connection I0224 23:22:31.878525 30603 authenticatee.hpp:230] Received SASL authentication mechanisms: CRAM-MD5 I0224 23:22:31.878551 30603 authenticatee.hpp:256] Attempting to authenticate with mechanism 'CRAM-MD5' I0224 23:22:31.878625 30617 authenticator.hpp:276] Received SASL authentication start I0224 23:22:31.878662 30617 authenticator.hpp:398] Authentication requires more steps I0224 23:22:31.878727 30603 authenticatee.hpp:276] Received SASL authentication step I0224 23:22:31.878815 30617 authenticator.hpp:304] Received SASL authentication step I0224 23:22:31.878839 30617 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0224 23:22:31.878847 30617 auxprop.cpp:171] Looking up auxiliary 
property '*userPassword' I0224 23:22:31.878875 30617 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0224 23:22:31.878891 30617 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0224 23:22:31.878900 30617 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0224 23:22:31.878906 30617 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0224 23:22:31.878916 30617 authenticator.hpp:390] Authentication success I0224 23:22:31.880717 30589 sched.cpp:157] Version: 0.23.0 I0224 23:22:32.017823 30611 authenticatee.hpp:316] Authentication success I0224 23:22:32.017901 30618 master.cpp:3871] Successfully authenticated principal 'test-principal' at slave(66)@67.195.81.187:38391 I0224 23:22:32.018156 30615 sched.cpp:254] New master detected at master@67.195.81.187:38391 I0224 23:22:32.018240 30615 sched.cpp:310] Authenticating with master master@67.195.81.187:38391 I0224 23:22:32.018263 30615 sched.cpp:317] Using default CRAM-MD5 authenticatee I0224 23:22:32.018496 30613 slave.cpp:758] Successfully authenticated with master master@67.195.81.187:38391 I0224 23:22:32.018579 30611 authenticatee.hpp:139] Creating new client SASL connection I0224 23:22:32.018620 30613 slave.cpp:1090] Will retry registration in 363167ns if necessary I0224 23:22:32.018811 30615 master.cpp:2938] Registering slave at slave(66)@67.195.81.187:38391 (pomona.apache.org) with id 20150224-232231-3142697795-38391-30589-S0 I0224 23:22:32.019122 30615 master.cpp:3813] Authenticating scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.019156 30615 master.cpp:3824] Using default CRAM-MD5 authenticator I0224 23:22:32.019232 30612 registrar.cpp:445] Applied 1 operations in 57599ns; attempting to update the 'registry' I0224 23:22:32.019394 30603 authenticator.hpp:170] Creating new server SASL connection I0224 23:22:32.019541 30611 authenticatee.hpp:230] Received SASL authentication mechanisms: CRAM-MD5 I0224 23:22:32.019568 30611 authenticatee.hpp:256] Attempting to authenticate with mechanism 'CRAM-MD5' I0224 23:22:32.019666 30605 authenticator.hpp:276] Received SASL authentication start I0224 23:22:32.019717 30605 authenticator.hpp:398] Authentication requires more steps I0224 23:22:32.019805 30615 authenticatee.hpp:276] Received SASL authentication step I0224 23:22:32.019942 30605 authenticator.hpp:304] Received SASL authentication step I0224 23:22:32.019979 30605 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0224 23:22:32.019994 30605 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0224 23:22:32.020025 30605 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0224 23:22:32.020036 30610 slave.cpp:1090] Will retry registration in 10.850555ms if necessary I0224 23:22:32.020053 30605 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0224 23:22:32.020102 30605 auxprop.cpp:121] Skipping auxiliary property '*userPassword' 
since SASL_AUXPROP_AUTHZID == true I0224 23:22:32.020117 30605 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0224 23:22:32.020133 30605 authenticator.hpp:390] Authentication success I0224 23:22:32.020151 30611 master.cpp:2926] Ignoring register slave message from slave(66)@67.195.81.187:38391 (pomona.apache.org) as admission is already in progress I0224 23:22:32.020226 30603 authenticatee.hpp:316] Authentication success I0224 23:22:32.020256 30611 master.cpp:3871] Successfully authenticated principal 'test-principal' at scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.020534 30615 sched.cpp:398] Successfully authenticated with master master@67.195.81.187:38391 I0224 23:22:32.020561 30615 sched.cpp:521] Sending registration request to master@67.195.81.187:38391 I0224 23:22:32.020635 30615 sched.cpp:554] Will retry registration in 490.035142ms if necessary I0224 23:22:32.020720 30613 master.cpp:1574] Received registration request for framework 'default' at scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.020787 30613 master.cpp:1435] Authorizing framework principal 'test-principal' to receive offers for role '*' I0224 23:22:32.021122 30607 master.cpp:1638] Registering framework 20150224-232231-3142697795-38391-30589-0000 (default) at scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.021502 30611 hierarchical.hpp:321] Added framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.021531 30611 hierarchical.hpp:834] No resources available to allocate! I0224 23:22:32.021543 30611 hierarchical.hpp:741] Performed allocation for 0 slaves in 18915ns I0224 23:22:32.021618 30609 sched.cpp:448] Framework registered with 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.021673 30609 sched.cpp:462] Scheduler::registered took 26310ns I0224 23:22:32.022400 30613 log.cpp:684] Attempting to append 316 bytes to the log I0224 23:22:32.022523 30608 coordinator.cpp:340] Coordinator attempting to write 2 action at position 3 I0224 23:22:32.023232 30607 replica.cpp:511] Replica received write request for position 3 I0224 23:22:32.024055 30607 leveldb.cpp:343] Persisting action (335 bytes) to leveldb took 798548ns I0224 23:22:32.024073 30607 replica.cpp:679] Persisted action at 3 I0224 23:22:32.024651 30610 replica.cpp:658] Replica received learned notice for position 3 I0224 23:22:32.025252 30610 leveldb.cpp:343] Persisting action (337 bytes) to leveldb took 580525ns I0224 23:22:32.025271 30610 replica.cpp:679] Persisted action at 3 I0224 23:22:32.025297 30610 replica.cpp:664] Replica learned 2 action at position 3 I0224 23:22:32.025995 30618 registrar.cpp:490] Successfully updated the 'registry' in 6.586112ms I0224 23:22:32.026228 30604 log.cpp:703] Attempting to truncate the log to 3 I0224 23:22:32.026360 30609 coordinator.cpp:340] Coordinator attempting to write 3 action at position 4 I0224 23:22:32.026669 30609 slave.cpp:2831] Received ping from slave-observer(66)@67.195.81.187:38391 I0224 23:22:32.026772 30609 slave.cpp:792] Registered with master master@67.195.81.187:38391; given slave ID 20150224-232231-3142697795-38391-30589-S0 I0224 23:22:32.026737 30603 master.cpp:2995] Registered slave 20150224-232231-3142697795-38391-30589-S0 at slave(66)@67.195.81.187:38391 (pomona.apache.org) with cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] I0224 23:22:32.026867 30603 status_update_manager.cpp:178] Resuming sending 
status updates I0224 23:22:32.026868 30617 hierarchical.hpp:455] Added slave 20150224-232231-3142697795-38391-30589-S0 (pomona.apache.org) with cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] available) I0224 23:22:32.026921 30615 replica.cpp:511] Replica received write request for position 4 I0224 23:22:32.027276 30617 hierarchical.hpp:759] Performed allocation for slave 20150224-232231-3142697795-38391-30589-S0 in 351257ns I0224 23:22:32.027580 30615 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 624249ns I0224 23:22:32.027604 30615 replica.cpp:679] Persisted action at 4 I0224 23:22:32.027642 30618 master.cpp:3755] Sending 1 offers to framework 20150224-232231-3142697795-38391-30589-0000 (default) at scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.028223 30617 replica.cpp:658] Replica received learned notice for position 4 I0224 23:22:32.028621 30607 sched.cpp:611] Scheduler::resourceOffers took 648326ns I0224 23:22:32.028916 30617 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 662416ns I0224 23:22:32.028991 30617 leveldb.cpp:401] Deleting ~2 keys from leveldb took 47386ns I0224 23:22:32.029021 30617 replica.cpp:679] Persisted action at 4 I0224 23:22:32.029044 30617 replica.cpp:664] Replica learned 3 action at position 4 I0224 23:22:32.029534 30613 master.cpp:2268] Processing ACCEPT call for offers: [ 20150224-232231-3142697795-38391-30589-O0 ] on slave 20150224-232231-3142697795-38391-30589-S0 at slave(66)@67.195.81.187:38391 (pomona.apache.org) for framework 20150224-232231-3142697795-38391-30589-0000 (default) at scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.190521 30613 master.cpp:2112] Authorizing framework principal 'test-principal' to launch task 0 as user 'jenkins' W0224 23:22:32.191864 30604 validation.cpp:328] Executor default for task 0 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W0224 23:22:32.191905 30604 validation.cpp:340] Executor default for task 0 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. 
I0224 23:22:32.192206 30604 master.hpp:822] Adding task 0 with resources cpus(*):1; mem(*):500 on slave 20150224-232231-3142697795-38391-30589-S0 (pomona.apache.org) I0224 23:22:32.192318 30604 master.cpp:2545] Launching task 0 of framework 20150224-232231-3142697795-38391-30589-0000 (default) at scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 with resources cpus(*):1; mem(*):500 on slave 20150224-232231-3142697795-38391-30589-S0 at slave(66)@67.195.81.187:38391 (pomona.apache.org) I0224 23:22:32.192659 30611 slave.cpp:1121] Got assigned task 0 for framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.192847 30609 hierarchical.hpp:648] Recovered cpus(*):1; mem(*):524; disk(*):3.70122e+06; ports(*):[31000-32000] (total allocatable: cpus(*):1; mem(*):524; disk(*):3.70122e+06; ports(*):[31000-32000]) on slave 20150224-232231-3142697795-38391-30589-S0 from framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.192916 30609 hierarchical.hpp:684] Framework 20150224-232231-3142697795-38391-30589-0000 filtered slave 20150224-232231-3142697795-38391-30589-S0 for 5secs I0224 23:22:32.193327 30611 slave.cpp:1231] Launching task 0 for framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.196038 30611 slave.cpp:4178] Launching executor default of framework 20150224-232231-3142697795-38391-30589-0000 in work directory '/tmp/MasterAllocatorTest_0_FrameworkReregistersFirst_ikVXQM/slaves/20150224-232231-3142697795-38391-30589-S0/frameworks/20150224-232231-3142697795-38391-30589-0000/executors/default/runs/7c203619-b40f-4d0a-9b75-9ddeecb63472' I0224 23:22:32.197996 30611 exec.cpp:132] Version: 0.23.0 I0224 23:22:32.198206 30605 exec.cpp:182] Executor started at: executor(32)@67.195.81.187:38391 with pid 30589 I0224 23:22:32.198314 30611 slave.cpp:1378] Queuing task '0' for executor default of framework '20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.198407 30611 slave.cpp:577] Successfully attached file '/tmp/MasterAllocatorTest_0_FrameworkReregistersFirst_ikVXQM/slaves/20150224-232231-3142697795-38391-30589-S0/frameworks/20150224-232231-3142697795-38391-30589-0000/executors/default/runs/7c203619-b40f-4d0a-9b75-9ddeecb63472' I0224 23:22:32.198508 30611 slave.cpp:3133] Monitoring executor 'default' of framework '20150224-232231-3142697795-38391-30589-0000' in container '7c203619-b40f-4d0a-9b75-9ddeecb63472' I0224 23:22:32.198637 30611 slave.cpp:2141] Got registration for executor 'default' of framework 20150224-232231-3142697795-38391-30589-0000 from executor(32)@67.195.81.187:38391 I0224 23:22:32.198839 30604 exec.cpp:206] Executor registered on slave 20150224-232231-3142697795-38391-30589-S0 I0224 23:22:32.199090 30611 slave.cpp:1532] Sending queued task '0' to executor 'default' of framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.200916 30604 exec.cpp:218] Executor::registered took 24976ns I0224 23:22:32.201087 30604 exec.cpp:293] Executor asked to run task '0' I0224 23:22:32.201164 30604 exec.cpp:302] Executor::launchTask took 54201ns I0224 23:22:32.203301 30604 exec.cpp:525] Executor sending status update TASK_RUNNING (UUID: 49340611-fb39-423b-96f6-6c1e724c1a53) for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.203533 30618 slave.cpp:2508] Handling status update TASK_RUNNING (UUID: 49340611-fb39-423b-96f6-6c1e724c1a53) for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 from executor(32)@67.195.81.187:38391 I0224 23:22:32.203799 30604 
status_update_manager.cpp:317] Received status update TASK_RUNNING (UUID: 49340611-fb39-423b-96f6-6c1e724c1a53) for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.203840 30604 status_update_manager.cpp:494] Creating StatusUpdate stream for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.204038 30604 status_update_manager.cpp:371] Forwarding update TASK_RUNNING (UUID: 49340611-fb39-423b-96f6-6c1e724c1a53) for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 to the slave I0224 23:22:32.204262 30607 slave.cpp:2751] Forwarding the update TASK_RUNNING (UUID: 49340611-fb39-423b-96f6-6c1e724c1a53) for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 to master@67.195.81.187:38391 I0224 23:22:32.204473 30607 slave.cpp:2678] Status update manager successfully handled status update TASK_RUNNING (UUID: 49340611-fb39-423b-96f6-6c1e724c1a53) for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.204511 30607 slave.cpp:2684] Sending acknowledgement for status update TASK_RUNNING (UUID: 49340611-fb39-423b-96f6-6c1e724c1a53) for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 to executor(32)@67.195.81.187:38391 I0224 23:22:32.204558 30616 master.cpp:3295] Status update TASK_RUNNING (UUID: 49340611-fb39-423b-96f6-6c1e724c1a53) for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 from slave 20150224-232231-3142697795-38391-30589-S0 at slave(66)@67.195.81.187:38391 (pomona.apache.org) I0224 23:22:32.204602 30616 master.cpp:3336] Forwarding status update TASK_RUNNING (UUID: 49340611-fb39-423b-96f6-6c1e724c1a53) for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.204676 30610 exec.cpp:339] Executor received status update acknowledgement 49340611-fb39-423b-96f6-6c1e724c1a53 for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.204753 30616 master.cpp:4615] Updating the latest state of task 0 of framework 20150224-232231-3142697795-38391-30589-0000 to TASK_RUNNING I0224 23:22:32.204874 30603 sched.cpp:717] Scheduler::statusUpdate took 52057ns I0224 23:22:32.205277 30614 master.cpp:2782] Forwarding status update acknowledgement 49340611-fb39-423b-96f6-6c1e724c1a53 for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 (default) at scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 to slave 20150224-232231-3142697795-38391-30589-S0 at slave(66)@67.195.81.187:38391 (pomona.apache.org) I0224 23:22:32.205579 30618 status_update_manager.cpp:389] Received status update acknowledgement (UUID: 49340611-fb39-423b-96f6-6c1e724c1a53) for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.205798 30604 slave.cpp:2081] Status update manager successfully handled status update acknowledgement (UUID: 49340611-fb39-423b-96f6-6c1e724c1a53) for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.206073 30607 master.cpp:787] Master terminating W0224 23:22:32.206197 30607 master.cpp:4668] Removing task 0 with resources cpus(*):1; mem(*):500 of framework 20150224-232231-3142697795-38391-30589-0000 on slave 20150224-232231-3142697795-38391-30589-S0 at slave(66)@67.195.81.187:38391 (pomona.apache.org) in non-terminal state TASK_RUNNING I0224 23:22:32.206420 30614 hierarchical.hpp:648] Recovered cpus(*):1; mem(*):500 (total allocatable: cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000]) on slave 
20150224-232231-3142697795-38391-30589-S0 from framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.206552 30607 master.cpp:4711] Removing executor 'default' with resources of framework 20150224-232231-3142697795-38391-30589-0000 on slave 20150224-232231-3142697795-38391-30589-S0 at slave(66)@67.195.81.187:38391 (pomona.apache.org) I0224 23:22:32.321058 30619 process.cpp:2117] Dropped / Lost event for PID: hierarchical-allocator(81)@67.195.81.187:38391 I0224 23:22:32.367074 30612 slave.cpp:2916] master@67.195.81.187:38391 exited W0224 23:22:32.367101 30612 slave.cpp:2919] Master disconnected! Waiting for a new master to be elected I0224 23:22:32.368388 30589 process.cpp:2117] Dropped / Lost event for PID: master@67.195.81.187:38391 I0224 23:22:32.368482 30589 process.cpp:2117] Dropped / Lost event for PID: master@67.195.81.187:38391 I0224 23:22:32.375794 30589 leveldb.cpp:176] Opened db in 3.725405ms I0224 23:22:32.379137 30589 leveldb.cpp:183] Compacted db in 3.309337ms I0224 23:22:32.379196 30589 leveldb.cpp:198] Created db iterator in 21994ns I0224 23:22:32.379236 30589 leveldb.cpp:204] Seeked to beginning of db in 20370ns I0224 23:22:32.379356 30589 leveldb.cpp:273] Iterated through 3 keys in the db in 102640ns I0224 23:22:32.379411 30589 replica.cpp:744] Replica recovered with log positions 3 -> 4 with 0 holes and 0 unlearned I0224 23:22:32.379884 30610 recover.cpp:449] Starting replica recovery I0224 23:22:32.380156 30610 recover.cpp:475] Replica is in 1 status I0224 23:22:32.380460 30610 recover.cpp:464] Recover process terminated I0224 23:22:32.381878 30616 master.cpp:349] Master 20150224-232232-3142697795-38391-30589 (pomona.apache.org) started on 67.195.81.187:38391 I0224 23:22:32.381927 30616 master.cpp:395] Master only allowing authenticated frameworks to register I0224 23:22:32.381945 30616 master.cpp:400] Master only allowing authenticated slaves to register I0224 23:22:32.381974 30616 credentials.hpp:37] Loading credentials for authentication from '/tmp/MasterAllocatorTest_0_FrameworkReregistersFirst_Vy5Nml/credentials' I0224 23:22:32.382220 30616 master.cpp:442] Authorization enabled I0224 23:22:32.382783 30604 whitelist_watcher.cpp:79] No whitelist given I0224 23:22:32.382861 30607 hierarchical.hpp:287] Initialized hierarchical allocator process I0224 23:22:32.384114 30616 master.cpp:1356] The newly elected leader is master@67.195.81.187:38391 with id 20150224-232232-3142697795-38391-30589 I0224 23:22:32.384150 30616 master.cpp:1369] Elected as the leading master! 
I0224 23:22:32.384178 30616 master.cpp:1187] Recovering from registrar I0224 23:22:32.384330 30617 registrar.cpp:313] Recovering registrar I0224 23:22:32.384851 30606 log.cpp:660] Attempting to start the writer I0224 23:22:32.386044 30615 replica.cpp:477] Replica received implicit promise request with proposal 2 I0224 23:22:32.386862 30615 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 790008ns I0224 23:22:32.386888 30615 replica.cpp:345] Persisted promised to 2 I0224 23:22:32.387544 30612 coordinator.cpp:230] Coordinator attemping to fill missing position I0224 23:22:32.387799 30615 log.cpp:676] Writer started with ending position 4 I0224 23:22:32.388854 30606 leveldb.cpp:438] Reading position from leveldb took 51391ns I0224 23:22:32.388949 30606 leveldb.cpp:438] Reading position from leveldb took 40544ns I0224 23:22:32.390015 30611 registrar.cpp:346] Successfully fetched the registry (277B) in 5.604096ms I0224 23:22:32.390182 30611 registrar.cpp:445] Applied 1 operations in 46501ns; attempting to update the 'registry' I0224 23:22:32.392963 30608 log.cpp:684] Attempting to append 316 bytes to the log I0224 23:22:32.393081 30606 coordinator.cpp:340] Coordinator attempting to write 2 action at position 5 I0224 23:22:32.393875 30607 replica.cpp:511] Replica received write request for position 5 I0224 23:22:32.394582 30607 leveldb.cpp:343] Persisting action (335 bytes) to leveldb took 675133ns I0224 23:22:32.394609 30607 replica.cpp:679] Persisted action at 5 I0224 23:22:32.395190 30614 replica.cpp:658] Replica received learned notice for position 5 I0224 23:22:32.395917 30614 leveldb.cpp:343] Persisting action (337 bytes) to leveldb took 701360ns I0224 23:22:32.395944 30614 replica.cpp:679] Persisted action at 5 I0224 23:22:32.395966 30614 replica.cpp:664] Replica learned 2 action at position 5 I0224 23:22:32.396880 30614 registrar.cpp:490] Successfully updated the 'registry' in 6.644224ms I0224 23:22:32.397056 30614 registrar.cpp:376] Successfully recovered registrar I0224 23:22:32.397111 30604 log.cpp:703] Attempting to truncate the log to 5 I0224 23:22:32.397274 30616 coordinator.cpp:340] Coordinator attempting to write 3 action at position 6 I0224 23:22:32.397708 30615 master.cpp:1214] Recovered 1 slaves from the Registry (277B) ; allowing 10mins for slaves to re-register I0224 23:22:32.398107 30612 replica.cpp:511] Replica received write request for position 6 I0224 23:22:32.398763 30612 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 625992ns I0224 23:22:32.398789 30612 replica.cpp:679] Persisted action at 6 I0224 23:22:32.399373 30615 replica.cpp:658] Replica received learned notice for position 6 I0224 23:22:32.400152 30615 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 754us I0224 23:22:32.400228 30615 leveldb.cpp:401] Deleting ~2 keys from leveldb took 39811ns I0224 23:22:32.400250 30615 replica.cpp:679] Persisted action at 6 I0224 23:22:32.400274 30615 replica.cpp:664] Replica learned 3 action at position 6 I0224 23:22:32.410408 30604 sched.cpp:248] Scheduler::disconnected took 15348ns I0224 23:22:32.410430 30604 sched.cpp:254] New master detected at master@67.195.81.187:38391 I0224 23:22:32.410508 30604 sched.cpp:310] Authenticating with master master@67.195.81.187:38391 I0224 23:22:32.410531 30604 sched.cpp:317] Using default CRAM-MD5 authenticatee I0224 23:22:32.410781 30605 authenticatee.hpp:139] Creating new client SASL connection I0224 23:22:32.410964 30608 master.cpp:3813] Authenticating 
scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.410991 30608 master.cpp:3824] Using default CRAM-MD5 authenticator I0224 23:22:32.411188 30617 authenticator.hpp:170] Creating new server SASL connection I0224 23:22:32.411362 30612 authenticatee.hpp:230] Received SASL authentication mechanisms: CRAM-MD5 I0224 23:22:32.411391 30612 authenticatee.hpp:256] Attempting to authenticate with mechanism 'CRAM-MD5' I0224 23:22:32.411499 30618 authenticator.hpp:276] Received SASL authentication start I0224 23:22:32.411600 30618 authenticator.hpp:398] Authentication requires more steps I0224 23:22:32.411710 30614 authenticatee.hpp:276] Received SASL authentication step I0224 23:22:32.411818 30618 authenticator.hpp:304] Received SASL authentication step I0224 23:22:32.411849 30618 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0224 23:22:32.411861 30618 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0224 23:22:32.411897 30618 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0224 23:22:32.411922 30618 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0224 23:22:32.411934 30618 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0224 23:22:32.411942 30618 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0224 23:22:32.411958 30618 authenticator.hpp:390] Authentication success I0224 23:22:32.412032 30614 authenticatee.hpp:316] Authentication success I0224 23:22:32.412061 30612 master.cpp:3871] Successfully authenticated principal 'test-principal' at scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.412345 30605 sched.cpp:398] Successfully authenticated with master master@67.195.81.187:38391 I0224 23:22:32.412375 30605 sched.cpp:521] Sending registration request to master@67.195.81.187:38391 I0224 23:22:32.528939 30605 sched.cpp:554] Will retry registration in 718.250035ms if necessary I0224 23:22:32.529072 30605 sched.cpp:521] Sending registration request to master@67.195.81.187:38391 I0224 23:22:32.529136 30605 sched.cpp:554] Will retry registration in 2.186117614secs if necessary I0224 23:22:32.529161 30618 master.cpp:1711] Received re-registration request from framework 20150224-232231-3142697795-38391-30589-0000 (default) at scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.529274 30618 master.cpp:1435] Authorizing framework principal 'test-principal' to receive offers for role '*' I0224 23:22:32.529597 30618 master.cpp:1711] Received re-registration request from framework 20150224-232231-3142697795-38391-30589-0000 (default) at scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.529662 30618 master.cpp:1435] Authorizing framework principal 'test-principal' to receive offers for role '*' I0224 23:22:32.529856 30618 master.cpp:1764] Re-registering framework 20150224-232231-3142697795-38391-30589-0000 (default) at scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.530449 30614 hierarchical.hpp:321] Added framework 
20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.530490 30614 hierarchical.hpp:834] No resources available to allocate! I0224 23:22:32.530510 30614 hierarchical.hpp:741] Performed allocation for 0 slaves in 32479ns I0224 23:22:32.530743 30618 master.cpp:1764] Re-registering framework 20150224-232231-3142697795-38391-30589-0000 (default) at scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.530777 30611 sched.cpp:448] Framework registered with 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.530787 30618 master.cpp:1804] Allowing framework 20150224-232231-3142697795-38391-30589-0000 (default) at scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 to re-register with an already used id I0224 23:22:32.530824 30611 sched.cpp:462] Scheduler::registered took 20697ns I0224 23:22:32.530948 30605 sched.cpp:477] Ignoring framework re-registered message because the driver is already connected! I0224 23:22:32.533220 30606 status_update_manager.cpp:171] Pausing sending status updates I0224 23:22:32.533226 30605 slave.cpp:624] New master detected at master@67.195.81.187:38391 I0224 23:22:32.533362 30605 slave.cpp:687] Authenticating with master master@67.195.81.187:38391 I0224 23:22:32.533385 30605 slave.cpp:692] Using default CRAM-MD5 authenticatee I0224 23:22:32.533529 30605 slave.cpp:660] Detecting new master I0224 23:22:32.533576 30612 authenticatee.hpp:139] Creating new client SASL connection I0224 23:22:32.533759 30603 master.cpp:3813] Authenticating slave(66)@67.195.81.187:38391 I0224 23:22:32.533790 30603 master.cpp:3824] Using default CRAM-MD5 authenticator I0224 23:22:32.534018 30605 authenticator.hpp:170] Creating new server SASL connection I0224 23:22:32.534195 30609 authenticatee.hpp:230] Received SASL authentication mechanisms: CRAM-MD5 I0224 23:22:32.534227 30609 authenticatee.hpp:256] Attempting to authenticate with mechanism 'CRAM-MD5' I0224 23:22:32.534339 30609 authenticator.hpp:276] Received SASL authentication start I0224 23:22:32.534394 30609 authenticator.hpp:398] Authentication requires more steps I0224 23:22:32.534494 30609 authenticatee.hpp:276] Received SASL authentication step I0224 23:22:32.534600 30609 authenticator.hpp:304] Received SASL authentication step I0224 23:22:32.534629 30609 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0224 23:22:32.534642 30609 auxprop.cpp:171] Looking up auxiliary property '*userPassword' I0224 23:22:32.534692 30609 auxprop.cpp:171] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0224 23:22:32.534725 30609 auxprop.cpp:99] Request to lookup properties for user: 'test-principal' realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0224 23:22:32.534740 30609 auxprop.cpp:121] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0224 23:22:32.534749 30609 auxprop.cpp:121] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0224 23:22:32.534765 30609 authenticator.hpp:390] Authentication success I0224 23:22:32.534891 30604 authenticatee.hpp:316] Authentication success I0224 23:22:32.534906 30609 master.cpp:3871] Successfully authenticated principal 'test-principal' at slave(66)@67.195.81.187:38391 I0224 23:22:32.535146 30608 
slave.cpp:758] Successfully authenticated with master master@67.195.81.187:38391 I0224 23:22:32.535567 30608 slave.cpp:1090] Will retry registration in 10.96399ms if necessary I0224 23:22:32.535843 30610 master.cpp:3120] Re-registering slave 20150224-232231-3142697795-38391-30589-S0 at slave(66)@67.195.81.187:38391 (pomona.apache.org) I0224 23:22:32.536542 30606 registrar.cpp:445] Applied 1 operations in 71981ns; attempting to update the 'registry' I0224 23:22:32.539535 30612 log.cpp:684] Attempting to append 316 bytes to the log I0224 23:22:32.539682 30617 coordinator.cpp:340] Coordinator attempting to write 2 action at position 7 I0224 23:22:32.540549 30608 replica.cpp:511] Replica received write request for position 7 I0224 23:22:32.540832 30608 leveldb.cpp:343] Persisting action (335 bytes) to leveldb took 249089ns I0224 23:22:32.540858 30608 replica.cpp:679] Persisted action at 7 I0224 23:22:32.541555 30604 replica.cpp:658] Replica received learned notice for position 7 I0224 23:22:32.542254 30604 leveldb.cpp:343] Persisting action (337 bytes) to leveldb took 671501ns I0224 23:22:32.542294 30604 replica.cpp:679] Persisted action at 7 I0224 23:22:32.542320 30604 replica.cpp:664] Replica learned 2 action at position 7 I0224 23:22:32.543231 30603 registrar.cpp:490] Successfully updated the 'registry' in 6.617088ms I0224 23:22:32.543637 30613 log.cpp:703] Attempting to truncate the log to 7 I0224 23:22:32.543784 30611 coordinator.cpp:340] Coordinator attempting to write 3 action at position 8 I0224 23:22:32.544088 30610 master.hpp:822] Adding task 0 with resources cpus(*):1; mem(*):500 on slave 20150224-232231-3142697795-38391-30589-S0 (pomona.apache.org) I0224 23:22:32.544503 30612 slave.cpp:2831] Received ping from slave-observer(67)@67.195.81.187:38391 I0224 23:22:32.544530 30618 replica.cpp:511] Replica received write request for position 8 I0224 23:22:32.544800 30610 master.cpp:3191] Re-registered slave 20150224-232231-3142697795-38391-30589-S0 at slave(66)@67.195.81.187:38391 (pomona.apache.org) with cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] I0224 23:22:32.544862 30607 slave.cpp:860] Re-registered with master master@67.195.81.187:38391 I0224 23:22:32.544961 30606 status_update_manager.cpp:178] Resuming sending status updates I0224 23:22:32.544975 30610 master.cpp:3219] Sending updated checkpointed resources to slave 20150224-232231-3142697795-38391-30589-S0 at slave(66)@67.195.81.187:38391 (pomona.apache.org) I0224 23:22:32.545047 30612 hierarchical.hpp:455] Added slave 20150224-232231-3142697795-38391-30589-S0 (pomona.apache.org) with cpus(*):2; mem(*):1024; disk(*):3.70122e+06; ports(*):[31000-32000] (and cpus(*):1; mem(*):524; disk(*):3.70122e+06; ports(*):[31000-32000] available) I0224 23:22:32.545090 30603 slave.cpp:1922] Updating framework 20150224-232231-3142697795-38391-30589-0000 pid to scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.545128 30618 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 567893ns I0224 23:22:32.545155 30618 replica.cpp:679] Persisted action at 8 I0224 23:22:32.545240 30613 status_update_manager.cpp:178] Resuming sending status updates I0224 23:22:32.545424 30603 slave.cpp:2010] Updated checkpointed resources from to I0224 23:22:32.545569 30612 hierarchical.hpp:759] Performed allocation for slave 20150224-232231-3142697795-38391-30589-S0 in 472208ns I0224 23:22:32.545915 30610 replica.cpp:658] Replica received learned notice for position 8 I0224 23:22:32.545930 30607 
master.cpp:3755] Sending 1 offers to framework 20150224-232231-3142697795-38391-30589-0000 (default) at scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.688380 30615 hierarchical.hpp:741] Performed allocation for 1 slaves in 418073ns I0224 23:22:32.691712 30607 master.cpp:3755] Sending 1 offers to framework 20150224-232231-3142697795-38391-30589-0000 (default) at scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.691795 30605 sched.cpp:611] Scheduler::resourceOffers took 95977ns I0224 23:22:32.691961 30610 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 649479ns I0224 23:22:32.692033 30610 leveldb.cpp:401] Deleting ~2 keys from leveldb took 43354ns I0224 23:22:32.692061 30610 replica.cpp:679] Persisted action at 8 I0224 23:22:32.692093 30610 replica.cpp:664] Replica learned 3 action at position 8 I0224 23:22:32.692191 30589 sched.cpp:1589] Asked to stop the driver ../../src/tests/master_allocator_tests.cpp:1308: Failure Mock function called more times than expected - returning directly. Function call: resourceOffers(0x7fff88bc3d10, @0x2b8726c54b30 { 128-byte object }) Expected: to be called once Actual: called twice - over-saturated and active I0224 23:22:32.692252 30605 sched.cpp:611] Scheduler::resourceOffers took 213312ns I0224 23:22:32.692296 30610 master.cpp:787] Master terminating I0224 23:22:32.692387 30605 sched.cpp:831] Stopping framework '20150224-232231-3142697795-38391-30589-0000' W0224 23:22:32.692448 30610 master.cpp:4668] Removing task 0 with resources cpus(*):1; mem(*):500 of framework 20150224-232231-3142697795-38391-30589-0000 on slave 20150224-232231-3142697795-38391-30589-S0 at slave(66)@67.195.81.187:38391 (pomona.apache.org) in non-terminal state TASK_RUNNING I0224 23:22:32.692739 30613 hierarchical.hpp:648] Recovered cpus(*):1; mem(*):500 (total allocatable: cpus(*):1; mem(*):500) on slave 20150224-232231-3142697795-38391-30589-S0 from framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.692939 30610 master.cpp:4711] Removing executor 'default' with resources of framework 20150224-232231-3142697795-38391-30589-0000 on slave 20150224-232231-3142697795-38391-30589-S0 at slave(66)@67.195.81.187:38391 (pomona.apache.org) I0224 23:22:32.693953 30610 slave.cpp:2916] master@67.195.81.187:38391 exited W0224 23:22:32.693982 30610 slave.cpp:2919] Master disconnected! 
Waiting for a new master to be elected I0224 23:22:32.695185 30589 process.cpp:2117] Dropped / Lost event for PID: master@67.195.81.187:38391 I0224 23:22:32.695327 30589 process.cpp:2117] Dropped / Lost event for PID: master@67.195.81.187:38391 I0224 23:22:32.697404 30613 slave.cpp:3191] Executor 'default' of framework 20150224-232231-3142697795-38391-30589-0000 exited with status 0 I0224 23:22:32.699574 30613 slave.cpp:2508] Handling status update TASK_LOST (UUID: d33cdd1e-5586-4ae5-94b0-870a1443844b) for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 from @0.0.0.0:0 I0224 23:22:32.699658 30613 slave.cpp:4486] Terminating task 0 I0224 23:22:32.699964 30613 process.cpp:2117] Dropped / Lost event for PID: master@67.195.81.187:38391 I0224 23:22:32.700003 30613 slave.cpp:506] Slave terminating I0224 23:22:32.700073 30613 slave.cpp:1745] Asked to shut down framework 20150224-232231-3142697795-38391-30589-0000 by @0.0.0.0:0 I0224 23:22:32.700098 30613 slave.cpp:1770] Shutting down framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.700160 30613 slave.cpp:3300] Cleaning up executor 'default' of framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.700355 30610 gc.cpp:56] Scheduling '/tmp/MasterAllocatorTest_0_FrameworkReregistersFirst_ikVXQM/slaves/20150224-232231-3142697795-38391-30589-S0/frameworks/20150224-232231-3142697795-38391-30589-0000/executors/default/runs/7c203619-b40f-4d0a-9b75-9ddeecb63472' for gc 6.99999189524148days in the future I0224 23:22:32.700449 30613 slave.cpp:3379] Cleaning up framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.700522 30610 gc.cpp:56] Scheduling '/tmp/MasterAllocatorTest_0_FrameworkReregistersFirst_ikVXQM/slaves/20150224-232231-3142697795-38391-30589-S0/frameworks/20150224-232231-3142697795-38391-30589-0000/executors/default' for gc 6.99999189365926days in the future I0224 23:22:32.700580 30608 status_update_manager.cpp:279] Closing status update streams for framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.700630 30610 gc.cpp:56] Scheduling '/tmp/MasterAllocatorTest_0_FrameworkReregistersFirst_ikVXQM/slaves/20150224-232231-3142697795-38391-30589-S0/frameworks/20150224-232231-3142697795-38391-30589-0000' for gc 6.99999189199111days in the future I0224 23:22:32.700666 30608 status_update_manager.cpp:525] Cleaning up status update stream for task 0 of framework 20150224-232231-3142697795-38391-30589-0000 I0224 23:22:32.702638 30589 process.cpp:2117] Dropped / Lost event for PID: scheduler-9a3224cc-aef0-49a7-a240-4b85b913ff44@67.195.81.187:38391 I0224 23:22:32.702955 30589 process.cpp:2117] Dropped / Lost event for PID: slave(66)@67.195.81.187:38391 [ FAILED ] MasterAllocatorTest/0.FrameworkReregistersFirst, where TypeParam = mesos::internal::master::allocator::MesosAllocator > (1027 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2408","02/26/2015 00:53:46",5,"Slave should reclaim storage for destroyed persistent volumes. ""At present, destroying a persistent volume does not cleanup any filesystem space that was used by the volume (it just removes the Mesos-level metadata about the volume). This effectively leads to a storage leak, which is bad. 
For task sandboxes, we do """"garbage collection"""" to remove the sandbox at a later time to facilitate debugging failed tasks; for volumes, because they are explicitly deleted and are not tied to the lifecycle of a task, removing the associated storage immediately seems best. To implement this safely, we'll either need to ensure that libprocess messages are delivered in-order, or else add some extra safe-guards to ensure that out-of-order {{CheckpointResources}} messages don't lead to accidental data loss.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2427","03/02/2015 22:51:28",2,"Add Java binding for the acceptOffers API. ""We introduced the new acceptOffers API in C++ driver. We need to provide Java binding for this API as well.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2428","03/02/2015 22:52:42",2,"Add Python bindings for the acceptOffers API. ""We introduced the new acceptOffers API in C++ driver. We need to provide Python binding for this API as well.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2438","03/03/2015 22:25:53",8,"Improve support for streaming HTTP Responses in libprocess. ""Currently libprocess' HTTP::Response supports a PIPE construct for doing streaming responses: This interface is too low level and difficult to program against: * Connection closure is signaled with SIGPIPE, which is difficult for callers to deal with (must suppress SIGPIPE locally or globally in order to get EPIPE instead). * Pipes are generally for inter-process communication, and the pipe has finite size. With a blocking pipe the caller must deal with blocking when the pipe's buffer limit is exceeded. With a non-blocking pipe, the caller must deal with retrying the write. We'll want to consider a few use cases: # Sending an HTTP::Response with streaming data. # Making a request with http::get and http::post in which the data is returned in a streaming manner. # Making a request in which the request content is streaming. This ticket will focus on 1 as it is required for the HTTP API."""," struct Response { ... // Either provide a """"body"""", an absolute """"path"""" to a file, or a // """"pipe"""" for streaming a response. Distinguish between the cases // using 'type' below. // // BODY: Uses 'body' as the body of the response. These may be // encoded using gzip for efficiency, if 'Content-Encoding' is not // already specified. // // PATH: Attempts to perform a 'sendfile' operation on the file // found at 'path'. // // PIPE: Splices data from 'pipe' using 'Transfer-Encoding=chunked'. // Note that the read end of the pipe will be closed by libprocess // either after the write end has been closed or if the socket the // data is being spliced to has been closed (i.e., nobody is // listening any longer). This can cause writes to the pipe to // generate a SIGPIPE (which will terminate your program unless you // explicitly ignore them or handle them). // // In all cases (BODY, PATH, PIPE), you are expected to properly // specify the 'Content-Type' header, but the 'Content-Length' and // or 'Transfer-Encoding' headers will be filled in for you. enum { NONE, BODY, PATH, PIPE } type; ... 
}; ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2461","03/06/2015 21:27:40",1,"Slave should provide details on processes running in its cgroups ""The slave can optionally be put into its own cgroups for a list of subsystems, e.g., for monitoring of memory and cpu. See the slave flag: --slave_subsystems It currently refuses to start if there are any processes in its cgroups - this could be another slave or some subprocess started by a previous slave - and only logs the pids of those processes. Improve this to log details about the processes: suggest at least the process command, uid running it, and perhaps its start time.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2462","03/06/2015 22:25:45",3,"Add option for Subprocess to set a death signal for the forked child ""Currently, children forked by the slave, including those through Subprocess, will continue running if the slave exits. For some processes, including helper processes like the fetcher, du, or perf, we'd like them to be terminated when the slave exits. Add support to Subprocess to optionally set a DEATHSIG for the child, e.g., setting SIGTERM would mean the child would get SIGTERM when the slave terminates. This can be done (*after forking*) with PR_SET_DEATHSIG. See """"man prctl"""". It is preserved through an exec call.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2485","03/12/2015 19:44:00",3,"Add ability to distinguish slave removals metrics by reason. ""Currently we only expose a single removal metric ({{""""master/slave_removals""""}}) which makes it difficult to distinguish between removal reasons in the alerting. Currently, a slave can be removed for the following reasons: # Health checks failed. # Slave unregistered. # Slave was replaced by a new slave (on the same endpoint). In the case of (2), we expect this to be due to maintenance and don't want to be notified as strongly as with health check failures.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2491","03/13/2015 08:17:28",5,"Persist the reservation state on the slave ""h3. Goal The goal for this task is to persist the reservation state stored on the master on the corresponding slave. The {{needCheckpointing}} predicate is used to capture the condition for which a resource needs to be checkpointed. Currently the only condition is {{isPersistentVolume}}. We'll update this to include dynamically reserved resources. h3. Expected Outcome * The dynamically reserved resources will be persisted on the slave.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2500","03/16/2015 15:45:33",2,"Doxygen setup for libprocess ""Goals: - Initial doxygen setup. - Enable interested developers to generate already available doxygen content locally in their workspace and view it. - Form the basis for future contributions of more doxygen content. 1. Devise a way to use Doxygen with Mesos source code. (For example, solve this by adding optional brew/apt-get installation to the """"Getting Started"""" doc.) 2. Create a make target for libprocess documentation that can be manually triggered. 3. Create initial library top level documentation. 4. Enhance one header file with Doxygen. Make sure the generated output has all necessary links to navigate from the lib to the file and back, etc. 
""","",0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2501","03/16/2015 15:49:57",1,"Doxygen style for libprocess ""Create a description of the Doxygen style to use for libprocess documentation. It is expected that this will later also become the Doxygen style for stout and Mesos, but we are working on libprocess only for now. Possible outcome: a file named docs/doxygen-style.md We hope for much input and expect a lot of discussion! ""","",0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2507","03/17/2015 01:13:50",5,"Performance issue in the master when a large number of slaves are registering. ""For large clusters, when a lot of slaves are registering, the master gets backlogged processing registration requests. {{perf}} revealed the following: This is likely because we loop over all the slaves for each registration: """," Events: 14K cycles 25.44% libmesos-0.22.0-x.so [.] mesos::internal::master::Master::registerSlave(process::UPID const&, mesos::SlaveInfo const&, std::vector > cons 11.18% libmesos-0.22.0-x.so [.] pipecb 5.88% libc-2.5.so [.] malloc_consolidate 5.33% libc-2.5.so [.] _int_free 5.25% libc-2.5.so [.] malloc 5.23% libc-2.5.so [.] _int_malloc 4.11% libstdc++.so.6.0.8 [.] std::string::assign(std::string const&) 3.22% libmesos-0.22.0-x.so [.] mesos::Resource::SharedDtor() 3.10% [kernel] [k] _raw_spin_lock 1.97% libmesos-0.22.0-x.so [.] mesos::Attribute::SharedDtor() 1.28% libc-2.5.so [.] memcmp 1.08% libc-2.5.so [.] free void Master::registerSlave( const UPID& from, const SlaveInfo& slaveInfo, const vector& checkpointedResources, const string& version) { // ... // Check if this slave is already registered (because it retries). foreachvalue (Slave* slave, slaves.registered) { if (slave->pid == from) { // ... } } // ... } ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2534","03/23/2015 23:28:52",2,"PerfTest.ROOT_SampleInit test fails. ""From MESOS-2300 as well, it looks like this test is not reliable: It looks like this test samples PID 1, which is either {{init}} or {{systemd}}. Per a chat with [~idownes] this should probably sample something that is guaranteed to be consuming cycles."""," [ RUN ] PerfTest.ROOT_SampleInit ../../src/tests/perf_tests.cpp:147: Failure Expected: (0u) < (statistics.get().cycles()), actual: 0 vs 0 ../../src/tests/perf_tests.cpp:150: Failure Expected: (0.0) < (statistics.get().task_clock()), ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2545","03/25/2015 12:33:06",2,"Developer guide for libprocess ""Create a developer guide for libprocess that explains the philosophy behind it and explains the most important features as well as the prevalent use patterns in Mesos with examples. This could be similar to stout/README.md. ""","",0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2559","03/26/2015 19:21:30",1,"Do not use RunTaskMessage.framework_id. ""Assume that FrameworkInfo.id is always set and so need to read/set RunTaskMessage.framework_id. 
This should land after https://issues.apache.org/jira/browse/MESOS-2558 has been shipped.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2591","04/03/2015 23:35:31",2,"Refactor launchHelper and statisticsHelper in port_mapping_tests to allow reuse ""Refactor launchHelper and statisticsHelper in port_mapping_tests to allow reuse""","",0,1,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2595","04/07/2015 01:34:31",8,"Create docker executor ""Currently we're reusing the command executor to wait on the progress of the docker executor, but has the following drawback: - We need to launch a seperate docker log process just to forward logs, where we can just simply reattach stdout/stderr if we create a specific executor for docker - In general, Mesos slave is assuming that the executor is the one starting the actual task. But the current docker containerizer, the containerizer is actually starting the docker container first then launches the command executor to wait on it. This can cause problems if the container failed before the command executor was able to launch, as slave will try to update the limits of the containerizer on executor registration but then the docker containerizer will fail to do so since the container failed. Overall it's much simpler to tie the container lifecycle with the executor and simplfies logic and log management.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2596","04/07/2015 14:53:43",2,"Update allocator docs ""Once Allocator interface changes, so does the way of writing new allocators. This should be reflected in Mesos docs. The modules doc should mention how to write and use allocator modules. Configuration doc should mention the new {{--allocator}} flag.""","",0,0,0,1,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2598","04/07/2015 17:30:26",3,"Slave state.json frameworks.executors.queued_tasks wrong format? ""queued_tasks.executor_id is expected to be a string and not a complete json object. It should have the very same format as the tasks array on the same level. Example, directly taken from slave """," .... """"queued_tasks"""": [ { """"data"""": """""""", """"executor_id"""": { """"command"""": { """"argv"""": [], """"uris"""": [ { """"executable"""": false, """"value"""": """"http://downloads.foo.io/orchestra/storm-mesos/0.9.2-incubating-47-ovh.bb373df1c/storm-mesos-0.9.2-incubating.tgz"""" } ], """"value"""": """"cd storm-mesos* && python bin/storm supervisor storm.mesos.MesosSupervisor"""" }, """"data"""": """"{\""""assignmentid\"""":\""""srv4.hw.ca1.foo.com\"""",\""""supervisorid\"""":\""""srv4.hw.ca1.foo.com-stage-ingestion-stats-slave-111-1428421145\""""}"""", """"executor_id"""": """"stage-ingestion-stats-slave-111-1428421145"""", """"framework_id"""": """"20150401-160104-251662508-5050-2197-0002"""", """"name"""": """""""", """"resources"""": { """"cpus"""": 0.5, """"disk"""": 0, """"mem"""": 1000 } }, """"id"""": """"srv4.hw.ca1.foo.com-31708"""", """"name"""": """"worker srv4.hw.ca1.foo.com:31708"""", """"resources"""": { """"cpus"""": 1, """"disk"""": 0, """"mem"""": 5120, """"ports"""": """"[31708-31708]"""" }, """"slave_id"""": """"20150327-025553-218108076-5050-4122-S0"""" }, ... 
] ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2600","04/08/2015 19:50:56",5,"Add /reserve and /unreserve endpoints on the master for dynamic reservation ""Enable operators to manage dynamic reservations by Introducing the {{/reserve}} and {{/unreserve}} HTTP endpoints on the master.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2613","04/13/2015 19:34:26",2,"Change docker rm command ""Right now it seems Mesos is using „docker rm –f ID“ to delete containers so bind mounts are not deleted. This means thousands of dirs in /var/lib/docker/vfs/dir I would like to have the option to change it to „docker rm –f –v ID“ This deletes bind mounts but not persistant volumes. Best, Mike""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2615","04/14/2015 03:22:39",1,"Pipe 'updateFramework' path from master to Allocator to support framework re-registration ""Pipe the 'updateFramework' call from the master through the allocator, as described in the design doc in the epic: MESOS-703""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2622","04/16/2015 00:36:50",1,"Document the semantic change in decorator return values ""In order to enable decorator modules to _remove_ metadata (environment variables or labels), we changed the meaning of the return value for decorator hooks. The Result return values means: ||State||Before||After|| |Error|Error is propagated to the call-site|No change| |None|The result of the decorator is not applied|No change| |Some|The result of the decorator is *appended*|The result of the decorator *overwrites* the final labels/environment object|""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2633","04/16/2015 23:04:40",1,"Move implementations of Framework struct functions out of master.hpp. ""To help reduce compile time and keep the header easy to read, let's move the implementations of the Framework struct functions out of master.hpp""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2665","04/27/2015 17:32:07",5,"Fix queuing discipline wrapper in linux/routing/queueing ""qdisc search function is dependent on matching a single hard coded handle and does not correctly test for interface, making the implementation fragile. Additionally, the current setup scripts (using dynamically created shell commands) do not match the hard coded handles. ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2680","04/30/2015 09:43:24",1,"Update modules doc with hook usage example ""Modules doc states the possibility of using hooks, but doesn't refer to necessary flags and usage example.""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2696","05/05/2015 23:51:01",5,"Explore exposing stats from kernel ""Exploratory work. Additional tickets to follow.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2700","05/06/2015 18:20:33",13,"Determine CFS behavior with biased cpu.shares subtrees ""See this [ticket|https://issues.apache.org/jira/browse/MESOS-2652] for context. * Understand the relationship between cpu.shares and CFS quota. 
* Determine range of possible bias splits * Determine how to achieve bias, e.g., should 20:1 be 20480:1024 or ~1024:50 * Rigorous testing of behavior with varying loads, particularly the combination of latency sensitive loads for high biased tasks (non-revokable), and cpu intensive loads for the low biased tasks (revokable). * Discover any performance edge cases?""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2701","05/06/2015 18:24:37",8,"Implement bi-level cpu.shares subtrees in cgroups/cpu isolator. ""See this [ticket|https://issues.apache.org/jira/browse/MESOS-2652] for context. # Configurable bias # Change cgroup layout ** Implement roll-forward migration path in isolator recover ** Document roll-back migration path""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2702","05/06/2015 18:33:53",3,"Compare split/flattened cgroup hierarchy for CPU oversubscription ""Investigate if a flat hierarchy is sufficient for oversubscription of CPU or if a two-way split is necessary/preferred.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2709","05/08/2015 19:51:49",3,"Design Master discovery functionality for HTTP-only clients ""When building clients that do not bind to {{libmesos}} and only use the HTTP API (via """"pure"""" language bindings - eg, Java-only) there is no simple way to discover the Master's IP address to connect to. Rather than relying on 'out-of-band' configuration mechanisms, we would like to enable the ability of interrogating the ZooKeeper ensemble to discover the Master's IP address (and, possibly, other information) to which the HTTP API requests can be addressed to.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2726","05/13/2015 23:21:53",13,"Add support for enabling network namespace without enabling the network isolator ""Following the discussion Kapil started, it is currently not possible to enable the linux network namespace for a container without enabling the network isolator (which requires certain kernel capabilities and dependencies). Following the pattern of enabling pid namespaces (--isolation=""""namespaces/pid""""). One possible solution could be to add another one for network i.e. """"namespaces/network"""". ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2737","05/14/2015 22:44:29",3,"Add documentation for maintainers. ""In order to scale the number of committers in the project, we proposed the concept of maintainers here: http://markmail.org/thread/cjmdn3d7qfzbxhpm To follow up on that proposal, we'll need some documentation to capture the concept of maintainers. Both how contributors can benefit from maintainer feedback and the expectations of """"maintainer-ship"""". 
In order to not enforce an excessive amount of process, maintainers will initially only serve as an encouraged means to help contributors find reviewers and get meaningful feedback.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2743","05/15/2015 23:13:33",3,"Include ExecutorInfos for custom executors in master/state.json ""The slave/state.json already reports executorInfos: https://github.com/apache/mesos/blob/0.22.1/src/slave/http.cpp#L215-219 Would be great to see this in the master/state.json as well, so external tools don't have to query each slave to find out executor resources, sandbox directories, etc.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2750","05/19/2015 19:47:03",3,"Extend queueing discipline wrappers to expose network isolator statistics ""Export Traffic Control statistics in queueing library to enable reporting out impact of network bandwidth statistics.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2753","05/19/2015 23:48:13",3,"Master should validate tasks using oversubscribed resources ""Current implementation out for [review|https://reviews.apache.org/r/34310] only supports setting the priority of containers with revocable CPU if it's specified in the initial executor info resources. This should be enforced at the master. Also master should make sure that oversubscribed resources used by the task are valid.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2757","05/21/2015 01:31:52",3,"Add -> operator for Option, Try, Result, Future. ""Let's add operator overloads to Option to allow access to the underlying T using the `->` operator. ""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2783","05/29/2015 17:12:16",5,"document the fetcher ""For framework developers specifically, Mesos provides a fetcher to move binaries. This needs MVP documentation. - What is it - How does it help - What protocols or schemas are supported - Can it be extended This is important to get framework developers over the hump of learning to code against Mesos and grow the ecosystem.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2784","05/30/2015 00:05:30",1,"Added constexpr to C++11 whitelist. ""constexpr is currently used to eliminate initialization dependency issues for non-POD objects. We should add it to the whitelist of acceptable c++11 features in the style guide.""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2793","06/01/2015 22:18:59",1,"Add support for container rootfs to Mesos isolators ""Mesos containers can have a different rootfs to the host. Update Isolator interface to pass rootfs during Isolator::prepare(). Update Isolators where necessary.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2794","06/01/2015 22:23:33",13,"Implement filesystem isolators ""Move persistent volume support from Mesos containerizer to separate filesystem isolators, including support for container rootfs, where possible. Use symlinks for posix systems without container rootfs. 
Use bind mounts for Linux with/without container rootfs.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2795","06/01/2015 22:25:55",5,"Introduce filesystem provisioner abstraction ""Optional filesystem provisioner component for the Mesos containerizer that can provision per-container filesystems. This is different to a filesystem isolators because it just provisions a root filesystem for a container and doesn't actually do any isolation (e.g., through a mount namespace + pivot or chroot).""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2796","06/01/2015 22:28:39",5,"Implement AppC image provisioner. ""Implement a filesystem provisioner that can provision container images compliant with the Application Container Image (aci) [specification|https://github.com/appc/spec].""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2800","06/02/2015 07:11:08",3,"Rename Option::get(const T& _t) to getOrElse() and refactor the original function ""As suggested, if we want to change the name then we should refactor the original function as opposed to having 2 copies. If we did have 2 versions of the same function, would it make more sense to delegate one of them to the other. As of today, there is only one file need to be refactor: 3rdparty/libprocess/3rdparty/stout/include/stout/os/osx.hpp at line 151, 161""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2801","06/02/2015 09:02:47",3,"Remove dynamic allocation from Future ""Remove the dynamic allocation of `T*` inside `Future::Data`""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2804","06/03/2015 01:43:31",1,"Log framework capabilities in the master. ""Now that {{Capabilities}} has been added to FrameworkInfo, we should log these in the master when a framework (re-)registers (i.e. which capabilities are enabled and disabled). This would make debugging easier for framework developers. Ideally, folding in the old {{checkpoint}} capability and logging that as well. In the past, the fact that {{checkpoint}} defaults to false has tripped up a lot of developers.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2805","06/03/2015 18:10:57",8,"Make synchronized as primary form of synchronization. ""Re-organize Synchronized to allow {{synchronized(m)}} to work on: 1. {{std::mutex}} 2. {{std::recursive_mutex}} 3. {{std::atomic_flag}} Move synchronized.hpp into stout, so that developers don't think it's part of the utility suite for actors in libprocess. Remove references to internal.hpp and replace them with {{std::atomic_flag}} synchronization.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2807","06/03/2015 20:09:50",3,"As a developer I need an easy way to convert MasterInfo protobuf to/from JSON ""As a preliminary to MESOS-2340, this requires the implementation of a simple (de)serialization mechanism to JSON from/to {{MasterInfo}} protobuf.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2815","06/04/2015 15:11:55",2,"Flaky test: FetcherCacheHttpTest.HttpCachedSerialized ""FetcherCacheHttpTest.HttpCachedSerialized has been observed to fail (once so far), but normally works fine. 
Here is the failure output: [ RUN ] FetcherCacheHttpTest.HttpCachedSerialized GMOCK WARNING: Uninteresting mock function call - returning directly. Function call: resourceOffers(0x3cca8e0, @0x2b1053422b20 { 128-byte object }) Stack trace: F0604 13:08:16.377907 6813 fetcher_cache_tests.cpp:354] CHECK_READY(offers): is PENDING Failed to wait for resource offers *** Check failure stack trace: *** @ 0x2b10488ff6c0 google::LogMessage::Fail() @ 0x2b10488ff60c google::LogMessage::SendToLog() @ 0x2b10488ff00e google::LogMessage::Flush() @ 0x2b1048901f22 google::LogMessageFatal::~LogMessageFatal() @ 0x9721e4 _CheckFatal::~_CheckFatal() @ 0xb4da86 mesos::internal::tests::FetcherCacheTest::launchTask() @ 0xb53f8d mesos::internal::tests::FetcherCacheHttpTest_HttpCachedSerialized_Test::TestBody() @ 0x116ac21 testing::internal::HandleSehExceptionsInMethodIfSupported<>() @ 0x1165e1e testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x114e1df testing::Test::Run() @ 0x114e902 testing::TestInfo::Run() @ 0x114ee8a testing::TestCase::Run() @ 0x1153b54 testing::internal::UnitTestImpl::RunAllTests() @ 0x116ba93 testing::internal::HandleSehExceptionsInMethodIfSupported<>() @ 0x1166b0f testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x1152a60 testing::UnitTest::Run() @ 0xcbc50f main @ 0x2b104af78ec5 (unknown) @ 0x867559 (unknown) make[4]: *** [check-local] Aborted make[4]: Leaving directory `/home/jenkins/jenkins-slave/workspace/mesos-reviewbot/mesos-0.23.0/_build/src' make[3]: *** [check-am] Error 2 make[3]: Leaving directory `/home/jenkins/jenkins-slave/workspace/mesos-reviewbot/mesos-0.23.0/_build/src' make[2]: *** [check] Error 2 make[2]: Leaving directory `/home/jenkins/jenkins-slave/workspace/mesos-reviewbot/mesos-0.23.0/_build/src' make[1]: *** [check-recursive] Error 1 make[1]: Leaving directory `/home/jenkins/jenkins-slave/workspace/mesos-reviewbot/mesos-0.23.0/_build' make: *** [distcheck] Error 1 ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2817","06/04/2015 20:16:00",3,"Support revocable/non-revocable CPU updates in Mesos containerizer ""MESOS-2652 provided preliminary support for revocable cpu resources only when specified in the initial resources for a container. Improve this to support updates to/from revocable cpu. Note, *any* revocable cpu will result in the entire container's cpu being treated as revocable at the cpu isolator level. 
Higher level logic is responsible for adding/removing based on some policy.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2829","06/08/2015 18:13:31",0,"FetcherCacheTest.CachedFallback test is flaky ""Observed this on internal CI """," DEBUG: [ RUN ] FetcherCacheTest.CachedFallback DEBUG: Using temporary directory '/tmp/FetcherCacheTest_CachedFallback_dmr8bH' DEBUG: I0608 08:32:39.091538 56580 leveldb.cpp:176] Opened db in 5.078136ms DEBUG: I0608 08:32:39.093271 56580 leveldb.cpp:183] Compacted db in 1.696938ms DEBUG: I0608 08:32:39.093312 56580 leveldb.cpp:198] Created db iterator in 5708ns DEBUG: I0608 08:32:39.093325 56580 leveldb.cpp:204] Seeked to beginning of db in 940ns DEBUG: I0608 08:32:39.093333 56580 leveldb.cpp:273] Iterated through 0 keys in the db in 270ns DEBUG: I0608 08:32:39.093353 56580 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned DEBUG: I0608 08:32:39.093597 56613 recover.cpp:449] Starting replica recovery DEBUG: I0608 08:32:39.093706 56597 recover.cpp:475] Replica is in EMPTY status DEBUG: I0608 08:32:39.094511 56607 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request DEBUG: I0608 08:32:39.094952 56606 recover.cpp:195] Received a recover response from a replica in EMPTY status DEBUG: I0608 08:32:39.095067 56606 recover.cpp:566] Updating replica status to STARTING DEBUG: I0608 08:32:39.095487 56598 master.cpp:363] Master 20150608-083239-1787367596-39187-56580 (<***redacted***>) started on <***redacted***> DEBUG: I0608 08:32:39.095510 56598 master.cpp:365] Flags at startup: --acls="""""""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate=""""true"""" --authenticate_slaves=""""true"""" --authenticators=""""crammd5"""" --credentials=""""/tmp/FetcherCacheTest_CachedFallback_dmr8bH/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --quiet=""""false"""" --recovery_slave_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""25secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --slave_reregister_timeout=""""10mins"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/FetcherCacheTest_CachedFallback_dmr8bH/master"""" --zk_session_timeout=""""10secs"""" DEBUG: I0608 08:32:39.095630 56598 master.cpp:410] Master only allowing authenticated frameworks to register DEBUG: I0608 08:32:39.095636 56598 master.cpp:415] Master only allowing authenticated slaves to register DEBUG: I0608 08:32:39.095643 56598 credentials.hpp:37] Loading credentials for authentication from '/tmp/FetcherCacheTest_CachedFallback_dmr8bH/credentials' DEBUG: I0608 08:32:39.095743 56598 master.cpp:454] Using default 'crammd5' authenticator DEBUG: I0608 08:32:39.095782 56598 master.cpp:491] Authorization enabled DEBUG: I0608 08:32:39.096765 56606 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 1.654474ms DEBUG: I0608 08:32:39.096788 56606 replica.cpp:323] Persisted replica status to STARTING DEBUG: I0608 08:32:39.096925 56612 master.cpp:1474] The newly elected leader is master@<***redacted***> with id 20150608-083239-1787367596-39187-56580 DEBUG: I0608 08:32:39.096940 56612 master.cpp:1487] Elected as the leading master! 
DEBUG: I0608 08:32:39.096946 56612 master.cpp:1257] Recovering from registrar DEBUG: I0608 08:32:39.097033 56608 registrar.cpp:313] Recovering registrar DEBUG: I0608 08:32:39.097095 56616 recover.cpp:475] Replica is in STARTING status DEBUG: I0608 08:32:39.097718 56607 replica.cpp:641] Replica in STARTING status received a broadcasted recover request DEBUG: I0608 08:32:39.097949 56602 recover.cpp:195] Received a recover response from a replica in STARTING status DEBUG: I0608 08:32:39.098423 56609 recover.cpp:566] Updating replica status to VOTING DEBUG: I0608 08:32:39.099462 56609 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 962629ns DEBUG: I0608 08:32:39.099488 56609 replica.cpp:323] Persisted replica status to VOTING DEBUG: I0608 08:32:39.099527 56609 recover.cpp:580] Successfully joined the Paxos group DEBUG: I0608 08:32:39.099577 56609 recover.cpp:464] Recover process terminated DEBUG: I0608 08:32:39.099874 56601 log.cpp:661] Attempting to start the writer DEBUG: I0608 08:32:39.101229 56611 replica.cpp:477] Replica received implicit promise request with proposal 1 DEBUG: I0608 08:32:39.102149 56611 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 894488ns DEBUG: I0608 08:32:39.102175 56611 replica.cpp:345] Persisted promised to 1 DEBUG: I0608 08:32:39.102978 56612 coordinator.cpp:230] Coordinator attemping to fill missing position DEBUG: I0608 08:32:39.104015 56612 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 DEBUG: I0608 08:32:39.105124 56612 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 1.08594ms DEBUG: I0608 08:32:39.105156 56612 replica.cpp:679] Persisted action at 0 DEBUG: I0608 08:32:39.106197 56596 replica.cpp:511] Replica received write request for position 0 DEBUG: I0608 08:32:39.106230 56596 leveldb.cpp:438] Reading position from leveldb took 16312ns DEBUG: I0608 08:32:39.107139 56596 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 891559ns DEBUG: I0608 08:32:39.107167 56596 replica.cpp:679] Persisted action at 0 DEBUG: I0608 08:32:39.107656 56601 replica.cpp:658] Replica received learned notice for position 0 DEBUG: I0608 08:32:39.109146 56601 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 1.459405ms DEBUG: I0608 08:32:39.109180 56601 replica.cpp:679] Persisted action at 0 DEBUG: I0608 08:32:39.109194 56601 replica.cpp:664] Replica learned NOP action at position 0 DEBUG: I0608 08:32:39.109606 56604 log.cpp:677] Writer started with ending position 0 DEBUG: I0608 08:32:39.109839 56613 leveldb.cpp:438] Reading position from leveldb took 17622ns DEBUG: I0608 08:32:39.111635 56600 registrar.cpp:346] Successfully fetched the registry (0B) in 14.553088ms DEBUG: I0608 08:32:39.111667 56600 registrar.cpp:445] Applied 1 operations in 2865ns; attempting to update the 'registry' DEBUG: I0608 08:32:39.112975 56601 log.cpp:685] Attempting to append 157 bytes to the log DEBUG: I0608 08:32:39.113034 56608 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 DEBUG: I0608 08:32:39.113862 56595 replica.cpp:511] Replica received write request for position 1 DEBUG: I0608 08:32:39.114780 56595 leveldb.cpp:343] Persisting action (176 bytes) to leveldb took 885199ns DEBUG: I0608 08:32:39.114806 56595 replica.cpp:679] Persisted action at 1 DEBUG: I0608 08:32:39.115378 56610 replica.cpp:658] Replica received learned notice for position 1 DEBUG: I0608 08:32:39.116317 56610 leveldb.cpp:343] Persisting action (178 bytes) to leveldb took 915397ns DEBUG: 
I0608 08:32:39.116343 56610 replica.cpp:679] Persisted action at 1 DEBUG: I0608 08:32:39.116356 56610 replica.cpp:664] Replica learned APPEND action at position 1 DEBUG: I0608 08:32:39.117009 56615 registrar.cpp:490] Successfully updated the 'registry' in 4.941824ms DEBUG: I0608 08:32:39.117075 56615 registrar.cpp:376] Successfully recovered registrar DEBUG: I0608 08:32:39.117147 56615 master.cpp:1284] Recovered 0 slaves from the Registry (119B) ; allowing 10mins for slaves to re-register DEBUG: I0608 08:32:39.117447 56614 log.cpp:704] Attempting to truncate the log to 1 DEBUG: I0608 08:32:39.117504 56613 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 DEBUG: I0608 08:32:39.118638 56608 replica.cpp:511] Replica received write request for position 2 DEBUG: I0608 08:32:39.119679 56608 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 1.017833ms DEBUG: I0608 08:32:39.119707 56608 replica.cpp:679] Persisted action at 2 DEBUG: I0608 08:32:39.120102 56608 replica.cpp:658] Replica received learned notice for position 2 DEBUG: I0608 08:32:39.121323 56608 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 1.201996ms DEBUG: I0608 08:32:39.121378 56608 leveldb.cpp:401] Deleting ~1 keys from leveldb took 22912ns DEBUG: I0608 08:32:39.121395 56608 replica.cpp:679] Persisted action at 2 DEBUG: I0608 08:32:39.121404 56608 replica.cpp:664] Replica learned TRUNCATE action at position 2 DEBUG: I0608 08:32:39.129438 56580 containerizer.cpp:111] Using isolation: posix/cpu,posix/mem DEBUG: I0608 08:32:39.132251 56615 slave.cpp:188] Slave started on 40)@<***redacted***> DEBUG: I0608 08:32:39.132272 56615 slave.cpp:189] Flags at startup: --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/FetcherCacheTest_CachedFallback_P03rUd/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_remove_delay=""""6hrs"""" --docker_sandbox_directory=""""/mnt/mesos/sandbox"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --egress_unique_flow_per_container=""""false"""" --enforce_container_disk_quota=""""false"""" --ephemeral_ports_per_container=""""1024"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/FetcherCacheTest_CachedFallback_P03rUd/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/builddir/build/BUILD/mesos-0.23.0/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --network_enable_socket_statistics_details=""""false"""" --network_enable_socket_statistics_summary=""""false"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resource_monitoring_interval=""""1secs"""" --resources=""""cpus(*):1000; mem(*):1000"""" --revocable_cpu_low_priority=""""true"""" --strict=""""true"""" 
--switch_user=""""true"""" --version=""""false"""" --work_dir=""""/tmp/FetcherCacheTest_CachedFallback_P03rUd"""" DEBUG: I0608 08:32:39.132558 56615 credentials.hpp:85] Loading credential for authentication from '/tmp/FetcherCacheTest_CachedFallback_P03rUd/credential' DEBUG: I0608 08:32:39.132689 56615 slave.cpp:319] Slave using credential for: test-principal DEBUG: I0608 08:32:39.132869 56615 slave.cpp:352] Slave resources: cpus(*):1000; mem(*):1000; disk(*):464332; ports(*):[31000-32000] DEBUG: I0608 08:32:39.133673 56615 slave.cpp:382] Slave hostname: <***redacted***> DEBUG: I0608 08:32:39.133689 56615 slave.cpp:387] Slave checkpoint: true DEBUG: I0608 08:32:39.133949 56596 state.cpp:35] Recovering state from '/tmp/FetcherCacheTest_CachedFallback_P03rUd/meta' DEBUG: I0608 08:32:39.134446 56595 status_update_manager.cpp:201] Recovering status update manager DEBUG: I0608 08:32:39.134539 56609 containerizer.cpp:312] Recovering containerizer DEBUG: I0608 08:32:39.135972 56616 slave.cpp:3950] Finished recovery DEBUG: I0608 08:32:39.136243 56616 slave.cpp:4104] Querying resource estimator for oversubscribable resources DEBUG: I0608 08:32:39.136576 56599 status_update_manager.cpp:175] Pausing sending status updates DEBUG: I0608 08:32:39.136595 56602 slave.cpp:678] New master detected at master@<***redacted***> DEBUG: I0608 08:32:39.136620 56602 slave.cpp:741] Authenticating with master master@<***redacted***> DEBUG: I0608 08:32:39.136628 56602 slave.cpp:746] Using default CRAM-MD5 authenticatee DEBUG: I0608 08:32:39.136670 56602 slave.cpp:714] Detecting new master DEBUG: I0608 08:32:39.136698 56602 slave.cpp:4125] Received oversubscribable resources from the resource estimator DEBUG: I0608 08:32:39.136703 56602 slave.cpp:4129] No master detected. Re-querying resource estimator after 15secs DEBUG: I0608 08:32:39.136778 56618 authenticatee.hpp:139] Creating new client SASL connection DEBUG: I0608 08:32:39.137074 56618 master.cpp:4167] Authenticating slave(40)@<***redacted***> DEBUG: I0608 08:32:39.137186 56611 authenticator.cpp:92] Creating new server SASL connection DEBUG: I0608 08:32:39.137485 56605 authenticatee.hpp:230] Received SASL authentication mechanisms: CRAM-MD5 DEBUG: I0608 08:32:39.137503 56605 authenticatee.hpp:256] Attempting to authenticate with mechanism 'CRAM-MD5' DEBUG: I0608 08:32:39.137540 56605 authenticator.cpp:197] Received SASL authentication start DEBUG: I0608 08:32:39.137615 56605 authenticator.cpp:319] Authentication requires more steps DEBUG: I0608 08:32:39.137645 56605 authenticatee.hpp:276] Received SASL authentication step DEBUG: I0608 08:32:39.137817 56609 authenticator.cpp:225] Received SASL authentication step DEBUG: I0608 08:32:39.137866 56609 authenticator.cpp:311] Authentication success DEBUG: I0608 08:32:39.137989 56609 master.cpp:4197] Successfully authenticated principal 'test-principal' at slave(40)@<***redacted***> DEBUG: I0608 08:32:39.138123 56601 authenticatee.hpp:316] Authentication success DEBUG: I0608 08:32:39.138363 56601 slave.cpp:812] Successfully authenticated with master master@<***redacted***> DEBUG: I0608 08:32:39.138468 56596 master.cpp:3149] Registering slave at slave(40)@<***redacted***> (<***redacted***>) with id 20150608-083239-1787367596-39187-56580-S0 DEBUG: I0608 08:32:39.138628 56614 registrar.cpp:445] Applied 1 operations in 11844ns; attempting to update the 'registry' DEBUG: I0608 08:32:39.140049 56596 log.cpp:685] Attempting to append 351 bytes to the log DEBUG: I0608 08:32:39.140110 56600 coordinator.cpp:340] Coordinator 
attempting to write APPEND action at position 3 DEBUG: I0608 08:32:39.141154 56596 replica.cpp:511] Replica received write request for position 3 DEBUG: I0608 08:32:39.142328 56596 leveldb.cpp:343] Persisting action (370 bytes) to leveldb took 1.151559ms DEBUG: I0608 08:32:39.142359 56596 replica.cpp:679] Persisted action at 3 DEBUG: I0608 08:32:39.142838 56607 replica.cpp:658] Replica received learned notice for position 3 DEBUG: I0608 08:32:39.144026 56607 leveldb.cpp:343] Persisting action (372 bytes) to leveldb took 1.167494ms DEBUG: I0608 08:32:39.144060 56607 replica.cpp:679] Persisted action at 3 DEBUG: I0608 08:32:39.144073 56607 replica.cpp:664] Replica learned APPEND action at position 3 DEBUG: I0608 08:32:39.144342 56595 registrar.cpp:490] Successfully updated the 'registry' in 5696us DEBUG: I0608 08:32:39.144366 56607 log.cpp:704] Attempting to truncate the log to 3 DEBUG: I0608 08:32:39.144423 56614 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 DEBUG: I0608 08:32:39.144631 56607 master.cpp:3206] Registered slave 20150608-083239-1787367596-39187-56580-S0 at slave(40)@<***redacted***> (<***redacted***>) with cpus(*):1000; mem(*):1000; disk(*):464332; ports(*):[31000-32000] DEBUG: I0608 08:32:39.144726 56617 hierarchical.hpp:496] Added slave 20150608-083239-1787367596-39187-56580-S0 (<***redacted***>) with cpus(*):1000; mem(*):1000; disk(*):464332; ports(*):[31000-32000] (and cpus(*):1000; mem(*):1000; disk(*):464332; ports(*):[31000-32000] available) DEBUG: I0608 08:32:39.144745 56609 slave.cpp:846] Registered with master master@<***redacted***>; given slave ID 20150608-083239-1787367596-39187-56580-S0 DEBUG: I0608 08:32:39.144872 56612 status_update_manager.cpp:182] Resuming sending status updates DEBUG: I0608 08:32:39.145023 56602 replica.cpp:511] Replica received write request for position 4 DEBUG: GMOCK WARNING: DEBUG: Uninteresting mock function call - returning directly. 
DEBUG: Function call: resourceOffers(0x3754860, @0x7fad6e321b30 { 128-byte object <50-DB E0-80 AD-7F 00-00 00-00 00-00 00-00 00-00 D0-78 00-04 AD-7F 00-00 70-79 00-04 AD-7F 00-00 10-7A 00-04 AD-7F 00-00 B0-7A 00-04 AD-7F 00-00 A0-6B 00-04 AD-7F 00-00 04-00 00-00 04-00 00-00 04-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 0F-00 00-00> }) DEBUG: Stack trace: DEBUG: I0608 08:32:39.146750 56580 sched.cpp:157] Version: 0.23.0-rc0 DEBUG: I0608 08:32:39.147104 56613 sched.cpp:254] New master detected at master@<***redacted***> DEBUG: I0608 08:32:39.147121 56613 sched.cpp:310] Authenticating with master master@<***redacted***> DEBUG: I0608 08:32:39.147128 56613 sched.cpp:317] Using default CRAM-MD5 authenticatee DEBUG: I0608 08:32:39.147186 56613 authenticatee.hpp:139] Creating new client SASL connection DEBUG: I0608 08:32:39.147374 56595 master.cpp:4167] Authenticating scheduler-4b62df2f-7cdc-427c-8301-0e282d99bc06@<***redacted***> DEBUG: I0608 08:32:39.147596 56595 authenticator.cpp:92] Creating new server SASL connection DEBUG: I0608 08:32:39.147666 56595 authenticatee.hpp:230] Received SASL authentication mechanisms: CRAM-MD5 DEBUG: I0608 08:32:39.147678 56595 authenticatee.hpp:256] Attempting to authenticate with mechanism 'CRAM-MD5' DEBUG: I0608 08:32:39.147701 56595 authenticator.cpp:197] Received SASL authentication start DEBUG: I0608 08:32:39.147738 56595 authenticator.cpp:319] Authentication requires more steps DEBUG: I0608 08:32:39.147789 56615 authenticatee.hpp:276] Received SASL authentication step DEBUG: I0608 08:32:39.147853 56615 authenticator.cpp:225] Received SASL authentication step DEBUG: I0608 08:32:39.147881 56615 authenticator.cpp:311] Authentication success DEBUG: I0608 08:32:39.147923 56615 authenticatee.hpp:316] Authentication success DEBUG: I0608 08:32:39.148066 56617 master.cpp:4197] Successfully authenticated principal 'test-principal' at scheduler-4b62df2f-7cdc-427c-8301-0e282d99bc06@<***redacted***> DEBUG: I0608 08:32:39.148905 56598 sched.cpp:398] Successfully authenticated with master master@<***redacted***> DEBUG: I0608 08:32:39.149000 56608 master.cpp:1714] Received registration request for framework 'default' at scheduler-4b62df2f-7cdc-427c-8301-0e282d99bc06@<***redacted***> DEBUG: W0608 08:32:39.149026 56608 master.cpp:1537] Framework at scheduler-4b62df2f-7cdc-427c-8301-0e282d99bc06@<***redacted***> (authenticated as 'test-principal') does not specify principal in its FrameworkInfo DEBUG: I0608 08:32:39.149039 56608 master.cpp:1553] Authorizing framework principal '' to receive offers for role '*' DEBUG: I0608 08:32:39.149502 56618 master.cpp:1781] Registering framework 20150608-083239-1787367596-39187-56580-0000 (default) at scheduler-4b62df2f-7cdc-427c-8301-0e282d99bc06@<***redacted***> DEBUG: I0608 08:32:39.149587 56618 hierarchical.hpp:354] Added framework 20150608-083239-1787367596-39187-56580-0000 DEBUG: I0608 08:32:39.149668 56614 sched.cpp:448] Framework registered with 20150608-083239-1787367596-39187-56580-0000 DEBUG: I0608 08:32:39.149765 56615 master.cpp:4086] Sending 1 offers to framework 20150608-083239-1787367596-39187-56580-0000 (default) at scheduler-4b62df2f-7cdc-427c-8301-0e282d99bc06@<***redacted***> DEBUG: I0608 08:32:39.153530 56602 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 8.487561ms DEBUG: I0608 08:32:39.153561 56602 replica.cpp:679] Persisted action at 4 DEBUG: I0608 08:32:39.154438 
56603 replica.cpp:658] Replica received learned notice for position 4 DEBUG: I0608 08:32:39.155531 56603 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 1.074345ms DEBUG: I0608 08:32:39.155571 56603 leveldb.cpp:401] Deleting ~2 keys from leveldb took 16036ns DEBUG: I0608 08:32:39.155580 56603 replica.cpp:679] Persisted action at 4 DEBUG: I0608 08:32:39.155587 56603 replica.cpp:664] Replica learned TRUNCATE action at position 4 DEBUG: I0608 08:32:54.136912 56596 slave.cpp:4104] Querying resource estimator for oversubscribable resources DEBUG: I0608 08:32:54.137251 56611 slave.cpp:4125] Received oversubscribable resources from the resource estimator DEBUG: F0608 08:32:54.154590 56580 fetcher_cache_tests.cpp:354] CHECK_READY(offers): is PENDING Failed to wait for resource offers DEBUG: *** Check failure stack trace: *** DEBUG: @ 0x7fad806b9f1d google::LogMessage::Fail() DEBUG: @ 0x7fad806bbd5d google::LogMessage::SendToLog() DEBUG: @ 0x7fad806b9b0c google::LogMessage::Flush() DEBUG: @ 0x7fad806bc659 google::LogMessageFatal::~LogMessageFatal() DEBUG: @ 0x53d858 _CheckFatal::~_CheckFatal() DEBUG: @ 0x66c14f mesos::internal::tests::FetcherCacheTest::launchTask() DEBUG: @ 0x66f9e9 mesos::internal::tests::FetcherCacheTest_CachedFallback_Test::TestBody() DEBUG: @ 0xba34d3 testing::internal::HandleExceptionsInMethodIfSupported<>() DEBUG: @ 0xb9a777 testing::Test::Run() DEBUG: @ 0xb9a81e testing::TestInfo::Run() DEBUG: @ 0xb9a925 testing::TestCase::Run() DEBUG: @ 0xb9abc8 testing::internal::UnitTestImpl::RunAllTests() DEBUG: @ 0xb9ae67 testing::UnitTest::Run() DEBUG: @ 0x4a1e73 main DEBUG: @ 0x7fad7e4e2d5d __libc_start_main DEBUG: @ 0x4acf79 (unknown) DEBUG: make[3]: *** [check-local] Aborted (core dumped) ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2830","06/08/2015 20:10:11",8,"Add an endpoint to slaves to allow launching system administration tasks ""As a System Administrator often times I need to run a organization-mandated task on every machine in the cluster. Ideally I could do this within the framework of mesos resources if it is a """"cleanup"""" or auditing task, but sometimes I just have to run something, and run it now, regardless if a machine has un-accounted resources (Ex: Adding/removing a user). Currently to do this I have to completely bypass Mesos and SSH to the box. Ideally I could tell a mesos slave (With proper authentication) to run a container with the limited special permissions needed to get the task done.""","",0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2834","06/09/2015 05:32:10",3,"Support different perf output formats ""The output format of perf changes in 3.14 (inserting an additional field) and in again in 4.1 (appending additional) fields. See kernel commits: 410136f5dd96b6013fe6d1011b523b1c247e1ccb d73515c03c6a2706e088094ff6095a3abefd398b Update the perf::parse() function to understand all these formats.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2848","06/10/2015 21:47:19",2,"Local filesystem docker image discovery ""Given a docker image name and the local directory where images can be found, creates a URI with a path to the corresponding image. 
Done when system successfully checks for the image, untars the image if necessary, and returns the proper URI to the image.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2849","06/10/2015 21:47:34",5,"Implement Docker local image store ""Given a local Docker image name and path to the image or image tarball, fetches the image's dependent layers, untarring if necessary. It will also parse the image layers' configuration json and place the layers and image into persistent store. Done when a Docker image can be successfully stored and retrieved using 'put' and 'get' methods. ""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2850","06/10/2015 21:48:06",3,"Implement Docker image provisioner ""Provisions a Docker image (provisions all its dependent layers), fetch an image from persistent store, and also destroy an image. Done when tested for local discovery and copy backend. ""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2857","06/11/2015 04:18:06",1,"FetcherCacheTest.LocalCachedExtract is flaky. ""From jenkins: [~bernd-mesos] not sure if there's a ticket capturing this already, sorry if this is a duplicate."""," [ RUN ] FetcherCacheTest.LocalCachedExtract Using temporary directory '/tmp/FetcherCacheTest_LocalCachedExtract_Cwdcdj' I0610 20:04:48.591573 24561 leveldb.cpp:176] Opened db in 3.512525ms I0610 20:04:48.592456 24561 leveldb.cpp:183] Compacted db in 828630ns I0610 20:04:48.592512 24561 leveldb.cpp:198] Created db iterator in 32992ns I0610 20:04:48.592531 24561 leveldb.cpp:204] Seeked to beginning of db in 8967ns I0610 20:04:48.592545 24561 leveldb.cpp:273] Iterated through 0 keys in the db in 7762ns I0610 20:04:48.592604 24561 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0610 20:04:48.593438 24587 recover.cpp:449] Starting replica recovery I0610 20:04:48.593698 24587 recover.cpp:475] Replica is in EMPTY status I0610 20:04:48.595641 24580 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request I0610 20:04:48.596086 24590 recover.cpp:195] Received a recover response from a replica in EMPTY status I0610 20:04:48.596607 24590 recover.cpp:566] Updating replica status to STARTING I0610 20:04:48.597507 24590 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 717888ns I0610 20:04:48.597535 24590 replica.cpp:323] Persisted replica status to STARTING I0610 20:04:48.597697 24590 recover.cpp:475] Replica is in STARTING status I0610 20:04:48.599165 24584 replica.cpp:641] Replica in STARTING status received a broadcasted recover request I0610 20:04:48.599434 24584 recover.cpp:195] Received a recover response from a replica in STARTING status I0610 20:04:48.599915 24590 recover.cpp:566] Updating replica status to VOTING I0610 20:04:48.600545 24590 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 432335ns I0610 20:04:48.600574 24590 replica.cpp:323] Persisted replica status to VOTING I0610 20:04:48.600659 24590 recover.cpp:580] Successfully joined the Paxos group I0610 20:04:48.600797 24590 recover.cpp:464] Recover process terminated I0610 20:04:48.602905 24594 master.cpp:363] Master 20150610-200448-3875541420-32907-24561 (dbade881e927) started on 172.17.0.231:32907 I0610 20:04:48.602957 24594 master.cpp:365] Flags at startup: --acls="""""""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" 
--authenticate=""""true"""" --authenticate_slaves=""""true"""" --authenticators=""""crammd5"""" --credentials=""""/tmp/FetcherCacheTest_LocalCachedExtract_Cwdcdj/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --quiet=""""false"""" --recovery_slave_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""25secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --slave_reregister_timeout=""""10mins"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/mesos/mesos-0.23.0/_inst/share/mesos/webui"""" --work_dir=""""/tmp/FetcherCacheTest_LocalCachedExtract_Cwdcdj/master"""" --zk_session_timeout=""""10secs"""" I0610 20:04:48.603374 24594 master.cpp:410] Master only allowing authenticated frameworks to register I0610 20:04:48.603392 24594 master.cpp:415] Master only allowing authenticated slaves to register I0610 20:04:48.603404 24594 credentials.hpp:37] Loading credentials for authentication from '/tmp/FetcherCacheTest_LocalCachedExtract_Cwdcdj/credentials' I0610 20:04:48.603751 24594 master.cpp:454] Using default 'crammd5' authenticator I0610 20:04:48.604928 24594 master.cpp:491] Authorization enabled I0610 20:04:48.606034 24593 hierarchical.hpp:309] Initialized hierarchical allocator process I0610 20:04:48.606106 24593 whitelist_watcher.cpp:79] No whitelist given I0610 20:04:48.607430 24594 master.cpp:1476] The newly elected leader is master@172.17.0.231:32907 with id 20150610-200448-3875541420-32907-24561 I0610 20:04:48.607466 24594 master.cpp:1489] Elected as the leading master! I0610 20:04:48.607481 24594 master.cpp:1259] Recovering from registrar I0610 20:04:48.607712 24594 registrar.cpp:313] Recovering registrar I0610 20:04:48.608543 24588 log.cpp:661] Attempting to start the writer I0610 20:04:48.610231 24588 replica.cpp:477] Replica received implicit promise request with proposal 1 I0610 20:04:48.611335 24588 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 1.086439ms I0610 20:04:48.611382 24588 replica.cpp:345] Persisted promised to 1 I0610 20:04:48.612303 24588 coordinator.cpp:230] Coordinator attemping to fill missing position I0610 20:04:48.613883 24593 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 I0610 20:04:48.619205 24593 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 5.228235ms I0610 20:04:48.619257 24593 replica.cpp:679] Persisted action at 0 I0610 20:04:48.621919 24593 replica.cpp:511] Replica received write request for position 0 I0610 20:04:48.621987 24593 leveldb.cpp:438] Reading position from leveldb took 49394ns I0610 20:04:48.622689 24593 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 668412ns I0610 20:04:48.622716 24593 replica.cpp:679] Persisted action at 0 I0610 20:04:48.623507 24584 replica.cpp:658] Replica received learned notice for position 0 I0610 20:04:48.624155 24584 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 612283ns I0610 20:04:48.624186 24584 replica.cpp:679] Persisted action at 0 I0610 20:04:48.624215 24584 replica.cpp:664] Replica learned NOP action at position 0 I0610 20:04:48.625144 24593 log.cpp:677] Writer started with ending position 0 I0610 20:04:48.626724 24589 leveldb.cpp:438] Reading position from leveldb took 72013ns I0610 20:04:48.629276 24591 registrar.cpp:346] Successfully fetched the 
registry (0B) in 21.520128ms I0610 20:04:48.629663 24591 registrar.cpp:445] Applied 1 operations in 129587ns; attempting to update the 'registry' I0610 20:04:48.632237 24579 log.cpp:685] Attempting to append 131 bytes to the log I0610 20:04:48.632624 24579 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0610 20:04:48.633739 24579 replica.cpp:511] Replica received write request for position 1 I0610 20:04:48.634351 24579 leveldb.cpp:343] Persisting action (150 bytes) to leveldb took 583937ns I0610 20:04:48.634382 24579 replica.cpp:679] Persisted action at 1 I0610 20:04:48.635073 24583 replica.cpp:658] Replica received learned notice for position 1 I0610 20:04:48.635442 24583 leveldb.cpp:343] Persisting action (152 bytes) to leveldb took 357122ns I0610 20:04:48.635469 24583 replica.cpp:679] Persisted action at 1 I0610 20:04:48.635494 24583 replica.cpp:664] Replica learned APPEND action at position 1 I0610 20:04:48.636337 24583 registrar.cpp:490] Successfully updated the 'registry' in 6.534144ms I0610 20:04:48.636725 24594 log.cpp:704] Attempting to truncate the log to 1 I0610 20:04:48.636858 24583 registrar.cpp:376] Successfully recovered registrar I0610 20:04:48.637073 24594 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0610 20:04:48.637789 24594 master.cpp:1286] Recovered 0 slaves from the Registry (95B) ; allowing 10mins for slaves to re-register I0610 20:04:48.638630 24583 replica.cpp:511] Replica received write request for position 2 I0610 20:04:48.639127 24583 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 396272ns I0610 20:04:48.639153 24583 replica.cpp:679] Persisted action at 2 I0610 20:04:48.639804 24583 replica.cpp:658] Replica received learned notice for position 2 I0610 20:04:48.640965 24583 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 1.147322ms I0610 20:04:48.641054 24583 leveldb.cpp:401] Deleting ~1 keys from leveldb took 72395ns I0610 20:04:48.641197 24583 replica.cpp:679] Persisted action at 2 I0610 20:04:48.641345 24583 replica.cpp:664] Replica learned TRUNCATE action at position 2 I0610 20:04:48.652274 24561 containerizer.cpp:111] Using isolation: posix/cpu,posix/mem I0610 20:04:48.658994 24590 slave.cpp:188] Slave started on 42)@172.17.0.231:32907 I0610 20:04:48.659049 24590 slave.cpp:189] Flags at startup: --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/FetcherCacheTest_LocalCachedExtract_LCHuuM/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_remove_delay=""""6hrs"""" --docker_sandbox_directory=""""/mnt/mesos/sandbox"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/FetcherCacheTest_LocalCachedExtract_LCHuuM/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" 
--launcher_dir=""""/mesos/mesos-0.23.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resource_monitoring_interval=""""1secs"""" --resources=""""cpus(*):1000; mem(*):1000"""" --revocable_cpu_low_priority=""""true"""" --strict=""""true"""" --switch_user=""""true"""" --version=""""false"""" --work_dir=""""/tmp/FetcherCacheTest_LocalCachedExtract_LCHuuM"""" I0610 20:04:48.659570 24590 credentials.hpp:85] Loading credential for authentication from '/tmp/FetcherCacheTest_LocalCachedExtract_LCHuuM/credential' I0610 20:04:48.659803 24590 slave.cpp:319] Slave using credential for: test-principal I0610 20:04:48.660441 24590 slave.cpp:352] Slave resources: cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] I0610 20:04:48.660555 24590 slave.cpp:382] Slave hostname: dbade881e927 I0610 20:04:48.660578 24590 slave.cpp:387] Slave checkpoint: true I0610 20:04:48.661550 24588 state.cpp:35] Recovering state from '/tmp/FetcherCacheTest_LocalCachedExtract_LCHuuM/meta' I0610 20:04:48.661913 24590 status_update_manager.cpp:201] Recovering status update manager I0610 20:04:48.662253 24590 containerizer.cpp:312] Recovering containerizer I0610 20:04:48.663207 24581 slave.cpp:3950] Finished recovery I0610 20:04:48.663761 24581 slave.cpp:4104] Querying resource estimator for oversubscribable resources I0610 20:04:48.664077 24581 slave.cpp:678] New master detected at master@172.17.0.231:32907 I0610 20:04:48.664088 24586 status_update_manager.cpp:175] Pausing sending status updates I0610 20:04:48.664245 24581 slave.cpp:741] Authenticating with master master@172.17.0.231:32907 I0610 20:04:48.664388 24581 slave.cpp:746] Using default CRAM-MD5 authenticatee I0610 20:04:48.664611 24581 slave.cpp:714] Detecting new master I0610 20:04:48.664647 24594 authenticatee.hpp:139] Creating new client SASL connection I0610 20:04:48.664813 24581 slave.cpp:4125] Received oversubscribable resources from the resource estimator I0610 20:04:48.665060 24581 slave.cpp:4129] No master detected. 
Re-querying resource estimator after 15secs I0610 20:04:48.665096 24594 master.cpp:4181] Authenticating slave(42)@172.17.0.231:32907 I0610 20:04:48.665247 24581 authenticator.cpp:406] Starting authentication session for crammd5_authenticatee(130)@172.17.0.231:32907 I0610 20:04:48.665657 24581 authenticator.cpp:92] Creating new server SASL connection I0610 20:04:48.666013 24581 authenticatee.hpp:230] Received SASL authentication mechanisms: CRAM-MD5 I0610 20:04:48.666159 24581 authenticatee.hpp:256] Attempting to authenticate with mechanism 'CRAM-MD5' I0610 20:04:48.666443 24592 authenticator.cpp:197] Received SASL authentication start I0610 20:04:48.666591 24592 authenticator.cpp:319] Authentication requires more steps I0610 20:04:48.666779 24592 authenticatee.hpp:276] Received SASL authentication step I0610 20:04:48.667007 24585 authenticator.cpp:225] Received SASL authentication step I0610 20:04:48.667043 24585 auxprop.cpp:101] Request to lookup properties for user: 'test-principal' realm: 'dbade881e927' server FQDN: 'dbade881e927' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0610 20:04:48.667058 24585 auxprop.cpp:173] Looking up auxiliary property '*userPassword' I0610 20:04:48.667110 24585 auxprop.cpp:173] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0610 20:04:48.667142 24585 auxprop.cpp:101] Request to lookup properties for user: 'test-principal' realm: 'dbade881e927' server FQDN: 'dbade881e927' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0610 20:04:48.667155 24585 auxprop.cpp:123] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0610 20:04:48.667163 24585 auxprop.cpp:123] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0610 20:04:48.667181 24585 authenticator.cpp:311] Authentication success I0610 20:04:48.667331 24585 authenticatee.hpp:316] Authentication success I0610 20:04:48.667414 24585 master.cpp:4211] Successfully authenticated principal 'test-principal' at slave(42)@172.17.0.231:32907 I0610 20:04:48.667505 24585 authenticator.cpp:424] Authentication session cleanup for crammd5_authenticatee(130)@172.17.0.231:32907 I0610 20:04:48.667809 24585 slave.cpp:812] Successfully authenticated with master master@172.17.0.231:32907 I0610 20:04:48.667982 24585 slave.cpp:1146] Will retry registration in 7.257154ms if necessary I0610 20:04:48.668226 24585 master.cpp:3157] Registering slave at slave(42)@172.17.0.231:32907 (dbade881e927) with id 20150610-200448-3875541420-32907-24561-S0 I0610 20:04:48.668737 24585 registrar.cpp:445] Applied 1 operations in 90255ns; attempting to update the 'registry' I0610 20:04:48.672297 24585 log.cpp:685] Attempting to append 305 bytes to the log I0610 20:04:48.672541 24585 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3 I0610 20:04:48.673528 24593 replica.cpp:511] Replica received write request for position 3 I0610 20:04:48.674321 24593 leveldb.cpp:343] Persisting action (324 bytes) to leveldb took 766804ns I0610 20:04:48.674355 24593 replica.cpp:679] Persisted action at 3 I0610 20:04:48.675138 24587 replica.cpp:658] Replica received learned notice for position 3 I0610 20:04:48.675866 24587 leveldb.cpp:343] Persisting action (326 bytes) to leveldb took 714643ns I0610 20:04:48.675897 24587 replica.cpp:679] Persisted action at 3 I0610 20:04:48.675922 24587 replica.cpp:664] Replica learned APPEND action at position 3 I0610 20:04:48.677471 
24587 registrar.cpp:490] Successfully updated the 'registry' in 8.656128ms I0610 20:04:48.677759 24587 log.cpp:704] Attempting to truncate the log to 3 I0610 20:04:48.678423 24593 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4 I0610 20:04:48.678621 24587 master.cpp:3214] Registered slave 20150610-200448-3875541420-32907-24561-S0 at slave(42)@172.17.0.231:32907 (dbade881e927) with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] I0610 20:04:48.678959 24593 hierarchical.hpp:496] Added slave 20150610-200448-3875541420-32907-24561-S0 (dbade881e927) with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] (and cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] available) I0610 20:04:48.679157 24593 hierarchical.hpp:933] No resources available to allocate! I0610 20:04:48.679183 24593 hierarchical.hpp:852] Performed allocation for slave 20150610-200448-3875541420-32907-24561-S0 in 175519ns I0610 20:04:48.679805 24593 replica.cpp:511] Replica received write request for position 4 I0610 20:04:48.684160 24587 slave.cpp:846] Registered with master master@172.17.0.231:32907; given slave ID 20150610-200448-3875541420-32907-24561-S0 I0610 20:04:48.684229 24587 fetcher.cpp:77] Clearing fetcher cache I0610 20:04:48.684666 24587 slave.cpp:869] Checkpointing SlaveInfo to '/tmp/FetcherCacheTest_LocalCachedExtract_LCHuuM/meta/slaves/20150610-200448-3875541420-32907-24561-S0/slave.info' I0610 20:04:48.687366 24587 slave.cpp:2895] Received ping from slave-observer(42)@172.17.0.231:32907 I0610 20:04:48.687453 24584 status_update_manager.cpp:182] Resuming sending status updates I0610 20:04:48.690901 24593 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 3.385583ms I0610 20:04:48.690975 24593 replica.cpp:679] Persisted action at 4 I0610 20:04:48.692137 24593 replica.cpp:658] Replica received learned notice for position 4 I0610 20:04:48.692603 24593 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 449838ns I0610 20:04:48.692674 24593 leveldb.cpp:401] Deleting ~2 keys from leveldb took 52471ns I0610 20:04:48.692699 24593 replica.cpp:679] Persisted action at 4 I0610 20:04:48.692726 24593 replica.cpp:664] Replica learned TRUNCATE action at position 4 I0610 20:04:48.693544 24561 sched.cpp:157] Version: 0.23.0 I0610 20:04:48.695550 24590 sched.cpp:254] New master detected at master@172.17.0.231:32907 I0610 20:04:48.697090 24590 sched.cpp:310] Authenticating with master master@172.17.0.231:32907 I0610 20:04:48.697136 24590 sched.cpp:317] Using default CRAM-MD5 authenticatee I0610 20:04:48.697511 24586 authenticatee.hpp:139] Creating new client SASL connection I0610 20:04:48.697937 24586 master.cpp:4181] Authenticating scheduler-51f5c1b5-bb50-4118-bde8-4dcdfd69205d@172.17.0.231:32907 I0610 20:04:48.698185 24584 authenticator.cpp:406] Starting authentication session for crammd5_authenticatee(131)@172.17.0.231:32907 I0610 20:04:48.698575 24584 authenticator.cpp:92] Creating new server SASL connection I0610 20:04:48.698807 24584 authenticatee.hpp:230] Received SASL authentication mechanisms: CRAM-MD5 I0610 20:04:48.699898 24584 authenticatee.hpp:256] Attempting to authenticate with mechanism 'CRAM-MD5' I0610 20:04:48.700040 24584 authenticator.cpp:197] Received SASL authentication start I0610 20:04:48.700119 24584 authenticator.cpp:319] Authentication requires more steps I0610 20:04:48.700193 24584 authenticatee.hpp:276] Received SASL authentication step I0610 20:04:48.700287 24584 authenticator.cpp:225] 
Received SASL authentication step I0610 20:04:48.700320 24584 auxprop.cpp:101] Request to lookup properties for user: 'test-principal' realm: 'dbade881e927' server FQDN: 'dbade881e927' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0610 20:04:48.700333 24584 auxprop.cpp:173] Looking up auxiliary property '*userPassword' I0610 20:04:48.700392 24584 auxprop.cpp:173] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0610 20:04:48.700425 24584 auxprop.cpp:101] Request to lookup properties for user: 'test-principal' realm: 'dbade881e927' server FQDN: 'dbade881e927' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0610 20:04:48.700439 24584 auxprop.cpp:123] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0610 20:04:48.700448 24584 auxprop.cpp:123] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0610 20:04:48.700467 24584 authenticator.cpp:311] Authentication success I0610 20:04:48.700640 24584 authenticatee.hpp:316] Authentication success I0610 20:04:48.700742 24584 authenticator.cpp:424] Authentication session cleanup for crammd5_authenticatee(131)@172.17.0.231:32907 I0610 20:04:48.701282 24590 sched.cpp:398] Successfully authenticated with master master@172.17.0.231:32907 I0610 20:04:48.701315 24590 sched.cpp:521] Sending registration request to master@172.17.0.231:32907 I0610 20:04:48.701386 24590 sched.cpp:554] Will retry registration in 1.128089605secs if necessary I0610 20:04:48.701676 24586 master.cpp:4211] Successfully authenticated principal 'test-principal' at scheduler-51f5c1b5-bb50-4118-bde8-4dcdfd69205d@172.17.0.231:32907 I0610 20:04:48.702863 24586 master.cpp:1716] Received registration request for framework 'default' at scheduler-51f5c1b5-bb50-4118-bde8-4dcdfd69205d@172.17.0.231:32907 W0610 20:04:48.702924 24586 master.cpp:1539] Framework at scheduler-51f5c1b5-bb50-4118-bde8-4dcdfd69205d@172.17.0.231:32907 (authenticated as 'test-principal') does not specify principal in its FrameworkInfo I0610 20:04:48.702957 24586 master.cpp:1555] Authorizing framework principal '' to receive offers for role '*' I0610 20:04:48.703580 24586 master.cpp:1783] Registering framework 20150610-200448-3875541420-32907-24561-0000 (default) at scheduler-51f5c1b5-bb50-4118-bde8-4dcdfd69205d@172.17.0.231:32907 with checkpointing enabled and capabilities [ ] I0610 20:04:48.705044 24590 hierarchical.hpp:354] Added framework 20150610-200448-3875541420-32907-24561-0000 I0610 20:04:48.705657 24590 hierarchical.hpp:834] Performed allocation for 1 slaves in 583520ns I0610 20:04:48.707613 24586 master.cpp:4100] Sending 1 offers to framework 20150610-200448-3875541420-32907-24561-0000 (default) at scheduler-51f5c1b5-bb50-4118-bde8-4dcdfd69205d@172.17.0.231:32907 I0610 20:04:48.709035 24590 sched.cpp:448] Framework registered with 20150610-200448-3875541420-32907-24561-0000 I0610 20:04:48.709113 24590 sched.cpp:462] Scheduler::registered took 33214ns GMOCK WARNING: Uninteresting mock function call - returning directly. 
Function call: resourceOffers(0x3bd7fb0, @0x7fe5bb7af898 { 128-byte object <90-8D 30-CD E5-7F 00-00 00-00 00-00 00-00 00-00 30-E8 00-80 E5-7F 00-00 D0-E8 00-80 E5-7F 00-00 70-E9 00-80 E5-7F 00-00 10-EA 00-80 E5-7F 00-00 F0-58 00-80 E5-7F 00-00 04-00 00-00 04-00 00-00 04-00 00-00 65-45 76-65 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 E5-7F 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 0F-00 00-00> }) Stack trace: I0610 20:04:48.709631 24590 sched.cpp:611] Scheduler::resourceOffers took 189034ns I0610 20:04:49.607378 24589 hierarchical.hpp:933] No resources available to allocate! I0610 20:04:49.607435 24589 hierarchical.hpp:834] Performed allocation for 1 slaves in 474600ns I0610 20:04:50.608489 24582 hierarchical.hpp:933] No resources available to allocate! I0610 20:04:50.608551 24582 hierarchical.hpp:834] Performed allocation for 1 slaves in 517133ns I0610 20:04:51.609849 24589 hierarchical.hpp:933] No resources available to allocate! I0610 20:04:51.609908 24589 hierarchical.hpp:834] Performed allocation for 1 slaves in 440523ns I0610 20:04:52.611188 24584 hierarchical.hpp:933] No resources available to allocate! I0610 20:04:52.611250 24584 hierarchical.hpp:834] Performed allocation for 1 slaves in 471882ns I0610 20:04:53.612911 24581 hierarchical.hpp:933] No resources available to allocate! I0610 20:04:53.612962 24581 hierarchical.hpp:834] Performed allocation for 1 slaves in 411941ns I0610 20:04:54.614280 24582 hierarchical.hpp:933] No resources available to allocate! I0610 20:04:54.614336 24582 hierarchical.hpp:834] Performed allocation for 1 slaves in 448103ns I0610 20:04:55.615985 24583 hierarchical.hpp:933] No resources available to allocate! I0610 20:04:55.616046 24583 hierarchical.hpp:834] Performed allocation for 1 slaves in 494677ns I0610 20:04:56.616896 24580 hierarchical.hpp:933] No resources available to allocate! I0610 20:04:56.616952 24580 hierarchical.hpp:834] Performed allocation for 1 slaves in 461555ns I0610 20:04:57.618814 24587 hierarchical.hpp:933] No resources available to allocate! I0610 20:04:57.618885 24587 hierarchical.hpp:834] Performed allocation for 1 slaves in 491478ns I0610 20:04:58.620564 24589 hierarchical.hpp:933] No resources available to allocate! I0610 20:04:58.620621 24589 hierarchical.hpp:834] Performed allocation for 1 slaves in 434384ns I0610 20:04:59.621649 24584 hierarchical.hpp:933] No resources available to allocate! I0610 20:04:59.621706 24584 hierarchical.hpp:834] Performed allocation for 1 slaves in 453279ns I0610 20:05:00.623241 24593 hierarchical.hpp:933] No resources available to allocate! I0610 20:05:00.623299 24593 hierarchical.hpp:834] Performed allocation for 1 slaves in 423551ns I0610 20:05:01.624984 24590 hierarchical.hpp:933] No resources available to allocate! I0610 20:05:01.625041 24590 hierarchical.hpp:834] Performed allocation for 1 slaves in 458545ns I0610 20:05:02.626266 24591 hierarchical.hpp:933] No resources available to allocate! I0610 20:05:02.626327 24591 hierarchical.hpp:834] Performed allocation for 1 slaves in 490068ns I0610 20:05:03.627702 24593 hierarchical.hpp:933] No resources available to allocate! 
I0610 20:05:03.627766 24593 hierarchical.hpp:834] Performed allocation for 1 slaves in 473279ns I0610 20:05:03.666060 24581 slave.cpp:4104] Querying resource estimator for oversubscribable resources I0610 20:05:03.666353 24581 slave.cpp:4125] Received oversubscribable resources from the resource estimator I0610 20:05:03.680258 24588 slave.cpp:2895] Received ping from slave-observer(42)@172.17.0.231:32907 F0610 20:05:03.725155 24561 fetcher_cache_tests.cpp:354] CHECK_READY(offers): is PENDING Failed to wait for resource offers *** Check failure stack trace: *** @ 0x7fe5cc5a2a0d google::LogMessage::Fail() @ 0x7fe5cc5a1dee google::LogMessage::SendToLog() @ 0x7fe5cc5a26cd google::LogMessage::Flush() @ 0x7fe5cc5a5b38 google::LogMessageFatal::~LogMessageFatal() @ 0x8947c7 _CheckFatal::~_CheckFatal() @ 0xadf458 mesos::internal::tests::FetcherCacheTest::launchTask() @ 0xae5ea4 mesos::internal::tests::FetcherCacheTest_LocalCachedExtract_Test::TestBody() @ 0x128fb83 testing::internal::HandleSehExceptionsInMethodIfSupported<>() @ 0x127a7e7 testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x1261c45 testing::Test::Run() @ 0x126287b testing::TestInfo::Run() @ 0x1262ec7 testing::TestCase::Run() @ 0x126854a testing::internal::UnitTestImpl::RunAllTests() @ 0x128c163 testing::internal::HandleSehExceptionsInMethodIfSupported<>() @ 0x127c817 testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x1268247 testing::UnitTest::Run() @ 0xca398e main @ 0x7fe5c8436ec5 (unknown) @ 0x749e8c (unknown) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2860","06/11/2015 21:50:45",3,"Create the basic infrastructure to handle /scheduler endpoint ""This is the first basic step in ensuring the basic {{/call}} functionality: processing a and returning: - {{202}} if all goes well; - {{401}} if not authorized; and - {{403}} if the request is malformed. We'll get more sophisticated as the work progressed (eg, supporting {{415}} if the content-type is not of the right kind)."""," POST /call ",0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2862","06/11/2015 23:50:14",2,"mesos-fetcher won't fetch uris which begin with a "" "" ""Discovered while running mesos with marathon on top. If I launch a marathon task with a URI which is """" http://apache.osuosl.org/mesos/0.22.1/mesos-0.22.1.tar.gz"""" mesos will log to stderr: It would be nice if mesos trimmed leading whitespace before doing protocol detection so that simple mistakes are just fixed. """," I0611 22:39:22.815636 35673 logging.cpp:177] Logging to STDERR I0611 22:39:25.643889 35673 fetcher.cpp:214] Fetching URI ' http://apache.osuosl.org/mesos/0.22.1/mesos-0.22.1.tar.gz' I0611 22:39:25.648111 35673 fetcher.cpp:94] Hadoop Client not available, skipping fetch with Hadoop Client Failed to fetch: http://apache.osuosl.org/mesos/0.22.1/mesos-0.22.1.tar.gz Failed to synchronize with slave (it's probably exited) ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2866","06/12/2015 23:48:57",3,"Slave should send oversubscribed resource information after master failover. ""After a master failover, if the total amount of oversubscribed resources does not change, then the slave will not send the UpdateSlave message to the new master. 
The slave needs to send the information to the new master regardless of this.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2874","06/16/2015 17:00:36",2,"Convert PortMappingStatistics to use automatic JSON encoding/decoding ""Simplify PortMappingStatistics by using JSON::Protocol and protobuf::parse to convert ResourceStatistics to/from line format. This change will simplify the implementation of MESOS-2332.""","",0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2879","06/17/2015 17:09:22",1,"Random recursive_mutex errors in when running make check ""While running make check on OS X, from time to time {{recursive_mutex}} errors appear after running all the test successfully. Just one of the experience messages actually stops {{make check}} reporting an error. The following error messages have been experienced: """," libc++abi.dylib: libc++abi.dylib: libc++abi.dylib: libc++abi.dylib: libc++abi.dylib: libc++abi.dylib: terminating with uncaught exception of type std::__1::system_error: recursive_mutex lock failed: Invalid argumentterminating with uncaught exception of type std::__1::system_error: recursive_mutex lock failed: Invalid argumentterminating with uncaught exception of type std::__1::system_error: recursive_mutex lock failed: Invalid argumentterminating with uncaught exception of type std::__1::system_error: recursive_mutex lock failed: Invalid argumentterminating with uncaught exception of type std::__1::system_error: recursive_mutex lock failed: Invalid argumentterminating with uncaught exception of type std::__1::system_error: recursive_mutex lock failed: Invalid argument *** Aborted at 1434553937 (unix time) try """"date -d @1434553937"""" if you are using GNU date *** libc++abi.dylib: terminating with uncaught exception of type std::__1::system_error: recursive_mutex lock failed: Invalid argument *** Aborted at 1434557001 (unix time) try """"date -d @1434557001"""" if you are using GNU date *** libc++abi.dylib: PC: @ 0x7fff93855286 __pthread_kill libc++abi.dylib: *** SIGABRT (@0x7fff93855286) received by PID 88060 (TID 0x10fc40000) stack trace: *** @ 0x7fff8e1d6f1a _sigtramp libc++abi.dylib: @ 0x10fc3f1a8 (unknown) libc++abi.dylib: @ 0x7fff979deb53 abort libc++abi.dylib: libc++abi.dylib: libc++abi.dylib: terminating with uncaught exception of type std::__1::system_error: recursive_mutex lock failed: Invalid argumentterminating with uncaught exception of type std::__1::system_error: recursive_mutex lock failed: Invalid argumentterminating with uncaught exception of type std::__1::system_error: recursive_mutex lock failed: Invalid argumentterminating with uncaught exception of type std::__1::system_error: recursive_mutex lock failed: Invalid argumentterminating with uncaught exception of type std::__1::system_error: recursive_mutex lock failed: Invalid argumentterminating with uncaught exception of type std::__1::system_error: recursive_mutex lock failed: Invalid argumentMaking check in include Assertion failed: (e == 0), function ~recursive_mutex, file /SourceCache/libcxx/libcxx-120/src/mutex.cpp, line 82. 
*** Aborted at 1434555685 (unix time) try """"date -d @1434555685"""" if you are using GNU date *** PC: @ 0x7fff93855286 __pthread_kill *** SIGABRT (@0x7fff93855286) received by PID 60235 (TID 0x7fff7ebdc300) stack trace: *** @ 0x7fff8e1d6f1a _sigtramp @ 0x10b512350 google::CheckNotNull<>() @ 0x7fff979deb53 abort @ 0x7fff979a6c39 __assert_rtn @ 0x7fff9bffdcc9 std::__1::recursive_mutex::~recursive_mutex() @ 0x10b881928 process::ProcessManager::~ProcessManager() @ 0x10b874445 process::ProcessManager::~ProcessManager() @ 0x10b874418 process::finalize() @ 0x10b2f7aec main @ 0x7fff98edc5c9 start make[5]: *** [check-local] Abort trap: 6 make[4]: *** [check-am] Error 2 make[3]: *** [check-recursive] Error 1 make[2]: *** [check-recursive] Error 1 make[1]: *** [check] Error 2 make: *** [check-recursive] Error 1 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2883","06/17/2015 22:05:41",2,"Do not call hook manager if no hooks installed ""Hooks modules allow us to provide decorators during various aspects of a task lifecycle such as label decorator, environment decorator, etc. Often the call into such a decorator hooks results in a new copy of labels, environment, etc., being returned to the call site. This is an unnecessary overhead if there are no hooks installed. The proper way would be to call decorators via the hook manager only if there are some hooks installed. This would prevent unnecessary copying overhead if no hooks are available.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2884","06/18/2015 01:52:46",5,"Allow isolators to specify required namespaces ""Currently, the LinuxLauncher looks into SlaveFlags to compute the namespaces that should be enabled when launching the executor. This means that a custom Isolator module doesn't have any way to specify dependency on a set of namespaces. The proposed solution is to extend the Isolator interface to also export the namespaces dependency. This way the MesosContainerizer can directly query all loaded Isolators (inbuilt and custom modules) to compute the set of namespaces required by the executor. This set of namespaces is then passed on to the LinuxLauncher. ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2886","06/18/2015 16:49:39",1,"Capture some testing patterns we use in a doc ""In Mesos tests we use some tricks and patterns to express certain expectations. These are not always obvious and not documented. The intent of the ticket is to kick-start the document with the description of those tricks for posterity.""","",0,0,0,0,1,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2888","06/18/2015 20:58:08",5,"Add SSL socket tests ""commit beac384c77d4a9c235a813e9286716f4509bdd55 Author: Joris Van Remoortere Date: Fri Jun 26 18:30:12 2015 -0700 Add SSL tests. Review: https://reviews.apache.org/r/35889""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2890","06/18/2015 21:03:42",3,"Sandbox URL doesn't work in web-ui when using SSL ""The links to the sandbox in the web ui don't work when ssl is enabled. This can happen if the certificate for the master and the slave do not match. This is a consequence of the redirection that happens when serving files. 
The resolution to this is currently to set up your certificates to serve the hostnames of the master and slaves.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2891","06/18/2015 22:34:29",3,"Performance regression in hierarchical allocator. ""For large clusters, the 0.23.0 allocator cannot keep up with the volume of slaves. After the following slave was re-registered, it took the allocator a long time to work through the backlog of slaves to add: {noformat:title=45 minute delay} I0618 18:55:40.738399 10172 master.cpp:3419] Re-registered slave 20150422-211121-2148346890-5050-3253-S4695 I0618 19:40:14.960636 10164 hierarchical.hpp:496] Added slave 20150422-211121-2148346890-5050-3253-S4695 {noformat} Empirically, [addSlave|https://github.com/apache/mesos/blob/dda49e688c7ece603ac7a04a977fc7085c713dd1/src/master/allocator/mesos/hierarchical.hpp#L462] and [updateSlave|https://github.com/apache/mesos/blob/dda49e688c7ece603ac7a04a977fc7085c713dd1/src/master/allocator/mesos/hierarchical.hpp#L533] have become expensive. Some timings from a production cluster reveal that the allocator spending in the low tens of milliseconds for each call to {{addSlave}} and {{updateSlave}}, when there are tens of thousands of slaves this amounts to the large delay seen above. We also saw a slow steady increase in memory consumption, hinting further at a queue backup in the allocator. A synthetic benchmark like we did for the registrar would be prudent here, along with visibility into the allocator's queue size."""," I0618 18:55:40.738399 10172 master.cpp:3419] Re-registered slave 20150422-211121-2148346890-5050-3253-S4695 I0618 19:40:14.960636 10164 hierarchical.hpp:496] Added slave 20150422-211121-2148346890-5050-3253-S4695 ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2892","06/18/2015 22:42:30",3,"Add benchmark for hierarchical allocator. ""In light of the performance regression in MESOS-2891, we'd like to have a synthetic benchmark of the allocator code, in order to analyze and direct improvements.""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2893","06/18/2015 22:51:26",1,"Add queue size metrics for the allocator. ""In light of the performance regression in MESOS-2891, we'd like to have visibility into the queue size of the allocator. This will enable alerting on performance problems. We currently have no metrics in the allocator. I will also look into MESOS-1286 now that we have gcc 4.8, current queue size gauges require a trip through the Process' queue.""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2902","06/19/2015 22:36:40",5,"Enable Mesos to use arbitrary script / module to figure out IP, HOSTNAME ""Currently Mesos tries to guess the IP, HOSTNAME by doing a reverse DNS lookup. This doesn't work on a lot of clouds as we want things like public IPs (which aren't the default DNS), there aren't FQDN names (Azure), or the correct way to figure it out is to call some cloud-specific endpoint. If Mesos / Libprocess could load a mesos-module (Or run a script) which is provided per-cloud, we can figure out perfectly the IP / Hostname for the given environment. It also means we can ship one identical set of files to all hosts in a given provider which doesn't happen to have the DNS scheme + hostnames that libprocess/Mesos expects. 
Currently we have to generate host-specific config files which Mesos uses to guess. The host-specific files break / fall apart if machines change IP / hostname without being reinstalled.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2903","06/20/2015 00:21:39",3,"Network isolator should not fail when target state already exists ""Network isolator has multiple instances of the following pattern: These failures have occurred in operation due to the failure to recover or delete an orphan, causing the slave to remain on line but unable to create new resources. We should convert the second failure message in this pattern to an information message since the final state of the system is the state that we requested."""," Try something = ....::create(); if (something.isError()) { ++metrics.something_errors; return Failure(""""Failed to create something ..."""") } else if (!icmpVethToEth0.get()) { ++metrics.adding_veth_icmp_filters_already_exist; return Failure(""""Something already exists""""); } ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2904","06/20/2015 01:04:00",1,"Add slave metric to count container launch failures ""We have seen circumstances where a machine has been consistently unable to launch containers due to an inconsistent state (for example, unexpected network configuration). Adding a metric to track container launch failures will allow us to detect and alert on slaves in such a state.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2917","06/23/2015 03:27:49",1,"Specify correct libnl version for configure check ""Currently configure.ac lists 3.2.24 as the required libnl version. However, https://reviews.apache.org/r/31503 caused the minimum required version to be bumped to 3.2.26. The configure check thus fails to error out during execution and the dependency is captured only during the build step.""","",0,0,1,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2920","06/24/2015 02:43:20",3,"Add move constructors / assignment to Try. ""Now that we have C++11, let's add move constructors and move assignment operators for Try, similarly to what was done for Option.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2921","06/24/2015 02:43:22",3,"Add move constructors / assignment to Result. ""Now that we have C++11, let's add move constructors and move assignment operators for Result, similarly to what was done for Option.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2922","06/24/2015 02:43:24",3,"Add move constructors / assignment to Future. ""Now that we have C++11, let's add move constructors and move assignment operators for Future, similarly to what was done for Option. There is currently one move constructor for Future, but not for T, U, and no assignment operator.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2925","06/24/2015 20:42:22",1,"Invalid usage of ATOMIC_FLAG_INIT in member initialization ""The C++ specification states: The macro ATOMIC_FLAG_INIT shall be defined in such a way that it can be used to initialize an object of type atomic_flag to the clear state. 
The macro can be used in the form: """"atomic_flag guard = ATOMIC_FLAG_INIT; """"It is unspecified whether the macro can be used in other initialization contexts."""" Clang catches this (although reports it erroneously as a braced scaled init issue) and refuses to compile libprocess.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2936","06/25/2015 18:47:05",8,"Create a design document for Quota support in Master ""Create a design document for the Quota feature support in Mesos Master (excluding allocator) to be shared with the Mesos community. Design Doc: https://docs.google.com/document/d/16iRNmziasEjVOblYp5bbkeBZ7pnjNlaIzPQqMTHQ-9I/edit?usp=sharing""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2937","06/25/2015 18:51:13",5,"Initial design document for Quota support in Allocator. ""Create a design document for the Quota feature support in the built-in Hierarchical DRF allocator to be shared with the Mesos community.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2938","06/25/2015 18:54:54",1,"Linux docker inspect crashes ""On linux, when a simple task is being executed on docker container executor, the sandbox stderr shows a backtrace: *** Aborted at 1435254156 (unix time) try """"date -d @1435254156"""" if you are using GNU date *** PC: @ 0x7ffff2b1364d (unknown) *** SIGSEGV (@0xfffffffffffffff8) received by PID 88424 (TID 0x7fffe88fb700) from PID 18446744073709551608; stack trace: *** @ 0x7ffff25a4340 (unknown) @ 0x7ffff2b1364d (unknown) @ 0x7ffff2b724df (unknown) @ 0x4a6466 Docker::Container::~Container() @ 0x7ffff5bfa49a Option<>::~Option() @ 0x7ffff5c15989 Option<>::operator=() @ 0x7ffff5c09e9f Try<>::operator=() @ 0x7ffff5c09ee3 Result<>::operator=() @ 0x7ffff5c0a938 process::Future<>::set() @ 0x7ffff5bff412 process::Promise<>::set() @ 0x7ffff5be53e3 Docker::___inspect() @ 0x7ffff5be3cf8 _ZZN6Docker9__inspectERKSsRKN7process5OwnedINS2_7PromiseINS_9ContainerEEEEERK6OptionI8DurationENS2_6FutureISsEERKNS2_10SubprocessEENKUlRKSG_E1_clESL_ @ 0x7ffff5be91e9 _ZZNK7process6FutureISsE5onAnyIZN6Docker9__inspectERKSsRKNS_5OwnedINS_7PromiseINS3_9ContainerEEEEERK6OptionI8DurationES1_RKNS_10SubprocessEEUlRKS1_E1_vEESM_OT_NS1_6PreferEENUlSM_E_clESM_ @ 0x7ffff5be9d9d _ZNSt17_Function_handlerIFvRKN7process6FutureISsEEEZNKS2_5onAnyIZN6Docker9__inspectERKSsRKNS0_5OwnedINS0_7PromiseINS7_9ContainerEEEEERK6OptionI8DurationES2_RKNS0_10SubprocessEEUlS4_E1_vEES4_OT_NS2_6PreferEEUlS4_E_E9_M_invokeERKSt9_Any_dataS4_ @ 0x7ffff5c1eadd std::function<>::operator()() @ 0x7ffff5c15e07 process::Future<>::onAny() @ 0x7ffff5be93a1 _ZNK7process6FutureISsE5onAnyIZN6Docker9__inspectERKSsRKNS_5OwnedINS_7PromiseINS3_9ContainerEEEEERK6OptionI8DurationES1_RKNS_10SubprocessEEUlRKS1_E1_vEESM_OT_NS1_6PreferE @ 0x7ffff5be87f6 _ZNK7process6FutureISsE5onAnyIZN6Docker9__inspectERKSsRKNS_5OwnedINS_7PromiseINS3_9ContainerEEEEERK6OptionI8DurationES1_RKNS_10SubprocessEEUlRKS1_E1_EESM_OT_ @ 0x7ffff5be459c Docker::__inspect() @ 0x7ffff5be337c _ZZN6Docker8_inspectERKSsRKN7process5OwnedINS2_7PromiseINS_9ContainerEEEEERK6OptionI8DurationEENKUlvE_clEv @ 0x7ffff5be8c5a _ZZNK7process6FutureI6OptionIiEE5onAnyIZN6Docker8_inspectERKSsRKNS_5OwnedINS_7PromiseINS5_9ContainerEEEEERKS1_I8DurationEEUlvE_vEERKS3_OT_NS3_10LessPreferEENUlSL_E_clESL_ @ 0x7ffff5be9b36 
_ZNSt17_Function_handlerIFvRKN7process6FutureI6OptionIiEEEEZNKS4_5onAnyIZN6Docker8_inspectERKSsRKNS0_5OwnedINS0_7PromiseINS9_9ContainerEEEEERKS2_I8DurationEEUlvE_vEES6_OT_NS4_10LessPreferEEUlS6_E_E9_M_invokeERKSt9_Any_dataS6_ @ 0x7ffff5c1e9b3 std::function<>::operator()() @ 0x7ffff6184a1a _ZN7process8internal3runISt8functionIFvRKNS_6FutureI6OptionIiEEEEEJRS6_EEEvRKSt6vectorIT_SaISD_EEDpOT0_ @ 0x7ffff617e64d process::Future<>::set() @ 0x7ffff6752e46 process::Promise<>::set() @ 0x7ffff675faec process::internal::cleanup() @ 0x7ffff6765293 _ZNSt5_BindIFPFvRKN7process6FutureI6OptionIiEEEPNS0_7PromiseIS3_EERKNS0_10SubprocessEESt12_PlaceholderILi1EES9_SA_EE6__callIvIS6_EILm0ELm1ELm2EEEET_OSt5tupleIIDpT0_EESt12_Index_tupleIIXspT1_EEE @ 0x7ffff6764bcd _ZNSt5_BindIFPFvRKN7process6FutureI6OptionIiEEEPNS0_7PromiseIS3_EERKNS0_10SubprocessEESt12_PlaceholderILi1EES9_SA_EEclIJS6_EvEET0_DpOT_ @ 0x7ffff67642a5 _ZZNK7process6FutureI6OptionIiEE5onAnyISt5_BindIFPFvRKS3_PNS_7PromiseIS2_EERKNS_10SubprocessEESt12_PlaceholderILi1EESA_SB_EEvEES7_OT_NS3_6PreferEENUlS7_E_clES7_ @ 0x7ffff676531d _ZNSt17_Function_handlerIFvRKN7process6FutureI6OptionIiEEEEZNKS4_5onAnyISt5_BindIFPFvS6_PNS0_7PromiseIS3_EERKNS0_10SubprocessEESt12_PlaceholderILi1EESC_SD_EEvEES6_OT_NS4_6PreferEEUlS6_E_E9_M_invokeERKSt9_Any_dataS6_ @ 0x7ffff5c1e9b3 std::function<>::operator()() (END) ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2940","06/25/2015 22:33:26",3,"Reconciliation is expensive for large numbers of tasks. ""We've observed that both implicit and explicit reconciliation are expensive for large numbers of tasks: {noformat: title=Explicit O(100,000) tasks: 70secs} I0625 20:55:23.716320 21937 master.cpp:3863] Performing explicit task state reconciliation for N tasks of framework F (NAME) at S@IP:PORT I0625 20:56:34.812464 21937 master.cpp:5041] Removing task T with resources R of framework F on slave S at slave(1)@IP:PORT (HOST) Let's add a benchmark to see if there are any bottlenecks here, and to guide improvements."""," I0625 20:55:23.716320 21937 master.cpp:3863] Performing explicit task state reconciliation for N tasks of framework F (NAME) at S@IP:PORT I0625 20:56:34.812464 21937 master.cpp:5041] Removing task T with resources R of framework F on slave S at slave(1)@IP:PORT (HOST) I0625 20:25:22.310601 21936 master.cpp:3802] Performing implicit task state reconciliation for framework F (NAME) at S@IP:PORT I0625 20:26:23.874528 21921 master.cpp:218] Scheduling shutdown of slave S due to health check timeout ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2943","06/26/2015 04:24:22",2,"mesos fails to compile under mac when libssl and libevent are enabled ""../configure --enable-debug --enable-libevent --enable-ssl && make produces the following error: poll.cpp' || echo '../../../3rdparty/libprocess/'`src/libevent_poll.cpp libtool: compile: g++ -DPACKAGE_NAME=\""""libprocess\"""" -DPACKAGE_TARNAME=\""""libprocess\"""" -DPACKAGE_VERSION=\""""0.0.1\"""" """"-DPACKAGE_STRING=\""""libprocess 0.0.1\"""""""" -DPACKAGE_BUGREPORT=\""""\"""" -DPACKAGE_URL=\""""\"""" -DPACKAGE=\""""libprocess\"""" -DVERSION=\""""0.0.1\"""" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\"""".libs/\"""" -DHAVE_APR_POOLS_H=1 -DHAVE_LIBAPR_1=1 -DHAVE_SVN_VERSION_H=1 
-DHAVE_LIBSVN_SUBR_1=1 -DHAVE_SVN_DELTA_H=1 -DHAVE_LIBSVN_DELTA_1=1 -DHAVE_LIBCURL=1 -DHAVE_EVENT2_EVENT_H=1 -DHAVE_LIBEVENT=1 -DHAVE_EVENT2_THREAD_H=1 -DHAVE_LIBEVENT_PTHREADS=1 -DHAVE_OPENSSL_SSL_H=1 -DHAVE_LIBSSL=1 -DHAVE_LIBCRYPTO=1 -DHAVE_EVENT2_BUFFEREVENT_SSL_H=1 -DHAVE_LIBEVENT_OPENSSL=1 -DUSE_SSL_SOCKET=1 -DHAVE_PTHREAD_PRIO_INHERIT=1 -DHAVE_PTHREAD=1 -DHAVE_LIBZ=1 -DHAVE_LIBDL=1 -I. -I../../../3rdparty/libprocess -I../../../3rdparty/libprocess/include -I../../../3rdparty/libprocess/3rdparty/stout/include -I3rdparty/boost-1.53.0 -I3rdparty/libev-4.15 -I3rdparty/picojson-4f93734 -I3rdparty/glog-0.3.3/src -I3rdparty/ry-http-parser-1c3624a -I/usr/local/opt/openssl/include -I/usr/local/opt/libevent/include -I/usr/local/opt/subversion/include/subversion-1 -I/usr/include/apr-1 -I/usr/include/apr-1.0 -g1 -O0 -std=c++11 -stdlib=libc++ -DGTEST_USE_OWN_TR1_TUPLE=1 -MT libprocess_la-libevent_poll.lo -MD -MP -MF .deps/libprocess_la-libevent_poll.Tpo -c ../../../3rdparty/libprocess/src/libevent_poll.cpp -fno-common -DPIC -o libprocess_la-libevent_poll.o mv -f .deps/libprocess_la-socket.Tpo .deps/libprocess_la-socket.Plo mv -f .deps/libprocess_la-subprocess.Tpo .deps/libprocess_la-subprocess.Plo mv -f .deps/libprocess_la-libevent.Tpo .deps/libprocess_la-libevent.Plo mv -f .deps/libprocess_la-metrics.Tpo .deps/libprocess_la-metrics.Plo In file included from ../../../3rdparty/libprocess/src/libevent_ssl_socket.cpp:11: In file included from ../../../3rdparty/libprocess/include/process/queue.hpp:9: ../../../3rdparty/libprocess/include/process/future.hpp:849:7: error: no viable conversion from 'const process::Future >' to 'const process::network::Socket' set(u); ^ ../../../3rdparty/libprocess/src/libevent_ssl_socket.cpp:769:10: note: in instantiation of function template specialization 'process::Future::Future > >' requested here return accept_queue.get() ^ ../../../3rdparty/libprocess/include/process/socket.hpp:21:7: note: candidate constructor (the implicit move constructor) not viable: no known conversion from 'const process::Future >' to 'process::network::Socket &&' for 1st argument class Socket ^ ../../../3rdparty/libprocess/include/process/socket.hpp:21:7: note: candidate constructor (the implicit copy constructor) not viable: no known conversion from 'const process::Future >' to 'const process::network::Socket &' for 1st argument class Socket ^ ../../../3rdparty/libprocess/include/process/future.hpp:411:21: note: passing argument to parameter '_t' here bool set(const T& _t); ^ 1 error generated. make[4]: *** [libprocess_la-libevent_ssl_socket.lo] Error 1 make[4]: *** Waiting for unfinished jobs.... mv -f .deps/libprocess_la-libevent_poll.Tpo .deps/libprocess_la-libevent_poll.Plo mv -f .deps/libprocess_la-openssl.Tpo .deps/libprocess_la-openssl.Plo mv -f .deps/libprocess_la-process.Tpo .deps/libprocess_la-process.Plo make[3]: *** [all-recursive] Error 1 make[2]: *** [all-recursive] Error 1 make[1]: *** [all] Error 2 make: *** [all-recursive] Error 1""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2944","06/26/2015 13:18:18",1,"Use of EXPECT in test and relying on the checked condition afterwards. ""In docker_containerizer_test we have the following pattern. As we rely on the value afterwards we should use ASSERT_NE instead. In that case the test will fail immediately. 
"""," EXPECT_NE(0u, offers.get().size()); const Offer& offer = offers.get()[0]; ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2949","06/26/2015 15:24:04",3,"Draft design for generalized Authorizer interface ""As mentioned in MESOS-2948 the current {{mesos::Authorizer}} interface is rather inflexible if new _Actions_ or _Objects_ need to be added. A new API needs to be designed in a way that allows for arbitrary _Actions_ and _Objects_ to be added to the authorization mechanism without having to recompile mesos.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2950","06/26/2015 15:27:09",8,"Implement current mesos Authorizer in terms of generalized Authorizer interface ""In order to maintain compatibility with existent versions of Mesos, as well as to prove the flexibility of the generalized {{mesos::Authorizer}} design, the current authorization mechanism through ACL definitions needs to run under the updated interface without any changes being noticeable by the current authorization users.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2951","06/26/2015 19:22:14",3,"Inefficient container usage collection ""docker containerizer currently collects usage statistics by calling os's process statistics (eg ps ). There is scope for making this efficient, say by querying cgroups file system. ""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2957","06/29/2015 21:05:40",1,"Add version to MasterInfo ""This will help schedulers figure out the version of the master that they are interacting with. See MESOS-2736 for additional context.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2961","06/30/2015 18:14:10",2,"Add cpuacct subsystem utils to cgroups ""Current cgroups implementation does not have a cpuacct subsystem implementation. This subsystem reports important metrics like user and system CPU ticks spent by a process. """"cgroups"""" namespace has subsystem specific utilities for """"cpu"""", """"memory"""" etc. It could use other subsystems specific utils (eg. cpuacct). In the future, we could also view cgroups as a mesos-subsystem with features like event notifications. Although refactoring cgroups would be a different epic, listing the possible tasks: - Have hierarchies, subsystems abstracted to represent the domain - Create """"cgroups service"""" - """"cgroups service"""" listen to update events from the OS on files like stats. This would be an interrupt based system(maybe use linux fsnotify) - """"cgroups service"""" services events to mesos (containers for example). ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2962","06/30/2015 19:36:49",1,"Slave fails with Abort stacktrace when DNS cannot resolve hostname ""If the DNS cannot resolve the hostname-to-IP for a slave node, we correctly return an {{Error}} object, but we then fail with a segfault. This code adds a more user-friendly message and exits normally (with an {{EXIT_FAILURE}} code). 
For example, forcing {{net::getIp()}} to always return an {{Error}}, now causes the slave to exit like this: """," $ ./bin/mesos-slave.sh --master=10.10.1.121:5405 WARNING: Logging before InitGoogleLogging() is written to STDERR E0630 11:31:45.777465 1944417024 process.cpp:899] Could not obtain the IP address for stratos.local; the DNS service may not be able to resolve it: >>> Marco was here!!! $ echo $? 1 ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2964","06/30/2015 21:55:36",3,"libprocess io does not support peek() ""Finally, I so wish we could just do: from: https://reviews.apache.org/r/31207/"""," io::peek(request->socket, 6) .then([request](const string& data) { // Comment about the rules ... if (data.length() < 2) { // Rule 1 } else if (...) { // Rule 2. } else if (...) { // Rule 3. } if (ssl) { accept_SSL_callback(request); } else { ...; } }); ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2965","06/30/2015 22:04:09",2,"Add implicit cast to string operator to Path. ""For example: does not have an overload for The implementation should be something like: ""","inline Try rm(const std::string& path)inline Try rm(const Path& path) inline Try rm(const Path& path) { rm(path.value); } ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2966","06/30/2015 22:09:56",5,"socket::peer() and socket::address() might fail with SSL enabled ""libevent SSL currently uses a secondary FD so we need to virtualize the get() function on socket interface. ""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2967","06/30/2015 22:18:33",5,"Missing doxygen documentation for libprocess socket interface ""Convert existing comments to doxygen format. ""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2968","07/01/2015 01:27:48",3,"Implement shared copy based provisioner backend ""Currently Appc and Docker both implemented its own copy backend, but most of the logic is the same where the input is just a image name with its dependencies. We can refactor both so that we just have one implementation that is shared between both provisioners, so appc and docker can reuse the shared copy backend.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2973","07/01/2015 02:51:24",3,"SSL tests don't work with --gtest_repeat ""commit bfa89f22e9d6a3f365113b32ee1cac5208a0456f Author: Joris Van Remoortere Date: Wed Jul 1 16:16:52 2015 -0700 MESOS-2973: Allow SSL tests to run using gtest_repeat. The SSL ctx object carried some settings between reinitialize() calls. Re-construct the object to avoid this state transition. Review: https://reviews.apache.org/r/36074""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2986","07/02/2015 09:56:54",1,"Docker version output is not compatible with Mesos ""We currently use docker version to get Docker version, in Docker master branch and soon in Docker 1.8 [1] the output for this command changes. The solution for now will be to use the unchanged docker --version output, in the long term we should consider stop using the cli and use the API instead. 
[1] https://github.com/docker/docker/pull/14047""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2991","07/06/2015 14:18:24",1,"Compilation Error on Mac OS 10.10.4 with clang 3.5.0 ""Compiling 0.23.0 (rc1) produces compilation errors on Mac OS 10.10.4 with {{g++}} based on LLVM 3.5. It looks like the issue was introduced in {{a5640ad813e6256b548fca068f04fd9fa3a03eda}}, https://reviews.apache.org/r/32838. In contrast to the commit message, compiling the rc with gcc4.4 on CentOS worked fine for me. According to 0.23 release notes and MESOS-2604, we should support clang 3.5. Compiler version: """," ../../../../../3rdparty/libprocess/3rdparty/stout/tests/os_tests.cpp:543:25: error: conversion from 'void ()' to 'const Option' is ambiguous Fork(dosetsid, // Great-great-granchild. ^~~~~~~~ ../../../../../3rdparty/libprocess/3rdparty/stout/include/stout/option.hpp:40:3: note: candidate constructor Option(const T& _t) : state(SOME), t(_t) {} ^ ../../../../../3rdparty/libprocess/3rdparty/stout/include/stout/option.hpp:42:3: note: candidate constructor Option(T&& _t) : state(SOME), t(std::move(_t)) {} ^ ../../../../../3rdparty/libprocess/3rdparty/stout/include/stout/option.hpp:45:3: note: candidate constructor [with U = void ()] Option(const U& u) : state(SOME), t(u) {} ^ $ g++ --version Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1 Apple LLVM version 6.0 (clang-600.0.54) (based on LLVM 3.5svn) Target: x86_64-apple-darwin14.4.0 Thread model: posix ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2993","07/06/2015 18:52:47",3,"Document per container unique egress flow and network queueing statistics ""Document new network isolation capabilities in 0.23""","",0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-2997","07/07/2015 03:43:00",3,"SSL connection failure causes failed CHECK. """""," [ RUN ] SSLTest.BasicSameProcess F0706 18:32:28.465451 238583808 libevent_ssl_socket.cpp:507] Check failed: 'self->bev' Must be non NULL ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3001","07/07/2015 19:20:38",8,"Create a ""demo"" HTTP API client ""We want to create a simple """"demo"""" HTTP API Client (in Java, Python or Go) that can serve as an """"example framework"""" for people who will want to use the new API for their Frameworks. The scope should be fairly limited (eg, launching a simple Container task?) but sufficient to exercise most of the new API endpoint messages/capabilities. Scope: TBD Non-Goals: - create a """"best-of-breed"""" Framework to deliver any specific functionality; - create an Integration Test for the HTTP API.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3002","07/07/2015 19:44:29",1,"Rename Option::get(const T& _t) to getOrElse() broke network isolator ""Change to Option from get() to getOrElse() breaks network isolator. 
Building with '../configure --with-network-isolator' generates the following error: ../../src/slave/containerizer/isolators/network/port_mapping.cpp: In static member function 'static Try mesos::internal::slave::PortMappingIsolatorProcess::create(const mesos::internal::slave::Flags&)': ../../src/slave/containerizer/isolators/network/port_mapping.cpp:1103:29: error: no matching function for call to 'Option >::get(const char [1]) const' flags.resources.get(""""""""), ^ ../../src/slave/containerizer/isolators/network/port_mapping.cpp:1103:29: note: candidates are: In file included from ../../3rdparty/libprocess/3rdparty/stout/include/stout/check.hpp:26:0, from ../../3rdparty/libprocess/include/process/check.hpp:19, from ../../3rdparty/libprocess/include/process/collect.hpp:7, from ../../src/slave/containerizer/isolators/network/port_mapping.cpp:30: ../../3rdparty/libprocess/3rdparty/stout/include/stout/option.hpp:130:12: note: const T& Option::get() const [with T = std::basic_string] const T& get() const { assert(isSome()); return t; } ^ ../../3rdparty/libprocess/3rdparty/stout/include/stout/option.hpp:130:12: note: candidate expects 0 arguments, 1 provided ../../3rdparty/libprocess/3rdparty/stout/include/stout/option.hpp:131:6: note: T& Option::get() [with T = std::basic_string] T& get() { assert(isSome()); return t; } ^ ../../3rdparty/libprocess/3rdparty/stout/include/stout/option.hpp:131:6: note: candidate expects 0 arguments, 1 provided make[2]: *** [slave/containerizer/isolators/network/libmesos_no_3rdparty_la-port_mapping.lo] Error 1 make[2]: Leaving directory `/home/pbrett/sandbox/mesos.master/build/src' make[1]: *** [check] Error 2 make[1]: Leaving directory `/home/pbrett/sandbox/mesos.master/build/src' make: *** [check-recursive] Error 1 ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3004","07/07/2015 20:24:27",5,"Design support running the command executor with provisioned image for running a task in a container ""Mesos Containerizer uses the command executor to actually launch the user defined command, and the command executor then can communicate with the slave about the process lifecycle. When we provision a new container with the user specified image, we also need to be able to run the command executor in the container to support the same semantics. One approach is to dynamically mount in a static binary of the command executor with all its dependencies in a special directory so it doesn't interfere with the provisioned root filesystem and configure the mesos containerizer to run the command executor in that directory.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3005","07/07/2015 20:38:03",3,"SSL tests can fail depending on hostname configuration ""Depending on how /etc/hosts is configured, the SSL tests can fail with a bad hostname match for the certificate. We can avoid this by explicitly matching the hostname for the certificate to the IP that will be used during the test.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3008","07/07/2015 21:40:54",8,"Libevent SSL doesn't use EPOLL ""we currently disable to epoll in libevent to allow SSL to work. 
It would be more scalable if we didn't have to do that.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3024","07/09/2015 07:45:48",8,"HTTP endpoint authN is enabled merely by specifying --credentials ""If I set `--credentials` on the master, framework and slave authentication are allowed, but not required. On the other hand, http authentication is now required for authenticated endpoints (currently only `/shutdown`). That means that I cannot enable framework or slave authentication without also enabling http endpoint authentication. This is undesirable. Framework and slave authentication have separate flags (`\--authenticate` and `\--authenticate_slaves`) to require authentication for each. It would be great if there was also such a flag for http authentication. Or maybe we get rid of these flags altogether and rely on ACLs to determine which unauthenticated principals are even allowed to authenticate for each endpoint/action.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3032","07/13/2015 19:32:48",3,"Document containerizer launch ""We currently dont have enough documentation for the containerizer component. This task adds documentation for containerizer launch sequence. The mail goals are: - Have diagrams (state, sequence, class etc) depicting the containerizer launch process. - Make the documentation newbie friendly. - Usable for future design discussions.""","",0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3038","07/14/2015 01:53:32",8,"Resource offers do not contain Unavailability, given a maintenance schedule ""Given a schedule, defined elsewhere, any resource offers to affected slaves must include an Unavailability field. The maintenance schedule for a single slave should be held in [persistent storage|MESOS-2075] and locally by the master. i.e. In src/master/master.hpp: The new field should be populated via an API call (see [MESOS-2067]). The Unavailability field can be added to Master::offer (src/master/master.cpp). Possible test(s): * PendingUnavailibilityTest ** Start master, slave. ** Check unavailability of offer == none. ** Set unavailability to the future. ** Check offer has unavailability. """," struct Slave { ... // Existing fields. // New field that the master/allocator can access Maintenances pendingDowntime; } offer->mutable_unavailability()->MergeFrom(slave->pendingDowntime); ",0,0,0,1,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3042","07/14/2015 19:24:38",8,"Master/Allocator does not send InverseOffers to resources to be maintained ""Offers are currently sent from master/allocator to framework via ResourceOffersMessage's. InverseOffers, which are roughly equivalent to negative Offers, can be sent in the same package. In src/messages/messages.proto Sent InverseOffers can be tracked in the master's local state: i.e. In src/master/master.hpp: One actor (master or allocator) should populate the new InverseOffers field. * In master (src/master/master.cpp) ** Master::offer is where the ResourceOffersMessage and Offer object is constructed. ** The same method could also check for maintenance and send InverseOffers. * In the allocator (src/master/allocator/mesos/hierarchical.hpp) ** HierarchicalAllocatorProcess::allocate is where slave resources are aggregated an sent off to the frameworks. ** InverseOffers (i.e. 
negative resources) allocation could be calculated in this method. ** A change to Master::offer (i.e. the """"offerCallback"""") may be necessary to account for the negative resources. Possible test(s): * InverseOfferTest ** Start master, slave, framework. ** Accept resource offer, start task. ** Set maintenance schedule to the future. ** Check that InverseOffer(s) are sent to the framework. ** Decline InverseOffer. ** Check that more InverseOffer(s) are sent. ** Accept InverseOffer. ** Check that more InverseOffer(s) are sent."""," message ResourceOffersMessage { repeated Offer offers = 1; repeated string pids = 2; // New field with InverseOffers repeated InverseOffer inverseOffers = 3; } struct Slave { ... // Existing fields. // Active InverseOffers on this slave. // Similar pattern to the """"offers"""" field hashset inverseOffers; } ",0,0,0,1,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3043","07/14/2015 19:42:43",3,"Master does not handle InverseOffers in the Accept call (Event/Call API) ""InverseOffers are similar to Offers in that they are Accepted or Declined based on their OfferID. Some additional logic may be neccesary in Master::accept (src/master/master.cpp) to gracefully handle the acceptance of InverseOffers. * The InverseOffer needs to be removed from the set of pending InverseOffers. * The InverseOffer should not result any errors/warnings. Note: accepted InverseOffers do not preclude further InverseOffers from being sent to the framework. Instead, an accepted InverseOffer merely signifies that the framework is _currently_ fine with the expected downtime.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3044","07/14/2015 20:02:39",8,"Slaves are not deactivated upon reaching a maintenance window ""After a maintenance window is reached, the slave should be deactivated to prevent further tasks from utilizing it. * For slaves that have completely drained, simply deactivate the slave. See Master::deactivate(Slave*). * For tasks which have not explicitly declined the InverseOffers (i.e. they've accepted them or do not understand InverseOffers), send kill signals. See Master::killTask * If a slave has tasks that have declined the InverseOffers, do not deactivate the slave. Possible test(s): * SlaveDrainedTest ** Start master, slave. ** Set maintenance to now. ** Check that slave gets deactivated * InverseOfferAgnosticTest ** Start master, slave, framework. ** Have a task run on the slave (ignores InverseOffers). ** Set maintenance to now. ** Check that task gets killed. ** Check that slave gets deactivated. * InverseOfferAcceptanceTest ** Start master, slave, framework. ** Run a task on the slave. ** Set maintenance to future. ** Have task accept InverseOffer. ** Check task gets killed, slave gets deactivated. * InverseOfferDeclinedTest ** Start master, slave, framework. ** Run task on slave. ** Set maintenance to future. ** Have task decline maintenance with reason. ** Check task lives, slave still active.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3045","07/14/2015 21:53:12",3,"Maintenance information is not populated in case of failover ""When a master starts up, or after a master has failed, it must re-populate maintenance information (i.e. from the registry to the local state). 
Particularly, {{Master::recover}} in {{src/master/master.cpp}} should be changed to process maintenance information.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3046","07/14/2015 22:59:15",3,"Stout's UUID re-seeds a new random generator during each call to UUID::random. ""Per [~StephanErb] and [~kevints]'s observations on MESOS-2940, stout's UUID abstraction is re-seeding the random generator during each call to {{UUID::random()}}, which is really expensive. This is confirmed in the perf graph from MESOS-2940.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3050","07/15/2015 01:50:20",5,"Failing ROOT_ tests on CentOS 7.1 ""Running `sudo make check` on CentOS 7.1 for Mesos 0.23.0-rc3 causes several several failures/errors: ... ... """," [ RUN ] DockerTest.ROOT_DOCKER_CheckPortResource ../../src/tests/docker_tests.cpp:303: Failure (run).failure(): Container exited on error: exited with status 1 [ FAILED ] DockerTest.ROOT_DOCKER_CheckPortResource (709 ms) [ RUN ] PerfEventIsolatorTest.ROOT_CGROUPS_Sample ../../src/tests/isolator_tests.cpp:837: Failure isolator: Failed to create PerfEvent isolator, invalid events: { cycles, task-clock } [ FAILED ] PerfEventIsolatorTest.ROOT_CGROUPS_Sample (9 ms) [----------] 1 test from PerfEventIsolatorTest (9 ms total) [----------] 2 tests from SharedFilesystemIsolatorTest [ RUN ] SharedFilesystemIsolatorTest.ROOT_RelativeVolume + mount -n --bind /tmp/SharedFilesystemIsolatorTest_ROOT_RelativeVolume_4yTEAC/var/tmp /var/tmp + touch /var/tmp/492407e1-5dec-4b34-8f2f-130430f41aac ../../src/tests/isolator_tests.cpp:1001: Failure Value of: os::exists(file) Actual: true Expected: false [ FAILED ] SharedFilesystemIsolatorTest.ROOT_RelativeVolume (92 ms) [ RUN ] SharedFilesystemIsolatorTest.ROOT_AbsoluteVolume + mount -n --bind /tmp/SharedFilesystemIsolatorTest_ROOT_AbsoluteVolume_OwYrXK /var/tmp + touch /var/tmp/7de712aa-52eb-4976-b0f9-32b6a006418d ../../src/tests/isolator_tests.cpp:1086: Failure Value of: os::exists(path::join(containerPath, filename)) Actual: true Expected: false [ FAILED ] SharedFilesystemIsolatorTest.ROOT_AbsoluteVolume (100 ms) [----------] 1 test from UserCgroupIsolatorTest/0, where TypeParam = mesos::internal::slave::CgroupsMemIsolatorProcess userdel: user 'mesos.test.unprivileged.user' does not exist [ RUN ] UserCgroupIsolatorTest/0.ROOT_CGROUPS_UserCgroup -bash: /sys/fs/cgroup/blkio/user.slice/cgroup.procs: Permission denied mkdir: cannot create directory ‘/sys/fs/cgroup/blkio/user.slice/user’: Permission denied ../../src/tests/isolator_tests.cpp:1274: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'mkdir """" + path::join(flags.cgroups_hierarchy, userCgroup) + """"'"""") Actual: 256 Expected: 0 -bash: /sys/fs/cgroup/blkio/user.slice/user/cgroup.procs: No such file or directory ../../src/tests/isolator_tests.cpp:1283: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'echo $$ >"""" + path::join(flags.cgroups_hierarchy, userCgroup, """"cgroup.procs"""") + """"'"""") Actual: 256 Expected: 0 -bash: /sys/fs/cgroup/memory/mesos/bbf8c8f0-3d67-40df-a269-b3dc6a9597aa/cgroup.procs: Permission denied -bash: /sys/fs/cgroup/cpuacct,cpu/user.slice/cgroup.procs: No such file or directory mkdir: cannot create directory ‘/sys/fs/cgroup/cpuacct,cpu/user.slice/user’: No such file or directory ../../src/tests/isolator_tests.cpp:1274: Failure Value of: os::system( """"su - """" + 
UNPRIVILEGED_USERNAME + """" -c 'mkdir """" + path::join(flags.cgroups_hierarchy, userCgroup) + """"'"""") Actual: 256 Expected: 0 -bash: /sys/fs/cgroup/cpuacct,cpu/user.slice/user/cgroup.procs: No such file or directory ../../src/tests/isolator_tests.cpp:1283: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'echo $$ >"""" + path::join(flags.cgroups_hierarchy, userCgroup, """"cgroup.procs"""") + """"'"""") Actual: 256 Expected: 0 -bash: /sys/fs/cgroup/name=systemd/user.slice/user-2004.slice/session-3865.scope/cgroup.procs: No such file or directory mkdir: cannot create directory ‘/sys/fs/cgroup/name=systemd/user.slice/user-2004.slice/session-3865.scope/user’: No such file or directory ../../src/tests/isolator_tests.cpp:1274: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'mkdir """" + path::join(flags.cgroups_hierarchy, userCgroup) + """"'"""") Actual: 256 Expected: 0 -bash: /sys/fs/cgroup/name=systemd/user.slice/user-2004.slice/session-3865.scope/user/cgroup.procs: No such file or directory ../../src/tests/isolator_tests.cpp:1283: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'echo $$ >"""" + path::join(flags.cgroups_hierarchy, userCgroup, """"cgroup.procs"""") + """"'"""") Actual: 256 Expected: 0 [ FAILED ] UserCgroupIsolatorTest/0.ROOT_CGROUPS_UserCgroup, where TypeParam = mesos::internal::slave::CgroupsMemIsolatorProcess (1034 ms) [----------] 1 test from UserCgroupIsolatorTest/0 (1034 ms total) [----------] 1 test from UserCgroupIsolatorTest/1, where TypeParam = mesos::internal::slave::CgroupsCpushareIsolatorProcess userdel: user 'mesos.test.unprivileged.user' does not exist [ RUN ] UserCgroupIsolatorTest/1.ROOT_CGROUPS_UserCgroup -bash: /sys/fs/cgroup/blkio/user.slice/cgroup.procs: Permission denied mkdir: cannot create directory ‘/sys/fs/cgroup/blkio/user.slice/user’: Permission denied ../../src/tests/isolator_tests.cpp:1274: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'mkdir """" + path::join(flags.cgroups_hierarchy, userCgroup) + """"'"""") Actual: 256 Expected: 0 -bash: /sys/fs/cgroup/blkio/user.slice/user/cgroup.procs: No such file or directory ../../src/tests/isolator_tests.cpp:1283: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'echo $$ >"""" + path::join(flags.cgroups_hierarchy, userCgroup, """"cgroup.procs"""") + """"'"""") Actual: 256 Expected: 0 -bash: /sys/fs/cgroup/cpuacct,cpu/mesos/eeeb99f0-7c5c-4185-869d-635d51dcc6e1/cgroup.procs: No such file or directory mkdir: cannot create directory ‘/sys/fs/cgroup/cpuacct,cpu/mesos/eeeb99f0-7c5c-4185-869d-635d51dcc6e1/user’: No such file or directory ../../src/tests/isolator_tests.cpp:1274: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'mkdir """" + path::join(flags.cgroups_hierarchy, userCgroup) + """"'"""") Actual: 256 Expected: 0 -bash: /sys/fs/cgroup/cpuacct,cpu/mesos/eeeb99f0-7c5c-4185-869d-635d51dcc6e1/user/cgroup.procs: No such file or directory ../../src/tests/isolator_tests.cpp:1283: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'echo $$ >"""" + path::join(flags.cgroups_hierarchy, userCgroup, """"cgroup.procs"""") + """"'"""") Actual: 256 Expected: 0 -bash: /sys/fs/cgroup/name=systemd/user.slice/user-2004.slice/session-3865.scope/cgroup.procs: No such file or directory mkdir: cannot create directory ‘/sys/fs/cgroup/name=systemd/user.slice/user-2004.slice/session-3865.scope/user’: No such file or 
directory ../../src/tests/isolator_tests.cpp:1274: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'mkdir """" + path::join(flags.cgroups_hierarchy, userCgroup) + """"'"""") Actual: 256 Expected: 0 -bash: /sys/fs/cgroup/name=systemd/user.slice/user-2004.slice/session-3865.scope/user/cgroup.procs: No such file or directory ../../src/tests/isolator_tests.cpp:1283: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'echo $$ >"""" + path::join(flags.cgroups_hierarchy, userCgroup, """"cgroup.procs"""") + """"'"""") Actual: 256 Expected: 0 [ FAILED ] UserCgroupIsolatorTest/1.ROOT_CGROUPS_UserCgroup, where TypeParam = mesos::internal::slave::CgroupsCpushareIsolatorProcess (763 ms) [----------] 1 test from UserCgroupIsolatorTest/1 (763 ms total) [----------] 1 test from UserCgroupIsolatorTest/2, where TypeParam = mesos::internal::slave::CgroupsPerfEventIsolatorProcess userdel: user 'mesos.test.unprivileged.user' does not exist [ RUN ] UserCgroupIsolatorTest/2.ROOT_CGROUPS_UserCgroup ../../src/tests/isolator_tests.cpp:1200: Failure isolator: Failed to create PerfEvent isolator, invalid events: { cpu-cycles } [ FAILED ] UserCgroupIsolatorTest/2.ROOT_CGROUPS_UserCgroup, where TypeParam = mesos::internal::slave::CgroupsPerfEventIsolatorProcess (6 ms) [----------] 1 test from UserCgroupIsolatorTest/2 (6 ms total) ",0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3051","07/15/2015 06:11:05",8,"performance issues with port ranges comparison ""Testing in an environment with lots of frameworks (>200), where the frameworks permanently decline resources they don't need. The allocator ends up spending a lot of time figuring out whether offers are refused (the code path through {{HierarchicalAllocatorProcess::isFiltered()}}. In profiling a synthetic benchmark, it turns out that comparing port ranges is very expensive, involving many temporary allocations. 61% of Resources::contains() run time is in operator -= (Resource). 35% of Resources::contains() run time is in Resources::_contains(). The heaviest call chain through {{Resources::_contains}} is: mesos::coalesce(Value_Ranges) gets done a lot and ends up being really expensive. 
The heaviest parts of the inverted call chain are: """," Running Time Self (ms) Symbol Name 7237.0ms 35.5% 4.0 mesos::Resources::_contains(mesos::Resource const&) const 7200.0ms 35.3% 1.0 mesos::contains(mesos::Resource const&, mesos::Resource const&) 7133.0ms 35.0% 121.0 mesos::operator<=(mesos::Value_Ranges const&, mesos::Value_Ranges const&) 6319.0ms 31.0% 7.0 mesos::coalesce(mesos::Value_Ranges*, mesos::Value_Ranges const&) 6240.0ms 30.6% 161.0 mesos::coalesce(mesos::Value_Ranges*, mesos::Value_Range const&) 1867.0ms 9.1% 25.0 mesos::Value_Ranges::add_range() 1694.0ms 8.3% 4.0 mesos::Value_Ranges::~Value_Ranges() 1495.0ms 7.3% 16.0 mesos::Value_Ranges::operator=(mesos::Value_Ranges const&) 445.0ms 2.1% 94.0 mesos::Value_Range::MergeFrom(mesos::Value_Range const&) 154.0ms 0.7% 24.0 mesos::Value_Ranges::range(int) const 103.0ms 0.5% 24.0 mesos::Value_Ranges::range_size() const 95.0ms 0.4% 2.0 mesos::Value_Range::Value_Range(mesos::Value_Range const&) 59.0ms 0.2% 4.0 mesos::Value_Ranges::Value_Ranges() 50.0ms 0.2% 50.0 mesos::Value_Range::begin() const 28.0ms 0.1% 28.0 mesos::Value_Range::end() const 26.0ms 0.1% 0.0 mesos::Value_Range::~Value_Range() Running Time Self (ms) Symbol Name 3209.0ms 15.7% 3209.0 mesos::Value_Range::~Value_Range() 3209.0ms 15.7% 0.0 google::protobuf::internal::GenericTypeHandler::Delete(mesos::Value_Range*) 3209.0ms 15.7% 0.0 void google::protobuf::internal::RepeatedPtrFieldBase::Destroy::TypeHandler>() 3209.0ms 15.7% 0.0 google::protobuf::RepeatedPtrField::~RepeatedPtrField() 3209.0ms 15.7% 0.0 google::protobuf::RepeatedPtrField::~RepeatedPtrField() 3209.0ms 15.7% 0.0 mesos::Value_Ranges::~Value_Ranges() 3209.0ms 15.7% 0.0 mesos::Value_Ranges::~Value_Ranges() 2441.0ms 11.9% 0.0 mesos::coalesce(mesos::Value_Ranges*, mesos::Value_Range const&) 452.0ms 2.2% 0.0 mesos::remove(mesos::Value_Ranges*, mesos::Value_Range const&) 169.0ms 0.8% 0.0 mesos::operator<=(mesos::Value_Ranges const&, mesos::Value_Ranges const&) 82.0ms 0.4% 0.0 mesos::operator-=(mesos::Value_Ranges&, mesos::Value_Ranges const&) 65.0ms 0.3% 0.0 mesos::Value_Ranges::~Value_Ranges() 2541.0ms 12.4% 2541.0 google::protobuf::internal::GenericTypeHandler::New() 2541.0ms 12.4% 0.0 google::protobuf::RepeatedPtrField::TypeHandler::Type* google::protobuf::internal::RepeatedPtrFieldBase::Add::TypeHandler>() 2305.0ms 11.3% 0.0 google::protobuf::RepeatedPtrField::Add() 2305.0ms 11.3% 0.0 mesos::Value_Ranges::add_range() 1962.0ms 9.6% 0.0 mesos::coalesce(mesos::Value_Ranges*, mesos::Value_Range const&) 343.0ms 1.6% 0.0 mesos::ranges::add(mesos::Value_Ranges*, long long, long long) 236.0ms 1.1% 0.0 void google::protobuf::internal::RepeatedPtrFieldBase::MergeFrom::TypeHandler>(google::protobuf::internal::RepeatedPtrFieldBase const&) 1471.0ms 7.2% 1471.0 google::protobuf::internal::RepeatedPtrFieldBase::Reserve(int) 1333.0ms 6.5% 0.0 google::protobuf::RepeatedPtrField::TypeHandler::Type* google::protobuf::internal::RepeatedPtrFieldBase::Add::TypeHandler>() 1333.0ms 6.5% 0.0 google::protobuf::RepeatedPtrField::Add() 1333.0ms 6.5% 0.0 mesos::Value_Ranges::add_range() 1086.0ms 5.3% 0.0 mesos::coalesce(mesos::Value_Ranges*, mesos::Value_Range const&) 247.0ms 1.2% 0.0 mesos::ranges::add(mesos::Value_Ranges*, long long, long long) 107.0ms 0.5% 0.0 void google::protobuf::internal::RepeatedPtrFieldBase::MergeFrom::TypeHandler>(google::protobuf::internal::RepeatedPtrFieldBase const&) 107.0ms 0.5% 0.0 google::protobuf::RepeatedPtrField::MergeFrom(google::protobuf::RepeatedPtrField const&) 107.0ms 0.5% 0.0 
mesos::Value_Ranges::MergeFrom(mesos::Value_Ranges const&) 105.0ms 0.5% 0.0 mesos::Value_Ranges::CopyFrom(mesos::Value_Ranges const&) 105.0ms 0.5% 0.0 mesos::Value_Ranges::operator=(mesos::Value_Ranges const&) 104.0ms 0.5% 0.0 mesos::coalesce(mesos::Value_Ranges*, mesos::Value_Range const&) 1.0ms 0.0% 0.0 mesos::remove(mesos::Value_Ranges*, mesos::Value_Range const&) 2.0ms 0.0% 0.0 mesos::Resource::MergeFrom(mesos::Resource const&) 2.0ms 0.0% 0.0 google::protobuf::internal::GenericTypeHandler::Merge(mesos::Resource const&, mesos::Resource*) 2.0ms 0.0% 0.0 void google::protobuf::internal::RepeatedPtrFieldBase::MergeFrom::TypeHandler>(google::protobuf::internal::RepeatedPtrFieldBase const&) 29.0ms 0.1% 0.0 void google::protobuf::internal::RepeatedPtrFieldBase::MergeFrom::TypeHandler>(google::protobuf::internal::RepeatedPtrFieldBase const&) 898.0ms 4.4% 898.0 google::protobuf::RepeatedPtrField::TypeHandler::Type* google::protobuf::internal::RepeatedPtrFieldBase::Add::TypeHandler>() 517.0ms 2.5% 0.0 google::protobuf::RepeatedPtrField::Add() 517.0ms 2.5% 0.0 mesos::Value_Ranges::add_range() 429.0ms 2.1% 0.0 mesos::coalesce(mesos::Value_Ranges*, mesos::Value_Range const&) 88.0ms 0.4% 0.0 mesos::ranges::add(mesos::Value_Ranges*, long long, long long) 379.0ms 1.8% 0.0 void google::protobuf::internal::RepeatedPtrFieldBase::MergeFrom::TypeHandler>(google::protobuf::internal::RepeatedPtrFieldBase const&) ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3060","07/16/2015 14:18:24",1,"FTP response code for success not recognized by fetcher. ""The response code for successful HTTP requests is 200, the response code for successful FTP file transfers is 226. The fetcher currently only checks for a response code of 200 even for FTP URIs. This results in failed fetching even though the resource gets downloaded successfully. This has been found by a dedicated external test using an FTP server. ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3062","07/16/2015 17:10:09",2,"Add authorization for dynamic reservation ""Dynamic reservations should be authorized with the {{principal}} of the reserving entity (framework or master). The idea is to introduce {{Reserve}} and {{Unreserve}} into the ACL. When a framework/operator reserves resources, """"reserve"""" ACLs are checked to see if the framework ({{FrameworkInfo.principal}}) or the operator ({{Credential.user}}) is authorized to reserve the specified resources. If not authorized, the reserve operation is rejected. When a framework/operator unreserves resources, """"unreserve"""" ACLs are checked to see if the framework ({{FrameworkInfo.principal}}) or the operator ({{Credential.user}}) is authorized to unreserve the resources reserved by a framework or operator ({{Resource.ReservationInfo.principal}}). If not authorized, the unreserve operation is rejected."""," message Reserve { // Subjects. required Entity principals = 1; // Objects. MVP: Only possible values = ANY, NONE required Entity resources = 1; } message Unreserve { // Subjects. required Entity principals = 1; // Objects. 
required Entity reserver_principals = 2; } ",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3066","07/16/2015 19:05:30",3,"Replicated registry needs a representation of maintenance schedules ""In order to persist maintenance schedules across failovers of the master, the schedule information must be kept in the replicated registry. This means adding an additional message in the Registry protobuf in src/master/registry.proto. The status of each individual slave's maintenance will also be persisted in this way. Note: There can be multiple SlaveID's attached to a single hostname."""," message Maintenance { message HostStatus { required string hostname = 1; // True if the slave is deactivated for maintenance. // False if the slave is draining in preparation for maintenance. required bool is_down = 2; // Or an enum } message Schedule { // The set of affected slave(s). repeated HostStatus hosts = 1; // Interval in which this set of slaves is expected to be down for. optional Unavailability interval = 2; } message Schedules { repeated Schedule schedules; } optional Schedules schedules = 1; } ",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3068","07/16/2015 19:22:57",5,"Registry operations are hardcoded for a single key (Registry object) ""This is primarily a refactoring. The prototype for modifying the registry is currently: In order to support Maintenance schedules (possibly Quotas as well), there should be an alternate prototype for Maintenance. Something like: The existing RegistrarProcess::update (src/master/registrar.cpp) should be refactored to allow for more than one key. If necessary, refactor existing operations defined in src/master/master.hpp (AdminSlave, ReadminSlave, RemoveSlave)."""," Try operator () ( Registry* registry, hashset* slaveIDs, bool strict); Try operation () ( Maintenance* maintenance, bool strict); ",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3069","07/16/2015 19:38:53",8,"Registry operations do not exist for manipulating maintanence schedules ""In order to modify the maintenance schedule in the replicated registry, we will need Operations (src/master/registrar.hpp). The operations will likely correspond to the HTTP API: * UpdateMaintenanceSchedule: Given a blob representing a maintenance schedule, perform some verification on the blob. Write the blob to the registry. * StartMaintenance: Given a set of machines, verify then transition machines from Draining to Deactivated. * StopMaintenance: Given a set of machines, verify then transition machines from Deactivated to Normal. Remove affected machines from the schedule(s).""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3072","07/17/2015 10:28:59",3,"Unify initialization of modularized components ""h1.Introduction As it stands right now, default implementations of modularized components are required to have a non parametrized {{create()}} static method. This allows to write tests which can cover default implementations and modules based on these default implementations on a uniform way. For example, with the interface {{Foo}}: With a default implementation: This allows to create typed tests which look as following: The test will be applied to each of types in the template parameters of {{FooTestTypes}}. This allows to test different implementation of an interface. 
In our code, it tests default implementations and a module which uses the same default implementation. The class {{tests::Module}} needs a little explanation, it is a wrapper around {{ModuleManager}} which allows the tests to encode information about the requested module in the type itself instead of passing a string to the factory method. The wrapper around create, the real important method looks as follows: h1.The Problem Consider the following implementation of {{Foo}}: As it can be seen, this implementation cannot be used as a default implementation since its create API does not match the one of {{test::Module<>}}: {{create()}} has a different signature for both types. It is still a common situation to require initialization parameters for objects, however this constraint (keeping both interfaces alike) forces default implementations of modularized components to have default constructors, therefore the tests are forcing the design of the interfaces. Implementations which are supposed to be used as modules only, i.e. non default implementations are allowed to have constructor parameters, since the actual signature of their factory method is, this factory method's function is to decode the parameters and call the appropriate constructor: where parameters is just an array of key-value string pairs whose interpretation is left to the specific module. Sadly, this call is wrapped by {{ModuleManager}} which only allows module parameters to be passed from the command line and does not offer a programmatic way to feed construction parameters to modules. h1.The Ugly Workaround With the requirement of a default constructor and parameters devoid {{create()}} factory function, a common pattern (see [Authenticator|https://github.com/apache/mesos/blob/9d4ac11ed757aa5869da440dfe5343a61b07199a/include/mesos/authentication/authenticator.hpp]) has been introduced to feed construction parameters into default implementation, this leads to adding an {{initialize()}} call to the public interface, which will have {{Foo}} become: {{ParameterFoo}} will thus look as follows: Look that this {{initialize()}} method now has to be implemented by all descendants of {{Foo}}, even if there's a {{DatabaseFoo}} which takes is return value for {{hello()}} from a DB, it will need to support {{int}} as an initialization parameter. The problem is more severe the more specific the parameter to {{initialize()}} is. For example, if there is a very complex structure implementing ACLs, all implementations of an authorizer will need to import this structure even if they can completely ignore it. 
In the {{Foo}} example if {{ParameterFoo}} were to become the default implementation of {{Foo}}, the tests would look as follows: """," class Foo { public: virtual ~Foo() {} virtual Future hello() = 0; protected: Foo() {} }; class LocalFoo { public: Try create() { return new Foo; } virtual Future hello() { return 1; } }; typedef ::testing::Types> FooTestTypes; TYPED_TEST_CASE(FooTest, FooTestTypes); TYPED_TEST(FooTest, ATest) { Try foo = TypeParam::create(); ASSERT_SOME(foo); AWAIT_CHECK_EQUAL(foo.get()->hello(), 1); } template static Try test::Module::create() { Try moduleName = getModuleName(N); if (moduleName.isError()) { return Error(moduleName.error()); } return mesos::modules::ModuleManager::create(moduleName.get()); } class ParameterFoo { public: Try create(int i) { return new ParameterFoo(i); } ParameterFoo(int i) : i_(i) {} virtual Future hello() { return i; } private: int i_; }; template T* Module::create(const Parameters& params); class Foo { public: virtual ~Foo() {} virtual Try initialize(Option i) = 0; virtual Future hello() = 0; protected: Foo() {} }; class ParameterFoo { public: Try create() { return new ParameterFoo; } ParameterFoo() : i_(None()) {} virtual Try initialize(Option i) { if (i.isNone()) { return Error(""""Need value to initialize""""); } i_ = i; return Nothing; } virtual Future hello() { if (i_.isNone()) { return Future::failure(""""Not initialized""""); } return i_.get(); } private: Option i_; }; typedef ::testing::Types> FooTestTypes; TYPED_TEST_CASE(FooTest, FooTestTypes); TYPED_TEST(FooTest, ATest) { Try foo = TypeParam::create(); ASSERT_SOME(foo); int fooValue = 1; foo.get()->initialize(fooValue); AWAIT_CHECK_EQUAL(foo.get()->hello(), fooValue); } ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3077","07/17/2015 22:21:22",5,"Registry recovery does not recover the maintenance object. ""Persisted info is fetched from the registry when a master is elected or after failover. Currently, this process involves 3 steps: * Fetch the """"registry"""". * Start an operation to add the new master to the fetched registry. * Check the success of the operation and finish recovering. These methods can be found in src/master/registrar.cpp Since the maintenance schedule is stored in a separate key, the recover process must also fetch a new """"maintenance"""" object. This object needs to be passed along to the master along with the existing """"registry"""" object. Possible test(s): * src/tests/registrar_tests.cpp ** Change the """"Recovery"""" test to include checks for the new object.""","RegistrarProcess::recover, ::_recover, ::__recover",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3087","07/20/2015 11:42:06",1,"Typos in oversubscription doc ""* In docs/oversubscription.md: there are three cases where """"revocable"""" is written as """"recovable"""", including the name of a JSON field. * Also in `docs/oversubscription.md`: the last sentence doesn't make sense Maybe should say """"To select a custom..."""" or """"To install a custom..."""""""," $ grep -niR recovable . ./docs/oversubscription.md:51:with revocable resources. Further more, recovable resources cannot be ./docs/oversubscription.md:95:Launching tasks using recovable resources is done through the existing ./docs/oversubscription.md:96:`launchTasks` API. Revocable resources will have the `recovable` field set. 
See To select custom a resource estimator and QoS controller, please refer to the [modules documentation](modules.md). ",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3093","07/20/2015 23:25:16",3,"Support HTTPS requests in libprocess ""In order to pull images from Docker registries, https calls are needed to securely communicate with the registry hosts. Currently, only http requests are supported through libprocess. Now that SSL sockets are available through libprocess, support for https can be added.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3097","07/20/2015 23:40:54",13,"OS-specific code touched by the containerizer tests is not Windows compatible ""In the process of adding the Cmake build system, [~hausdorff] noted and stubbed out all OS-specific code. That sweep (mostly of libprocess and stout) is here: https://github.com/hausdorff/mesos/commit/b862f66c6ff58c115a009513621e5128cb734d52 Instead of having inline {{#if defined(...)}}, the OS-specific code will be separated into directories. The Windows code will be stubbed out.""","",0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3098","07/20/2015 23:44:30",13,"Implement WindowsContainerizer and WindowsDockerContainerizer ""The MVP for Windows support is a containerizer that (1) runs on Windows, and (2) runs and passes all the tests that are relevant to the Windows platform (_e.g._, not the tests that involve cgroups). To do this we require at least a `WindowsContainerizer` (to be implemented alongside the `MesosContainerizer`), which provides no meaningful (_e.g._) process namespacing (much like the default unix containerizer). In the long term (hopefully before MesosCon) we want to support also the Windows container API. This will require implementing a separate containerizer, maybe called `WindowsDockerContainerizer`. Since the Windows container API is actually officially supported through the Docker interface (_i.e._, MSFT actually ported the Docker engine to Windows, and that is the official API), the interfaces (like the fetcher) shouldn't change much. The tests probably will have to change, as we don't have access to any isolation primitives like cgroups for those tests. Outstanding TODO([~hausdorff]): Flesh out this description when more details are available, regarding: * The container API for Windows (when we know them) * The nuances of Windows vs Linux (when we know them) * etc.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3101","07/21/2015 00:09:32",3,"Standardize separation of Windows/Linux-specific OS code ""There are 50+ files that must be touched to separate OS-specific code. First, we will standardize the changes by using stout/abort.hpp as an example. 
The review/discussion can be found here: https://reviews.apache.org/r/36625/""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3102","07/21/2015 00:17:16",5,"Separate OS-specific code in the stout library ""This issue tracks changes for all files under {{3rdparty/libprocess/3rdparty/stout/}} The changes will be based on this commit: https://github.com/hausdorff/mesos/commit/b862f66c6ff58c115a009513621e5128cb734d52#diff-a6d038bad64b154996452bec020cfa7c""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3103","07/21/2015 00:22:07",5,"Separate OS-specific code in the libprocess library ""This issue tracks changes for all files under {{3rdparty/libprocess/include/}} and {{3rdparty/libprocess/src}}. The changes will be based on this commit: https://github.com/hausdorff/mesos/commit/b862f66c6ff58c115a009513621e5128cb734d52#diff-a6d038bad64b154996452bec020cfa7c""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3106","07/21/2015 01:53:36",5,"Extend CMake build system to support building against third-party libraries from either the system or the local Mesos rebundling ""Currently Mesos has third-party dependencies of two types: (1) those that are expected to be on the system (such as APR, libsvn, _etc_.), and (2) those that have been historically bundled as tarballs inside the Mesos repository, and are not expected to be on the system when Mesos is installed (these are located in the `3rdparty/` directory, and includes things like boost and glog). For type (2), the MVP of the CMake-based build system will always pull down a fresh tarball from an external source, instead of using the bundled tarballs in the `3rdparty/` folder. However, many CI systems do not have Internet access, so in the long term, we need to provide many options for getting these dependencies.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3107","07/21/2015 02:21:40",3,"Define CMake style guide ""The short story is that it is important to be principled about how the CMake build system is maintained, because there CMake language makes it difficult to statically verify that a configuration is correct. It is not unique in this regard, but (make is arguably even worse) but it is something that's important to make sure we get right. The longer story is, CMake's language is dynamically scoped and often has somewhat odd defaults for variable values (_e.g._, IIRC, target names passed to ExternalProject_Add default to """"PREFIX"""" instead of erroring out). This means that it is rare to get a configuration-time error (_i.e._, CMake usually doesn't say something like """"hey this variable isn't defined""""), and in large projects, this can make it very difficult to know where definitions come from, or whether it's important that one config routine runs before another. Dynamic scoping also makes it particularly easy to write spaghetti code, which is clearly undesirable for something as important as a build system. Thus, it is particularly important that we lay down our expectations for how the CMake system is to be structured. 
This might include: * Function naming (_e.g._, making it easy to tell whether a function was defined by us, and where it was defined; so we might say that we want our functions to have an underscore to start, and start with the package the come from, like libprocess, so that we know where to look for the definition.) * What assertions we want to check variable values against, so that we can replace subtle errors (_e.g._, a library is accidentally named something silly like """"PREFIX.0.0.1"""") with an obvious ones (_e.g._, """"You have failed to define your target name, so CMake has defaulted to 'PREFIX'; please check your configuration routines"""") * Decisions of what goes where. (_e.g._, the most complex parts of the CMake MVPs is in the configuration routines, like `MesosConfigure.cmake`; to curb this, we should have strict rules about what goes in that file vs other files, and how we know what is to be run before what. Part of this should probably be prominent comments explaining the structure of the project, so that people aren't confused!) * And so on.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3109","07/21/2015 02:28:51",3,"Expand CMake build system to support building the containerizer and associated components ""In other tasks in epic MESOS-898, we implement a CMake-based build system that allows us to build process library, the process tests, and the stout tests. For the CMake build system MVP, it's important that we expand this to build the containerizer, associated modules, and all related tests.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3110","07/21/2015 02:33:46",3,"Harden the CMake system-dependency-locating routines ""Currently the Mesos project has two flavors of dependency: (1) the dependencies we expect are already on the system (_e.g._, apr, libsvn), and (2) the dependencies that are historically bundled with Mesos (_e.g._, glog). Dependency type (1) requires solid modules that will locate them on any system: Linux, BSD, or Windows. This would come for free if we were using CMake 3.0, but we're using CMake 2.8 so that Ubuntu users can install it out of the box, instead of upgrading CMake first. This is additionally useful for dependency type (2), where we will expect to have to use these routines when we support both the rebundled dependencies in the `3rdparty/` folder, and system installations of those dependencies.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3112","07/21/2015 12:19:28",8,"Fetcher should perform cache eviction based on cache file usage patterns. ""Currently, the fetcher uses a trivial strategy to select eviction victims: it picks the first cache file it finds in linear iteration. This means that potentially a file that has just been used gets evicted the next moment. This performance loss can be avoided by even the simplest enhancement of the selection procedure. Proposed approach: determine an effective yet relatively uncomplex and quick algorithm and implement it in `FetcherProcess::Cache::selectVictims(const Bytes& requiredSpace)`. Suggestion: approximate MRU-retention somehow. Unit-test what actually happens!""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3121","07/21/2015 20:55:56",2,"Always disable SSLV2 ""The SSL protocol mismatch tests are failing on Centos7 when matching SSLV2 with SSLV2. 
Since this version of the protocol is highly discouraged anyway, let's disable it completely unless requested otherwise.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3122","07/21/2015 21:04:22",2,"Add configurable UNIMPLEMENTED macro to stout ""During the transition to support for windows, it would be great if we had the ability to use a macro that marks functions as un-implemented. To support being able to find all the unimplemented functions easily at compile time, while also being able to run the tests at the same time, we can add a configuration flag that controls whether this macro aborts or expands to a static assertion.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3127","07/22/2015 10:37:00",1,"Improve task reconciliation documentation. ""Include additional information about task reconciliation that explain why the master may not return the states of all tasks immediately and why an explicit task reconciliation algorithm is necessary.""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3129","07/22/2015 18:59:34",2,"Move all MesosContainerizer related files under src/slave/containerizer/mesos ""Currently, some MesosContainerizer specific files are not in the correct location. For example: They should be put under src/slave/containerizer/mesos/"""," src/slave/containerizer/isolators/* src/slave/containerizer/provisioner.hpp|cpp ",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3132","07/23/2015 00:44:25",5,"Allow slave to forward messages through the master for HTTP schedulers. ""The master currently has no install handler for {{ExecutorToFramework}} messages and the slave directly sends these messages to the scheduler driver, bypassing the master entirely. We need to preserve this behavior for the driver, but HTTP schedulers will not have a libprocess 'pid'. We'll have to ensure that the {{RunTaskMessage}} and {{UpdateFrameworkMessage}} have an optional pid. For now the master will continue to set the pid, but 0.24.0 slaves will know to send messages through the master when the 'pid' is not available.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3134","07/23/2015 06:33:01",5,"Port bootstrap to CMake ""Bootstrap does a lot of significant things, like setting up the git commit hooks. We will want something like bootstrap to run also on systems that don't have bash -- ideally this should just run in CMake itself.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3135","07/23/2015 07:55:08",2,"Publish MasterInfo to ZK using JSON ""Following from MESOS-2340, which now allows Master to correctly decode JSON information ({{MasterInfo}}) published to Zookeeper, we can now enable the Master Leader Contender to serialize it too in JSON.""","",0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3139","07/23/2015 20:14:47",13,"Incorporate CMake into standard documentation ""Right now it's anyone's guess how to build with CMake. If we want people to use it, we should put up documentation. The central challenge is that the CMake instructions will be slightly different for different platforms. 
For example, on Linux, the gist of the build is basically the same as autotools; you pull down the system dependencies (like APR, _etc_.), and then: ``` ./bootstrap mkdir build-cmake && cd build-cmake cmake .. make ``` But, on Windows, it will be somewhat more complicated. There is no bootstrap step, for example, because Windows doesn't have bash natively. And even when we put that in, you'll still have to build the glog stuff out-of-band because CMake has no way of booting up Visual Studio and calling """"build."""" So practically, we need to figure out: * What our build story is for different platforms * Write specific instructions for our """"core"""" target platforms.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3142","07/24/2015 07:50:27",2,"As a Developer I want a better way to run shell commands ""When reviewing the code in [r/36425|https://reviews.apache.org/r/36425/] [~benjaminhindman] noticed that there is a better abstraction that is possible to introduce for {{os::shell()}} that will simplify the caller's life. Instead of having to handle all possible outcomes, we propose to refactor {{os::shell()}} as follows: where the returned string is {{stdout}} and, should the program be signaled, or exit with a non-zero exit code, we will simply return a {{Failure}} with an error message that will encapsulate both the returned/signaled state, and, possibly {{stderr}}. And some test driven development: Alternatively, the caller can ask to have {{stderr}} conflated with {{stdout}}: However, {{stderr}} will be ignored by default: An analysis of existing usage shows that in almost all cases, the caller only cares {{if not error}}; in fact, the actual exit code is read only once, and even then, in a test case. We believe this will simplify the API to the caller, and will significantly reduce the length and complexity at the calling sites (<6 LOC against the current 20+)."""," /** * Returns the output from running the specified command with the shell. */ Try shell(const string& command) { // Actually handle the WIFEXITED, WIFSIGNALED here! } EXPECT_ERROR(os::shell(""""false"""")); EXPECT_SOME(os::shell(""""true"""")); EXPECT_SOME_EQ(""""hello world"""", os::shell(""""echo hello world"""")); Try outAndErr = os::shell(""""myCmd --foo 2>&1""""); // We don't read standard error by default. EXPECT_SOME_EQ("""""""", os::shell(""""echo hello world 1>&2"""")); // We don't even read stderr if something fails (to return in Try::error). Try output = os::shell(""""echo hello world 1>&2 && false""""); EXPECT_ERROR(output); EXPECT_FALSE(strings::contains(output.error(), """"hello world"""")); ",0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3143","07/24/2015 11:07:30",2,"Disable endpoints rule fails to recognize HTTP path delegates ""In mesos, one can use the flag {{--firewall_rules}} to disable endpoints. Disabled endpoints will return a _403 Forbidden_ response whenever someone tries to access endpoints. Libprocess support adding one default delegate for endpoints, which is the default process id which handles endpoints if no process id was given. For example, the default id of the master libprocess process is {{master}} which is also set as the delegate for the master system process, so a request to the endpoint {{http://master-address:5050/state.json}} will effectively be resolved by {{http://master-address:5050/master/state.json}}. 
But if one disables {{/state.json}} because of how delegates work, it can still access {{/master/state.json}}. The only workaround is to disabled both enpoints.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3154","07/27/2015 19:24:19",1,"Enable Mesos Agent Node to use arbitrary script / module to figure out IP, HOSTNAME ""Following from MESOS-2902 we want to enable the same functionality in the Mesos Agents too. This is probably best done once we implement the new {{os::shell}} semantics, as described in MESOS-3142.""","",0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3158","07/27/2015 22:35:12",3,"Libprocess Process: Join runqueue workers during finalization ""The lack of synchronization between ProcessManager destruction and the thread pool threads running the queued processes means that the shared state that is part of the ProcessManager gets destroyed prematurely. Synchronizing the ProcessManager destructor with draining the work queues and stopping the workers will allow us to not require leaking the shared state to avoid use beyond destruction.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3162","07/28/2015 02:55:20",3,"Provide a means to check http connection equality for streaming connections. ""If one uses an http::Pipe::Writer to stream a response, one cannot compare the writer with another to see if the connection has changed. This is useful for example, in the master's http api when there is asynchronous disconnection logic. When we handle the disconnection, it's possible for the scheduler to have re-subscribed, and so the master needs to tell if the disconnection event is relevant for the current connection before taking action.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3164","07/28/2015 17:53:29",3,"Introduce QuotaInfo message ""A {{QuotaInfo}} protobuf message is internal representation for quota related information (e.g. for persisting quota). The protobuf message should be extendable for future needs and allows for easy aggregation across roles and operator principals. It may also be used to pass quota information to allocators.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3165","07/28/2015 18:02:05",5,"Persist and recover quota to/from Registry ""To persist quotas across failovers, the Master should save them in the registry. To support this, we shall: * Introduce a Quota state variable in registry.proto; * Extend the Operation interface so that it supports a ‘Quota’ accumulator (see src/master/registrar.hpp); * Introduce AddQuota / RemoveQuota operations; * Recover quotas from the registry on failover to the Master’s internal::master::Role struct; * Extend RegistrarTest with quota-specific tests. NOTE: Registry variable can be rather big for production clusters (see MESOS-2075). While it should be fine for MVP to add quota information to registry, we should consider storing Quota separately, as this does not need to be in sync with slaves update. However, currently adding more variable is not supported by the registrar. While the Agents are reregistering (note they may fail to do so), the information about what part of the quota is allocated is only partially available to the Master. 
In other words, the state of the quota allocation is reconstructed as Agents reregister. During this period, some roles may be under quota from the perspective of the newly elected Master. The same problem exists on the allocator side: it may think the cluster is under quota and may eagerly try to satisfy quotas before enough Agents reregister, which may result in resources being allocated to frameworks beyond their quota. To address this issue and also to avoid panicking and generating under quota alerts, the Master should give a certain amount of time for the majority (e.g. 80%) of the Agents to reregister before reporting any quota status and notifying the allocator about granted quotas.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3166","07/28/2015 18:46:49",3,"Design doc for docker image registry client ""Create design document for the docker registry Authenticator component so that we have a baseline for the implementation. ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3168","07/28/2015 22:01:03",2,"MesosZooKeeperTest fixture can have side effects across tests ""MesosZooKeeperTest fixture doesn't restart the ZooKeeper server for each test. This means if a test shuts down the ZooKeeper server, the next test (using the same fixture) might fail. For an example see https://reviews.apache.org/r/36807/""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3169","07/29/2015 09:00:04",2,"FrameworkInfo should only be updated if the re-registration is valid ""See Ben Mahler's comment in https://reviews.apache.org/r/32961/ FrameworkInfo should not be updated if the re-registration is invalid. This can happen in a few cases under the branching logic, so this requires some refactoring. Notice that a can be generated both inside as well as from inside ""","FrameworkErrorMessageelse if (from != framework->pid)failoverFramework(framework, from);",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3173","07/29/2015 15:18:34",1,"Mark Path::basename, Path::dirname as const functions. ""The functions Path::basename and Path::dirname in stout/path.hpp are not marked const, although they could. Marking them const would remove some ambiguities in the usage of these functions.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3174","07/30/2015 12:09:58",1,"Fetcher logs erroneous message when successfully extracting an archive ""When fetching an asset while not using the cache, the fetcher may erroneously report this: """"Copying instead of extracting resource from URI with 'extract' flag, because it does not seem to be an archive: """". This message appears in the stderr log in the sandbox no matter whether extraction succeeded or not. It should be absent after successful extraction. ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3183","07/31/2015 00:43:06",3,"Documentation images do not load ""Any images which are referenced from the generated docs ({{docs/*.md}}) do not show up on the website. 
For example: * [Architecture|http://mesos.apache.org/documentation/latest/architecture/] * [External Containerizer|http://mesos.apache.org/documentation/latest/external-containerizer/] * [Fetcher Cache Internals|http://mesos.apache.org/documentation/latest/fetcher-cache-internals/] * [Maintenance|http://mesos.apache.org/documentation/latest/maintenance/] * [Oversubscription|http://mesos.apache.org/documentation/latest/oversubscription/] ""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3185","07/31/2015 22:06:36",3,"Refactor Subprocess logic in linux/perf.cpp to use common subroutine ""MESOS-2834 will enhance the perf isolator to support the different output formats provided by difference kernel versions. In order to achieve this, it requires to execute the """"perf --version"""" command. We should decompose the existing Subcommand processing in perf so that we can share the implementation between the multiple uses of perf.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3189","08/03/2015 20:18:37",2,"TimeTest.Now fails with --enable-libevent ""[ RUN ] TimeTest.Now ../../../3rdparty/libprocess/src/tests/time_tests.cpp:50: Failure Expected: (Microseconds(10)) < (Clock::now() - t1), actual: 8-byte object <10-27 00-00 00-00 00-00> vs 0ns [ FAILED ] TimeTest.Now (0 ms)""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3197","08/04/2015 00:43:34",2,"MemIsolatorTest/{0,1}.MemUsage fails on OS X ""Looks like this is due to {{mlockall}} being unimplemented on OS X. """," [----------] 1 test from MemIsolatorTest/0, where TypeParam = N5mesos8internal5slave23PosixMemIsolatorProcessE [ RUN ] MemIsolatorTest/0.MemUsage Failed to allocate RSS memory: Failed to make pages to be mapped unevictable: Function not implemented../../src/tests/containerizer/isolator_tests.cpp:812: Failure helper.increaseRSS(allocation): Failed to sync with the subprocess ../../src/tests/containerizer/isolator_tests.cpp:815: Failure (usage).failure(): Failed to get usage: No process found at 40558 [ FAILED ] MemIsolatorTest/0.MemUsage, where TypeParam = N5mesos8internal5slave23PosixMemIsolatorProcessE (56 ms) [----------] 1 test from MemIsolatorTest/0 (57 ms total) [----------] 1 test from MemIsolatorTest/1, where TypeParam = N5mesos8internal5tests6ModuleINS_5slave8IsolatorELNS1_8ModuleIDE0EEE [ RUN ] MemIsolatorTest/1.MemUsage Failed to allocate RSS memory: Failed to make pages to be mapped unevictable: Function not implemented../../src/tests/containerizer/isolator_tests.cpp:812: Failure helper.increaseRSS(allocation): Failed to sync with the subprocess ../../src/tests/containerizer/isolator_tests.cpp:815: Failure (usage).failure(): Failed to get usage: No process found at 40572 [ FAILED ] MemIsolatorTest/1.MemUsage, where TypeParam = N5mesos8internal5tests6ModuleINS_5slave8IsolatorELNS1_8ModuleIDE0EEE (50 ms) [----------] 1 test from MemIsolatorTest/1 (50 ms total) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3200","08/04/2015 18:41:44",1,"Remove unused 'fatal' and 'fatalerror' macros ""There exist {{fatal}} and {{fatalerror}} macros in both {{libprocess}} and {{stout}}. 
None of them are currently used as we favor {{glog}}'s {{LOG(FATAL)}}, and therefore should be removed.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3201","08/04/2015 18:58:28",3,"Libev handle_async can deadlock with run_in_event_loop ""Due to the arbitrary nature of the functions that are executed in handle_async, invoking them under the (A) {{watchers_mutex}} can lead to deadlocks if (B) is acquired before calling {{run_in_event_loop}} and (B) is also acquired within the arbitrary function. This was introduced in https://github.com/apache/mesos/commit/849fc4d361e40062073324153ba97e98e294fdf2"""," ==82679== Thread #10: lock order """"0x60774F8 before 0x60768C0"""" violated ==82679== ==82679== Observed (incorrect) order is: acquisition of lock at 0x60768C0 ==82679== at 0x4C32145: pthread_mutex_lock (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so) ==82679== by 0x692C9B: __gthread_mutex_lock(pthread_mutex_t*) (gthr-default.h:748) ==82679== by 0x6950BF: std::mutex::lock() (mutex:134) ==82679== by 0x696219: Synchronized synchronize(std::mutex*)::{lambda(std::mutex*)#1}::operator()(std::mutex*) const (synchronized.hpp:58) ==82679== by 0x696238: Synchronized synchronize(std::mutex*)::{lambda(std::mutex*)#1}::_FUN(std::mutex*) (synchronized.hpp:58) ==82679== by 0x6984CF: Synchronized::Synchronized(std::mutex*, void (*)(std::mutex*), void (*)(std::mutex*)) (synchronized.hpp:35) ==82679== by 0x6962DE: Synchronized synchronize(std::mutex*) (synchronized.hpp:60) ==82679== by 0x728FE1: process::handle_async(ev_loop*, ev_async*, int) (libev.cpp:48) ==82679== by 0x761384: ev_invoke_pending (ev.c:2994) ==82679== by 0x7643C4: ev_run (ev.c:3394) ==82679== by 0x728E37: ev_loop (ev.h:826) ==82679== by 0x729469: process::EventLoop::run() (libev.cpp:135) ==82679== ==82679== followed by a later acquisition of lock at 0x60774F8 ==82679== at 0x4C32145: pthread_mutex_lock (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so) ==82679== by 0x4C6F9D: __gthread_mutex_lock(pthread_mutex_t*) (gthr-default.h:748) ==82679== by 0x4C6FED: __gthread_recursive_mutex_lock(pthread_mutex_t*) (gthr-default.h:810) ==82679== by 0x4F5D3D: std::recursive_mutex::lock() (mutex:175) ==82679== by 0x516513: Synchronized synchronize(std::recursive_mutex*)::{lambda(std::recursive_mutex*)#1}::operator()(std::recursive_mutex*) const (synchronized.hpp:58) ==82679== by 0x516532: Synchronized synchronize(std::recursive_mutex*)::{lambda(std::recursive_mutex*)#1}::_FUN(std::recursive_mutex*) (synchronized.hpp:58) ==82679== by 0x52E619: Synchronized::Synchronized(std::recursive_mutex*, void (*)(std::recursive_mutex*), void (*)(std::recursive_mutex*)) (synchronized.hpp:35) ==82679== by 0x5165D4: Synchronized synchronize(std::recursive_mutex*) (synchronized.hpp:60) ==82679== by 0x6BF4E1: process::ProcessManager::use(process::UPID const&) (process.cpp:2127) ==82679== by 0x6C2B8C: process::ProcessManager::terminate(process::UPID const&, bool, process::ProcessBase*) (process.cpp:2604) ==82679== by 0x6C6C3C: process::terminate(process::UPID const&, bool) (process.cpp:3107) ==82679== by 0x692B65: process::Latch::trigger() (latch.cpp:53) ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3203","08/04/2015 23:24:48",1,"MasterAuthorizationTest.DuplicateRegistration test is flaky ""[ RUN ] MasterAuthorizationTest.DuplicateRegistration Using temporary directory '/tmp/MasterAuthorizationTest_DuplicateRegistration_NKT3f7' I0804 
22:16:01.578500 26185 leveldb.cpp:176] Opened db in 2.188338ms I0804 22:16:01.579172 26185 leveldb.cpp:183] Compacted db in 645075ns I0804 22:16:01.579211 26185 leveldb.cpp:198] Created db iterator in 15766ns I0804 22:16:01.579227 26185 leveldb.cpp:204] Seeked to beginning of db in 1658ns I0804 22:16:01.579238 26185 leveldb.cpp:273] Iterated through 0 keys in the db in 313ns I0804 22:16:01.579282 26185 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0804 22:16:01.579787 26212 recover.cpp:449] Starting replica recovery I0804 22:16:01.580075 26212 recover.cpp:475] Replica is in EMPTY status I0804 22:16:01.581014 26205 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request I0804 22:16:01.581357 26211 recover.cpp:195] Received a recover response from a replica in EMPTY status I0804 22:16:01.581761 26207 recover.cpp:566] Updating replica status to STARTING I0804 22:16:01.582334 26218 master.cpp:377] Master 20150804-221601-2550141356-59302-26185 (d6d349cd895b) started on 172.17.0.152:59302 I0804 22:16:01.582355 26218 master.cpp:379] Flags at startup: --acls="""""""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate=""""true"""" --authenticate_slaves=""""true"""" --authenticators=""""crammd5"""" --credentials=""""/tmp/MasterAuthorizationTest_DuplicateRegistration_NKT3f7/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_slave_ping_timeouts=""""5"""" --quiet=""""false"""" --recovery_slave_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""25secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --slave_ping_timeout=""""15secs"""" --slave_reregister_timeout=""""10mins"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/mesos/mesos-0.24.0/_inst/share/mesos/webui"""" --work_dir=""""/tmp/MasterAuthorizationTest_DuplicateRegistration_NKT3f7/master"""" --zk_session_timeout=""""10secs"""" I0804 22:16:01.582711 26218 master.cpp:424] Master only allowing authenticated frameworks to register I0804 22:16:01.582722 26218 master.cpp:429] Master only allowing authenticated slaves to register I0804 22:16:01.582728 26218 credentials.hpp:37] Loading credentials for authentication from '/tmp/MasterAuthorizationTest_DuplicateRegistration_NKT3f7/credentials' I0804 22:16:01.582929 26204 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 421543ns I0804 22:16:01.582950 26204 replica.cpp:323] Persisted replica status to STARTING I0804 22:16:01.583032 26218 master.cpp:468] Using default 'crammd5' authenticator I0804 22:16:01.583132 26211 recover.cpp:475] Replica is in STARTING status I0804 22:16:01.583154 26218 master.cpp:505] Authorization enabled I0804 22:16:01.583356 26214 whitelist_watcher.cpp:79] No whitelist given I0804 22:16:01.583411 26217 hierarchical.hpp:346] Initialized hierarchical allocator process I0804 22:16:01.583976 26213 replica.cpp:641] Replica in STARTING status received a broadcasted recover request I0804 22:16:01.584187 26209 recover.cpp:195] Received a recover response from a replica in STARTING status I0804 22:16:01.584581 26213 master.cpp:1495] The newly elected leader is master@172.17.0.152:59302 with id 20150804-221601-2550141356-59302-26185 I0804 22:16:01.584609 26213 master.cpp:1508] Elected as the leading master! 
I0804 22:16:01.584627 26213 master.cpp:1278] Recovering from registrar I0804 22:16:01.584656 26204 recover.cpp:566] Updating replica status to VOTING I0804 22:16:01.584770 26212 registrar.cpp:313] Recovering registrar I0804 22:16:01.585261 26218 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 370526ns I0804 22:16:01.585285 26218 replica.cpp:323] Persisted replica status to VOTING I0804 22:16:01.585412 26216 recover.cpp:580] Successfully joined the Paxos group I0804 22:16:01.585667 26216 recover.cpp:464] Recover process terminated I0804 22:16:01.586047 26213 log.cpp:661] Attempting to start the writer I0804 22:16:01.587164 26211 replica.cpp:477] Replica received implicit promise request with proposal 1 I0804 22:16:01.587549 26211 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 358261ns I0804 22:16:01.587568 26211 replica.cpp:345] Persisted promised to 1 I0804 22:16:01.588173 26209 coordinator.cpp:230] Coordinator attemping to fill missing position I0804 22:16:01.589316 26208 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 I0804 22:16:01.589700 26208 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 351778ns I0804 22:16:01.589721 26208 replica.cpp:679] Persisted action at 0 I0804 22:16:01.590698 26213 replica.cpp:511] Replica received write request for position 0 I0804 22:16:01.590754 26213 leveldb.cpp:438] Reading position from leveldb took 31557ns I0804 22:16:01.591147 26213 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 321842ns I0804 22:16:01.591167 26213 replica.cpp:679] Persisted action at 0 I0804 22:16:01.591790 26217 replica.cpp:658] Replica received learned notice for position 0 I0804 22:16:01.592133 26217 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 315281ns I0804 22:16:01.592155 26217 replica.cpp:679] Persisted action at 0 I0804 22:16:01.592180 26217 replica.cpp:664] Replica learned NOP action at position 0 I0804 22:16:01.592686 26211 log.cpp:677] Writer started with ending position 0 I0804 22:16:01.593729 26205 leveldb.cpp:438] Reading position from leveldb took 26394ns I0804 22:16:01.596165 26209 registrar.cpp:346] Successfully fetched the registry (0B) in 11.343104ms I0804 22:16:01.596281 26209 registrar.cpp:445] Applied 1 operations in 26242ns; attempting to update the 'registry' I0804 22:16:01.598415 26212 log.cpp:685] Attempting to append 178 bytes to the log I0804 22:16:01.598563 26215 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1 I0804 22:16:01.599324 26215 replica.cpp:511] Replica received write request for position 1 I0804 22:16:01.599778 26215 leveldb.cpp:343] Persisting action (197 bytes) to leveldb took 420523ns I0804 22:16:01.599800 26215 replica.cpp:679] Persisted action at 1 I0804 22:16:01.600349 26204 replica.cpp:658] Replica received learned notice for position 1 I0804 22:16:01.600684 26204 leveldb.cpp:343] Persisting action (199 bytes) to leveldb took 310315ns I0804 22:16:01.600706 26204 replica.cpp:679] Persisted action at 1 I0804 22:16:01.600723 26204 replica.cpp:664] Replica learned APPEND action at position 1 I0804 22:16:01.601632 26213 registrar.cpp:490] Successfully updated the 'registry' in 5.287936ms I0804 22:16:01.601747 26213 registrar.cpp:376] Successfully recovered registrar I0804 22:16:01.601826 26215 log.cpp:704] Attempting to truncate the log to 1 I0804 22:16:01.601948 26210 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2 I0804 22:16:01.602145 26208 master.cpp:1305] 
Recovered 0 slaves from the Registry (139B) ; allowing 10mins for slaves to re-register I0804 22:16:01.602859 26219 replica.cpp:511] Replica received write request for position 2 I0804 22:16:01.603181 26219 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 284713ns I0804 22:16:01.603209 26219 replica.cpp:679] Persisted action at 2 I0804 22:16:01.603984 26211 replica.cpp:658] Replica received learned notice for position 2 I0804 22:16:01.604313 26211 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 302445ns I0804 22:16:01.604365 26211 leveldb.cpp:401] Deleting ~1 keys from leveldb took 29354ns I0804 22:16:01.604387 26211 replica.cpp:679] Persisted action at 2 I0804 22:16:01.604408 26211 replica.cpp:664] Replica learned TRUNCATE action at position 2 I0804 22:16:01.616402 26185 sched.cpp:164] Version: 0.24.0 I0804 22:16:01.616902 26209 sched.cpp:262] New master detected at master@172.17.0.152:59302 I0804 22:16:01.617000 26209 sched.cpp:318] Authenticating with master master@172.17.0.152:59302 I0804 22:16:01.617019 26209 sched.cpp:325] Using default CRAM-MD5 authenticatee I0804 22:16:01.617324 26212 authenticatee.cpp:115] Creating new client SASL connection I0804 22:16:01.617550 26209 master.cpp:4405] Authenticating scheduler-ac5e7b68-e2d2-441c-a5f5-60c1ff8cf00c@172.17.0.152:59302 I0804 22:16:01.617641 26212 authenticator.cpp:406] Starting authentication session for crammd5_authenticatee(259)@172.17.0.152:59302 I0804 22:16:01.617858 26208 authenticator.cpp:92] Creating new server SASL connection I0804 22:16:01.618140 26216 authenticatee.cpp:206] Received SASL authentication mechanisms: CRAM-MD5 I0804 22:16:01.618191 26216 authenticatee.cpp:232] Attempting to authenticate with mechanism 'CRAM-MD5' I0804 22:16:01.618324 26213 authenticator.cpp:197] Received SASL authentication start I0804 22:16:01.618413 26213 authenticator.cpp:319] Authentication requires more steps I0804 22:16:01.618557 26216 authenticatee.cpp:252] Received SASL authentication step I0804 22:16:01.618664 26216 authenticator.cpp:225] Received SASL authentication step I0804 22:16:01.618703 26216 auxprop.cpp:102] Request to lookup properties for user: 'test-principal' realm: 'd6d349cd895b' server FQDN: 'd6d349cd895b' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0804 22:16:01.618719 26216 auxprop.cpp:174] Looking up auxiliary property '*userPassword' I0804 22:16:01.618778 26216 auxprop.cpp:174] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0804 22:16:01.618820 26216 auxprop.cpp:102] Request to lookup properties for user: 'test-principal' realm: 'd6d349cd895b' server FQDN: 'd6d349cd895b' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0804 22:16:01.618834 26216 auxprop.cpp:124] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0804 22:16:01.618839 26216 auxprop.cpp:124] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0804 22:16:01.618857 26216 authenticator.cpp:311] Authentication success I0804 22:16:01.618954 26219 authenticatee.cpp:292] Authentication success I0804 22:16:01.619035 26204 master.cpp:4435] Successfully authenticated principal 'test-principal' at scheduler-ac5e7b68-e2d2-441c-a5f5-60c1ff8cf00c@172.17.0.152:59302 I0804 22:16:01.619083 26219 authenticator.cpp:424] Authentication session cleanup for crammd5_authenticatee(259)@172.17.0.152:59302 I0804 22:16:01.619309 26208 sched.cpp:407] Successfully authenticated with 
master master@172.17.0.152:59302 I0804 22:16:01.619335 26208 sched.cpp:713] Sending SUBSCRIBE call to master@172.17.0.152:59302 I0804 22:16:01.619494 26208 sched.cpp:746] Will retry registration in 439203ns if necessary I0804 22:16:01.619627 26217 master.cpp:1812] Received SUBSCRIBE call for framework 'default' at scheduler-ac5e7b68-e2d2-441c-a5f5-60c1ff8cf00c@172.17.0.152:59302 I0804 22:16:01.619695 26217 master.cpp:1534] Authorizing framework principal 'test-principal' to receive offers for role '*' I0804 22:16:01.620848 26217 sched.cpp:713] Sending SUBSCRIBE call to master@172.17.0.152:59302 I0804 22:16:01.620929 26217 sched.cpp:746] Will retry registration in 2.099193326secs if necessary I0804 22:16:01.621036 26210 master.cpp:1812] Received SUBSCRIBE call for framework 'default' at scheduler-ac5e7b68-e2d2-441c-a5f5-60c1ff8cf00c@172.17.0.152:59302 I0804 22:16:01.621083 26210 master.cpp:1534] Authorizing framework principal 'test-principal' to receive offers for role '*' I0804 22:16:01.621727 26217 master.cpp:1876] Subscribing framework default with checkpointing disabled and capabilities [ ] I0804 22:16:01.621981 26208 sched.cpp:262] New master detected at master@172.17.0.152:59302 I0804 22:16:01.622131 26208 sched.cpp:318] Authenticating with master master@172.17.0.152:59302 I0804 22:16:01.622153 26208 sched.cpp:325] Using default CRAM-MD5 authenticatee I0804 22:16:01.622323 26212 authenticatee.cpp:115] Creating new client SASL connection I0804 22:16:01.622324 26210 hierarchical.hpp:391] Added framework 20150804-221601-2550141356-59302-26185-0000 I0804 22:16:01.622369 26210 hierarchical.hpp:1008] No resources available to allocate! I0804 22:16:01.622386 26210 hierarchical.hpp:908] Performed allocation for 0 slaves in 28592ns I0804 22:16:01.622511 26210 sched.cpp:640] Framework registered with 20150804-221601-2550141356-59302-26185-0000 I0804 22:16:01.622586 26210 sched.cpp:654] Scheduler::registered took 48005ns I0804 22:16:01.622592 26208 master.cpp:4405] Authenticating scheduler-ac5e7b68-e2d2-441c-a5f5-60c1ff8cf00c@172.17.0.152:59302 I0804 22:16:01.622673 26212 authenticator.cpp:406] Starting authentication session for crammd5_authenticatee(260)@172.17.0.152:59302 I0804 22:16:01.622923 26205 authenticator.cpp:92] Creating new server SASL connection I0804 22:16:01.623112 26204 authenticatee.cpp:206] Received SASL authentication mechanisms: CRAM-MD5 I0804 22:16:01.623133 26216 master.cpp:1870] Dropping SUBSCRIBE call for framework 'default' at scheduler-ac5e7b68-e2d2-441c-a5f5-60c1ff8cf00c@172.17.0.152:59302: Re-authentication in progress I0804 22:16:01.623144 26204 authenticatee.cpp:232] Attempting to authenticate with mechanism 'CRAM-MD5' I0804 22:16:01.623258 26215 authenticator.cpp:197] Received SASL authentication start I0804 22:16:01.623313 26215 authenticator.cpp:319] Authentication requires more steps I0804 22:16:01.623394 26215 authenticatee.cpp:252] Received SASL authentication step I0804 22:16:01.623512 26212 authenticator.cpp:225] Received SASL authentication step I0804 22:16:01.623546 26212 auxprop.cpp:102] Request to lookup properties for user: 'test-principal' realm: 'd6d349cd895b' server FQDN: 'd6d349cd895b' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0804 22:16:01.623564 26212 auxprop.cpp:174] Looking up auxiliary property '*userPassword' I0804 22:16:01.623603 26212 auxprop.cpp:174] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0804 22:16:01.623622 26212 auxprop.cpp:102] Request to lookup properties for 
user: 'test-principal' realm: 'd6d349cd895b' server FQDN: 'd6d349cd895b' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0804 22:16:01.623631 26212 auxprop.cpp:124] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0804 22:16:01.623636 26212 auxprop.cpp:124] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0804 22:16:01.623649 26212 authenticator.cpp:311] Authentication success I0804 22:16:01.623777 26212 authenticatee.cpp:292] Authentication success I0804 22:16:01.623846 26212 master.cpp:4435] Successfully authenticated principal 'test-principal' at scheduler-ac5e7b68-e2d2-441c-a5f5-60c1ff8cf00c@172.17.0.152:59302 I0804 22:16:01.623913 26212 authenticator.cpp:424] Authentication session cleanup for crammd5_authenticatee(260)@172.17.0.152:59302 I0804 22:16:01.624130 26212 sched.cpp:407] Successfully authenticated with master master@172.17.0.152:59302 I0804 22:16:02.583772 26218 hierarchical.hpp:1008] No resources available to allocate! I0804 22:16:02.583818 26218 hierarchical.hpp:908] Performed allocation for 0 slaves in 80538ns I0804 22:16:03.585110 26211 hierarchical.hpp:1008] No resources available to allocate! I0804 22:16:03.585156 26211 hierarchical.hpp:908] Performed allocation for 0 slaves in 69272ns I0804 22:16:04.586539 26214 hierarchical.hpp:1008] No resources available to allocate! I0804 22:16:04.586586 26214 hierarchical.hpp:908] Performed allocation for 0 slaves in 79232ns I0804 22:16:05.587239 26209 hierarchical.hpp:1008] No resources available to allocate! I0804 22:16:05.587293 26209 hierarchical.hpp:908] Performed allocation for 0 slaves in 85128ns I0804 22:16:06.587935 26212 hierarchical.hpp:1008] No resources available to allocate! I0804 22:16:06.587985 26212 hierarchical.hpp:908] Performed allocation for 0 slaves in 78141ns I0804 22:16:07.588817 26214 hierarchical.hpp:1008] No resources available to allocate! I0804 22:16:07.588865 26214 hierarchical.hpp:908] Performed allocation for 0 slaves in 81433ns I0804 22:16:08.589857 26214 hierarchical.hpp:1008] No resources available to allocate! I0804 22:16:08.589906 26214 hierarchical.hpp:908] Performed allocation for 0 slaves in 71929ns I0804 22:16:09.591085 26207 hierarchical.hpp:1008] No resources available to allocate! I0804 22:16:09.591133 26207 hierarchical.hpp:908] Performed allocation for 0 slaves in 78223ns I0804 22:16:10.591737 26207 hierarchical.hpp:1008] No resources available to allocate! I0804 22:16:10.591785 26207 hierarchical.hpp:908] Performed allocation for 0 slaves in 71894ns I0804 22:16:11.593166 26210 hierarchical.hpp:1008] No resources available to allocate! I0804 22:16:11.593221 26210 hierarchical.hpp:908] Performed allocation for 0 slaves in 89782ns I0804 22:16:12.593647 26212 hierarchical.hpp:1008] No resources available to allocate! I0804 22:16:12.593689 26212 hierarchical.hpp:908] Performed allocation for 0 slaves in 69426ns I0804 22:16:13.594154 26210 hierarchical.hpp:1008] No resources available to allocate! I0804 22:16:13.594202 26210 hierarchical.hpp:908] Performed allocation for 0 slaves in 70581ns I0804 22:16:14.594712 26207 hierarchical.hpp:1008] No resources available to allocate! I0804 22:16:14.594758 26207 hierarchical.hpp:908] Performed allocation for 0 slaves in 71201ns I0804 22:16:15.595412 26219 hierarchical.hpp:1008] No resources available to allocate! 
I0804 22:16:15.595464 26219 hierarchical.hpp:908] Performed allocation for 0 slaves in 85183ns I0804 22:16:16.596201 26217 hierarchical.hpp:1008] No resources available to allocate! I0804 22:16:16.596247 26217 hierarchical.hpp:908] Performed allocation for 0 slaves in 95132ns ../../src/tests/master_authorization_tests.cpp:794: Failure Failed to wait 15secs for frameworkRegisteredMessage I0804 22:16:16.624354 26212 master.cpp:966] Framework 20150804-221601-2550141356-59302-26185-0000 (default) at scheduler-ac5e7b68-e2d2-441c-a5f5-60c1ff8cf00c@172.17.0.152:59302 disconnected I0804 22:16:16.624398 26212 master.cpp:2092] Disconnecting framework 20150804-221601-2550141356-59302-26185-0000 (default) at scheduler-ac5e7b68-e2d2-441c-a5f5-60c1ff8cf00c@172.17.0.152:59302 I0804 22:16:16.624445 26212 master.cpp:2116] Deactivating framework 20150804-221601-2550141356-59302-26185-0000 (default) at scheduler-ac5e7b68-e2d2-441c-a5f5-60c1ff8cf00c@172.17.0.152:59302 I0804 22:16:16.624686 26212 master.cpp:988] Giving framework 20150804-221601-2550141356-59302-26185-0000 (default) at scheduler-ac5e7b68-e2d2-441c-a5f5-60c1ff8cf00c@172.17.0.152:59302 0ns to failover I0804 22:16:16.625641 26219 hierarchical.hpp:474] Deactivated framework 20150804-221601-2550141356-59302-26185-0000 I0804 22:16:16.626688 26218 master.cpp:4180] Framework failover timeout, removing framework 20150804-221601-2550141356-59302-26185-0000 (default) at scheduler-ac5e7b68-e2d2-441c-a5f5-60c1ff8cf00c@172.17.0.152:59302 I0804 22:16:16.626734 26218 master.cpp:4759] Removing framework 20150804-221601-2550141356-59302-26185-0000 (default) at scheduler-ac5e7b68-e2d2-441c-a5f5-60c1ff8cf00c@172.17.0.152:59302 I0804 22:16:16.627074 26218 master.cpp:858] Master terminating I0804 22:16:16.627218 26215 hierarchical.hpp:428] Removed framework 20150804-221601-2550141356-59302-26185-0000 ../../3rdparty/libprocess/include/process/gmock.hpp:365: Failure Actual function call count doesn't match EXPECT_CALL(filter->mock, filter(testing::A()))... Expected args: message matcher (8-byte object <98-98 02-AC 54-2B 00-00>, 1-byte object <97>, 1-byte object ) Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] MasterAuthorizationTest.DuplicateRegistration (15056 ms) ""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3207","08/05/2015 10:11:19",1,"C++ style guide is not rendered correctly (code section syntax disregarded) ""Some paragraphs at the bottom of docs/mesos-c++-style-guide.md containing code sections are not rendered correctly by the web site generator. It looks fine in a github gist and apparently the syntax used is correct. ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3213","08/05/2015 21:29:14",2,"Design doc for docker registry token manager ""Create design document for describing the component and interaction between Docker Registry Client and remote Docker Registry for token based authorization.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3222","08/06/2015 21:49:38",5,"Implement docker registry client ""Implement the following functionality: - fetch manifest from remote registry based on authorization method dictated by the registry. - fetch image layers from remote registry based on authorization method dictated by the registry.. 
""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3251","08/11/2015 19:42:37",1,"http::get API evaluates ""host"" wrongly ""Currently libprocess http API sets the """"Host"""" header field from the peer socket address (IP:port). The problem is that socket address might not be right HTTP server and might be just a proxy. ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3252","08/11/2015 20:59:08",2,"Ignore no statistics condition for containers with no qdisc ""In PortMappingStatistics::execute, we log the following errors to stderr if the egress rate limiting qdiscs are not configured inside the container. This can occur because of an error reading the qdisc (statistics function return an error) or because the qdisc does not exist (function returns none). We should not log an error when the qdisc does not exist since this is normal behaviour if the container is created without rate limiting. We do not want to gate this function on the slave rate limiting flag since we would have to compare the behaviour against the flag value at the time the container was created."""," Failed to get the network statistics for the htb qdisc on eth0 Failed to get the network statistics for the fq_codel qdisc on eth0 ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3254","08/12/2015 01:44:04",2,"Cgroup CHECK fails test harness ""CHECK in clean up of ContainerizerTest causes test harness to abort rather than fail or skip only perf related tests. [ RUN ] SlaveRecoveryTest/0.RestartBeforeContainerizerLaunch [ OK ] SlaveRecoveryTest/0.RestartBeforeContainerizerLaunch (628 ms) [----------] 24 tests from SlaveRecoveryTest/0 (38986 ms total) [----------] 4 tests from MesosContainerizerSlaveRecoveryTest [ RUN ] MesosContainerizerSlaveRecoveryTest.ResourceStatistics ../../src/tests/mesos.cpp:720: Failure cgroups::mount(hierarchy, subsystem): 'perf_event' is already attached to another hierarchy ------------------------------------------------------------- We cannot run any cgroups tests that require a hierarchy with subsystem 'perf_event' because we failed to find an existing hierarchy or create a new one (tried '/tmp/mesos_test_cgroup/perf_event'). You can either remove all existing hierarchies, or disable this test case (i.e., --gtest_filter=-MesosContainerizerSlaveRecoveryTest.*). 
------------------------------------------------------------- F0811 17:23:43.874696 12955 mesos.cpp:774] CHECK_SOME(cgroups): '/tmp/mesos_test_cgroup/perf_event' is not a valid hierarchy *** Check failure stack trace: *** @ 0x7fb2fb4835fd google::LogMessage::Fail() @ 0x7fb2fb48543d google::LogMessage::SendToLog() @ 0x7fb2fb4831ec google::LogMessage::Flush() @ 0x7fb2fb485d39 google::LogMessageFatal::~LogMessageFatal() @ 0x4e3f98 _CheckFatal::~_CheckFatal() @ 0x82f25a mesos::internal::tests::ContainerizerTest<>::TearDown() @ 0xc030e3 testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0xbf9050 testing::Test::Run() @ 0xbf912e testing::TestInfo::Run() @ 0xbf9235 testing::TestCase::Run() @ 0xbf94e8 testing::internal::UnitTestImpl::RunAllTests() @ 0xbf97a4 testing::UnitTest::Run() @ 0x4a9df3 main @ 0x7fb2f9371ec5 (unknown) @ 0x4b63ee (unknown) Build step 'Execute shell' marked build as failure""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3265","08/14/2015 23:55:30",8,"Starting maintenance needs to deactivate agents and kill tasks. ""After using the {{/maintenance/start}} endpoint to begin maintenance on a machine, agents running on said machine should: * Be deactivated such that no offers are sent from that agent. (Investigate if {{Master::deactivate(Slave*)}} can be used or modified for this purpose.) * Kill all tasks still running on the agent (See MESOS-1475). * Prevent other agents on that machine from registering or sending out offers. This will likely involve some modifications to {{Master::register}} and {{Master::reregister}}. ""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3266","08/15/2015 00:00:25",5,"Stopping/Completing maintenance needs to reactivate agents. ""After using the {{/maintenance/stop}} endpoint to end maintenance on a machine, any deactivated agents must be reactivated and allowed to register with the master.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3273","08/16/2015 20:23:36",5,"EventCall Test Framework is flaky ""Observed this on ASF CI. h/t [~haosdent@gmail.com] Looks like the HTTP scheduler never sent a SUBSCRIBE request to the master. """," [ RUN ] ExamplesTest.EventCallFramework Using temporary directory '/tmp/ExamplesTest_EventCallFramework_k4vXkx' I0813 19:55:15.643579 26085 exec.cpp:443] Ignoring exited event because the driver is aborted! 
Shutting down Sending SIGTERM to process tree at pid 26061 Killing the following process trees: [ ] Shutting down Sending SIGTERM to process tree at pid 26062 Shutting down Killing the following process trees: [ ] Sending SIGTERM to process tree at pid 26063 Killing the following process trees: [ ] Shutting down Sending SIGTERM to process tree at pid 26098 Killing the following process trees: [ ] Shutting down Sending SIGTERM to process tree at pid 26099 Killing the following process trees: [ ] WARNING: Logging before InitGoogleLogging() is written to STDERR I0813 19:55:17.161726 26100 process.cpp:1012] libprocess is initialized on 172.17.2.10:60249 for 16 cpus I0813 19:55:17.161888 26100 logging.cpp:177] Logging to STDERR I0813 19:55:17.163625 26100 scheduler.cpp:157] Version: 0.24.0 I0813 19:55:17.175302 26100 leveldb.cpp:176] Opened db in 3.167446ms I0813 19:55:17.176393 26100 leveldb.cpp:183] Compacted db in 1.047996ms I0813 19:55:17.176496 26100 leveldb.cpp:198] Created db iterator in 77155ns I0813 19:55:17.176518 26100 leveldb.cpp:204] Seeked to beginning of db in 8429ns I0813 19:55:17.176527 26100 leveldb.cpp:273] Iterated through 0 keys in the db in 4219ns I0813 19:55:17.176708 26100 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0813 19:55:17.178951 26136 recover.cpp:449] Starting replica recovery I0813 19:55:17.179934 26136 recover.cpp:475] Replica is in EMPTY status I0813 19:55:17.181970 26126 master.cpp:378] Master 20150813-195517-167907756-60249-26100 (297daca2d01a) started on 172.17.2.10:60249 I0813 19:55:17.182317 26126 master.cpp:380] Flags at startup: --acls=""""permissive: false register_frameworks { principals { type: SOME values: """"test-principal"""" } roles { type: SOME values: """"*"""" } } run_tasks { principals { type: SOME values: """"test-principal"""" } users { type: SOME values: """"mesos"""" } } """" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate=""""false"""" --authenticate_slaves=""""false"""" --authenticators=""""crammd5"""" --credentials=""""/tmp/ExamplesTest_EventCallFramework_k4vXkx/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_slave_ping_timeouts=""""5"""" --quiet=""""false"""" --recovery_slave_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""5secs"""" --registry_strict=""""false"""" --root_submissions=""""true"""" --slave_ping_timeout=""""15secs"""" --slave_reregister_timeout=""""10mins"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/mesos/mesos-0.24.0/src/webui"""" --work_dir=""""/tmp/mesos-II8Gua"""" --zk_session_timeout=""""10secs"""" I0813 19:55:17.183475 26126 master.cpp:427] Master allowing unauthenticated frameworks to register I0813 19:55:17.183536 26126 master.cpp:432] Master allowing unauthenticated slaves to register I0813 19:55:17.183615 26126 credentials.hpp:37] Loading credentials for authentication from '/tmp/ExamplesTest_EventCallFramework_k4vXkx/credentials' W0813 19:55:17.183859 26126 credentials.hpp:52] Permissions on credentials file '/tmp/ExamplesTest_EventCallFramework_k4vXkx/credentials' are too open. It is recommended that your credentials file is NOT accessible by others. 
I0813 19:55:17.183969 26123 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request I0813 19:55:17.184306 26126 master.cpp:469] Using default 'crammd5' authenticator I0813 19:55:17.184661 26126 authenticator.cpp:512] Initializing server SASL I0813 19:55:17.185104 26138 recover.cpp:195] Received a recover response from a replica in EMPTY status I0813 19:55:17.185972 26100 containerizer.cpp:143] Using isolation: posix/cpu,posix/mem,filesystem/posix I0813 19:55:17.186058 26135 recover.cpp:566] Updating replica status to STARTING I0813 19:55:17.187001 26138 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 654586ns I0813 19:55:17.187037 26138 replica.cpp:323] Persisted replica status to STARTING I0813 19:55:17.187499 26134 recover.cpp:475] Replica is in STARTING status I0813 19:55:17.187605 26126 auxprop.cpp:66] Initialized in-memory auxiliary property plugin I0813 19:55:17.187710 26126 master.cpp:506] Authorization enabled I0813 19:55:17.188657 26138 replica.cpp:641] Replica in STARTING status received a broadcasted recover request I0813 19:55:17.188853 26131 hierarchical.hpp:346] Initialized hierarchical allocator process I0813 19:55:17.189252 26132 whitelist_watcher.cpp:79] No whitelist given I0813 19:55:17.189321 26134 recover.cpp:195] Received a recover response from a replica in STARTING status I0813 19:55:17.190001 26125 recover.cpp:566] Updating replica status to VOTING I0813 19:55:17.190696 26124 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 357331ns I0813 19:55:17.190775 26124 replica.cpp:323] Persisted replica status to VOTING I0813 19:55:17.190970 26133 recover.cpp:580] Successfully joined the Paxos group I0813 19:55:17.192183 26129 recover.cpp:464] Recover process terminated I0813 19:55:17.192699 26123 slave.cpp:190] Slave started on 1)@172.17.2.10:60249 I0813 19:55:17.192741 26123 slave.cpp:191] Flags at startup: --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/mesos/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mesos/mesos-0.24.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""1secs"""" --resource_monitoring_interval=""""1secs"""" --resources=""""cpus:2;mem:10240"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --version=""""false"""" --work_dir=""""/tmp/mesos-II8Gua/0"""" I0813 
19:55:17.194514 26100 containerizer.cpp:143] Using isolation: posix/cpu,posix/mem,filesystem/posix I0813 19:55:17.194658 26123 slave.cpp:354] Slave resources: cpus(*):2; mem(*):10240; disk(*):3.70122e+06; ports(*):[31000-32000] I0813 19:55:17.194854 26123 slave.cpp:384] Slave hostname: 297daca2d01a I0813 19:55:17.194877 26123 slave.cpp:389] Slave checkpoint: true I0813 19:55:17.196751 26132 master.cpp:1524] The newly elected leader is master@172.17.2.10:60249 with id 20150813-195517-167907756-60249-26100 I0813 19:55:17.196797 26132 master.cpp:1537] Elected as the leading master! I0813 19:55:17.196815 26132 master.cpp:1307] Recovering from registrar I0813 19:55:17.197032 26138 registrar.cpp:311] Recovering registrar I0813 19:55:17.197845 26132 slave.cpp:190] Slave started on 2)@172.17.2.10:60249 I0813 19:55:17.198420 26125 log.cpp:661] Attempting to start the writer I0813 19:55:17.197948 26132 slave.cpp:191] Flags at startup: --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/mesos/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mesos/mesos-0.24.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""1secs"""" --resource_monitoring_interval=""""1secs"""" --resources=""""cpus:2;mem:10240"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --version=""""false"""" --work_dir=""""/tmp/mesos-II8Gua/1"""" I0813 19:55:17.199121 26132 slave.cpp:354] Slave resources: cpus(*):2; mem(*):10240; disk(*):3.70122e+06; ports(*):[31000-32000] I0813 19:55:17.199235 26138 state.cpp:54] Recovering state from '/tmp/mesos-II8Gua/0/meta' I0813 19:55:17.199322 26132 slave.cpp:384] Slave hostname: 297daca2d01a I0813 19:55:17.199345 26132 slave.cpp:389] Slave checkpoint: true I0813 19:55:17.199676 26100 containerizer.cpp:143] Using isolation: posix/cpu,posix/mem,filesystem/posix I0813 19:55:17.200085 26135 state.cpp:54] Recovering state from '/tmp/mesos-II8Gua/1/meta' I0813 19:55:17.200317 26132 status_update_manager.cpp:202] Recovering status update manager I0813 19:55:17.200371 26129 status_update_manager.cpp:202] Recovering status update manager I0813 19:55:17.202003 26129 replica.cpp:477] Replica received implicit promise request with proposal 1 I0813 19:55:17.202585 26131 slave.cpp:190] Slave started on 3)@172.17.2.10:60249 I0813 19:55:17.202596 26129 leveldb.cpp:306] Persisting 
metadata (8 bytes) to leveldb took 523191ns I0813 19:55:17.202756 26129 replica.cpp:345] Persisted promised to 1 I0813 19:55:17.202770 26132 containerizer.cpp:379] Recovering containerizer I0813 19:55:17.203061 26135 containerizer.cpp:379] Recovering containerizer I0813 19:55:17.202663 26131 slave.cpp:191] Flags at startup: --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/mesos/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mesos/mesos-0.24.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""1secs"""" --resource_monitoring_interval=""""1secs"""" --resources=""""cpus:2;mem:10240"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --version=""""false"""" --work_dir=""""/tmp/mesos-II8Gua/2"""" I0813 19:55:17.203819 26131 slave.cpp:354] Slave resources: cpus(*):2; mem(*):10240; disk(*):3.70122e+06; ports(*):[31000-32000] I0813 19:55:17.203930 26131 slave.cpp:384] Slave hostname: 297daca2d01a I0813 19:55:17.203948 26131 slave.cpp:389] Slave checkpoint: true I0813 19:55:17.204674 26137 state.cpp:54] Recovering state from '/tmp/mesos-II8Gua/2/meta' I0813 19:55:17.205178 26135 status_update_manager.cpp:202] Recovering status update manager I0813 19:55:17.205323 26135 containerizer.cpp:379] Recovering containerizer I0813 19:55:17.205521 26136 slave.cpp:4069] Finished recovery I0813 19:55:17.206074 26136 slave.cpp:4226] Querying resource estimator for oversubscribable resources I0813 19:55:17.206424 26128 slave.cpp:4069] Finished recovery I0813 19:55:17.206722 26137 status_update_manager.cpp:176] Pausing sending status updates I0813 19:55:17.206858 26136 slave.cpp:684] New master detected at master@172.17.2.10:60249 I0813 19:55:17.206902 26138 slave.cpp:4069] Finished recovery I0813 19:55:17.206962 26128 slave.cpp:4226] Querying resource estimator for oversubscribable resources I0813 19:55:17.208312 26134 scheduler.cpp:272] New master detected at master@172.17.2.10:60249 I0813 19:55:17.208364 26136 slave.cpp:709] No credentials provided. 
Attempting to register without authentication I0813 19:55:17.208608 26136 slave.cpp:720] Detecting new master I0813 19:55:17.208839 26138 slave.cpp:4226] Querying resource estimator for oversubscribable resources I0813 19:55:17.209216 26123 coordinator.cpp:231] Coordinator attemping to fill missing position I0813 19:55:17.209247 26127 status_update_manager.cpp:176] Pausing sending status updates I0813 19:55:17.209259 26128 slave.cpp:684] New master detected at master@172.17.2.10:60249 I0813 19:55:17.209322 26127 status_update_manager.cpp:176] Pausing sending status updates I0813 19:55:17.209364 26128 slave.cpp:709] No credentials provided. Attempting to register without authentication I0813 19:55:17.209344 26138 slave.cpp:684] New master detected at master@172.17.2.10:60249 I0813 19:55:17.209455 26128 slave.cpp:720] Detecting new master I0813 19:55:17.209492 26138 slave.cpp:709] No credentials provided. Attempting to register without authentication I0813 19:55:17.209573 26128 slave.cpp:4240] Received oversubscribable resources from the resource estimator I0813 19:55:17.209601 26138 slave.cpp:720] Detecting new master I0813 19:55:17.209730 26138 slave.cpp:4240] Received oversubscribable resources from the resource estimator I0813 19:55:17.209883 26136 slave.cpp:4240] Received oversubscribable resources from the resource estimator I0813 19:55:17.211266 26136 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 I0813 19:55:17.211771 26136 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 462128ns I0813 19:55:17.211797 26136 replica.cpp:679] Persisted action at 0 I0813 19:55:17.212980 26130 replica.cpp:511] Replica received write request for position 0 I0813 19:55:17.213124 26130 leveldb.cpp:438] Reading position from leveldb took 67075ns I0813 19:55:17.213580 26130 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 301649ns I0813 19:55:17.213603 26130 replica.cpp:679] Persisted action at 0 I0813 19:55:17.214284 26123 replica.cpp:658] Replica received learned notice for position 0 I0813 19:55:17.214622 26123 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 284547ns I0813 19:55:17.214648 26123 replica.cpp:679] Persisted action at 0 I0813 19:55:17.214675 26123 replica.cpp:664] Replica learned NOP action at position 0 I0813 19:55:17.215420 26136 log.cpp:677] Writer started with ending position 0 I0813 19:55:17.217463 26133 leveldb.cpp:438] Reading position from leveldb took 47943ns I0813 19:55:17.220762 26125 registrar.cpp:344] Successfully fetched the registry (0B) in 23.649024ms I0813 19:55:17.221081 26125 registrar.cpp:443] Applied 1 operations in 136902ns; attempting to update the 'registry' I0813 19:55:17.223667 26133 log.cpp:685] Attempting to append 174 bytes to the log I0813 19:55:17.223778 26125 coordinator.cpp:341] Coordinator attempting to write APPEND action at position 1 I0813 19:55:17.224516 26127 replica.cpp:511] Replica received write request for position 1 I0813 19:55:17.225009 26127 leveldb.cpp:343] Persisting action (193 bytes) to leveldb took 466230ns I0813 19:55:17.225042 26127 replica.cpp:679] Persisted action at 1 I0813 19:55:17.225653 26126 replica.cpp:658] Replica received learned notice for position 1 I0813 19:55:17.225953 26126 leveldb.cpp:343] Persisting action (195 bytes) to leveldb took 286966ns I0813 19:55:17.225975 26126 replica.cpp:679] Persisted action at 1 I0813 19:55:17.226013 26126 replica.cpp:664] Replica learned APPEND action at position 1 I0813 19:55:17.227545 26137 registrar.cpp:488] 
Successfully updated the 'registry' in 6.328064ms I0813 19:55:17.227722 26137 registrar.cpp:374] Successfully recovered registrar I0813 19:55:17.227918 26124 log.cpp:704] Attempting to truncate the log to 1 I0813 19:55:17.228024 26133 coordinator.cpp:341] Coordinator attempting to write TRUNCATE action at position 2 I0813 19:55:17.228193 26131 master.cpp:1334] Recovered 0 slaves from the Registry (135B) ; allowing 10mins for slaves to re-register I0813 19:55:17.228659 26127 replica.cpp:511] Replica received write request for position 2 I0813 19:55:17.228972 26127 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 297903ns I0813 19:55:17.229004 26127 replica.cpp:679] Persisted action at 2 I0813 19:55:17.229565 26127 replica.cpp:658] Replica received learned notice for position 2 I0813 19:55:17.229837 26127 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 260326ns I0813 19:55:17.229899 26127 leveldb.cpp:401] Deleting ~1 keys from leveldb took 48697ns I0813 19:55:17.229923 26127 replica.cpp:679] Persisted action at 2 I0813 19:55:17.229956 26127 replica.cpp:664] Replica learned TRUNCATE action at position 2 I0813 19:55:17.325634 26138 slave.cpp:1209] Will retry registration in 445.955946ms if necessary I0813 19:55:17.326088 26124 master.cpp:3635] Registering slave at slave(2)@172.17.2.10:60249 (297daca2d01a) with id 20150813-195517-167907756-60249-26100-S0 I0813 19:55:17.327446 26124 registrar.cpp:443] Applied 1 operations in 231072ns; attempting to update the 'registry' I0813 19:55:17.330252 26136 log.cpp:685] Attempting to append 344 bytes to the log I0813 19:55:17.330407 26132 coordinator.cpp:341] Coordinator attempting to write APPEND action at position 3 I0813 19:55:17.331418 26128 replica.cpp:511] Replica received write request for position 3 I0813 19:55:17.331753 26128 leveldb.cpp:343] Persisting action (363 bytes) to leveldb took 264140ns I0813 19:55:17.331778 26128 replica.cpp:679] Persisted action at 3 I0813 19:55:17.332324 26133 replica.cpp:658] Replica received learned notice for position 3 I0813 19:55:17.332809 26133 leveldb.cpp:343] Persisting action (365 bytes) to leveldb took 313064ns I0813 19:55:17.332834 26133 replica.cpp:679] Persisted action at 3 I0813 19:55:17.332865 26133 replica.cpp:664] Replica learned APPEND action at position 3 I0813 19:55:17.334211 26132 registrar.cpp:488] Successfully updated the 'registry' in 6.668032ms I0813 19:55:17.334430 26127 log.cpp:704] Attempting to truncate the log to 3 I0813 19:55:17.334566 26132 coordinator.cpp:341] Coordinator attempting to write TRUNCATE action at position 4 I0813 19:55:17.335283 26129 replica.cpp:511] Replica received write request for position 4 I0813 19:55:17.335615 26127 slave.cpp:3058] Received ping from slave-observer(1)@172.17.2.10:60249 I0813 19:55:17.335816 26129 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 458268ns I0813 19:55:17.335908 26137 master.cpp:3698] Registered slave 20150813-195517-167907756-60249-26100-S0 at slave(2)@172.17.2.10:60249 (297daca2d01a) with cpus(*):2; mem(*):10240; disk(*):3.70122e+06; ports(*):[31000-32000] I0813 19:55:17.335983 26129 replica.cpp:679] Persisted action at 4 I0813 19:55:17.336019 26136 slave.cpp:859] Registered with master master@172.17.2.10:60249; given slave ID 20150813-195517-167907756-60249-26100-S0 I0813 19:55:17.336073 26136 fetcher.cpp:77] Clearing fetcher cache I0813 19:55:17.336220 26127 status_update_manager.cpp:183] Resuming sending status updates I0813 19:55:17.336328 26128 hierarchical.hpp:540] Added slave 
20150813-195517-167907756-60249-26100-S0 (297daca2d01a) with cpus(*):2; mem(*):10240; disk(*):3.70122e+06; ports(*):[31000-32000] (allocated: ) I0813 19:55:17.336599 26138 replica.cpp:658] Replica received learned notice for position 4 I0813 19:55:17.336910 26128 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:17.336957 26128 hierarchical.hpp:926] Performed allocation for slave 20150813-195517-167907756-60249-26100-S0 in 580663ns I0813 19:55:17.337016 26136 slave.cpp:882] Checkpointing SlaveInfo to '/tmp/mesos-II8Gua/1/meta/slaves/20150813-195517-167907756-60249-26100-S0/slave.info' I0813 19:55:17.337035 26138 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 403607ns I0813 19:55:17.337138 26138 leveldb.cpp:401] Deleting ~2 keys from leveldb took 77040ns I0813 19:55:17.337167 26138 replica.cpp:679] Persisted action at 4 I0813 19:55:17.337208 26138 replica.cpp:664] Replica learned TRUNCATE action at position 4 I0813 19:55:17.337514 26136 slave.cpp:918] Forwarding total oversubscribed resources I0813 19:55:17.337745 26131 master.cpp:3997] Received update of slave 20150813-195517-167907756-60249-26100-S0 at slave(2)@172.17.2.10:60249 (297daca2d01a) with total oversubscribed resources I0813 19:55:17.338240 26131 hierarchical.hpp:600] Slave 20150813-195517-167907756-60249-26100-S0 (297daca2d01a) updated with oversubscribed resources (total: cpus(*):2; mem(*):10240; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) I0813 19:55:17.338479 26131 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:17.338505 26131 hierarchical.hpp:926] Performed allocation for slave 20150813-195517-167907756-60249-26100-S0 in 216259ns I0813 19:55:17.504086 26124 slave.cpp:1209] Will retry registration in 1.92618421secs if necessary I0813 19:55:17.504408 26124 master.cpp:3635] Registering slave at slave(3)@172.17.2.10:60249 (297daca2d01a) with id 20150813-195517-167907756-60249-26100-S1 I0813 19:55:17.505203 26124 registrar.cpp:443] Applied 1 operations in 144314ns; attempting to update the 'registry' I0813 19:55:17.507616 26124 log.cpp:685] Attempting to append 511 bytes to the log I0813 19:55:17.507796 26132 coordinator.cpp:341] Coordinator attempting to write APPEND action at position 5 I0813 19:55:17.508735 26128 replica.cpp:511] Replica received write request for position 5 I0813 19:55:17.509291 26128 leveldb.cpp:343] Persisting action (530 bytes) to leveldb took 527776ns I0813 19:55:17.509328 26128 replica.cpp:679] Persisted action at 5 I0813 19:55:17.509945 26124 replica.cpp:658] Replica received learned notice for position 5 I0813 19:55:17.510393 26124 leveldb.cpp:343] Persisting action (532 bytes) to leveldb took 438543ns I0813 19:55:17.510416 26124 replica.cpp:679] Persisted action at 5 I0813 19:55:17.510437 26124 replica.cpp:664] Replica learned APPEND action at position 5 I0813 19:55:17.511907 26125 registrar.cpp:488] Successfully updated the 'registry' in 6624us I0813 19:55:17.512225 26138 log.cpp:704] Attempting to truncate the log to 5 I0813 19:55:17.512305 26136 coordinator.cpp:341] Coordinator attempting to write TRUNCATE action at position 6 I0813 19:55:17.513066 26133 slave.cpp:3058] Received ping from slave-observer(2)@172.17.2.10:60249 I0813 19:55:17.513242 26133 slave.cpp:859] Registered with master master@172.17.2.10:60249; given slave ID 20150813-195517-167907756-60249-26100-S1 I0813 19:55:17.513221 26126 master.cpp:3698] Registered slave 20150813-195517-167907756-60249-26100-S1 at slave(3)@172.17.2.10:60249 (297daca2d01a) with 
cpus(*):2; mem(*):10240; disk(*):3.70122e+06; ports(*):[31000-32000] I0813 19:55:17.513089 26129 replica.cpp:511] Replica received write request for position 6 I0813 19:55:17.513393 26133 fetcher.cpp:77] Clearing fetcher cache I0813 19:55:17.513380 26138 hierarchical.hpp:540] Added slave 20150813-195517-167907756-60249-26100-S1 (297daca2d01a) with cpus(*):2; mem(*):10240; disk(*):3.70122e+06; ports(*):[31000-32000] (allocated: ) I0813 19:55:17.513805 26132 status_update_manager.cpp:183] Resuming sending status updates I0813 19:55:17.513949 26129 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 340511ns I0813 19:55:17.514046 26138 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:17.514050 26129 replica.cpp:679] Persisted action at 6 I0813 19:55:17.514195 26133 slave.cpp:882] Checkpointing SlaveInfo to '/tmp/mesos-II8Gua/2/meta/slaves/20150813-195517-167907756-60249-26100-S1/slave.info' I0813 19:55:17.514140 26138 hierarchical.hpp:926] Performed allocation for slave 20150813-195517-167907756-60249-26100-S1 in 417609ns I0813 19:55:17.514704 26133 slave.cpp:918] Forwarding total oversubscribed resources I0813 19:55:17.514708 26138 replica.cpp:658] Replica received learned notice for position 6 I0813 19:55:17.514880 26133 master.cpp:3997] Received update of slave 20150813-195517-167907756-60249-26100-S1 at slave(3)@172.17.2.10:60249 (297daca2d01a) with total oversubscribed resources I0813 19:55:17.515244 26127 hierarchical.hpp:600] Slave 20150813-195517-167907756-60249-26100-S1 (297daca2d01a) updated with oversubscribed resources (total: cpus(*):2; mem(*):10240; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) I0813 19:55:17.515454 26138 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 640882ns I0813 19:55:17.515522 26138 leveldb.cpp:401] Deleting ~2 keys from leveldb took 56550ns I0813 19:55:17.515547 26138 replica.cpp:679] Persisted action at 6 I0813 19:55:17.515581 26138 replica.cpp:664] Replica learned TRUNCATE action at position 6 I0813 19:55:17.515802 26127 hierarchical.hpp:1008] No resources available to allocate! 
I0813 19:55:17.515866 26127 hierarchical.hpp:926] Performed allocation for slave 20150813-195517-167907756-60249-26100-S1 in 591007ns I0813 19:55:17.984196 26135 slave.cpp:1209] Will retry registration in 1.542495291secs if necessary I0813 19:55:17.984391 26138 master.cpp:3635] Registering slave at slave(1)@172.17.2.10:60249 (297daca2d01a) with id 20150813-195517-167907756-60249-26100-S2 I0813 19:55:17.985170 26133 registrar.cpp:443] Applied 1 operations in 202126ns; attempting to update the 'registry' I0813 19:55:17.987498 26133 log.cpp:685] Attempting to append 678 bytes to the log I0813 19:55:17.987656 26123 coordinator.cpp:341] Coordinator attempting to write APPEND action at position 7 I0813 19:55:17.988704 26138 replica.cpp:511] Replica received write request for position 7 I0813 19:55:17.989223 26138 leveldb.cpp:343] Persisting action (697 bytes) to leveldb took 490422ns I0813 19:55:17.989248 26138 replica.cpp:679] Persisted action at 7 I0813 19:55:17.989972 26126 replica.cpp:658] Replica received learned notice for position 7 I0813 19:55:17.990401 26126 leveldb.cpp:343] Persisting action (699 bytes) to leveldb took 404333ns I0813 19:55:17.990420 26126 replica.cpp:679] Persisted action at 7 I0813 19:55:17.990440 26126 replica.cpp:664] Replica learned APPEND action at position 7 I0813 19:55:17.994066 26123 registrar.cpp:488] Successfully updated the 'registry' in 8.788224ms I0813 19:55:17.994436 26134 log.cpp:704] Attempting to truncate the log to 7 I0813 19:55:17.994575 26123 coordinator.cpp:341] Coordinator attempting to write TRUNCATE action at position 8 I0813 19:55:17.995070 26134 slave.cpp:3058] Received ping from slave-observer(3)@172.17.2.10:60249 I0813 19:55:17.995291 26134 slave.cpp:859] Registered with master master@172.17.2.10:60249; given slave ID 20150813-195517-167907756-60249-26100-S2 I0813 19:55:17.995319 26134 fetcher.cpp:77] Clearing fetcher cache I0813 19:55:17.995246 26129 master.cpp:3698] Registered slave 20150813-195517-167907756-60249-26100-S2 at slave(1)@172.17.2.10:60249 (297daca2d01a) with cpus(*):2; mem(*):10240; disk(*):3.70122e+06; ports(*):[31000-32000] I0813 19:55:17.995565 26123 status_update_manager.cpp:183] Resuming sending status updates I0813 19:55:17.995579 26129 replica.cpp:511] Replica received write request for position 8 I0813 19:55:17.996016 26134 slave.cpp:882] Checkpointing SlaveInfo to '/tmp/mesos-II8Gua/0/meta/slaves/20150813-195517-167907756-60249-26100-S2/slave.info' I0813 19:55:17.996039 26129 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 440511ns I0813 19:55:17.996067 26129 replica.cpp:679] Persisted action at 8 I0813 19:55:17.996294 26128 hierarchical.hpp:540] Added slave 20150813-195517-167907756-60249-26100-S2 (297daca2d01a) with cpus(*):2; mem(*):10240; disk(*):3.70122e+06; ports(*):[31000-32000] (allocated: ) I0813 19:55:17.996556 26134 slave.cpp:918] Forwarding total oversubscribed resources I0813 19:55:17.996623 26133 replica.cpp:658] Replica received learned notice for position 8 I0813 19:55:17.997095 26134 master.cpp:3997] Received update of slave 20150813-195517-167907756-60249-26100-S2 at slave(1)@172.17.2.10:60249 (297daca2d01a) with total oversubscribed resources I0813 19:55:17.997263 26133 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 442619ns I0813 19:55:17.997385 26133 leveldb.cpp:401] Deleting ~2 keys from leveldb took 95741ns I0813 19:55:17.997413 26133 replica.cpp:679] Persisted action at 8 I0813 19:55:17.997465 26133 replica.cpp:664] Replica learned TRUNCATE action at position 8 
I0813 19:55:17.997756 26128 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:17.997925 26128 hierarchical.hpp:926] Performed allocation for slave 20150813-195517-167907756-60249-26100-S2 in 1.14489ms I0813 19:55:17.998159 26128 hierarchical.hpp:600] Slave 20150813-195517-167907756-60249-26100-S2 (297daca2d01a) updated with oversubscribed resources (total: cpus(*):2; mem(*):10240; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) I0813 19:55:17.998445 26128 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:17.998471 26128 hierarchical.hpp:926] Performed allocation for slave 20150813-195517-167907756-60249-26100-S2 in 218856ns I0813 19:55:18.190146 26133 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:18.190217 26133 hierarchical.hpp:908] Performed allocation for 3 slaves in 637042ns I0813 19:55:19.191346 26131 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:19.191915 26131 hierarchical.hpp:908] Performed allocation for 3 slaves in 1.215355ms I0813 19:55:20.193631 26135 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:20.193709 26135 hierarchical.hpp:908] Performed allocation for 3 slaves in 834491ns I0813 19:55:21.194805 26134 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:21.194870 26134 hierarchical.hpp:908] Performed allocation for 3 slaves in 536547ns I0813 19:55:22.196143 26137 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:22.196216 26137 hierarchical.hpp:908] Performed allocation for 3 slaves in 755140ns I0813 19:55:23.197412 26132 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:23.197979 26132 hierarchical.hpp:908] Performed allocation for 3 slaves in 1.223984ms I0813 19:55:24.199429 26132 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:24.199735 26132 hierarchical.hpp:908] Performed allocation for 3 slaves in 904654ns I0813 19:55:25.200978 26127 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:25.201206 26127 hierarchical.hpp:908] Performed allocation for 3 slaves in 939979ns I0813 19:55:26.203023 26132 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:26.203101 26132 hierarchical.hpp:908] Performed allocation for 3 slaves in 721178ns I0813 19:55:27.204815 26126 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:27.204888 26126 hierarchical.hpp:908] Performed allocation for 3 slaves in 767983ns I0813 19:55:28.206374 26126 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:28.206444 26126 hierarchical.hpp:908] Performed allocation for 3 slaves in 745214ns I0813 19:55:29.207515 26124 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:29.207579 26124 hierarchical.hpp:908] Performed allocation for 3 slaves in 551217ns I0813 19:55:30.208966 26136 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:30.209053 26136 hierarchical.hpp:908] Performed allocation for 3 slaves in 649887ns I0813 19:55:31.210078 26123 hierarchical.hpp:1008] No resources available to allocate! 
I0813 19:55:31.210144 26123 hierarchical.hpp:908] Performed allocation for 3 slaves in 558919ns I0813 19:55:32.211027 26130 slave.cpp:4226] Querying resource estimator for oversubscribable resources I0813 19:55:32.211045 26129 slave.cpp:4226] Querying resource estimator for oversubscribable resources I0813 19:55:32.211084 26132 slave.cpp:4226] Querying resource estimator for oversubscribable resources I0813 19:55:32.211386 26129 slave.cpp:4240] Received oversubscribable resources from the resource estimator I0813 19:55:32.211688 26132 slave.cpp:4240] Received oversubscribable resources from the resource estimator I0813 19:55:32.211853 26133 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:32.212035 26133 hierarchical.hpp:908] Performed allocation for 3 slaves in 898985ns I0813 19:55:32.212169 26133 slave.cpp:4240] Received oversubscribable resources from the resource estimator I0813 19:55:32.336745 26135 slave.cpp:3058] Received ping from slave-observer(1)@172.17.2.10:60249 I0813 19:55:32.514333 26129 slave.cpp:3058] Received ping from slave-observer(2)@172.17.2.10:60249 I0813 19:55:32.996134 26128 slave.cpp:3058] Received ping from slave-observer(3)@172.17.2.10:60249 I0813 19:55:33.213248 26128 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:33.213326 26128 hierarchical.hpp:908] Performed allocation for 3 slaves in 827511ns I0813 19:55:34.214326 26125 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:34.214391 26125 hierarchical.hpp:908] Performed allocation for 3 slaves in 546422ns I0813 19:55:35.215909 26123 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:35.215973 26123 hierarchical.hpp:908] Performed allocation for 3 slaves in 627190ns I0813 19:55:36.217156 26134 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:36.217339 26134 hierarchical.hpp:908] Performed allocation for 3 slaves in 906249ns I0813 19:55:37.218739 26132 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:37.219169 26132 hierarchical.hpp:908] Performed allocation for 3 slaves in 1.102465ms I0813 19:55:38.220641 26133 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:38.220711 26133 hierarchical.hpp:908] Performed allocation for 3 slaves in 643146ns I0813 19:55:39.221976 26133 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:39.222118 26133 hierarchical.hpp:908] Performed allocation for 3 slaves in 845334ns I0813 19:55:40.223338 26129 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:40.223546 26129 hierarchical.hpp:908] Performed allocation for 3 slaves in 849995ns I0813 19:55:41.225558 26138 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:41.225752 26138 hierarchical.hpp:908] Performed allocation for 3 slaves in 958480ns I0813 19:55:42.227176 26131 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:42.227378 26131 hierarchical.hpp:908] Performed allocation for 3 slaves in 927048ns I0813 19:55:43.228813 26137 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:43.229441 26137 hierarchical.hpp:908] Performed allocation for 3 slaves in 1.310118ms I0813 19:55:44.230828 26135 hierarchical.hpp:1008] No resources available to allocate! I0813 19:55:44.231142 26135 hierarchical.hpp:908] Performed allocation for 3 slaves in 896369ns I0813 19:55:45.232656 26135 hierarchical.hpp:1008] No resources available to allocate! 
I0813 19:55:45.232903 26135 hierarchical.hpp:908] Performed allocation for 3 slaves in 1.357693ms I0813 19:55:46.234973 26137 hierarchical.hpp:1008 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3280","08/17/2015 17:11:15",8,"Master fails to access replicated log after network partition ""In a 5 node cluster with 3 masters and 2 slaves, and ZK on each node, when a network partition is forced, all the masters apparently lose access to their replicated log. The leading master halts. Unknown reasons, but presumably related to replicated log access. The others fail to recover from the replicated log. Unknown reasons. This could have to do with ZK setup, but it might also be a Mesos bug. This was observed in a Chronos test drive scenario described in detail here: https://github.com/mesos/chronos/issues/511 With setup instructions here: https://github.com/mesos/chronos/issues/508 ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3284","08/17/2015 23:36:01",3,"JSON representation of Protobuf should use base64 encoding for 'bytes' fields. ""Currently we encode 'bytes' fields as UTF-8 strings, which is lossy for binary data due to invalid byte sequences! In order to encode binary data in a lossless fashion, we can encode 'bytes' fields in base64. Note that this is also how proto3 does its encoding (see [here|https://developers.google.com/protocol-buffers/docs/proto3?hl=en#json]), so this would make migration easier as well.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3288","08/18/2015 18:08:22",5,"Implement docker registry client ""Implement the docker registry client as per design document: https://docs.google.com/document/d/1kE-HXPQl4lQgamPIiaD4Ytdr-N4HeQc4fnE93WHR4X4/edit""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3289","08/18/2015 18:10:48",5,"Add DockerRegistry unit tests ""Add unit tests suite for docker registry implementation. This could include: - Creating mock docker registry server - Using openssl library for digest functions.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3293","08/19/2015 00:40:19",5,"Failing ROOT_ tests on CentOS 7.1 - LimitedCpuIsolatorTest ""h2. LimitedCpuIsolatorTest.ROOT_CGROUPS_Pids_and_Tids This is one of several ROOT failing tests: we want to track them *individually* and for each of them decide whether to: * fix; * remove; OR * redesign. (full verbose logs attached) h2. Steps to Reproduce Completely cleaned the build, removed directory, clean pull from {{master}} (SHA: {{fb93d93}}) - same results, 9 failed tests: """," [==========] 751 tests from 114 test cases ran. (231218 ms total) [ PASSED ] 742 tests. 
[ FAILED ] 9 tests, listed below: [ FAILED ] LimitedCpuIsolatorTest.ROOT_CGROUPS_Pids_and_Tids [ FAILED ] UserCgroupIsolatorTest/1.ROOT_CGROUPS_UserCgroup, where TypeParam = mesos::internal::slave::CgroupsCpushareIsolatorProcess [ FAILED ] ContainerizerTest.ROOT_CGROUPS_BalloonFramework [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_ChangeRootFilesystem [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_VolumeFromSandbox [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_VolumeFromHost [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_VolumeFromHostSandboxMountPoint [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_PersistentVolumeWithRootFilesystem [ FAILED ] MesosContainerizerLaunchTest.ROOT_ChangeRootfs 9 FAILED TESTS YOU HAVE 10 DISABLED TESTS ",0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3294","08/19/2015 00:47:00",5,"Failing ROOT_ tests on CentOS 7.1 - UserCgroupIsolatorTest ""h2. UserCgroupIsolatorTest This is one of several ROOT failing tests: we want to track them *individually* and for each of them decide whether to: * fix; * remove; OR * redesign. (full verbose logs attached) h2. Steps to Reproduce Completely cleaned the build, removed directory, clean pull from {{master}} (SHA: {{fb93d93}}) - same results, 9 failed tests: """," [==========] 751 tests from 114 test cases ran. (231218 ms total) [ PASSED ] 742 tests. [ FAILED ] 9 tests, listed below: [ FAILED ] LimitedCpuIsolatorTest.ROOT_CGROUPS_Pids_and_Tids [ FAILED ] UserCgroupIsolatorTest/1.ROOT_CGROUPS_UserCgroup, where TypeParam = mesos::internal::slave::CgroupsCpushareIsolatorProcess [ FAILED ] ContainerizerTest.ROOT_CGROUPS_BalloonFramework [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_ChangeRootFilesystem [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_VolumeFromSandbox [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_VolumeFromHost [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_VolumeFromHostSandboxMountPoint [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_PersistentVolumeWithRootFilesystem [ FAILED ] MesosContainerizerLaunchTest.ROOT_ChangeRootfs 9 FAILED TESTS YOU HAVE 10 DISABLED TESTS ",0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3295","08/19/2015 00:48:01",5,"Failing ROOT_ tests on CentOS 7.1 - ContainerizerTest ""h2. ContainerizerTest.ROOT_CGROUPS_BalloonFramework This is one of several ROOT failing tests: we want to track them *individually* and for each of them decide whether to: * fix; * remove; OR * redesign. (full verbose logs attached) h2. Steps to Reproduce Completely cleaned the build, removed directory, clean pull from {{master}} (SHA: {{fb93d93}}) - same results, 9 failed tests: """," [==========] 751 tests from 114 test cases ran. (231218 ms total) [ PASSED ] 742 tests. 
[ FAILED ] 9 tests, listed below: [ FAILED ] LimitedCpuIsolatorTest.ROOT_CGROUPS_Pids_and_Tids [ FAILED ] UserCgroupIsolatorTest/1.ROOT_CGROUPS_UserCgroup, where TypeParam = mesos::internal::slave::CgroupsCpushareIsolatorProcess [ FAILED ] ContainerizerTest.ROOT_CGROUPS_BalloonFramework [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_ChangeRootFilesystem [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_VolumeFromSandbox [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_VolumeFromHost [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_VolumeFromHostSandboxMountPoint [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_PersistentVolumeWithRootFilesystem [ FAILED ] MesosContainerizerLaunchTest.ROOT_ChangeRootfs 9 FAILED TESTS YOU HAVE 10 DISABLED TESTS ",0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3296","08/19/2015 00:49:18",5,"Failing ROOT_ tests on CentOS 7.1 - LinuxFilesystemIsolatorTest ""h2. LinuxFilesystemIsolatorTest This is one of several ROOT failing tests: we want to track them *individually* and for each of them decide whether to: * fix; * remove; OR * redesign. (full verbose logs attached) h2. Steps to Reproduce Completely cleaned the build, removed directory, clean pull from {{master}} (SHA: {{fb93d93}}) - same results, 9 failed tests: """," [==========] 751 tests from 114 test cases ran. (231218 ms total) [ PASSED ] 742 tests. [ FAILED ] 9 tests, listed below: [ FAILED ] LimitedCpuIsolatorTest.ROOT_CGROUPS_Pids_and_Tids [ FAILED ] UserCgroupIsolatorTest/1.ROOT_CGROUPS_UserCgroup, where TypeParam = mesos::internal::slave::CgroupsCpushareIsolatorProcess [ FAILED ] ContainerizerTest.ROOT_CGROUPS_BalloonFramework [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_ChangeRootFilesystem [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_VolumeFromSandbox [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_VolumeFromHost [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_VolumeFromHostSandboxMountPoint [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_PersistentVolumeWithRootFilesystem [ FAILED ] MesosContainerizerLaunchTest.ROOT_ChangeRootfs 9 FAILED TESTS YOU HAVE 10 DISABLED TESTS ",0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3297","08/19/2015 00:50:39",5,"Failing ROOT_ tests on CentOS 7.1 - MesosContainerizerLaunchTest ""h2. MesosContainerizerLaunchTest This is one of several ROOT failing tests: we want to track them *individually* and for each of them decide whether to: * fix; * remove; OR * redesign. (full verbose logs attached) h2. Steps to Reproduce Completely cleaned the build, removed directory, clean pull from {{master}} (SHA: {{fb93d93}}) - same results, 9 failed tests: """," [==========] 751 tests from 114 test cases ran. (231218 ms total) [ PASSED ] 742 tests. 
[ FAILED ] 9 tests, listed below: [ FAILED ] LimitedCpuIsolatorTest.ROOT_CGROUPS_Pids_and_Tids [ FAILED ] UserCgroupIsolatorTest/1.ROOT_CGROUPS_UserCgroup, where TypeParam = mesos::internal::slave::CgroupsCpushareIsolatorProcess [ FAILED ] ContainerizerTest.ROOT_CGROUPS_BalloonFramework [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_ChangeRootFilesystem [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_VolumeFromSandbox [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_VolumeFromHost [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_VolumeFromHostSandboxMountPoint [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_PersistentVolumeWithRootFilesystem [ FAILED ] MesosContainerizerLaunchTest.ROOT_ChangeRootfs 9 FAILED TESTS YOU HAVE 10 DISABLED TESTS ",0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3311","08/25/2015 23:18:40",2,"SlaveTest.HTTPSchedulerSlaveRestart ""Observed on ASF CI """," [ RUN ] SlaveTest.HTTPSchedulerSlaveRestart Using temporary directory '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_CXyDrA' I0825 22:07:36.809872 27610 leveldb.cpp:176] Opened db in 3.751801ms I0825 22:07:36.811115 27610 leveldb.cpp:183] Compacted db in 1.2194ms I0825 22:07:36.811175 27610 leveldb.cpp:198] Created db iterator in 30669ns I0825 22:07:36.811197 27610 leveldb.cpp:204] Seeked to beginning of db in 7829ns I0825 22:07:36.811208 27610 leveldb.cpp:273] Iterated through 0 keys in the db in 6017ns I0825 22:07:36.811245 27610 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0825 22:07:36.811722 27638 recover.cpp:449] Starting replica recovery I0825 22:07:36.811980 27638 recover.cpp:475] Replica is in EMPTY status I0825 22:07:36.813033 27641 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request I0825 22:07:36.813355 27635 recover.cpp:195] Received a recover response from a replica in EMPTY status I0825 22:07:36.813756 27628 recover.cpp:566] Updating replica status to STARTING I0825 22:07:36.814434 27636 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 570160ns I0825 22:07:36.814471 27636 replica.cpp:323] Persisted replica status to STARTING I0825 22:07:36.814743 27642 recover.cpp:475] Replica is in STARTING status I0825 22:07:36.814965 27638 master.cpp:378] Master 20150825-220736-234885548-51219-27610 (09c6504e3a31) started on 172.17.0.14:51219 I0825 22:07:36.814999 27638 master.cpp:380] Flags at startup: --acls="""""""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate=""""true"""" --authenticate_slaves=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_CXyDrA/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_slave_ping_timeouts=""""5"""" --quiet=""""false"""" --recovery_slave_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""25secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --slave_ping_timeout=""""15secs"""" --slave_reregister_timeout=""""10mins"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/mesos/mesos-0.25.0/_inst/share/mesos/webui"""" --work_dir=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_CXyDrA/master"""" --zk_session_timeout=""""10secs"""" I0825 22:07:36.815347 27638 master.cpp:425] Master only 
allowing authenticated frameworks to register I0825 22:07:36.815371 27638 master.cpp:430] Master only allowing authenticated slaves to register I0825 22:07:36.815402 27638 credentials.hpp:37] Loading credentials for authentication from '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_CXyDrA/credentials' I0825 22:07:36.815634 27632 replica.cpp:641] Replica in STARTING status received a broadcasted recover request I0825 22:07:36.815752 27638 master.cpp:469] Using default 'crammd5' authenticator I0825 22:07:36.815904 27638 master.cpp:506] Authorization enabled I0825 22:07:36.815979 27643 recover.cpp:195] Received a recover response from a replica in STARTING status I0825 22:07:36.816185 27637 whitelist_watcher.cpp:79] No whitelist given I0825 22:07:36.816186 27641 hierarchical.hpp:346] Initialized hierarchical allocator process I0825 22:07:36.816519 27630 recover.cpp:566] Updating replica status to VOTING I0825 22:07:36.817258 27639 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 475231ns I0825 22:07:36.817296 27639 replica.cpp:323] Persisted replica status to VOTING I0825 22:07:36.817420 27637 master.cpp:1525] The newly elected leader is master@172.17.0.14:51219 with id 20150825-220736-234885548-51219-27610 I0825 22:07:36.817467 27637 master.cpp:1538] Elected as the leading master! I0825 22:07:36.817483 27637 master.cpp:1308] Recovering from registrar I0825 22:07:36.817509 27635 recover.cpp:580] Successfully joined the Paxos group I0825 22:07:36.817708 27633 registrar.cpp:311] Recovering registrar I0825 22:07:36.817844 27635 recover.cpp:464] Recover process terminated I0825 22:07:36.818439 27631 log.cpp:661] Attempting to start the writer I0825 22:07:36.819694 27636 replica.cpp:477] Replica received implicit promise request with proposal 1 I0825 22:07:36.820133 27636 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 421255ns I0825 22:07:36.820168 27636 replica.cpp:345] Persisted promised to 1 I0825 22:07:36.820804 27630 coordinator.cpp:231] Coordinator attemping to fill missing position I0825 22:07:36.822105 27638 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 I0825 22:07:36.822597 27638 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 468065ns I0825 22:07:36.822625 27638 replica.cpp:679] Persisted action at 0 I0825 22:07:36.823737 27637 replica.cpp:511] Replica received write request for position 0 I0825 22:07:36.823796 27637 leveldb.cpp:438] Reading position from leveldb took 39603ns I0825 22:07:36.824267 27637 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 446655ns I0825 22:07:36.824296 27637 replica.cpp:679] Persisted action at 0 I0825 22:07:36.824961 27634 replica.cpp:658] Replica received learned notice for position 0 I0825 22:07:36.825340 27634 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 362236ns I0825 22:07:36.825369 27634 replica.cpp:679] Persisted action at 0 I0825 22:07:36.825388 27634 replica.cpp:664] Replica learned NOP action at position 0 I0825 22:07:36.825975 27642 log.cpp:677] Writer started with ending position 0 I0825 22:07:36.826997 27628 leveldb.cpp:438] Reading position from leveldb took 56us I0825 22:07:36.829946 27639 registrar.cpp:344] Successfully fetched the registry (0B) in 12.187136ms I0825 22:07:36.830077 27639 registrar.cpp:443] Applied 1 operations in 40874ns; attempting to update the 'registry' I0825 22:07:36.832870 27635 log.cpp:685] Attempting to append 174 bytes to the log I0825 22:07:36.833088 27641 coordinator.cpp:341] Coordinator attempting to 
write APPEND action at position 1 I0825 22:07:36.833845 27636 replica.cpp:511] Replica received write request for position 1 I0825 22:07:36.834293 27636 leveldb.cpp:343] Persisting action (193 bytes) to leveldb took 425175ns I0825 22:07:36.834324 27636 replica.cpp:679] Persisted action at 1 I0825 22:07:36.835077 27643 replica.cpp:658] Replica received learned notice for position 1 I0825 22:07:36.835500 27643 leveldb.cpp:343] Persisting action (195 bytes) to leveldb took 404831ns I0825 22:07:36.835532 27643 replica.cpp:679] Persisted action at 1 I0825 22:07:36.835574 27643 replica.cpp:664] Replica learned APPEND action at position 1 I0825 22:07:36.836545 27643 registrar.cpp:488] Successfully updated the 'registry' in 6.393088ms I0825 22:07:36.836707 27643 registrar.cpp:374] Successfully recovered registrar I0825 22:07:36.836874 27639 log.cpp:704] Attempting to truncate the log to 1 I0825 22:07:36.837174 27632 master.cpp:1335] Recovered 0 slaves from the Registry (135B) ; allowing 10mins for slaves to re-register I0825 22:07:36.837291 27634 coordinator.cpp:341] Coordinator attempting to write TRUNCATE action at position 2 I0825 22:07:36.838249 27639 replica.cpp:511] Replica received write request for position 2 I0825 22:07:36.838685 27639 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 412214ns I0825 22:07:36.838716 27639 replica.cpp:679] Persisted action at 2 I0825 22:07:36.839735 27628 replica.cpp:658] Replica received learned notice for position 2 I0825 22:07:36.840304 27628 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 547841ns I0825 22:07:36.840375 27628 leveldb.cpp:401] Deleting ~1 keys from leveldb took 51256ns I0825 22:07:36.840401 27628 replica.cpp:679] Persisted action at 2 I0825 22:07:36.840428 27628 replica.cpp:664] Replica learned TRUNCATE action at position 2 I0825 22:07:36.849371 27610 containerizer.cpp:143] Using isolation: posix/cpu,posix/mem,filesystem/posix I0825 22:07:36.856500 27633 slave.cpp:190] Slave started on 286)@172.17.0.14:51219 I0825 22:07:36.856541 27633 slave.cpp:191] Flags at startup: --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mesos/mesos-0.25.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" 
--registration_backoff_factor=""""10ms"""" --resource_monitoring_interval=""""1secs"""" --resources=""""cpus:2;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --version=""""false"""" --work_dir=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L"""" I0825 22:07:36.857074 27633 credentials.hpp:85] Loading credential for authentication from '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/credential' I0825 22:07:36.857275 27633 slave.cpp:321] Slave using credential for: test-principal I0825 22:07:36.857822 27633 slave.cpp:354] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0825 22:07:36.857936 27633 slave.cpp:384] Slave hostname: 09c6504e3a31 I0825 22:07:36.857959 27633 slave.cpp:389] Slave checkpoint: true I0825 22:07:36.858886 27637 state.cpp:54] Recovering state from '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/meta' I0825 22:07:36.859130 27638 status_update_manager.cpp:202] Recovering status update manager I0825 22:07:36.859465 27636 containerizer.cpp:379] Recovering containerizer I0825 22:07:36.860631 27634 slave.cpp:4069] Finished recovery I0825 22:07:36.861034 27634 slave.cpp:4226] Querying resource estimator for oversubscribable resources I0825 22:07:36.861239 27643 status_update_manager.cpp:176] Pausing sending status updates I0825 22:07:36.861240 27634 slave.cpp:684] New master detected at master@172.17.0.14:51219 I0825 22:07:36.861322 27634 slave.cpp:747] Authenticating with master master@172.17.0.14:51219 I0825 22:07:36.861343 27634 slave.cpp:752] Using default CRAM-MD5 authenticatee I0825 22:07:36.861450 27634 slave.cpp:720] Detecting new master I0825 22:07:36.861495 27628 authenticatee.cpp:115] Creating new client SASL connection I0825 22:07:36.861569 27634 slave.cpp:4240] Received oversubscribable resources from the resource estimator I0825 22:07:36.861716 27632 master.cpp:4694] Authenticating slave(286)@172.17.0.14:51219 I0825 22:07:36.861799 27629 authenticator.cpp:407] Starting authentication session for crammd5_authenticatee(665)@172.17.0.14:51219 I0825 22:07:36.862045 27642 authenticator.cpp:92] Creating new server SASL connection I0825 22:07:36.862308 27635 authenticatee.cpp:206] Received SASL authentication mechanisms: CRAM-MD5 I0825 22:07:36.862337 27635 authenticatee.cpp:232] Attempting to authenticate with mechanism 'CRAM-MD5' I0825 22:07:36.862421 27629 authenticator.cpp:197] Received SASL authentication start I0825 22:07:36.862478 27629 authenticator.cpp:319] Authentication requires more steps I0825 22:07:36.862579 27633 authenticatee.cpp:252] Received SASL authentication step I0825 22:07:36.862679 27628 authenticator.cpp:225] Received SASL authentication step I0825 22:07:36.862707 27628 auxprop.cpp:102] Request to lookup properties for user: 'test-principal' realm: '09c6504e3a31' server FQDN: '09c6504e3a31' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0825 22:07:36.862717 27628 auxprop.cpp:174] Looking up auxiliary property '*userPassword' I0825 22:07:36.862754 27628 auxprop.cpp:174] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0825 22:07:36.862785 27628 auxprop.cpp:102] Request to lookup properties for user: 'test-principal' realm: '09c6504e3a31' server FQDN: '09c6504e3a31' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0825 22:07:36.862797 27628 auxprop.cpp:124] Skipping auxiliary 
property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0825 22:07:36.862802 27628 auxprop.cpp:124] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0825 22:07:36.862817 27628 authenticator.cpp:311] Authentication success I0825 22:07:36.862884 27629 authenticatee.cpp:292] Authentication success I0825 22:07:36.862921 27630 master.cpp:4724] Successfully authenticated principal 'test-principal' at slave(286)@172.17.0.14:51219 I0825 22:07:36.862969 27642 authenticator.cpp:425] Authentication session cleanup for crammd5_authenticatee(665)@172.17.0.14:51219 I0825 22:07:36.863139 27639 slave.cpp:815] Successfully authenticated with master master@172.17.0.14:51219 I0825 22:07:36.863256 27639 slave.cpp:1209] Will retry registration in 15.028678ms if necessary I0825 22:07:36.863382 27643 master.cpp:3636] Registering slave at slave(286)@172.17.0.14:51219 (09c6504e3a31) with id 20150825-220736-234885548-51219-27610-S0 I0825 22:07:36.863899 27610 sched.cpp:164] Version: 0.25.0 I0825 22:07:36.863940 27636 registrar.cpp:443] Applied 1 operations in 94492ns; attempting to update the 'registry' I0825 22:07:36.864670 27632 sched.cpp:262] New master detected at master@172.17.0.14:51219 I0825 22:07:36.864790 27632 sched.cpp:318] Authenticating with master master@172.17.0.14:51219 I0825 22:07:36.864821 27632 sched.cpp:325] Using default CRAM-MD5 authenticatee I0825 22:07:36.865095 27637 authenticatee.cpp:115] Creating new client SASL connection I0825 22:07:36.865453 27643 master.cpp:4694] Authenticating scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 I0825 22:07:36.865603 27629 authenticator.cpp:407] Starting authentication session for crammd5_authenticatee(666)@172.17.0.14:51219 I0825 22:07:36.865840 27638 authenticator.cpp:92] Creating new server SASL connection I0825 22:07:36.866217 27630 authenticatee.cpp:206] Received SASL authentication mechanisms: CRAM-MD5 I0825 22:07:36.866260 27630 authenticatee.cpp:232] Attempting to authenticate with mechanism 'CRAM-MD5' I0825 22:07:36.866433 27639 authenticator.cpp:197] Received SASL authentication start I0825 22:07:36.866513 27639 authenticator.cpp:319] Authentication requires more steps I0825 22:07:36.866710 27630 authenticatee.cpp:252] Received SASL authentication step I0825 22:07:36.866999 27638 authenticator.cpp:225] Received SASL authentication step I0825 22:07:36.867051 27638 auxprop.cpp:102] Request to lookup properties for user: 'test-principal' realm: '09c6504e3a31' server FQDN: '09c6504e3a31' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0825 22:07:36.867077 27638 auxprop.cpp:174] Looking up auxiliary property '*userPassword' I0825 22:07:36.867130 27638 auxprop.cpp:174] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0825 22:07:36.867162 27638 auxprop.cpp:102] Request to lookup properties for user: 'test-principal' realm: '09c6504e3a31' server FQDN: '09c6504e3a31' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0825 22:07:36.867175 27638 auxprop.cpp:124] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0825 22:07:36.867183 27638 auxprop.cpp:124] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0825 22:07:36.867202 27638 authenticator.cpp:311] Authentication success I0825 22:07:36.867426 27636 authenticatee.cpp:292] Authentication success I0825 22:07:36.867434 27633 authenticator.cpp:425] Authentication session 
cleanup for crammd5_authenticatee(666)@172.17.0.14:51219 I0825 22:07:36.867627 27630 master.cpp:4724] Successfully authenticated principal 'test-principal' at scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 I0825 22:07:36.867951 27641 sched.cpp:407] Successfully authenticated with master master@172.17.0.14:51219 I0825 22:07:36.867986 27641 sched.cpp:713] Sending SUBSCRIBE call to master@172.17.0.14:51219 I0825 22:07:36.868114 27641 sched.cpp:746] Will retry registration in 1.352726078secs if necessary I0825 22:07:36.868233 27634 log.cpp:685] Attempting to append 344 bytes to the log I0825 22:07:36.868268 27638 master.cpp:2094] Received SUBSCRIBE call for framework 'default' at scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 I0825 22:07:36.868305 27638 master.cpp:1564] Authorizing framework principal 'test-principal' to receive offers for role '*' I0825 22:07:36.868373 27631 coordinator.cpp:341] Coordinator attempting to write APPEND action at position 3 I0825 22:07:36.868614 27642 master.cpp:2164] Subscribing framework default with checkpointing enabled and capabilities [ ] I0825 22:07:36.868999 27643 hierarchical.hpp:391] Added framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:36.869030 27643 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:36.869046 27643 hierarchical.hpp:910] Performed allocation for 0 slaves in 34654ns I0825 22:07:36.869215 27631 sched.cpp:640] Framework registered with 20150825-220736-234885548-51219-27610-0000 I0825 22:07:36.869215 27643 replica.cpp:511] Replica received write request for position 3 I0825 22:07:36.869268 27631 sched.cpp:654] Scheduler::registered took 29976ns I0825 22:07:36.869453 27643 leveldb.cpp:343] Persisting action (363 bytes) to leveldb took 181689ns I0825 22:07:36.869477 27643 replica.cpp:679] Persisted action at 3 I0825 22:07:36.870075 27629 replica.cpp:658] Replica received learned notice for position 3 I0825 22:07:36.870542 27629 leveldb.cpp:343] Persisting action (365 bytes) to leveldb took 469081ns I0825 22:07:36.870589 27629 replica.cpp:679] Persisted action at 3 I0825 22:07:36.870622 27629 replica.cpp:664] Replica learned APPEND action at position 3 I0825 22:07:36.872133 27632 registrar.cpp:488] Successfully updated the 'registry' in 8.113152ms I0825 22:07:36.872354 27639 log.cpp:704] Attempting to truncate the log to 3 I0825 22:07:36.872470 27632 coordinator.cpp:341] Coordinator attempting to write TRUNCATE action at position 4 I0825 22:07:36.872879 27637 slave.cpp:3058] Received ping from slave-observer(274)@172.17.0.14:51219 I0825 22:07:36.873015 27636 master.cpp:3699] Registered slave 20150825-220736-234885548-51219-27610-S0 at slave(286)@172.17.0.14:51219 (09c6504e3a31) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0825 22:07:36.873180 27637 slave.cpp:859] Registered with master master@172.17.0.14:51219; given slave ID 20150825-220736-234885548-51219-27610-S0 I0825 22:07:36.873219 27637 fetcher.cpp:77] Clearing fetcher cache I0825 22:07:36.873410 27634 status_update_manager.cpp:183] Resuming sending status updates I0825 22:07:36.873379 27628 hierarchical.hpp:542] Added slave 20150825-220736-234885548-51219-27610-S0 (09c6504e3a31) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) I0825 22:07:36.873482 27642 replica.cpp:511] Replica received write request for position 4 I0825 22:07:36.873661 27637 slave.cpp:882] Checkpointing SlaveInfo to 
'/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/meta/slaves/20150825-220736-234885548-51219-27610-S0/slave.info' I0825 22:07:36.874042 27642 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 538208ns I0825 22:07:36.874078 27642 replica.cpp:679] Persisted action at 4 I0825 22:07:36.874196 27628 hierarchical.hpp:928] Performed allocation for slave 20150825-220736-234885548-51219-27610-S0 in 739900ns I0825 22:07:36.874204 27637 slave.cpp:918] Forwarding total oversubscribed resources I0825 22:07:36.874824 27635 master.cpp:4613] Sending 1 offers to framework 20150825-220736-234885548-51219-27610-0000 (default) at scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 I0825 22:07:36.874958 27639 replica.cpp:658] Replica received learned notice for position 4 I0825 22:07:36.875074 27635 master.cpp:3998] Received update of slave 20150825-220736-234885548-51219-27610-S0 at slave(286)@172.17.0.14:51219 (09c6504e3a31) with total oversubscribed resources I0825 22:07:36.875485 27636 sched.cpp:803] Scheduler::resourceOffers took 243089ns I0825 22:07:36.875450 27638 hierarchical.hpp:602] Slave 20150825-220736-234885548-51219-27610-S0 (09c6504e3a31) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I0825 22:07:36.875495 27639 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 462264ns I0825 22:07:36.875643 27639 leveldb.cpp:401] Deleting ~2 keys from leveldb took 109856ns I0825 22:07:36.875682 27639 replica.cpp:679] Persisted action at 4 I0825 22:07:36.875717 27639 replica.cpp:664] Replica learned TRUNCATE action at position 4 I0825 22:07:36.876045 27638 hierarchical.hpp:1010] No resources available to allocate! 
I0825 22:07:36.876072 27638 hierarchical.hpp:928] Performed allocation for slave 20150825-220736-234885548-51219-27610-S0 in 541099ns I0825 22:07:36.879416 27639 master.cpp:2739] Processing ACCEPT call for offers: [ 20150825-220736-234885548-51219-27610-O0 ] on slave 20150825-220736-234885548-51219-27610-S0 at slave(286)@172.17.0.14:51219 (09c6504e3a31) for framework 20150825-220736-234885548-51219-27610-0000 (default) at scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 I0825 22:07:36.879475 27639 master.cpp:2570] Authorizing framework principal 'test-principal' to launch task b89d1df8-f2fb-44be-8f60-9352cf32a79d as user 'mesos' I0825 22:07:36.880975 27639 master.hpp:170] Adding task b89d1df8-f2fb-44be-8f60-9352cf32a79d with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20150825-220736-234885548-51219-27610-S0 (09c6504e3a31) I0825 22:07:36.881124 27639 master.cpp:3069] Launching task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 (default) at scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20150825-220736-234885548-51219-27610-S0 at slave(286)@172.17.0.14:51219 (09c6504e3a31) I0825 22:07:36.882314 27636 slave.cpp:1249] Got assigned task b89d1df8-f2fb-44be-8f60-9352cf32a79d for framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:36.882470 27636 slave.cpp:4720] Checkpointing FrameworkInfo to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/meta/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000/framework.info' I0825 22:07:36.882984 27636 slave.cpp:4731] Checkpointing framework pid '@0.0.0.0:0' to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/meta/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000/framework.pid' I0825 22:07:36.884068 27636 slave.cpp:1365] Launching task b89d1df8-f2fb-44be-8f60-9352cf32a79d for framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:36.895586 27636 slave.cpp:5156] Checkpointing ExecutorInfo to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/meta/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000/executors/b89d1df8-f2fb-44be-8f60-9352cf32a79d/executor.info' I0825 22:07:36.896765 27636 slave.cpp:4799] Launching executor b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000/executors/b89d1df8-f2fb-44be-8f60-9352cf32a79d/runs/1499299a-93dd-4982-9249-ad0e19d1c06c' I0825 22:07:36.897374 27643 containerizer.cpp:633] Starting container '1499299a-93dd-4982-9249-ad0e19d1c06c' for executor 'b89d1df8-f2fb-44be-8f60-9352cf32a79d' of framework '20150825-220736-234885548-51219-27610-0000' I0825 22:07:36.897414 27636 slave.cpp:5179] Checkpointing TaskInfo to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/meta/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000/executors/b89d1df8-f2fb-44be-8f60-9352cf32a79d/runs/1499299a-93dd-4982-9249-ad0e19d1c06c/tasks/b89d1df8-f2fb-44be-8f60-9352cf32a79d/task.info' I0825 22:07:36.897974 27636 slave.cpp:1583] Queuing task 'b89d1df8-f2fb-44be-8f60-9352cf32a79d' for executor 
b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework '20150825-220736-234885548-51219-27610-0000 I0825 22:07:36.898123 27636 slave.cpp:637] Successfully attached file '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000/executors/b89d1df8-f2fb-44be-8f60-9352cf32a79d/runs/1499299a-93dd-4982-9249-ad0e19d1c06c' I0825 22:07:36.902439 27641 launcher.cpp:131] Forked child with pid '2326' for container '1499299a-93dd-4982-9249-ad0e19d1c06c' I0825 22:07:36.902752 27641 containerizer.cpp:855] Checkpointing executor's forked pid 2326 to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/meta/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000/executors/b89d1df8-f2fb-44be-8f60-9352cf32a79d/runs/1499299a-93dd-4982-9249-ad0e19d1c06c/pids/forked.pid' WARNING: Logging before InitGoogleLogging() is written to STDERR I0825 22:07:37.029348 2340 process.cpp:1012] libprocess is initialized on 172.17.0.14:42774 for 16 cpus I0825 22:07:37.030342 2340 logging.cpp:177] Logging to STDERR I0825 22:07:37.032822 2340 exec.cpp:133] Version: 0.25.0 I0825 22:07:37.038837 2355 exec.cpp:183] Executor started at: executor(1)@172.17.0.14:42774 with pid 2340 I0825 22:07:37.041252 27638 slave.cpp:2358] Got registration for executor 'b89d1df8-f2fb-44be-8f60-9352cf32a79d' of framework 20150825-220736-234885548-51219-27610-0000 from executor(1)@172.17.0.14:42774 I0825 22:07:37.041371 27638 slave.cpp:2444] Checkpointing executor pid 'executor(1)@172.17.0.14:42774' to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/meta/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000/executors/b89d1df8-f2fb-44be-8f60-9352cf32a79d/runs/1499299a-93dd-4982-9249-ad0e19d1c06c/pids/libprocess.pid' I0825 22:07:37.044067 27634 slave.cpp:1739] Sending queued task 'b89d1df8-f2fb-44be-8f60-9352cf32a79d' to executor 'b89d1df8-f2fb-44be-8f60-9352cf32a79d' of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:37.044256 2358 exec.cpp:207] Executor registered on slave 20150825-220736-234885548-51219-27610-S0 I0825 22:07:37.046058 2358 exec.cpp:219] Executor::registered took 239083ns Registered executor on 09c6504e3a31 Starting task b89d1df8-f2fb-44be-8f60-9352cf32a79d I0825 22:07:37.046394 2358 exec.cpp:294] Executor asked to run task 'b89d1df8-f2fb-44be-8f60-9352cf32a79d' I0825 22:07:37.046493 2358 exec.cpp:303] Executor::launchTask took 84034ns sh -c 'sleep 1000' Forked command at 2371 I0825 22:07:37.049942 2366 exec.cpp:516] Executor sending status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:37.050977 27635 slave.cpp:2696] Handling status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 from executor(1)@172.17.0.14:42774 I0825 22:07:37.051316 27632 status_update_manager.cpp:322] Received status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:37.051379 27632 status_update_manager.cpp:499] Creating StatusUpdate stream for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:37.052251 27632 status_update_manager.cpp:826] 
Checkpointing UPDATE for status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:37.053840 27632 status_update_manager.cpp:376] Forwarding update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 to the slave I0825 22:07:37.054127 27642 slave.cpp:2975] Forwarding the update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 to master@172.17.0.14:51219 I0825 22:07:37.054364 27642 slave.cpp:2899] Status update manager successfully handled status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:37.054407 27642 slave.cpp:2905] Sending acknowledgement for status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 to executor(1)@172.17.0.14:42774 I0825 22:07:37.054469 27635 master.cpp:4069] Status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 from slave 20150825-220736-234885548-51219-27610-S0 at slave(286)@172.17.0.14:51219 (09c6504e3a31) I0825 22:07:37.054519 27635 master.cpp:4108] Forwarding status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:37.054743 27635 master.cpp:5576] Updating the latest state of task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 to TASK_RUNNING I0825 22:07:37.055011 27641 sched.cpp:910] Scheduler::statusUpdate took 169426ns I0825 22:07:37.055639 27634 master.cpp:3398] Processing ACKNOWLEDGE call 98c4a799-ad82-497d-be1e-6dfb56a0894e for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 (default) at scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 on slave 20150825-220736-234885548-51219-27610-S0 I0825 22:07:37.055665 2359 exec.cpp:340] Executor received status update acknowledgement 98c4a799-ad82-497d-be1e-6dfb56a0894e for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:37.055886 27640 slave.cpp:564] Slave terminating I0825 22:07:37.056210 27634 master.cpp:1012] Slave 20150825-220736-234885548-51219-27610-S0 at slave(286)@172.17.0.14:51219 (09c6504e3a31) disconnected I0825 22:07:37.056257 27634 master.cpp:2415] Disconnecting slave 20150825-220736-234885548-51219-27610-S0 at slave(286)@172.17.0.14:51219 (09c6504e3a31) I0825 22:07:37.056339 27634 master.cpp:2434] Deactivating slave 20150825-220736-234885548-51219-27610-S0 at slave(286)@172.17.0.14:51219 (09c6504e3a31) I0825 22:07:37.056675 27643 hierarchical.hpp:635] Slave 20150825-220736-234885548-51219-27610-S0 deactivated I0825 22:07:37.059391 27610 containerizer.cpp:143] Using isolation: posix/cpu,posix/mem,filesystem/posix I0825 22:07:37.066619 27641 slave.cpp:190] Slave started on 287)@172.17.0.14:51219 I0825 22:07:37.066668 27641 slave.cpp:191] Flags at startup: --appc_store_dir=""""/tmp/mesos/store/appc"""" 
--authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mesos/mesos-0.25.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resource_monitoring_interval=""""1secs"""" --resources=""""cpus:2;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --version=""""false"""" --work_dir=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L"""" I0825 22:07:37.067343 27641 credentials.hpp:85] Loading credential for authentication from '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/credential' I0825 22:07:37.067643 27641 slave.cpp:321] Slave using credential for: test-principal I0825 22:07:37.068413 27641 slave.cpp:354] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0825 22:07:37.068580 27641 slave.cpp:384] Slave hostname: 09c6504e3a31 I0825 22:07:37.068613 27641 slave.cpp:389] Slave checkpoint: true I0825 22:07:37.069970 27636 state.cpp:54] Recovering state from '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/meta' I0825 22:07:37.070089 27636 state.cpp:690] Failed to find resources file '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/meta/resources/resources.info' I0825 22:07:37.075319 27628 fetcher.cpp:77] Clearing fetcher cache I0825 22:07:37.075393 27628 slave.cpp:4157] Recovering framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:37.075475 27628 slave.cpp:4908] Recovering executor 'b89d1df8-f2fb-44be-8f60-9352cf32a79d' of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:37.076370 27641 status_update_manager.cpp:202] Recovering status update manager I0825 22:07:37.076409 27641 status_update_manager.cpp:210] Recovering executor 'b89d1df8-f2fb-44be-8f60-9352cf32a79d' of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:37.076504 27641 status_update_manager.cpp:499] Creating StatusUpdate stream for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:37.077056 27641 status_update_manager.cpp:802] Replaying status update stream for task b89d1df8-f2fb-44be-8f60-9352cf32a79d I0825 22:07:37.077715 27628 slave.cpp:637] 
Successfully attached file '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000/executors/b89d1df8-f2fb-44be-8f60-9352cf32a79d/runs/1499299a-93dd-4982-9249-ad0e19d1c06c' I0825 22:07:37.078111 27634 containerizer.cpp:379] Recovering containerizer I0825 22:07:37.078229 27634 containerizer.cpp:434] Recovering container '1499299a-93dd-4982-9249-ad0e19d1c06c' for executor 'b89d1df8-f2fb-44be-8f60-9352cf32a79d' of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:37.079934 27640 slave.cpp:4010] Sending reconnect request to executor b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 at executor(1)@172.17.0.14:42774 I0825 22:07:37.081012 2354 exec.cpp:253] Received reconnect request from slave 20150825-220736-234885548-51219-27610-S0 I0825 22:07:37.081893 27631 slave.cpp:2508] Re-registering executor b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:37.082904 2362 exec.cpp:230] Executor re-registered on slave 20150825-220736-234885548-51219-27610-S0 Re-registered executor on 09c6504e3a31 I0825 22:07:37.084738 2362 exec.cpp:242] Executor::reregistered took 119419ns I0825 22:07:37.816828 27634 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:37.816884 27634 hierarchical.hpp:910] Performed allocation for 1 slaves in 129850ns I0825 22:07:38.817526 27629 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:38.817607 27629 hierarchical.hpp:910] Performed allocation for 1 slaves in 152923ns I0825 22:07:39.081434 27637 slave.cpp:2645] Cleaning up un-reregistered executors I0825 22:07:39.081596 27637 slave.cpp:4069] Finished recovery I0825 22:07:39.082165 27637 slave.cpp:4226] Querying resource estimator for oversubscribable resources I0825 22:07:39.082417 27637 status_update_manager.cpp:176] Pausing sending status updates I0825 22:07:39.082442 27643 slave.cpp:684] New master detected at master@172.17.0.14:51219 I0825 22:07:39.082602 27643 slave.cpp:747] Authenticating with master master@172.17.0.14:51219 I0825 22:07:39.082628 27643 slave.cpp:752] Using default CRAM-MD5 authenticatee I0825 22:07:39.082830 27643 slave.cpp:720] Detecting new master I0825 22:07:39.082919 27638 authenticatee.cpp:115] Creating new client SASL connection I0825 22:07:39.082973 27643 slave.cpp:4240] Received oversubscribable resources from the resource estimator I0825 22:07:39.083277 27631 master.cpp:4694] Authenticating slave(287)@172.17.0.14:51219 I0825 22:07:39.083427 27635 authenticator.cpp:407] Starting authentication session for crammd5_authenticatee(667)@172.17.0.14:51219 I0825 22:07:39.083731 27630 authenticator.cpp:92] Creating new server SASL connection I0825 22:07:39.083982 27634 authenticatee.cpp:206] Received SASL authentication mechanisms: CRAM-MD5 I0825 22:07:39.084025 27634 authenticatee.cpp:232] Attempting to authenticate with mechanism 'CRAM-MD5' I0825 22:07:39.084106 27634 authenticator.cpp:197] Received SASL authentication start I0825 22:07:39.084168 27634 authenticator.cpp:319] Authentication requires more steps I0825 22:07:39.084300 27639 authenticatee.cpp:252] Received SASL authentication step I0825 22:07:39.084527 27628 authenticator.cpp:225] Received SASL authentication step I0825 22:07:39.084625 27628 auxprop.cpp:102] Request to lookup properties for user: 'test-principal' realm: '09c6504e3a31' server FQDN: '09c6504e3a31' SASL_AUXPROP_VERIFY_AGAINST_HASH: 
false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0825 22:07:39.084650 27628 auxprop.cpp:174] Looking up auxiliary property '*userPassword' I0825 22:07:39.084709 27628 auxprop.cpp:174] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0825 22:07:39.084738 27628 auxprop.cpp:102] Request to lookup properties for user: 'test-principal' realm: '09c6504e3a31' server FQDN: '09c6504e3a31' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0825 22:07:39.084750 27628 auxprop.cpp:124] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0825 22:07:39.084763 27628 auxprop.cpp:124] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0825 22:07:39.084780 27628 authenticator.cpp:311] Authentication success I0825 22:07:39.084905 27642 authenticatee.cpp:292] Authentication success I0825 22:07:39.085000 27637 master.cpp:4724] Successfully authenticated principal 'test-principal' at slave(287)@172.17.0.14:51219 I0825 22:07:39.085234 27642 slave.cpp:815] Successfully authenticated with master master@172.17.0.14:51219 I0825 22:07:39.085610 27642 slave.cpp:1209] Will retry registration in 6.014445ms if necessary I0825 22:07:39.085907 27643 authenticator.cpp:425] Authentication session cleanup for crammd5_authenticatee(667)@172.17.0.14:51219 I0825 22:07:39.092914 27640 master.cpp:3773] Re-registering slave 20150825-220736-234885548-51219-27610-S0 at slave(286)@172.17.0.14:51219 (09c6504e3a31) I0825 22:07:39.093181 27630 slave.cpp:1209] Will retry registration in 20.588077ms if necessary I0825 22:07:39.093858 27635 slave.cpp:959] Re-registered with master master@172.17.0.14:51219 I0825 22:07:39.093879 27638 hierarchical.hpp:621] Slave 20150825-220736-234885548-51219-27610-S0 reactivated I0825 22:07:39.093855 27640 master.cpp:3936] Sending updated checkpointed resources to slave 20150825-220736-234885548-51219-27610-S0 at slave(287)@172.17.0.14:51219 (09c6504e3a31) I0825 22:07:39.094110 27631 status_update_manager.cpp:183] Resuming sending status updates I0825 22:07:39.094130 27635 slave.cpp:995] Forwarding total oversubscribed resources W0825 22:07:39.094172 27631 status_update_manager.cpp:190] Resending status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:39.094211 27631 status_update_manager.cpp:376] Forwarding update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 to the slave I0825 22:07:39.094435 27640 master.cpp:3773] Re-registering slave 20150825-220736-234885548-51219-27610-S0 at slave(287)@172.17.0.14:51219 (09c6504e3a31) I0825 22:07:39.094602 27635 slave.cpp:2227] Updated checkpointed resources from to I0825 22:07:39.095346 27640 master.cpp:3936] Sending updated checkpointed resources to slave 20150825-220736-234885548-51219-27610-S0 at slave(287)@172.17.0.14:51219 (09c6504e3a31) I0825 22:07:39.095775 27635 slave.cpp:2975] Forwarding the update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 to master@172.17.0.14:51219 I0825 22:07:39.095803 27640 master.cpp:3998] Received update of slave 20150825-220736-234885548-51219-27610-S0 at slave(287)@172.17.0.14:51219 (09c6504e3a31) with total oversubscribed resources I0825 
22:07:39.096372 27635 slave.cpp:2131] Updating framework 20150825-220736-234885548-51219-27610-0000 pid to @0.0.0.0:0 I0825 22:07:39.096467 27635 slave.cpp:2147] Checkpointing framework pid '@0.0.0.0:0' to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/meta/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000/framework.pid' I0825 22:07:39.096544 27640 hierarchical.hpp:602] Slave 20150825-220736-234885548-51219-27610-S0 (09c6504e3a31) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I0825 22:07:39.096652 27639 master.cpp:4069] Status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 from slave 20150825-220736-234885548-51219-27610-S0 at slave(287)@172.17.0.14:51219 (09c6504e3a31) I0825 22:07:39.096709 27639 master.cpp:4108] Forwarding status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:39.096978 27639 master.cpp:5576] Updating the latest state of task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 to TASK_RUNNING I0825 22:07:39.097105 27639 status_update_manager.cpp:183] Resuming sending status updates W0825 22:07:39.097187 27635 slave.cpp:976] Already re-registered with master master@172.17.0.14:51219 I0825 22:07:39.097229 27635 slave.cpp:995] Forwarding total oversubscribed resources W0825 22:07:39.097230 27639 status_update_manager.cpp:190] Resending status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:39.097290 27639 status_update_manager.cpp:376] Forwarding update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 to the slave I0825 22:07:39.097373 27643 sched.cpp:910] Scheduler::statusUpdate took 76470ns I0825 22:07:39.097450 27635 slave.cpp:2131] Updating framework 20150825-220736-234885548-51219-27610-0000 pid to scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 I0825 22:07:39.097473 27640 hierarchical.hpp:1010] No resources available to allocate! 
I0825 22:07:39.097497 27640 hierarchical.hpp:928] Performed allocation for slave 20150825-220736-234885548-51219-27610-S0 in 818746ns I0825 22:07:39.097525 27635 slave.cpp:2147] Checkpointing framework pid 'scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219' to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/meta/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000/framework.pid' I0825 22:07:39.097991 27640 status_update_manager.cpp:183] Resuming sending status updates W0825 22:07:39.098043 27640 status_update_manager.cpp:190] Resending status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:39.098093 27640 status_update_manager.cpp:376] Forwarding update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 to the slave I0825 22:07:39.098242 27635 slave.cpp:2227] Updated checkpointed resources from to I0825 22:07:39.098433 27635 slave.cpp:3043] Sending message for framework 20150825-220736-234885548-51219-27610-0000 to scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 I0825 22:07:39.098480 27636 master.cpp:3998] Received update of slave 20150825-220736-234885548-51219-27610-S0 at slave(287)@172.17.0.14:51219 (09c6504e3a31) with total oversubscribed resources I0825 22:07:39.098639 27635 slave.cpp:2975] Forwarding the update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 to master@172.17.0.14:51219 I0825 22:07:39.098755 27634 sched.cpp:1006] Scheduler::frameworkMessage took 68683ns I0825 22:07:39.098882 27636 master.cpp:3398] Processing ACKNOWLEDGE call 98c4a799-ad82-497d-be1e-6dfb56a0894e for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 (default) at scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 on slave 20150825-220736-234885548-51219-27610-S0 I0825 22:07:39.098906 27635 slave.cpp:2975] Forwarding the update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 to master@172.17.0.14:51219 I0825 22:07:39.099019 27641 hierarchical.hpp:602] Slave 20150825-220736-234885548-51219-27610-S0 (09c6504e3a31) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I0825 22:07:39.099192 27636 master.cpp:4069] Status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 from slave 20150825-220736-234885548-51219-27610-S0 at slave(287)@172.17.0.14:51219 (09c6504e3a31) I0825 22:07:39.099244 27636 master.cpp:4108] Forwarding status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:39.099369 27641 hierarchical.hpp:1010] No resources available to allocate! 
I0825 22:07:39.099395 27641 hierarchical.hpp:928] Performed allocation for slave 20150825-220736-234885548-51219-27610-S0 in 332336ns I0825 22:07:39.099403 27636 master.cpp:5576] Updating the latest state of task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 to TASK_RUNNING I0825 22:07:39.099426 27635 status_update_manager.cpp:394] Received status update acknowledgement (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:39.099609 27641 sched.cpp:910] Scheduler::statusUpdate took 90272ns I0825 22:07:39.099617 27636 master.cpp:4069] Status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 from slave 20150825-220736-234885548-51219-27610-S0 at slave(287)@172.17.0.14:51219 (09c6504e3a31) I0825 22:07:39.099669 27636 master.cpp:4108] Forwarding status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:39.099607 27635 status_update_manager.cpp:826] Checkpointing ACK for status update TASK_RUNNING (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:39.099834 27636 master.cpp:5576] Updating the latest state of task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 to TASK_RUNNING I0825 22:07:39.099992 27643 sched.cpp:910] Scheduler::statusUpdate took 29331ns I0825 22:07:39.100038 27636 master.cpp:3398] Processing ACKNOWLEDGE call 98c4a799-ad82-497d-be1e-6dfb56a0894e for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 (default) at scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 on slave 20150825-220736-234885548-51219-27610-S0 I0825 22:07:39.100381 27636 master.cpp:3398] Processing ACKNOWLEDGE call 98c4a799-ad82-497d-be1e-6dfb56a0894e for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 (default) at scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 on slave 20150825-220736-234885548-51219-27610-S0 I0825 22:07:39.102119 27635 status_update_manager.cpp:394] Received status update acknowledgement (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:39.102120 27637 slave.cpp:2298] Status update manager successfully handled status update acknowledgement (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:39.102375 27635 status_update_manager.cpp:394] Received status update acknowledgement (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 E0825 22:07:39.102407 27633 slave.cpp:2291] Failed to handle status update acknowledgement (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000: Unexpected status update acknowledgment (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 
20150825-220736-234885548-51219-27610-0000 E0825 22:07:39.102546 27636 slave.cpp:2291] Failed to handle status update acknowledgement (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000: Unexpected status update acknowledgment (UUID: 98c4a799-ad82-497d-be1e-6dfb56a0894e) for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:39.819394 27637 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:39.819452 27637 hierarchical.hpp:910] Performed allocation for 1 slaves in 536774ns 2015-08-25 22:07:40,051:27610(0x2b3b2870c700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:40031] zk retcode=-4, errno=111(Connection refused): server refused to accept the client I0825 22:07:40.820246 27633 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:40.820302 27633 hierarchical.hpp:910] Performed allocation for 1 slaves in 511814ns I0825 22:07:41.821671 27637 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:41.821719 27637 hierarchical.hpp:910] Performed allocation for 1 slaves in 518909ns I0825 22:07:42.822906 27628 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:42.822959 27628 hierarchical.hpp:910] Performed allocation for 1 slaves in 659816ns 2015-08-25 22:07:43,388:27610(0x2b3b2870c700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:40031] zk retcode=-4, errno=111(Connection refused): server refused to accept the client I0825 22:07:43.824976 27632 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:43.825032 27632 hierarchical.hpp:910] Performed allocation for 1 slaves in 727197ns I0825 22:07:44.825883 27641 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:44.825932 27641 hierarchical.hpp:910] Performed allocation for 1 slaves in 422745ns I0825 22:07:45.828217 27634 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:45.828445 27634 hierarchical.hpp:910] Performed allocation for 1 slaves in 1.288273ms 2015-08-25 22:07:46,724:27610(0x2b3b2870c700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:40031] zk retcode=-4, errno=111(Connection refused): server refused to accept the client I0825 22:07:46.829910 27632 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:46.829953 27632 hierarchical.hpp:910] Performed allocation for 1 slaves in 483478ns I0825 22:07:47.830860 27636 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:47.830922 27636 hierarchical.hpp:910] Performed allocation for 1 slaves in 551674ns I0825 22:07:48.832027 27628 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:48.832078 27628 hierarchical.hpp:910] Performed allocation for 1 slaves in 417868ns I0825 22:07:49.833906 27629 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:49.833962 27629 hierarchical.hpp:910] Performed allocation for 1 slaves in 472647ns 2015-08-25 22:07:50,060:27610(0x2b3b2870c700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:40031] zk retcode=-4, errno=111(Connection refused): server refused to accept the client I0825 22:07:50.835659 27630 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:50.835718 27630 hierarchical.hpp:910] Performed allocation for 1 slaves in 522864ns I0825 22:07:51.837473 27638 hierarchical.hpp:1010] No resources available to allocate! 
I0825 22:07:51.837537 27638 hierarchical.hpp:910] Performed allocation for 1 slaves in 575837ns I0825 22:07:52.839296 27641 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:52.839350 27641 hierarchical.hpp:910] Performed allocation for 1 slaves in 558642ns 2015-08-25 22:07:53,397:27610(0x2b3b2870c700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:40031] zk retcode=-4, errno=111(Connection refused): server refused to accept the client I0825 22:07:53.840854 27630 hierarchical.hpp:1010] No resources available to allocate! I0825 22:07:53.840904 27630 hierarchical.hpp:910] Performed allocation for 1 slaves in 557112ns I0825 22:07:54.083889 27631 slave.cpp:4226] Querying resource estimator for oversubscribable resources I0825 22:07:54.084323 27629 slave.cpp:4240] Received oversubscribable resources from the resource estimator ../../src/tests/slave_tests.cpp:2651: Failure Failed to wait 15secs for executorToFrameworkMessage1 I0825 22:07:54.098143 27629 master.cpp:1051] Framework 20150825-220736-234885548-51219-27610-0000 (default) at scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 disconnected I0825 22:07:54.098212 27629 master.cpp:2370] Disconnecting framework 20150825-220736-234885548-51219-27610-0000 (default) at scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 I0825 22:07:54.098254 27629 master.cpp:2394] Deactivating framework 20150825-220736-234885548-51219-27610-0000 (default) at scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 I0825 22:07:54.098363 27629 master.cpp:1075] Giving framework 20150825-220736-234885548-51219-27610-0000 (default) at scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 0ns to failover I0825 22:07:54.098448 27631 hierarchical.hpp:474] Deactivated framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:54.098830 27641 master.cpp:4469] Framework failover timeout, removing framework 20150825-220736-234885548-51219-27610-0000 (default) at scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 I0825 22:07:54.098867 27641 master.cpp:5112] Removing framework 20150825-220736-234885548-51219-27610-0000 (default) at scheduler-6c5ddcdb-9dd1-4b38-b051-5f714d3c1c55@172.17.0.14:51219 I0825 22:07:54.099156 27629 slave.cpp:1959] Asked to shut down framework 20150825-220736-234885548-51219-27610-0000 by master@172.17.0.14:51219 I0825 22:07:54.099211 27629 slave.cpp:1984] Shutting down framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:54.099198 27641 master.cpp:5576] Updating the latest state of task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 to TASK_KILLED I0825 22:07:54.099328 27629 slave.cpp:3710] Shutting down executor 'b89d1df8-f2fb-44be-8f60-9352cf32a79d' of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:54.099913 27641 master.cpp:5644] Removing task b89d1df8-f2fb-44be-8f60-9352cf32a79d with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 20150825-220736-234885548-51219-27610-0000 on slave 20150825-220736-234885548-51219-27610-S0 at slave(287)@172.17.0.14:51219 (09c6504e3a31) I0825 22:07:54.099987 27632 hierarchical.hpp:816] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on slave 20150825-220736-234885548-51219-27610-S0 from framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:54.100440 27641 hierarchical.hpp:428] Removed framework 
20150825-220736-234885548-51219-27610-0000 I0825 22:07:54.100608 27643 master.cpp:860] Master terminating I0825 22:07:54.100778 2360 exec.cpp:380] Executor asked to shutdown II0825 22:07:54.100929 27641 hierarchical.hpp:573] Removed slave 20150825-220736-234885548-51219-27610-S0 0825 22:07:54.100896 2364 exec.cpp:79] Scheduling shutdown of the executor I0825 22:07:54.100958 2360 exec.cpp:395] Executor::shutdown took 75333ns Shutting down Sending SIGTERM to process tree at pid 2371 I0825 22:07:54.101748 27640 slave.cpp:3143] master@172.17.0.14:51219 exited W0825 22:07:54.101866 27640 slave.cpp:3146] Master disconnected! Waiting for a new master to be elected I0825 22:07:54.106029 27632 containerizer.cpp:1079] Destroying container '1499299a-93dd-4982-9249-ad0e19d1c06c' Killing the following process trees: [ -+- 2371 sh -c sleep 1000 \--- 2372 sleep 1000 ] I0825 22:07:54.211082 27639 containerizer.cpp:1266] Executor for container '1499299a-93dd-4982-9249-ad0e19d1c06c' has exited I0825 22:07:54.211087 27630 containerizer.cpp:1266] Executor for container '1499299a-93dd-4982-9249-ad0e19d1c06c' has exited I0825 22:07:54.211143 27639 containerizer.cpp:1079] Destroying container '1499299a-93dd-4982-9249-ad0e19d1c06c' I0825 22:07:54.212609 27637 slave.cpp:3399] Executor 'b89d1df8-f2fb-44be-8f60-9352cf32a79d' of framework 20150825-220736-234885548-51219-27610-0000 terminated with signal Killed I0825 22:07:54.212685 27637 slave.cpp:3503] Cleaning up executor 'b89d1df8-f2fb-44be-8f60-9352cf32a79d' of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:54.213062 27631 gc.cpp:56] Scheduling '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000/executors/b89d1df8-f2fb-44be-8f60-9352cf32a79d/runs/1499299a-93dd-4982-9249-ad0e19d1c06c' for gc 6.99999753474667days in the future I0825 22:07:54.214745 27630 gc.cpp:56] Scheduling '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000/executors/b89d1df8-f2fb-44be-8f60-9352cf32a79d' for gc 6.99999753268444days in the future I0825 22:07:54.214859 27630 gc.cpp:56] Scheduling '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/meta/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000/executors/b89d1df8-f2fb-44be-8f60-9352cf32a79d/runs/1499299a-93dd-4982-9249-ad0e19d1c06c' for gc 6.99999751446815days in the future I0825 22:07:54.214921 27637 slave.cpp:3592] Cleaning up framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:54.215047 27630 gc.cpp:56] Scheduling '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/meta/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000/executors/b89d1df8-f2fb-44be-8f60-9352cf32a79d' for gc 6.99999751310222days in the future I0825 22:07:54.215140 27634 status_update_manager.cpp:284] Closing status update streams for framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:54.215338 27634 status_update_manager.cpp:530] Cleaning up status update stream for task b89d1df8-f2fb-44be-8f60-9352cf32a79d of framework 20150825-220736-234885548-51219-27610-0000 I0825 22:07:54.215358 27637 slave.cpp:564] Slave terminating I0825 22:07:54.215347 27630 gc.cpp:56] Scheduling '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000' for gc 
6.99999751012741days in the future I0825 22:07:54.215608 27630 gc.cpp:56] Scheduling '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_ukkA8L/meta/slaves/20150825-220736-234885548-51219-27610-S0/frameworks/20150825-220736-234885548-51219-27610-0000' for gc 6.99999750907259days in the future ../../3rdparty/libprocess/include/process/gmock.hpp:365: Failure Actual function call count doesn't match EXPECT_CALL(filter->mock, filter(testing::A()))... Expected args: message matcher (8-byte object <58-B2 02-68 3A-2B 00-00>, 1, 1) Expected: to be called once Actual: never called - unsatisfied and active ../../3rdparty/libprocess/include/process/gmock.hpp:365: Failure Actual function call count doesn't match EXPECT_CALL(filter->mock, filter(testing::A()))... Expected args: message matcher (8-byte object <58-B2 02-68 3A-2B 00-00>, 1, 1-byte object ) Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] SlaveTest.HTTPSchedulerSlaveRestart (17413 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3321","08/26/2015 20:53:46",1,"Spurious fetcher message about extracting an archive ""The fetcher emits a spurious log message about not extracting an archive with """".tgz"""" extension, even though the tarball is extracted correctly. """," I0826 19:02:08.304914 2109 logging.cpp:172] INFO level logging started! I0826 19:02:08.305253 2109 fetcher.cpp:413] Fetcher Info: {""""cache_directory"""":""""\/tmp\/mesos\/fetch\/slaves\/20150826-185716-251662764-5050-1-S0\/root"""",""""items"""":[{""""action"""":""""BYPASS_CACHE"""",""""uri"""":{""""extract"""":true,""""value"""":""""file:\/\/\/mesos\/sampleflaskapp.tgz""""}}],""""sandbox_directory"""":""""\/tmp\/mesos\/slaves\/20150826-185716-251662764-5050-1-S0\/frameworks\/20150826-185716-251662764-5050-1-0000\/executors\/sample-flask-app.f222d202-4c24-11e5-a628-0242ac110011\/runs\/e71f50b8-816d-46d5-bcc6-f9850a0402ed"""",""""user"""":""""root""""} I0826 19:02:08.306834 2109 fetcher.cpp:368] Fetching URI 'file:///mesos/sampleflaskapp.tgz' I0826 19:02:08.306864 2109 fetcher.cpp:242] Fetching directly into the sandbox directory I0826 19:02:08.306884 2109 fetcher.cpp:179] Fetching URI 'file:///mesos/sampleflaskapp.tgz' I0826 19:02:08.306900 2109 fetcher.cpp:159] Copying resource with command:cp '/mesos/sampleflaskapp.tgz' '/tmp/mesos/slaves/20150826-185716-251662764-5050-1-S0/frameworks/20150826-185716-251662764-5050-1-0000/executors/sample-flask-app.f222d202-4c24-11e5-a628-0242ac110011/runs/e71f50b8-816d-46d5-bcc6-f9850a0402ed/sampleflaskapp.tgz' I0826 19:02:08.309063 2109 fetcher.cpp:76] Extracting with command: tar -C '/tmp/mesos/slaves/20150826-185716-251662764-5050-1-S0/frameworks/20150826-185716-251662764-5050-1-0000/executors/sample-flask-app.f222d202-4c24-11e5-a628-0242ac110011/runs/e71f50b8-816d-46d5-bcc6-f9850a0402ed' -xf '/tmp/mesos/slaves/20150826-185716-251662764-5050-1-S0/frameworks/20150826-185716-251662764-5050-1-0000/executors/sample-flask-app.f222d202-4c24-11e5-a628-0242ac110011/runs/e71f50b8-816d-46d5-bcc6-f9850a0402ed/sampleflaskapp.tgz' I0826 19:02:08.315313 2109 fetcher.cpp:84] Extracted '/tmp/mesos/slaves/20150826-185716-251662764-5050-1-S0/frameworks/20150826-185716-251662764-5050-1-0000/executors/sample-flask-app.f222d202-4c24-11e5-a628-0242ac110011/runs/e71f50b8-816d-46d5-bcc6-f9850a0402ed/sampleflaskapp.tgz' into 
'/tmp/mesos/slaves/20150826-185716-251662764-5050-1-S0/frameworks/20150826-185716-251662764-5050-1-0000/executors/sample-flask-app.f222d202-4c24-11e5-a628-0242ac110011/runs/e71f50b8-816d-46d5-bcc6-f9850a0402ed' W0826 19:02:08.315381 2109 fetcher.cpp:264] Copying instead of extracting resource from URI with 'extract' flag, because it does not seem to be an archive: file:///mesos/sampleflaskapp.tgz I0826 19:02:08.315604 2109 fetcher.cpp:445] Fetched 'file:///mesos/sampleflaskapp.tgz' to '/tmp/mesos/slaves/20150826-185716-251662764-5050-1-S0/frameworks/20150826-185716-251662764-5050-1-0000/executors/sample-flask-app.f222d202-4c24-11e5-a628-0242ac110011/runs/e71f50b8-816d-46d5-bcc6-f9850a0402ed/sampleflaskapp.tgz' ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3332","08/28/2015 21:20:38",8,"Support HTTP Pipelining in libprocess (http::post) ""Currently , {{http::post}} in libprocess, does not support HTTP pipelining. Each call as of know sends in the {{Connection: close}} header, thereby, signaling to the server to close the TCP socket after the response. We either need to create a new interface for supporting HTTP pipelining , or modify the existing {{http::post}} to do so. This is needed for the Scheduler/Executor library implementations to make sure """"Calls"""" are sent in order to the master. Currently, in order to do so, we send in the next request only after we have received a response for an earlier call that results in degraded performance. ""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3338","08/31/2015 16:53:39",3,"Dynamic reservations are not counted as used resources in the master ""Dynamically reserved resources should be considered used or allocated and hence reflected in Mesos bookkeeping structures and {{state.json}}. I expanded the {{ReservationTest.ReserveThenUnreserve}} test with the following section: {code} // Check that the Master counts the reservation as a used resource. { Future response = process::http::get(master.get(), """"state.json""""); AWAIT_READY(response); Try parse = JSON::parse(response.get().body); ASSERT_SOME(parse); Result cpus = parse.get().find(""""slaves[0].used_resources.cpus""""); ASSERT_SOME_EQ(JSON::Number(1), cpus); } {code} and got Idea for new resources states: https://docs.google.com/drawings/d/1aquVIqPY8D_MR-cQjZu-wz5nNn3cYP3jXqegUHl-Kzc/edit"""," // Check that the Master counts the reservation as a used resource. { Future response = process::http::get(master.get(), """"state.json""""); AWAIT_READY(response); Try parse = JSON::parse(response.get().body); ASSERT_SOME(parse); Result cpus = parse.get().find(""""slaves[0].used_resources.cpus""""); ASSERT_SOME_EQ(JSON::Number(1), cpus); } ../../../src/tests/reservation_tests.cpp:168: Failure Value of: (cpus).get() Actual: 0 Expected: JSON::Number(1) Which is: 1 ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3339","08/31/2015 17:26:12",3,"Implement filtering mechanism for (Scheduler API Events) Testing ""Currently, our testing infrastructure does not have a mechanism of filtering/dropping HTTP events of a particular type from the Scheduler API response stream. We need a {{DROP_HTTP_CALLS}} abstraction that can help us to filter a particular event type. This helper code is duplicated in at least two places currently, Scheduler Library/Maintenance Primitives tests. 
- The solution can be as trivial as moving this helper function to a common test-header. - Implement a {{DROP_HTTP_CALLS}} similar to what we do for other protobufs via {{DROP_CALLS}}."""," // Enqueues all received events into a libprocess queue. ACTION_P(Enqueue, queue) { std::queue events = arg0; while (!events.empty()) { // Note that we currently drop HEARTBEATs because most of these tests // are not designed to deal with heartbeats. // TODO(vinod): Implement DROP_HTTP_CALLS that can filter heartbeats. if (events.front().type() == Event::HEARTBEAT) { VLOG(1) << """"Ignoring HEARTBEAT event""""; } else { queue->put(events.front()); } events.pop(); } } ",0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3340","08/31/2015 19:06:29",2,"Command-line flags should take precedence over OS Env variables ""Currently, it appears that re-defining a flag on the command-line that was already defined via a OS Env var ({{MESOS_*}}) causes the Master to fail with a not very helpful message. For example, if one has {{MESOS_QUORUM}} defined, this happens: which is not very helpful. Ideally, we would parse the flags with a """"well-known"""" priority (command-line first, environment last) - but at the very least, the error message should be more helpful in explaining what the issue is."""," $ ./mesos-master --zk=zk://192.168.1.4/mesos --quorum=1 --hostname=192.168.1.4 --ip=192.168.1.4 Duplicate flag 'quorum' on command line ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3345","08/31/2015 22:56:22",5,"Expand the range of integer precision when converting into/out of json. ""For [MESOS-3299], we added some protobufs to represent time with integer precision. However, this precision is not maintained through protobuf <-> JSON conversion, because of how our JSON encoders/decoders convert numbers to floating point. To maintain precision, we can try one of the following: * Try using a {{long double}} to represent a number. * Add logic to stringify/parse numbers without loss when possible. * Try representing {{int64_t}} as a string and parse it as such? * Update PicoJson and add a compiler flag, i.e. {{-DPICOJSON_USE_INT64}} In all cases, we'll need to make sure that: * Integers are properly stringified without loss. * The JSON decoder parses the integer without loss. * We have some unit tests for big (close to {{INT32_MAX}}/{{INT64_MAX}}) and small integers.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3349","09/01/2015 03:26:07",5,"Removing mount point fails with EBUSY in LinuxFilesystemIsolator. ""When running the tests as root, we found PersistentVolumeTest.AccessPersistentVolume fails consistently on some platforms. Turns out that the 'rmdir' after the 'umount' fails with EBUSY because there's still some references to the mount. 
FYI [~jieyu] [~mcypark]"""," [ RUN ] PersistentVolumeTest.AccessPersistentVolume I0901 02:17:26.435140 39432 exec.cpp:133] Version: 0.25.0 I0901 02:17:26.442129 39461 exec.cpp:207] Executor registered on slave 20150901-021726-1828659978-52102-32604-S0 Registered executor on hostname Starting task d8ff1f00-e720-4a61-b440-e111009dfdc3 sh -c 'echo abc > path1/file' Forked command at 39484 Command exited with status 0 (pid: 39484) ../../src/tests/persistent_volume_tests.cpp:579: Failure Value of: os::exists(path::join(directory, """"path1"""")) Actual: true Expected: false [ FAILED ] PersistentVolumeTest.AccessPersistentVolume (777 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3356","09/01/2015 21:48:00",5,"Scope out approaches to deal with logging to finite disks (i.e. log rotation|capped-size logging). ""For the background, see the parent story [MESOS-3348]. For the work/design/discussion, see the linked design document (below). ""","",0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3366","09/04/2015 02:36:06",3,"Allow resources/attributes discovery ""In heterogeneous clusters, tasks sometimes have strong constraints on the type of hardware they need to execute on. The current solution is to use custom resources and attributes on the agents. Detecting non-standard resources/attributes requires wrapping the """"mesos-slave"""" binary behind a script and use custom code to probe the agent. Unfortunately, this approach doesn't allow composition. The solution would be to provide a hook/module mechanism to allow users to use custom code performing resources/attributes discovery. Please review the detailed document below: https://docs.google.com/document/d/15OkebDezFxzeyLsyQoU0upB0eoVECAlzEkeg0HQAX9w Feel free to express comments/concerns by annotating the document or by replying to this issue. ""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3375","09/05/2015 03:34:25",1,"Add executor protobuf to v1 ""A new protobuf for Executor was introduced in Mesos for the HTTP API, it needs to be added to /v1 so it reflects changes made on v1/mesos.proto. This protobuf is ought to be changed as the executor HTTP API design evolves.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3378","09/07/2015 17:08:48",3,"Document a test pattern for expediting event firing ""We use {{Clock::advance()}} extensively in tests to expedite event firing and minimize overall {{make check}} time. Document this pattern for posterity.""","",0,0,0,0,1,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3393","09/08/2015 23:26:44",1,"Remove unused executor protobuf ""The executor protobuf definition living outside the v1/ directory is unused, it should be removed to avoid confusion.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3413","09/11/2015 15:06:11",5,"Docker containerizer does not symlink persistent volumes into sandbox ""For the ArangoDB framework I am trying to use the persistent primitives. nearly all is working, but I am missing a crucial piece at the end: I have successfully created a persistent disk resource and have set the persistence and volume information in the DiskInfo message. However, I do not see any way to find out what directory on the host the mesos slave has reserved for us. 
I know it is ${MESOS_SLAVE_WORKDIR}/volumes/roles//_ but we have no way to query this information anywhere. The docker containerizer does not automatically mount this directory into our docker container, or symlinks it into our sandbox. Therefore, I have essentially no access to it. Note that the mesos containerizer (which I cannot use for other reasons) seems to create a symlink in the sandbox to the actual path for the persistent volume. With that, I could mount the volume into our docker container and all would be well.""","",0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3417","09/12/2015 06:55:39",2,"Log source address replicated log recieved broadcasts ""Currently Mesos doesn't log what machine a replicated log status broadcast was recieved from: It would be really useful for debugging replicated log startup issues to have info about where the message came from (libprocess address, ip, or hostname) the message came from"""," Sep 11 21:41:14 master-01 mesos-master[15625]: I0911 21:41:14.320164 15637 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request Sep 11 21:41:14 master-01 mesos-dns[15583]: I0911 21:41:14.321097 15583 detect.go:118] ignoring children-changed event, leader has not changed: /mesos Sep 11 21:41:14 master-01 mesos-master[15625]: I0911 21:41:14.353914 15639 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request Sep 11 21:41:14 master-01 mesos-master[15625]: I0911 21:41:14.479132 15639 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3423","09/14/2015 19:29:26",3,"Perf event isolator stops performing sampling if a single timeout occurs. ""Currently the perf event isolator times out a sample after a fixed extra time of 2 seconds on top of the sample time elapses: This should be based on the reap interval maximum. Also, the code stops sampling altogether when a single timeout occurs. We've observed time outs during normal operation, so it would be better for the isolator to continue performing perf sampling in the case of timeouts. It may also make sense to continue sampling in the case of errors, since these may be transient."""," Duration timeout = flags.perf_duration + Seconds(2); ",0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3424","09/14/2015 19:39:00",5,"Support fetching AppC images into the store ""So far AppC store is read only and depends on out of band mechanisms to get the images. We need to design a way to support fetching in a native way. As commented on MESOS-2824: It's unacceptable to have either have: * the slave to be blocked for extended period of time (minutes) which delays the communication between the executor and scheduler, or * the first task that uses this image to be blocked for a long time to wait for the container image to be ready. The solution needs to enable the operator to prefetch a list of """"preferred images"""" without introducing the above problems.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3426","09/14/2015 21:11:06",3,"process::collect and process::await do not perform discard propagation. 
""When aggregating futures with collect, one may discard the outer future: Discard requests should propagate down into the inner futures being collected."""," Promise p1; Promise p2; Future collect = process::collect(p1.future(), p2.future()); collect.discard(); // collect will transition to DISCARDED // However, p{1,2}.future().hasDiscard() remains false // as there is no discard propagation! ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3443","09/16/2015 08:15:56",2,"Windows: Port protobuf_tests.hpp ""We have ported `stout/protobuf.hpp`, but to make the `protobuf_tests.cpp` file to work, we need to port `stout/uuid.hpp`.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3458","09/18/2015 00:31:50",1,"Segfault when accepting or declining inverse offers ""Discovered while writing a test for filters (in regards to inverse offers). Fix here: https://reviews.apache.org/r/38470/""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3459","09/18/2015 00:38:32",1,"Change /machine/up and /machine/down endpoints to take an array ""With [MESOS-3312] committed, the {{/machine/up}} and {{/machine/down}} endpoints should also take an input as an array. It is important to change this before maintenance primitives are released: https://reviews.apache.org/r/38011/ Also, a minor change to the error message from these endpoints: https://reviews.apache.org/r/37969/""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3485","09/22/2015 00:55:05",3,"Make hook execution order deterministic ""Currently, when using multiple hooks of the same type, the execution order is implementation-defined. This is because in src/hook/manager.cpp, the list of available hooks is stored in a {{hashmap}}. A hashmap is probably unnecessary for this task since the number of hooks should remain reasonable. A data structure preserving ordering should be used instead to allow the user to predict the execution order of the hooks. I suggest that the execution order should be the order in which hooks are specified with {{--hooks}} when starting an agent/master. This will be useful when combining multiple hooks after MESOS-3366 is done.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3490","09/22/2015 17:53:23",1,"Mesos UI fails to represent JSON entities ""The Mesos UI is broken, it seems to fail to represent JSON from /state. 
This may have been introduced with https://reviews.apache.org/r/38028 ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3492","09/22/2015 19:49:57",1,"Expose maintenance user doc via the documentation home page ""The committed docs can be found here: http://mesos.apache.org/documentation/latest/maintenance/ We need to add a link to {{docs/home.md}} Also, the doc needs some minor formatting tweaks.""","",0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3496","09/23/2015 00:54:48",2,"Create interface for digest verifier ""Add interface for digest verifier so that we can add implementations for digest types like sha256, sha512 etc""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3497","09/23/2015 00:59:16",3,"Add implementation for sha256 based file content verification. ""https://reviews.apache.org/r/38747/""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3513","09/24/2015 20:30:09",1,"Cgroups Test Filters aborts tests on Centos 6.6 ""Running make check on centos 6.6 causes all tests to abort due to CHECK_SOME test in CgroupsFIlter: """," Build directory: /home/jenkins/workspace/mesos-config-centos6/build F0923 23:00:49.748896 27362 environment.cpp:132] CHECK_SOME(hierarchies_): Failed to determine canonical path of /sys/fs/cgroup/freezer: No such file or directory *** Check failure stack trace: *** @ 0x7fb786ca0c4d google::LogMessage::Fail() @ 0x7fb786ca298c google::LogMessage::SendToLog() @ 0x7fb786ca083c google::LogMessage::Flush() @ 0x7fb786ca3289 google::LogMessageFatal::~LogMessageFatal() @ 0x58e66c mesos::internal::tests::CgroupsFilter::CgroupsFilter() @ 0x58712f mesos::internal::tests::Environment::Environment() @ 0x4c882f main @ 0x7fb782767d5d __libc_start_main @ 0x4d6331 (unknown) make[3]: *** [check-local] Aborted ",0,0,1,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3517","09/25/2015 14:29:10",2,"Building mesos from source fails when OS language is not English ""Line 963 of mesos/3rdparty/libprocess/3rdparty/stout/tests/os_tests.cpp contains the following: EXPECT_TRUE(strings::contains(result.get(), """"No such file or directory"""")); But this does not match when your locale is not English. When changing it to what my terminal gives me: """"Bestand of map bestaat niet"""" then it works just fine. ""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3525","09/26/2015 02:05:58",3,"Figure out how to enforce 64-bit builds on Windows. ""We need to make sure people don't try to compile Mesos on 32-bit architectures. We don't want a Windows repeat of something like this: https://issues.apache.org/jira/browse/MESOS-267""","",0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3540","09/28/2015 23:44:14",2,"Libevent termination triggers Broken Pipe ""When the libevent loop terminates and we unblock the {{SIGPIPE}} signal, the pending {{SIGPIPE}} instantly triggers and causes a broken pipe when the test binary stops running. """," Program received signal SIGPIPE, Broken pipe. [Switching to Thread 0x7ffff18b4700 (LWP 16270)] pthread_sigmask (how=1, newmask=, oldmask=0x7ffff18b3d80) at ../sysdeps/unix/sysv/linux/pthread_sigmask.c:53 53 ../sysdeps/unix/sysv/linux/pthread_sigmask.c: No such file or directory. 
(gdb) bt #0 pthread_sigmask (how=1, newmask=, oldmask=0x7ffff18b3d80) at ../sysdeps/unix/sysv/linux/pthread_sigmask.c:53 #1 0x00000000006fd9a4 in unblock () at ../../../3rdparty/libprocess/3rdparty/stout/include/stout/os/posix/signals.hpp:90 #2 0x00000000007d7915 in run () at ../../../3rdparty/libprocess/src/libevent.cpp:125 #3 0x00000000007950cb in _M_invoke<>(void) () at /usr/include/c++/4.9/functional:1700 #4 0x0000000000795000 in operator() () at /usr/include/c++/4.9/functional:1688 #5 0x0000000000794f6e in _M_run () at /usr/include/c++/4.9/thread:115 #6 0x00007ffff668de30 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6 #7 0x00007ffff79a16aa in start_thread (arg=0x7ffff18b4700) at pthread_create.c:333 #8 0x00007ffff5df1eed in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3551","09/29/2015 20:39:45",3,"Replace use of strerror with thread-safe alternatives strerror_r / strerror_l. ""{{strerror()}} is not required to be thread safe by POSIX and is listed as unsafe on Linux: http://pubs.opengroup.org/onlinepubs/9699919799/ http://man7.org/linux/man-pages/man3/strerror.3.html I don't believe we've seen any issues reported due to this. We should replace occurrences of strerror accordingly, possibly offering a wrapper in stout to simplify callsites.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3552","09/29/2015 22:17:13",3,"CHECK failure due to floating point precision on reservation request ""result.cpus() == cpus() check is failing due to ( double == double ) comparison problem. Root Cause : Framework requested 0.1 cpu reservation for the first task. So far so good. Next Reserve operation — lead to double operations resulting in following double values : results.cpus() : 23.9999999999999964472863211995 cpus() : 24 And the check ( result.cpus() == cpus() ) failed. The double arithmetic operations caused results.cpus() value to be : 23.9999999999999964472863211995 and hence ( 23.9999999999999964472863211995 == 24 ) failed. ""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3554","09/30/2015 01:08:45",3,"Allocator changes trigger large re-compiles ""Due to the templatized nature of the allocator, even small changes trigger large recompiles of the code-base. This make iterating on changes expensive for developers.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3556","09/30/2015 09:03:00",1,"mesos.cli broken in 0.24.x ""The issue was initially reported on the mailing list: http://www.mail-archive.com/user@mesos.apache.org/msg04670.html The format of the master data stored in zookeeper has changed but the mesos.cli does not reflect these changes causing tools like {{mesos-tail}} and {{mesos-ps}} to fail. Example error from {{mesos-tail}}: The problem exists in https://github.com/mesosphere/mesos-cli/blob/master/mesos/cli/master.py#L107. 
The code should be along the lines of: This causes the master address to come back correctly."""," mesos-master ~$ mesos tail -f -n 50 service Traceback (most recent call last): File """"/usr/local/bin/mesos-tail"""", line 11, in sys.exit(main()) File """"/usr/local/lib/python2.7/dist-packages/mesos/cli/cli.py"""", line 61, in wrapper return fn(*args, **kwargs) File """"/usr/local/lib/python2.7/dist-packages/mesos/cli/cmds/tail.py"""", line 55, in main args.task, args.file, fail=(not args.follow)): File """"/usr/local/lib/python2.7/dist-packages/mesos/cli/cluster.py"""", line 27, in files tlist = MASTER.tasks(fltr) File """"/usr/local/lib/python2.7/dist-packages/mesos/cli/master.py"""", line 174, in tasks self._task_list(active_only)))) File """"/usr/local/lib/python2.7/dist-packages/mesos/cli/master.py"""", line 153, in _task_list *[util.merge(x, *keys) for x in self.frameworks(active_only)]) File """"/usr/local/lib/python2.7/dist-packages/mesos/cli/master.py"""", line 185, in frameworks return util.merge(self.state, *keys) File """"/usr/local/lib/python2.7/dist-packages/mesos/cli/util.py"""", line 58, in __get__ value = self.fget(inst) File """"/usr/local/lib/python2.7/dist-packages/mesos/cli/master.py"""", line 123, in state return self.fetch(""""/master/state.json"""").json() File """"/usr/local/lib/python2.7/dist-packages/mesos/cli/master.py"""", line 64, in fetch return requests.get(urlparse.urljoin(self.host, url), **kwargs) File """"/usr/local/lib/python2.7/dist-packages/requests/api.py"""", line 69, in get return request('get', url, params=params, **kwargs) File """"/usr/local/lib/python2.7/dist-packages/requests/api.py"""", line 50, in request response = session.request(method=method, url=url, **kwargs) File """"/usr/local/lib/python2.7/dist-packages/requests/sessions.py"""", line 451, in request prep = self.prepare_request(req) File """"/usr/local/lib/python2.7/dist-packages/requests/sessions.py"""", line 382, in prepare_request hooks=merge_hooks(request.hooks, self.hooks), File """"/usr/local/lib/python2.7/dist-packages/requests/models.py"""", line 304, in prepare self.prepare_url(url, params) File """"/usr/local/lib/python2.7/dist-packages/requests/models.py"""", line 357, in prepare_url raise InvalidURL(*e.args) requests.exceptions.InvalidURL: Failed to parse: 10.100.1.100:5050"""",""""port"""":5050,""""version"""":""""0.24.1""""} try: parsed = json.loads(val) return parsed[""""address""""][""""ip""""] + """":"""" + str(parsed[""""address""""][""""port""""]) except Exception: return val.split(""""@"""")[-1] ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3560","09/30/2015 21:31:16",1,"JSON-based credential files do not work correctly ""Specifying the following credentials file: Then hitting a master endpoint with: Does not work. This is contrary to the text-based credentials file which works: Currently, the password in a JSON-based credentials file needs to be base64-encoded in order for it to work: """," { “credentials”: [ { “principal”: “user”, “secret”: “password” } ] } curl -i -u “user:password” ... 
user password { “credentials”: [ { “principal”: “user”, “secret”: “cGFzc3dvcmQ=” } ] } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3571","10/01/2015 18:52:54",5,"Refactor registry_client ""Refactor registry client component to: - Make methods shorter for readability - Pull out structs not related to registry client into common namespace.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3573","10/02/2015 13:26:18",5,"Mesos does not kill orphaned docker containers ""After upgrade to 0.24.0 we noticed hanging containers appearing. Looks like there were changes between 0.23.0 and 0.24.0 that broke cleanup. Here's how to trigger this bug: 1. Deploy app in docker container. 2. Kill corresponding mesos-docker-executor process 3. Observe hanging container Here are the logs after kill: Another issue: if you restart mesos-slave on the host with orphaned docker containers, they are not getting killed. This was the case before and I hoped for this trick to kill hanging containers, but it doesn't work now. Marking this as critical because it hoards cluster resources and blocks scheduling."""," slave_1 | I1002 12:12:59.362002 7791 docker.cpp:1576] Executor for container 'f083aaa2-d5c3-43c1-b6ba-342de8829fa8' has exited slave_1 | I1002 12:12:59.362284 7791 docker.cpp:1374] Destroying container 'f083aaa2-d5c3-43c1-b6ba-342de8829fa8' slave_1 | I1002 12:12:59.363404 7791 docker.cpp:1478] Running docker stop on container 'f083aaa2-d5c3-43c1-b6ba-342de8829fa8' slave_1 | I1002 12:12:59.363876 7791 slave.cpp:3399] Executor 'sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c' of framework 20150923-122130-2153451692-5050-1-0000 terminated with signal Terminated slave_1 | I1002 12:12:59.367570 7791 slave.cpp:2696] Handling status update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000 from @0.0.0.0:0 slave_1 | I1002 12:12:59.367842 7791 slave.cpp:5094] Terminating task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c slave_1 | W1002 12:12:59.368484 7791 docker.cpp:986] Ignoring updating unknown container: f083aaa2-d5c3-43c1-b6ba-342de8829fa8 slave_1 | I1002 12:12:59.368671 7791 status_update_manager.cpp:322] Received status update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000 slave_1 | I1002 12:12:59.368741 7791 status_update_manager.cpp:826] Checkpointing UPDATE for status update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000 slave_1 | I1002 12:12:59.370636 7791 status_update_manager.cpp:376] Forwarding update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000 to the slave slave_1 | I1002 12:12:59.371335 7791 slave.cpp:2975] Forwarding the update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000 to master@172.16.91.128:5050 slave_1 | I1002 12:12:59.371908 7791 slave.cpp:2899] Status update manager successfully handled status update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 
20150923-122130-2153451692-5050-1-0000 master_1 | I1002 12:12:59.372047 11 master.cpp:4069] Status update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000 from slave 20151002-120829-2153451692-5050-1-S0 at slave(1)@172.16.91.128:5051 (172.16.91.128) master_1 | I1002 12:12:59.372534 11 master.cpp:4108] Forwarding status update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000 master_1 | I1002 12:12:59.373018 11 master.cpp:5576] Updating the latest state of task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000 to TASK_FAILED master_1 | I1002 12:12:59.373447 11 hierarchical.hpp:814] Recovered cpus(*):0.1; mem(*):16; ports(*):[31685-31685] (total: cpus(*):4; mem(*):1001; disk(*):52869; ports(*):[31000-32000], allocated: cpus(*):8.32667e-17) on slave 20151002-120829-2153451692-5050-1-S0 from framework 20150923-122130-2153451692-5050-1-0000 ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3575","10/02/2015 20:07:55",2,"V1 API java/python protos are not generated ""The java/python protos for the V1 api should be generated according to the Makefile; however, they do not show up in the generated build directory.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3579","10/04/2015 02:02:01",2,"FetcherCacheTest.LocalUncachedExtract is flaky ""From ASF CI: https://builds.apache.org/job/Mesos/866/COMPILER=clang,CONFIGURATION=--verbose,OS=ubuntu:14.04,label_exp=docker%7C%7CHadoop/console """," [ RUN ] FetcherCacheTest.LocalUncachedExtract Using temporary directory '/tmp/FetcherCacheTest_LocalUncachedExtract_jHBfeA' I0925 19:15:39.541198 27410 leveldb.cpp:176] Opened db in 3.43934ms I0925 19:15:39.542362 27410 leveldb.cpp:183] Compacted db in 1.136184ms I0925 19:15:39.542428 27410 leveldb.cpp:198] Created db iterator in 35866ns I0925 19:15:39.542448 27410 leveldb.cpp:204] Seeked to beginning of db in 8807ns I0925 19:15:39.542459 27410 leveldb.cpp:273] Iterated through 0 keys in the db in 6325ns I0925 19:15:39.542505 27410 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0925 19:15:39.543143 27438 recover.cpp:449] Starting replica recovery I0925 19:15:39.543393 27438 recover.cpp:475] Replica is in EMPTY status I0925 19:15:39.544373 27436 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request I0925 19:15:39.544791 27433 recover.cpp:195] Received a recover response from a replica in EMPTY status I0925 19:15:39.545284 27433 recover.cpp:566] Updating replica status to STARTING I0925 19:15:39.546155 27436 master.cpp:376] Master c8bf1c95-50f4-4832-a570-c560f0b466ae (f57fd4291168) started on 172.17.1.195:41781 I0925 19:15:39.546257 27433 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 747249ns I0925 19:15:39.546288 27433 replica.cpp:323] Persisted replica status to STARTING I0925 19:15:39.546483 27434 recover.cpp:475] Replica is in STARTING status I0925 19:15:39.546187 27436 master.cpp:378] Flags at startup: --acls="""""""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate=""""true"""" --authenticate_slaves=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" 
--credentials=""""/tmp/FetcherCacheTest_LocalUncachedExtract_jHBfeA/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_slave_ping_timeouts=""""5"""" --quiet=""""false"""" --recovery_slave_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""25secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --slave_ping_timeout=""""15secs"""" --slave_reregister_timeout=""""10mins"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/mesos/mesos-0.26.0/_inst/share/mesos/webui"""" --work_dir=""""/tmp/FetcherCacheTest_LocalUncachedExtract_jHBfeA/master"""" --zk_session_timeout=""""10secs"""" I0925 19:15:39.546567 27436 master.cpp:423] Master only allowing authenticated frameworks to register I0925 19:15:39.546617 27436 master.cpp:428] Master only allowing authenticated slaves to register I0925 19:15:39.546632 27436 credentials.hpp:37] Loading credentials for authentication from '/tmp/FetcherCacheTest_LocalUncachedExtract_jHBfeA/credentials' I0925 19:15:39.546931 27436 master.cpp:467] Using default 'crammd5' authenticator I0925 19:15:39.547044 27436 master.cpp:504] Authorization enabled I0925 19:15:39.547276 27441 whitelist_watcher.cpp:79] No whitelist given I0925 19:15:39.547320 27434 hierarchical.hpp:468] Initialized hierarchical allocator process I0925 19:15:39.547471 27438 replica.cpp:641] Replica in STARTING status received a broadcasted recover request I0925 19:15:39.548318 27443 recover.cpp:195] Received a recover response from a replica in STARTING status I0925 19:15:39.549067 27435 recover.cpp:566] Updating replica status to VOTING I0925 19:15:39.549115 27440 master.cpp:1603] The newly elected leader is master@172.17.1.195:41781 with id c8bf1c95-50f4-4832-a570-c560f0b466ae I0925 19:15:39.549162 27440 master.cpp:1616] Elected as the leading master! 
I0925 19:15:39.549190 27440 master.cpp:1376] Recovering from registrar I0925 19:15:39.549342 27434 registrar.cpp:309] Recovering registrar I0925 19:15:39.549666 27430 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 418187ns I0925 19:15:39.549753 27430 replica.cpp:323] Persisted replica status to VOTING I0925 19:15:39.550089 27442 recover.cpp:580] Successfully joined the Paxos group I0925 19:15:39.550320 27442 recover.cpp:464] Recover process terminated I0925 19:15:39.550904 27430 log.cpp:661] Attempting to start the writer I0925 19:15:39.551955 27434 replica.cpp:477] Replica received implicit promise request with proposal 1 I0925 19:15:39.552351 27434 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 380746ns I0925 19:15:39.552372 27434 replica.cpp:345] Persisted promised to 1 I0925 19:15:39.552896 27436 coordinator.cpp:231] Coordinator attemping to fill missing position I0925 19:15:39.554003 27432 replica.cpp:378] Replica received explicit promise request for position 0 with proposal 2 I0925 19:15:39.554534 27432 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 510572ns I0925 19:15:39.554558 27432 replica.cpp:679] Persisted action at 0 I0925 19:15:39.555516 27443 replica.cpp:511] Replica received write request for position 0 I0925 19:15:39.555595 27443 leveldb.cpp:438] Reading position from leveldb took 65355ns I0925 19:15:39.556177 27443 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 542757ns I0925 19:15:39.556200 27443 replica.cpp:679] Persisted action at 0 I0925 19:15:39.556813 27429 replica.cpp:658] Replica received learned notice for position 0 I0925 19:15:39.557251 27429 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 422272ns I0925 19:15:39.557281 27429 replica.cpp:679] Persisted action at 0 I0925 19:15:39.557315 27429 replica.cpp:664] Replica learned NOP action at position 0 I0925 19:15:39.558061 27442 log.cpp:677] Writer started with ending position 0 I0925 19:15:39.559294 27434 leveldb.cpp:438] Reading position from leveldb took 56536ns I0925 19:15:39.560333 27432 registrar.cpp:342] Successfully fetched the registry (0B) in 10.936064ms I0925 19:15:39.560469 27432 registrar.cpp:441] Applied 1 operations in 41340ns; attempting to update the 'registry' I0925 19:15:39.561244 27441 log.cpp:685] Attempting to append 176 bytes to the log I0925 19:15:39.561378 27436 coordinator.cpp:341] Coordinator attempting to write APPEND action at position 1 I0925 19:15:39.562126 27439 replica.cpp:511] Replica received write request for position 1 I0925 19:15:39.562515 27439 leveldb.cpp:343] Persisting action (195 bytes) to leveldb took 364968ns I0925 19:15:39.562539 27439 replica.cpp:679] Persisted action at 1 I0925 19:15:39.563160 27438 replica.cpp:658] Replica received learned notice for position 1 I0925 19:15:39.563699 27438 leveldb.cpp:343] Persisting action (197 bytes) to leveldb took 455933ns I0925 19:15:39.563730 27438 replica.cpp:679] Persisted action at 1 I0925 19:15:39.563753 27438 replica.cpp:664] Replica learned APPEND action at position 1 I0925 19:15:39.564749 27434 registrar.cpp:486] Successfully updated the 'registry' in 4.214016ms I0925 19:15:39.564893 27434 registrar.cpp:372] Successfully recovered registrar I0925 19:15:39.564950 27442 log.cpp:704] Attempting to truncate the log to 1 I0925 19:15:39.565039 27429 coordinator.cpp:341] Coordinator attempting to write TRUNCATE action at position 2 I0925 19:15:39.565172 27430 master.cpp:1413] Recovered 0 slaves from the Registry (137B) ; allowing 10mins for slaves to 
re-register I0925 19:15:39.565946 27429 replica.cpp:511] Replica received write request for position 2 I0925 19:15:39.566349 27429 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 375473ns I0925 19:15:39.566371 27429 replica.cpp:679] Persisted action at 2 I0925 19:15:39.566994 27431 replica.cpp:658] Replica received learned notice for position 2 I0925 19:15:39.567440 27431 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 437095ns I0925 19:15:39.567483 27431 leveldb.cpp:401] Deleting ~1 keys from leveldb took 31954ns I0925 19:15:39.567498 27431 replica.cpp:679] Persisted action at 2 I0925 19:15:39.567514 27431 replica.cpp:664] Replica learned TRUNCATE action at position 2 I0925 19:15:39.576660 27410 containerizer.cpp:143] Using isolation: posix/cpu,posix/mem,filesystem/posix W0925 19:15:39.577055 27410 backend.cpp:50] Failed to create 'bind' backend: BindBackend requires root privileges I0925 19:15:39.583020 27443 slave.cpp:190] Slave started on 46)@172.17.1.195:41781 I0925 19:15:39.583062 27443 slave.cpp:191] Flags at startup: --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_local_archives_dir=""""/tmp/mesos/images/docker"""" --docker_puller=""""local"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mesos/mesos-0.26.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resource_monitoring_interval=""""1secs"""" --resources=""""cpus(*):1000; mem(*):1000"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4"""" I0925 19:15:39.583472 27443 credentials.hpp:85] Loading credential for authentication from '/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/credential' I0925 19:15:39.583752 27443 slave.cpp:321] Slave using credential for: test-principal I0925 19:15:39.584249 27443 slave.cpp:354] Slave resources: cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] 
I0925 19:15:39.584344 27443 slave.cpp:390] Slave hostname: f57fd4291168 I0925 19:15:39.584362 27443 slave.cpp:395] Slave checkpoint: true I0925 19:15:39.585180 27428 state.cpp:54] Recovering state from '/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/meta' I0925 19:15:39.585383 27440 status_update_manager.cpp:202] Recovering status update manager I0925 19:15:39.585636 27435 containerizer.cpp:386] Recovering containerizer I0925 19:15:39.586380 27438 slave.cpp:4110] Finished recovery I0925 19:15:39.586845 27438 slave.cpp:4267] Querying resource estimator for oversubscribable resources I0925 19:15:39.587059 27430 status_update_manager.cpp:176] Pausing sending status updates I0925 19:15:39.587064 27438 slave.cpp:705] New master detected at master@172.17.1.195:41781 I0925 19:15:39.587139 27438 slave.cpp:768] Authenticating with master master@172.17.1.195:41781 I0925 19:15:39.587163 27438 slave.cpp:773] Using default CRAM-MD5 authenticatee I0925 19:15:39.587321 27438 slave.cpp:741] Detecting new master I0925 19:15:39.587357 27434 authenticatee.cpp:115] Creating new client SASL connection I0925 19:15:39.587574 27438 slave.cpp:4281] Received oversubscribable resources from the resource estimator I0925 19:15:39.587739 27442 master.cpp:5138] Authenticating slave(46)@172.17.1.195:41781 I0925 19:15:39.587853 27441 authenticator.cpp:407] Starting authentication session for crammd5_authenticatee(139)@172.17.1.195:41781 I0925 19:15:39.588052 27439 authenticator.cpp:92] Creating new server SASL connection I0925 19:15:39.588248 27431 authenticatee.cpp:206] Received SASL authentication mechanisms: CRAM-MD5 I0925 19:15:39.588297 27431 authenticatee.cpp:232] Attempting to authenticate with mechanism 'CRAM-MD5' I0925 19:15:39.588443 27437 authenticator.cpp:197] Received SASL authentication start I0925 19:15:39.588506 27437 authenticator.cpp:319] Authentication requires more steps I0925 19:15:39.588677 27443 authenticatee.cpp:252] Received SASL authentication step I0925 19:15:39.588814 27436 authenticator.cpp:225] Received SASL authentication step I0925 19:15:39.588855 27436 auxprop.cpp:102] Request to lookup properties for user: 'test-principal' realm: 'f57fd4291168' server FQDN: 'f57fd4291168' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0925 19:15:39.588876 27436 auxprop.cpp:174] Looking up auxiliary property '*userPassword' I0925 19:15:39.588937 27436 auxprop.cpp:174] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0925 19:15:39.588979 27436 auxprop.cpp:102] Request to lookup properties for user: 'test-principal' realm: 'f57fd4291168' server FQDN: 'f57fd4291168' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0925 19:15:39.588997 27436 auxprop.cpp:124] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0925 19:15:39.589011 27436 auxprop.cpp:124] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0925 19:15:39.589036 27436 authenticator.cpp:311] Authentication success I0925 19:15:39.589126 27443 authenticatee.cpp:292] Authentication success I0925 19:15:39.589192 27437 master.cpp:5168] Successfully authenticated principal 'test-principal' at slave(46)@172.17.1.195:41781 I0925 19:15:39.589238 27433 authenticator.cpp:425] Authentication session cleanup for crammd5_authenticatee(139)@172.17.1.195:41781 I0925 19:15:39.589412 27440 slave.cpp:836] Successfully authenticated with master master@172.17.1.195:41781 I0925 19:15:39.589540 27440 
slave.cpp:1230] Will retry registration in 13.562027ms if necessary I0925 19:15:39.589745 27436 master.cpp:3862] Registering slave at slave(46)@172.17.1.195:41781 (f57fd4291168) with id c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 I0925 19:15:39.590121 27438 registrar.cpp:441] Applied 1 operations in 70627ns; attempting to update the 'registry' I0925 19:15:39.590831 27430 log.cpp:685] Attempting to append 345 bytes to the log I0925 19:15:39.590927 27439 coordinator.cpp:341] Coordinator attempting to write APPEND action at position 3 I0925 19:15:39.591809 27430 replica.cpp:511] Replica received write request for position 3 I0925 19:15:39.592072 27430 leveldb.cpp:343] Persisting action (364 bytes) to leveldb took 221734ns I0925 19:15:39.592099 27430 replica.cpp:679] Persisted action at 3 I0925 19:15:39.592643 27442 replica.cpp:658] Replica received learned notice for position 3 I0925 19:15:39.593215 27442 leveldb.cpp:343] Persisting action (366 bytes) to leveldb took 560946ns I0925 19:15:39.593237 27442 replica.cpp:679] Persisted action at 3 I0925 19:15:39.593255 27442 replica.cpp:664] Replica learned APPEND action at position 3 I0925 19:15:39.594663 27433 registrar.cpp:486] Successfully updated the 'registry' in 4.472832ms I0925 19:15:39.594874 27431 log.cpp:704] Attempting to truncate the log to 3 I0925 19:15:39.595407 27429 slave.cpp:3138] Received ping from slave-observer(45)@172.17.1.195:41781 I0925 19:15:39.595450 27433 coordinator.cpp:341] Coordinator attempting to write TRUNCATE action at position 4 I0925 19:15:39.596017 27442 replica.cpp:511] Replica received write request for position 4 I0925 19:15:39.596029 27429 hierarchical.hpp:675] Added slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 (f57fd4291168) with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] (allocated: ) I0925 19:15:39.595952 27441 master.cpp:3930] Registered slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 at slave(46)@172.17.1.195:41781 (f57fd4291168) with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] I0925 19:15:39.596240 27429 hierarchical.hpp:1326] No resources available to allocate! I0925 19:15:39.596263 27439 slave.cpp:880] Registered with master master@172.17.1.195:41781; given slave ID c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 I0925 19:15:39.596341 27439 fetcher.cpp:77] Clearing fetcher cache I0925 19:15:39.596345 27429 hierarchical.hpp:1421] No inverse offers to send out! 
I0925 19:15:39.596367 27429 hierarchical.hpp:1239] Performed allocation for slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 in 299337ns I0925 19:15:39.596524 27434 status_update_manager.cpp:183] Resuming sending status updates I0925 19:15:39.596571 27442 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 575374ns I0925 19:15:39.596662 27442 replica.cpp:679] Persisted action at 4 I0925 19:15:39.596984 27439 slave.cpp:903] Checkpointing SlaveInfo to '/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/meta/slaves/c8bf1c95-50f4-4832-a570-c560f0b466ae-S0/slave.info' I0925 19:15:39.597522 27434 replica.cpp:658] Replica received learned notice for position 4 I0925 19:15:39.597553 27410 sched.cpp:164] Version: 0.26.0 I0925 19:15:39.597746 27439 slave.cpp:939] Forwarding total oversubscribed resources I0925 19:15:39.598021 27429 master.cpp:4272] Received update of slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 at slave(46)@172.17.1.195:41781 (f57fd4291168) with total oversubscribed resources I0925 19:15:39.598070 27434 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 531503ns I0925 19:15:39.598162 27434 leveldb.cpp:401] Deleting ~2 keys from leveldb took 79081ns I0925 19:15:39.598170 27428 sched.cpp:262] New master detected at master@172.17.1.195:41781 I0925 19:15:39.598206 27434 replica.cpp:679] Persisted action at 4 I0925 19:15:39.598238 27434 replica.cpp:664] Replica learned TRUNCATE action at position 4 I0925 19:15:39.598276 27428 sched.cpp:318] Authenticating with master master@172.17.1.195:41781 I0925 19:15:39.598296 27428 sched.cpp:325] Using default CRAM-MD5 authenticatee I0925 19:15:39.598950 27430 hierarchical.hpp:735] Slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 (f57fd4291168) updated with oversubscribed resources (total: cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) I0925 19:15:39.599242 27430 hierarchical.hpp:1326] No resources available to allocate! I0925 19:15:39.599282 27430 hierarchical.hpp:1421] No inverse offers to send out! 
I0925 19:15:39.599341 27430 hierarchical.hpp:1239] Performed allocation for slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 in 327742ns I0925 19:15:39.599632 27437 authenticatee.cpp:115] Creating new client SASL connection I0925 19:15:39.600005 27428 master.cpp:5138] Authenticating scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 I0925 19:15:39.600170 27435 authenticator.cpp:407] Starting authentication session for crammd5_authenticatee(140)@172.17.1.195:41781 I0925 19:15:39.600518 27433 authenticator.cpp:92] Creating new server SASL connection I0925 19:15:39.600788 27436 authenticatee.cpp:206] Received SASL authentication mechanisms: CRAM-MD5 I0925 19:15:39.600831 27436 authenticatee.cpp:232] Attempting to authenticate with mechanism 'CRAM-MD5' I0925 19:15:39.600944 27433 authenticator.cpp:197] Received SASL authentication start I0925 19:15:39.601019 27433 authenticator.cpp:319] Authentication requires more steps I0925 19:15:39.601150 27436 authenticatee.cpp:252] Received SASL authentication step I0925 19:15:39.601284 27436 authenticator.cpp:225] Received SASL authentication step I0925 19:15:39.601326 27436 auxprop.cpp:102] Request to lookup properties for user: 'test-principal' realm: 'f57fd4291168' server FQDN: 'f57fd4291168' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0925 19:15:39.601341 27436 auxprop.cpp:174] Looking up auxiliary property '*userPassword' I0925 19:15:39.601387 27436 auxprop.cpp:174] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0925 19:15:39.601413 27436 auxprop.cpp:102] Request to lookup properties for user: 'test-principal' realm: 'f57fd4291168' server FQDN: 'f57fd4291168' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0925 19:15:39.601421 27436 auxprop.cpp:124] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0925 19:15:39.601428 27436 auxprop.cpp:124] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0925 19:15:39.601439 27436 authenticator.cpp:311] Authentication success I0925 19:15:39.601508 27433 authenticatee.cpp:292] Authentication success I0925 19:15:39.601644 27433 master.cpp:5168] Successfully authenticated principal 'test-principal' at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 I0925 19:15:39.601671 27436 authenticator.cpp:425] Authentication session cleanup for crammd5_authenticatee(140)@172.17.1.195:41781 I0925 19:15:39.601842 27434 sched.cpp:407] Successfully authenticated with master master@172.17.1.195:41781 I0925 19:15:39.601869 27434 sched.cpp:714] Sending SUBSCRIBE call to master@172.17.1.195:41781 I0925 19:15:39.601955 27434 sched.cpp:747] Will retry registration in 749.975107ms if necessary I0925 19:15:39.602046 27443 master.cpp:2179] Received SUBSCRIBE call for framework 'default' at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 W0925 19:15:39.602128 27443 master.cpp:2186] Framework at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 (authenticated as 'test-principal') does not set 'principal' in FrameworkInfo I0925 19:15:39.602149 27443 master.cpp:1642] Authorizing framework principal '' to receive offers for role '*' I0925 19:15:39.602375 27437 master.cpp:2250] Subscribing framework default with checkpointing enabled and capabilities [ ] I0925 19:15:39.602712 27429 hierarchical.hpp:515] Added framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.602859 27437 sched.cpp:641] 
Framework registered with c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.602905 27437 sched.cpp:655] Scheduler::registered took 30086ns I0925 19:15:39.603204 27429 hierarchical.hpp:1421] No inverse offers to send out! I0925 19:15:39.603234 27429 hierarchical.hpp:1221] Performed allocation for 1 slaves in 506104ns I0925 19:15:39.603520 27438 master.cpp:4967] Sending 1 offers to framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 I0925 19:15:39.603962 27431 sched.cpp:811] Scheduler::resourceOffers took 123790ns I0925 19:15:39.605443 27432 master.cpp:2918] Processing ACCEPT call for offers: [ c8bf1c95-50f4-4832-a570-c560f0b466ae-O0 ] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 at slave(46)@172.17.1.195:41781 (f57fd4291168) for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 I0925 19:15:39.605485 27432 master.cpp:2714] Authorizing framework principal '' to launch task 0 as user 'mesos' I0925 19:15:39.606487 27432 master.hpp:176] Adding task 0 with resources cpus(*):1; mem(*):1 on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 (f57fd4291168) I0925 19:15:39.606586 27432 master.cpp:3248] Launching task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 with resources cpus(*):1; mem(*):1 on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 at slave(46)@172.17.1.195:41781 (f57fd4291168) I0925 19:15:39.606875 27440 slave.cpp:1270] Got assigned task 0 for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.607050 27439 hierarchical.hpp:1103] Recovered cpus(*):999; mem(*):999; disk(*):3.70122e+06; ports(*):[31000-32000] (total: cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: cpus(*):1; mem(*):1) on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 from framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.607087 27440 slave.cpp:4773] Checkpointing FrameworkInfo to '/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/meta/slaves/c8bf1c95-50f4-4832-a570-c560f0b466ae-S0/frameworks/c8bf1c95-50f4-4832-a570-c560f0b466ae-0000/framework.info' I0925 19:15:39.607103 27439 hierarchical.hpp:1140] Framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 filtered slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for 5secs I0925 19:15:39.607573 27440 slave.cpp:4784] Checkpointing framework pid 'scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781' to '/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/meta/slaves/c8bf1c95-50f4-4832-a570-c560f0b466ae-S0/frameworks/c8bf1c95-50f4-4832-a570-c560f0b466ae-0000/framework.pid' I0925 19:15:39.608544 27440 slave.cpp:1386] Launching task 0 for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.615109 27440 slave.cpp:5209] Checkpointing ExecutorInfo to '/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/meta/slaves/c8bf1c95-50f4-4832-a570-c560f0b466ae-S0/frameworks/c8bf1c95-50f4-4832-a570-c560f0b466ae-0000/executors/0/executor.info' I0925 19:15:39.616000 27440 slave.cpp:4852] Launching executor 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/slaves/c8bf1c95-50f4-4832-a570-c560f0b466ae-S0/frameworks/c8bf1c95-50f4-4832-a570-c560f0b466ae-0000/executors/0/runs/4bc31eb2-709b-4b09-a5a9-21a8387e355a' I0925 19:15:39.616510 27441 
containerizer.cpp:640] Starting container '4bc31eb2-709b-4b09-a5a9-21a8387e355a' for executor '0' of framework 'c8bf1c95-50f4-4832-a570-c560f0b466ae-0000' I0925 19:15:39.616612 27440 slave.cpp:5232] Checkpointing TaskInfo to '/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/meta/slaves/c8bf1c95-50f4-4832-a570-c560f0b466ae-S0/frameworks/c8bf1c95-50f4-4832-a570-c560f0b466ae-0000/executors/0/runs/4bc31eb2-709b-4b09-a5a9-21a8387e355a/tasks/0/task.info' I0925 19:15:39.617144 27440 slave.cpp:1604] Queuing task '0' for executor 0 of framework 'c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.617277 27440 slave.cpp:658] Successfully attached file '/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/slaves/c8bf1c95-50f4-4832-a570-c560f0b466ae-S0/frameworks/c8bf1c95-50f4-4832-a570-c560f0b466ae-0000/executors/0/runs/4bc31eb2-709b-4b09-a5a9-21a8387e355a' I0925 19:15:39.619359 27437 launcher.cpp:132] Forked child with pid '30069' for container '4bc31eb2-709b-4b09-a5a9-21a8387e355a' I0925 19:15:39.619583 27437 containerizer.cpp:873] Checkpointing executor's forked pid 30069 to '/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/meta/slaves/c8bf1c95-50f4-4832-a570-c560f0b466ae-S0/frameworks/c8bf1c95-50f4-4832-a570-c560f0b466ae-0000/executors/0/runs/4bc31eb2-709b-4b09-a5a9-21a8387e355a/pids/forked.pid' I0925 19:15:39.622011 27441 fetcher.cpp:299] Starting to fetch URIs for container: 4bc31eb2-709b-4b09-a5a9-21a8387e355a, directory: /tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/slaves/c8bf1c95-50f4-4832-a570-c560f0b466ae-S0/frameworks/c8bf1c95-50f4-4832-a570-c560f0b466ae-0000/executors/0/runs/4bc31eb2-709b-4b09-a5a9-21a8387e355a I0925 19:15:39.633872 27441 fetcher.cpp:756] Fetching URIs using command '/mesos/mesos-0.26.0/_build/src/mesos-fetcher' E0925 19:15:39.724884 27430 fetcher.cpp:515] Failed to run mesos-fetcher: Failed to fetch all URIs for container '4bc31eb2-709b-4b09-a5a9-21a8387e355a' with exit status: 256 Failed to synchronize with slave (it's probably exited) E0925 19:15:39.725486 27443 slave.cpp:3342] Container '4bc31eb2-709b-4b09-a5a9-21a8387e355a' for executor '0' of framework 'c8bf1c95-50f4-4832-a570-c560f0b466ae-0000' failed to start: Failed to fetch all URIs for container '4bc31eb2-709b-4b09-a5a9-21a8387e355a' with exit status: 256 I0925 19:15:39.725620 27430 containerizer.cpp:1097] Destroying container '4bc31eb2-709b-4b09-a5a9-21a8387e355a' I0925 19:15:39.725651 27430 containerizer.cpp:1126] Waiting for the isolators to complete for container '4bc31eb2-709b-4b09-a5a9-21a8387e355a' I0925 19:15:39.825744 27443 containerizer.cpp:1284] Executor for container '4bc31eb2-709b-4b09-a5a9-21a8387e355a' has exited I0925 19:15:39.827075 27429 slave.cpp:3440] Executor '0' of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 exited with status 1 I0925 19:15:39.827324 27429 slave.cpp:2717] Handling status update TASK_FAILED (UUID: 6bb8651c-0668-4724-8fbd-76db8a91adb7) for task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 from @0.0.0.0:0 I0925 19:15:39.827514 27429 slave.cpp:5147] Terminating task 0 W0925 19:15:39.827745 27436 containerizer.cpp:988] Ignoring update for unknown container: 4bc31eb2-709b-4b09-a5a9-21a8387e355a I0925 19:15:39.828073 27440 status_update_manager.cpp:322] Received status update TASK_FAILED (UUID: 6bb8651c-0668-4724-8fbd-76db8a91adb7) for task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.828168 27440 status_update_manager.cpp:499] Creating StatusUpdate stream for task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 
I0925 19:15:39.828661 27440 status_update_manager.cpp:826] Checkpointing UPDATE for status update TASK_FAILED (UUID: 6bb8651c-0668-4724-8fbd-76db8a91adb7) for task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.830041 27440 status_update_manager.cpp:376] Forwarding update TASK_FAILED (UUID: 6bb8651c-0668-4724-8fbd-76db8a91adb7) for task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 to the slave I0925 19:15:39.830292 27434 slave.cpp:3016] Forwarding the update TASK_FAILED (UUID: 6bb8651c-0668-4724-8fbd-76db8a91adb7) for task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 to master@172.17.1.195:41781 I0925 19:15:39.830492 27434 slave.cpp:2940] Status update manager successfully handled status update TASK_FAILED (UUID: 6bb8651c-0668-4724-8fbd-76db8a91adb7) for task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.830641 27432 master.cpp:4415] Status update TASK_FAILED (UUID: 6bb8651c-0668-4724-8fbd-76db8a91adb7) for task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 from slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 at slave(46)@172.17.1.195:41781 (f57fd4291168) I0925 19:15:39.830682 27432 master.cpp:4454] Forwarding status update TASK_FAILED (UUID: 6bb8651c-0668-4724-8fbd-76db8a91adb7) for task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.830842 27432 master.cpp:6081] Updating the latest state of task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 to TASK_FAILED I0925 19:15:39.831075 27431 sched.cpp:918] Scheduler::statusUpdate took 176815ns I0925 19:15:39.831204 27439 hierarchical.hpp:1103] Recovered cpus(*):1; mem(*):1 (total: cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 from framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.831357 27432 master.cpp:6149] Removing task 0 with resources cpus(*):1; mem(*):1 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 at slave(46)@172.17.1.195:41781 (f57fd4291168) I0925 19:15:39.831491 27432 master.cpp:3606] Processing ACKNOWLEDGE call 6bb8651c-0668-4724-8fbd-76db8a91adb7 for task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 I0925 19:15:39.831763 27437 status_update_manager.cpp:394] Received status update acknowledgement (UUID: 6bb8651c-0668-4724-8fbd-76db8a91adb7) for task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.831957 27437 status_update_manager.cpp:826] Checkpointing ACK for status update TASK_FAILED (UUID: 6bb8651c-0668-4724-8fbd-76db8a91adb7) for task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.833057 27437 status_update_manager.cpp:530] Cleaning up status update stream for task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.833407 27432 slave.cpp:2319] Status update manager successfully handled status update acknowledgement (UUID: 6bb8651c-0668-4724-8fbd-76db8a91adb7) for task 0 of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.833447 27432 slave.cpp:5188] Completing task 0 I0925 19:15:39.833470 27432 slave.cpp:3544] Cleaning up executor '0' of framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.833768 27437 gc.cpp:56] Scheduling 
'/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/slaves/c8bf1c95-50f4-4832-a570-c560f0b466ae-S0/frameworks/c8bf1c95-50f4-4832-a570-c560f0b466ae-0000/executors/0/runs/4bc31eb2-709b-4b09-a5a9-21a8387e355a' for gc 6.99999035100741days in the future I0925 19:15:39.833933 27437 gc.cpp:56] Scheduling '/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/slaves/c8bf1c95-50f4-4832-a570-c560f0b466ae-S0/frameworks/c8bf1c95-50f4-4832-a570-c560f0b466ae-0000/executors/0' for gc 6.99999034949333days in the future I0925 19:15:39.834005 27432 slave.cpp:3633] Cleaning up framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.834031 27437 gc.cpp:56] Scheduling '/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/meta/slaves/c8bf1c95-50f4-4832-a570-c560f0b466ae-S0/frameworks/c8bf1c95-50f4-4832-a570-c560f0b466ae-0000/executors/0/runs/4bc31eb2-709b-4b09-a5a9-21a8387e355a' for gc 6.99999034847111days in the future I0925 19:15:39.834106 27430 status_update_manager.cpp:284] Closing status update streams for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:39.834121 27437 gc.cpp:56] Scheduling '/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/meta/slaves/c8bf1c95-50f4-4832-a570-c560f0b466ae-S0/frameworks/c8bf1c95-50f4-4832-a570-c560f0b466ae-0000/executors/0' for gc 6.99999034757926days in the future I0925 19:15:39.834266 27437 gc.cpp:56] Scheduling '/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/slaves/c8bf1c95-50f4-4832-a570-c560f0b466ae-S0/frameworks/c8bf1c95-50f4-4832-a570-c560f0b466ae-0000' for gc 6.99999034594963days in the future I0925 19:15:39.834360 27437 gc.cpp:56] Scheduling '/tmp/FetcherCacheTest_LocalUncachedExtract_LwfzK4/meta/slaves/c8bf1c95-50f4-4832-a570-c560f0b466ae-S0/frameworks/c8bf1c95-50f4-4832-a570-c560f0b466ae-0000' for gc 6.99999034517333days in the future I0925 19:15:40.549545 27428 hierarchical.hpp:1421] No inverse offers to send out! I0925 19:15:40.549640 27428 hierarchical.hpp:1221] Performed allocation for 1 slaves in 849712ns I0925 19:15:40.550092 27442 master.cpp:4967] Sending 1 offers to framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 I0925 19:15:40.550679 27442 sched.cpp:811] Scheduler::resourceOffers took 157498ns I0925 19:15:40.551633 27432 master.cpp:2918] Processing ACCEPT call for offers: [ c8bf1c95-50f4-4832-a570-c560f0b466ae-O1 ] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 at slave(46)@172.17.1.195:41781 (f57fd4291168) for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 I0925 19:15:40.552602 27432 hierarchical.hpp:1103] Recovered cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] (total: cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 from framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:40.552672 27432 hierarchical.hpp:1140] Framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 filtered slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for 5secs I0925 19:15:41.551115 27428 hierarchical.hpp:1521] Filtered offer with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:41.551200 27428 hierarchical.hpp:1326] No resources available to allocate! I0925 19:15:41.551224 27428 hierarchical.hpp:1421] No inverse offers to send out! 
I0925 19:15:41.551239 27428 hierarchical.hpp:1221] Performed allocation for 1 slaves in 595589ns I0925 19:15:42.552183 27433 hierarchical.hpp:1521] Filtered offer with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:42.552254 27433 hierarchical.hpp:1326] No resources available to allocate! I0925 19:15:42.552271 27433 hierarchical.hpp:1421] No inverse offers to send out! I0925 19:15:42.552281 27433 hierarchical.hpp:1221] Performed allocation for 1 slaves in 496429ns I0925 19:15:43.553062 27442 hierarchical.hpp:1521] Filtered offer with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:43.553134 27442 hierarchical.hpp:1326] No resources available to allocate! I0925 19:15:43.553151 27442 hierarchical.hpp:1421] No inverse offers to send out! I0925 19:15:43.553163 27442 hierarchical.hpp:1221] Performed allocation for 1 slaves in 482544ns I0925 19:15:44.554844 27443 hierarchical.hpp:1521] Filtered offer with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:44.554930 27443 hierarchical.hpp:1326] No resources available to allocate! I0925 19:15:44.554954 27443 hierarchical.hpp:1421] No inverse offers to send out! I0925 19:15:44.554970 27443 hierarchical.hpp:1221] Performed allocation for 1 slaves in 699469ns I0925 19:15:45.556754 27442 hierarchical.hpp:1421] No inverse offers to send out! I0925 19:15:45.556805 27442 hierarchical.hpp:1221] Performed allocation for 1 slaves in 702577ns I0925 19:15:45.557119 27437 master.cpp:4967] Sending 1 offers to framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 I0925 19:15:45.557569 27435 sched.cpp:811] Scheduler::resourceOffers took 122887ns I0925 19:15:45.558279 27433 master.cpp:2918] Processing ACCEPT call for offers: [ c8bf1c95-50f4-4832-a570-c560f0b466ae-O2 ] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 at slave(46)@172.17.1.195:41781 (f57fd4291168) for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 I0925 19:15:45.559015 27441 hierarchical.hpp:1103] Recovered cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] (total: cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 from framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:45.559070 27441 hierarchical.hpp:1140] Framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 filtered slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for 5secs I0925 19:15:46.558176 27439 hierarchical.hpp:1521] Filtered offer with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:46.558245 27439 hierarchical.hpp:1326] No resources available to allocate! I0925 19:15:46.558262 27439 hierarchical.hpp:1421] No inverse offers to send out! 
I0925 19:15:46.558274 27439 hierarchical.hpp:1221] Performed allocation for 1 slaves in 509658ns I0925 19:15:47.559289 27429 hierarchical.hpp:1521] Filtered offer with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:47.559360 27429 hierarchical.hpp:1326] No resources available to allocate! I0925 19:15:47.559376 27429 hierarchical.hpp:1421] No inverse offers to send out! I0925 19:15:47.559386 27429 hierarchical.hpp:1221] Performed allocation for 1 slaves in 495131ns I0925 19:15:48.560979 27442 hierarchical.hpp:1521] Filtered offer with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:48.561064 27442 hierarchical.hpp:1326] No resources available to allocate! I0925 19:15:48.561087 27442 hierarchical.hpp:1421] No inverse offers to send out! I0925 19:15:48.561101 27442 hierarchical.hpp:1221] Performed allocation for 1 slaves in 710782ns I0925 19:15:49.562594 27431 hierarchical.hpp:1521] Filtered offer with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:49.562666 27431 hierarchical.hpp:1326] No resources available to allocate! I0925 19:15:49.562683 27431 hierarchical.hpp:1421] No inverse offers to send out! I0925 19:15:49.562695 27431 hierarchical.hpp:1221] Performed allocation for 1 slaves in 525867ns I0925 19:15:50.564564 27428 hierarchical.hpp:1421] No inverse offers to send out! I0925 19:15:50.564620 27428 hierarchical.hpp:1221] Performed allocation for 1 slaves in 621850ns I0925 19:15:50.565004 27432 master.cpp:4967] Sending 1 offers to framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 I0925 19:15:50.565457 27428 sched.cpp:811] Scheduler::resourceOffers took 110220ns I0925 19:15:50.566159 27437 master.cpp:2918] Processing ACCEPT call for offers: [ c8bf1c95-50f4-4832-a570-c560f0b466ae-O3 ] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 at slave(46)@172.17.1.195:41781 (f57fd4291168) for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 I0925 19:15:50.566815 27428 hierarchical.hpp:1103] Recovered cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] (total: cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 from framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:50.566869 27428 hierarchical.hpp:1140] Framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 filtered slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for 5secs I0925 19:15:51.565913 27433 hierarchical.hpp:1521] Filtered offer with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:51.565981 27433 hierarchical.hpp:1326] No resources available to allocate! I0925 19:15:51.565999 27433 hierarchical.hpp:1421] No inverse offers to send out! 
I0925 19:15:51.566009 27433 hierarchical.hpp:1221] Performed allocation for 1 slaves in 504883ns I0925 19:15:52.567260 27432 hierarchical.hpp:1521] Filtered offer with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:52.567333 27432 hierarchical.hpp:1326] No resources available to allocate! I0925 19:15:52.567350 27432 hierarchical.hpp:1421] No inverse offers to send out! I0925 19:15:52.567361 27432 hierarchical.hpp:1221] Performed allocation for 1 slaves in 513500ns I0925 19:15:53.568176 27438 hierarchical.hpp:1521] Filtered offer with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:53.568248 27438 hierarchical.hpp:1326] No resources available to allocate! I0925 19:15:53.568266 27438 hierarchical.hpp:1421] No inverse offers to send out! I0925 19:15:53.568281 27438 hierarchical.hpp:1221] Performed allocation for 1 slaves in 522293ns I0925 19:15:54.570142 27430 hierarchical.hpp:1521] Filtered offer with cpus(*):1000; mem(*):1000; disk(*):3.70122e+06; ports(*):[31000-32000] on slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 for framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:54.570226 27430 hierarchical.hpp:1326] No resources available to allocate! I0925 19:15:54.570250 27430 hierarchical.hpp:1421] No inverse offers to send out! I0925 19:15:54.570264 27430 hierarchical.hpp:1221] Performed allocation for 1 slaves in 626798ns I0925 19:15:54.588251 27442 slave.cpp:4267] Querying resource estimator for oversubscribable resources I0925 19:15:54.588673 27443 slave.cpp:4281] Received oversubscribable resources from the resource estimator I0925 19:15:54.596678 27428 slave.cpp:3138] Received ping from slave-observer(45)@172.17.1.195:41781 ../../src/tests/fetcher_cache_tests.cpp:681: Failure Failed to wait 15secs for awaitFinished(task.get()) I0925 19:15:54.606274 27410 sched.cpp:1771] Asked to stop the driver I0925 19:15:54.606623 27439 master.cpp:1119] Framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 disconnected I0925 19:15:54.606679 27439 master.cpp:2475] Disconnecting framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 I0925 19:15:54.606855 27439 master.cpp:2499] Deactivating framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 I0925 19:15:54.607441 27439 master.cpp:1143] Giving framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 0ns to failover I0925 19:15:54.607770 27433 hierarchical.hpp:599] Deactivated framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:54.609256 27432 master.cpp:4815] Framework failover timeout, removing framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 I0925 19:15:54.609297 27432 master.cpp:5571] Removing framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 (default) at scheduler-dda30e8e-47b7-4b1d-9a96-32364754be63@172.17.1.195:41781 I0925 19:15:54.609501 27433 slave.cpp:1980] Asked to shut down framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 by master@172.17.1.195:41781 W0925 
19:15:54.609549 27433 slave.cpp:1995] Cannot shut down unknown framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:54.609881 27432 master.cpp:919] Master terminating I0925 19:15:54.610255 27440 hierarchical.hpp:552] Removed framework c8bf1c95-50f4-4832-a570-c560f0b466ae-0000 I0925 19:15:54.610627 27440 hierarchical.hpp:706] Removed slave c8bf1c95-50f4-4832-a570-c560f0b466ae-S0 I0925 19:15:54.611197 27436 slave.cpp:3184] master@172.17.1.195:41781 exited W0925 19:15:54.611233 27436 slave.cpp:3187] Master disconnected! Waiting for a new master to be elected I0925 19:15:54.616207 27410 slave.cpp:585] Slave terminating [ FAILED ] FetcherCacheTest.LocalUncachedExtract (15091 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3581","10/05/2015 09:38:27",2,"License headers show up all over doxygen documentation. ""Currently license headers are commented in something resembling Javadoc style, Since we use Javadoc-style comment blocks for doxygen documentation all license headers appear in the generated documentation, potentially and likely hiding the actual documentation. Using {{/*}} to start the comment blocks would be enough to hide them from doxygen, but would likely also result in a largish (though mostly uninteresting) patch."""," /** * Licensed ... ",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3584","10/05/2015 20:59:50",1,"rename libprocess tests to ""libprocess-tests"" ""Stout tests are in a binary named {{stout-tests}}, Mesos tests are in {{mesos-tests}}, but libprocess tests are just {{tests}}. It would be helpful to name them {{libprocess-tests}} ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3586","10/05/2015 23:31:41",1,"MemoryPressureMesosTest.CGROUPS_ROOT_Statistics and CGROUPS_ROOT_SlaveRecovery are flaky ""I am install Mesos 0.24.0 on 4 servers which have very similar hardware and software configurations. After performing {{../configure}}, {{make}}, and {{make check}} some servers have completed successfully and other failed on test {{[ RUN ] MemoryPressureMesosTest.CGROUPS_ROOT_Statistics}}. Is there something I should check in this test? 
"""," PERFORMED MAKE CHECK NODE-001 [ RUN ] MemoryPressureMesosTest.CGROUPS_ROOT_Statistics I1005 14:37:35.585067 38479 exec.cpp:133] Version: 0.24.0 I1005 14:37:35.593789 38497 exec.cpp:207] Executor registered on slave 20151005-143735-2393768202-35106-27900-S0 Registered executor on svdidac038.techlabs.accenture.com Starting task 010b2fe9-4eac-4136-8a8a-6ce7665488b0 Forked command at 38510 sh -c 'while true; do dd count=512 bs=1M if=/dev/zero of=./temp; done' PERFORMED MAKE CHECK NODE-002 [ RUN ] MemoryPressureMesosTest.CGROUPS_ROOT_Statistics I1005 14:38:58.794112 36997 exec.cpp:133] Version: 0.24.0 I1005 14:38:58.802851 37022 exec.cpp:207] Executor registered on slave 20151005-143857-2360213770-50427-26325-S0 Registered executor on svdidac039.techlabs.accenture.com Starting task 9bb317ba-41cb-44a4-b507-d1c85ceabc28 sh -c 'while true; do dd count=512 bs=1M if=/dev/zero of=./temp; done' Forked command at 37028 ../../src/tests/containerizer/memory_pressure_tests.cpp:145: Failure Expected: (usage.get().mem_medium_pressure_counter()) >= (usage.get().mem_critical_pressure_counter()), actual: 5 vs 6 2015-10-05 14:39:00,130:26325(0x2af08cc78700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:37198] zk retcode=-4, errno=111(Connection refused): server refused to accept the client [ FAILED ] MemoryPressureMesosTest.CGROUPS_ROOT_Statistics (4303 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3595","10/06/2015 18:34:27",3,"Framework process hangs after master failover when number frameworks > libprocess thread pool size ""When running multi framework instances per process, if the number of framework created exceeds the libprocess threads then during master failover the zookeeper updates can cause deadlock. E.g. On a machine with 24 cpus, if the framework instance count exceeds 24 ( per process) then when the master fails over all the libprocess threads block updating the cache ( GroupProcess) leading to deadlock. Below is the stack trace of one the libprocess thread : Solution: Create master detector per url instead of per framework. Will send the review request. 
"""," Thread 101 (Thread 0x7f42821f1700 (LWP 5974)): #0 0x000000314100b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f42870d1637 in Gate::arrive(long) () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/_mesos.so #2 0x00007f42870be87c in process::ProcessManager::wait(process::UPID const&) () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.eg g/mesos/native/_mesos.so #3 0x00007f42870c25f7 in process::wait(process::UPID const&, Duration const&) () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.e gg/mesos/native/_mesos.so #4 0x00007f428708e294 in process::Latch::await(Duration const&) () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.egg/mesos/nativ e/_mesos.so #5 0x00007f4286b67dea in process::Future::await(Duration const&) const () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.egg /mesos/native/_mesos.so #6 0x00007f4286b5a0df in process::Future::get() const () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/_me sos.so #7 0x00007f4286ff0508 in ZooKeeper::getChildren(std::basic_string, std::allocator > const&, bool, std::vector, std::allocator >, std::allocator, std::allocator > > >*) () from /Users/mchadha/venv/lib/python2.7/site -packages/mesos.native-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/_mesos.so #8 0x00007f4286cb394e in zookeeper::GroupProcess::cache() () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/_mes os.so #9 0x00007f4286cb1e63 in zookeeper::GroupProcess::updated(long, std::basic_string, std::allocator > const&) () from /Users/mchadha/venv/lib/py thon2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/_mesos.so #10 0x00007f4286ce027a in std::tr1::_Mem_fn, std::allocator > const&)>::operator()(zo okeeper::GroupProcess*, long, std::basic_string, std::allocator > const&) const () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.n ative-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/_mesos.so #11 0x00007f4286ce0067 in std::tr1::result_of, std::allocator > con st&)> ()(std::tr1::result_of, false, true> ()(std::tr1::_Placeholder<1>, std::tr1::tuple)>::type, std::tr1::res ult_of ()(long, std::tr1::_Mu, false, true> ()(std::tr1::_Placeholder<1>, std::tr1::tuple)) >::type, std::tr1::result_of, std::allocator >, false, false> ()(std::basic_string , std::allocator >, std::tr1::_Mu, false, true> ()(std::tr1::_Placeholder<1>, std::tr1::tuple))>::type)>::type std::tr1 ::_Bind, std::allocator > const&)> ()(std::tr1::_Placeholder<1>, lo ng, std::basic_string, std::allocator >)>::__call(std::tr1::_Mu, false, true> ( c onst&)(std::tr1::_Placeholder<1>, std::tr1::tuple), std::tr1::_Index_tuple<0, 1, 2>) () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.nati ve-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/_mesos.so #12 0x00007f4286cdfd16 in std::tr1::result_of, std::allocator > con st&)> ()(std::tr1::result_of, false, true> ()(std::tr1::_Placeholder<1>, std::tr1::tuple)>::type, std::tr1::resu lt_of ()(long, std::tr1::_Mu, false, true> ()(std::tr1::_Placeholder<1>, std::tr1::tuple))>: :type, std::tr1::result_of, std::allocator >, false, false> ()(std::basic_string, std::allocator >, std::tr1::_Mu, false, true> ()(std::tr1::_Placeholder<1>, 
std::tr1::tuple))>::type)>::type std::tr1::_ Bind, std::allocator > const&)> ()(std::tr1::_Placeholder<1>, long, std::basic_string, std::allocator >)>::operator()(zookeeper::GroupProcess*&) () from /Users/mchadha/venv/lib/python2 .7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/_mesos.so #13 0x00007f4286cdf8be in std::tr1::_Function_handler, std::allocator > const&)> ()(std::tr1::_Placeholder<1>, long, std::basic_string, std::allocator >)> >::_ M_invoke(std::tr1::_Any_data const&, zookeeper::GroupProcess*) () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/ _mesos.so #14 0x00007f4286cc2394 in std::tr1::function::operator()(zookeeper::GroupProcess*) const () from /Users/mchadha/venv/lib/python2.7/site-package s/mesos.native-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/_mesos.so #15 0x00007f4286cbc3a2 in void process::internal::vdispatcher(process::ProcessBase*, std::tr1::shared_ptr >) () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/_mesos.so #16 0x00007f4286ccdca5 in std::tr1::result_of, false, true> ()(std::tr1::_Placeholder<1>, std::tr1::tuple)>::type, std::tr1::result_of >, false, false> ()(std::tr1::shared_p tr >, std::tr1::_Mu, false, true> ()(std::tr1::_Placeholder<1>, std::tr1::tuple))>::type))(process::ProcessBase*, std::tr1::shared_ptr >)>::type std::tr1::_Bind, std::tr1::shared_ptr >))(process::ProcessBase*, std::tr1::shared_ptr > )>::__call(std::tr1::_Mu, false, true> ( const&)(std::tr1::_Placeholder<1>, std::tr1::tuple), std: :tr1::_Index_tuple<0, 1>) () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/_mesos.so #17 0x00007f4286cc7a5a in std::tr1::result_of, false, true> ()(std::tr1::_Placeholder<1>, std::tr1::tuple)>::type, std::tr1::result_of >, false, false> ()(std::tr1::shared_pt r >, std::tr1::_Mu, false, true> ()(std::tr1::_Placeholder<1>, std::tr1::tuple))>::type))(process::ProcessBase*, std::tr1::shared_ptr >)>::type std::tr1::_Bind, st d::tr1::shared_ptr >))(process::ProcessBase*, std::tr1::shared_ptr >)> ::operator()(process::ProcessBase*&) () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/_me sos.so #18 0x00007f4286cc2480 in std::tr1::_Function_handler, std::tr1::shared_ptr >))(process::ProcessBase*, std::tr1::shared_ptr >)> >::_M_invoke(std::tr1::_Any_data con st&, process::ProcessBase*) () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/_mesos.so #19 0x00007f42870db546 in std::tr1::function::operator()(process::ProcessBase*) const () from /Users/mchadha/venv/lib/python2.7/site-packages/meso s.native-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/_mesos.so #20 0x00007f42870c1013 in process::ProcessBase::visit(process::DispatchEvent const&) () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x8 6_64.egg/mesos/native/_mesos.so #21 0x00007f42870c5582 in process::DispatchEvent::visit(process::EventVisitor*) const () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x 86_64.egg/mesos/native/_mesos.so #22 0x00007f428666680e in process::ProcessBase::serve(process::Event const&) () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.egg /mesos/native/_mesos.so #23 0x00007f42870bd88f in 
process::ProcessManager::resume(process::ProcessBase*) () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64 .egg/mesos/native/_mesos.so #24 0x00007f42870b1cb9 in process::schedule(void*) () from /Users/mchadha/venv/lib/python2.7/site-packages/mesos.native-0.22.1003-py2.7-linux-x86_64.egg/mesos/native/_mesos.so #25 0x00000031410079d1 in start_thread () from /lib64/libpthread.so.0 #26 0x00000031408e88fd in clone () from /lib64/libc.so.6 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3604","10/08/2015 01:26:59",3,"ExamplesTest.PersistentVolumeFramework does not work in OS X El Capitan ""The example persistent volume framework test does not pass in OS X El Capitan. It seems to be executing the {{/src/.libs/mesos-executor}} directly while it should be executing the wrapper script at {{/src/mesos-executor}} instead. The no-executor framework passes however, which seem to have a very similar configuration with the persistent volume framework. The following is the output that shows the {{dyld}} load error: """," I1008 01:22:52.280140 4284416 launcher.cpp:132] Forked child with pid '1706' for contain er 'b6d3bd96-2ebd-47b1-a16a-a22ffba992aa' I1008 01:22:52.280300 4284416 containerizer.cpp:873] Checkpointing executor's forked pid 1706 to '/var/folders/p6/nfxknpz52dzfc6zqnz23tq180000gn/T/mesos-XXXXXX.5OZ3locB/0/meta/ slaves/34d6329e-69cb-4a72-aee4-fe892bf1c70b-S2/frameworks/34d6329e-69cb-4a72-aee4-fe892b f1c70b-0000/executors/dec188d4-d2dc-40c5-ac4d-881adc3d81c0/runs/b6d3bd96-2ebd-47b1-a16a- a22ffba992aa/pids/forked.pid' dyld: Library not loaded: /usr/local/lib/libmesos-0.26.0.dylib Referenced from: /Users/mpark/Projects/mesos/build/src/.libs/mesos-executor Reason: image not found dyld: Library not loaded: /usr/local/lib/libmesos-0.26.0.dylib Referenced from: /Users/mpark/Projects/mesos/build/src/.libs/mesos-executor Reason: image not found dyld: Library not loaded: /usr/local/lib/libmesos-0.26.0.dylib Referenced from: /Users/mpark/Projects/mesos/build/src/.libs/mesos-executor Reason: image not found I1008 01:22:52.365397 3211264 containerizer.cpp:1284] Executor for container '06b649be-88c8-4047-8fb5-e89bdd096b66' has exited I1008 01:22:52.365433 3211264 containerizer.cpp:1097] Destroying container '06b649be-88c8-4047-8fb5-e89bdd096b66' ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3613","10/08/2015 21:12:55",1,"Port slave/paths.cpp to Windows ""Important subset of dependency tree of changes necessary: slave/paths.cpp: os, path""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3615","10/08/2015 21:17:23",3,"Port slave/state.cpp ""Important subset of changes this depends on: slave/state.cpp: pid, os, path, protobuf, paths, state pid.hpp: address.hpp, ip.hpp address.hpp: ip.hpp, net.hpp net.hpp: ip, networking stuff state: type_utils, pid, os, path, protobuf, uuid type_utils.hpp: uuid.hpp""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3618","10/08/2015 21:27:07",3,"Port slave/containerizer/fetcher.cpp ""Important subset of the dependency tree follows: slave/containerizer/fetcher.cpp: slave, fetcher, collect, dispatch, net collect: future, defer, process fetcher: type_utils, future, process, subprocess dispatch.hpp: process.hpp net.hpp: ip, networking stuff future.hpp: pid.hpp defer.hpp: deferred.hpp, dispatch.hpp deferred.hpp: 
dispatch.hpp, pid.hpp type_utils.hpp: uuid.hpp subprocess: os, future""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3619","10/08/2015 21:28:30",3,"Port slave/containerizer/isolator.cpp to Windows ""Important subset of the dependency tree follows: isolator.hpp: dispatch.hpp, path.hpp isolator: process dispatch.hpp: process.hpp ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3620","10/08/2015 21:29:25",3,"Create slave/containerizer/isolators/filesystem/windows.cpp ""Should look a lot like the posix.cpp flavor. Important subset of the dependency tree follows for the posix flavor: slave/containerizer/isolators/filesystem/posix.cpp: filesystem/posix, fs, os, path filesystem/posix: flags, isolator""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3623","10/08/2015 21:35:40",3,"Port slave/containerizer/mesos/containerizer.cpp to Windows ""Important subset of the dependency tree follows: slave/containerizer/mesos/containerizer.cpp: isolator, collect, defer, io, metrics, reap, subprocess, fs, os, path, protobuf_utils, paths, slave, containerizer, fetcher, launcher, posix, disk, containerizer, launch, provisioner""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3624","10/08/2015 21:37:52",3,"Port slave/containerizer/mesos/launch.cpp to Windows ""Important subset of the dependency tree follows: slave/containerizer/mesos/launch.cpp: os, protobuf, launch launch: subcommand subcommand: flags flags.hpp: os.hpp, path.hpp, fetch.hpp""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3639","10/08/2015 23:44:23",5,"Implement stout/os/windows/killtree.hpp ""killtree() is implemented using Windows Job Objects. The processes created by the executor are associated with a job object using `create_job'. killtree() is simply terminating the job object. Helper functions: `create_job` function creates a job object whose name is derived from the `pid` and associates the `pid` process with the job object. Every process started by the process which is part of the job object becomes part of the job object. The job name should match the name used in `kill_job`. The jobs should be create with JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE and allow the caller to decide how to handle the returned handle. `kill_job` function assumes the process identified by `pid` is associated with a job object whose name is derive from it. Every process started by the process which is part of the job object becomes part of the job object. Destroying the task will close all such processes.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3692","10/09/2015 16:35:50",1,"Clarify error message 'could not chown work directory' ""When deploying a framework I encountered the error message 'could not chown work directory'. It took me a while to figure out that this happened because my framework was registered as a user on my host machine which did not exist on the Docker container and the agent was running as root. 
I suggest to clarify this message by pointing out to either set {{--switch-user}} to {{false}} or to run the framework as the same user as the agent.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3705","10/12/2015 17:02:16",3,"HTTP Pipelining doesn't keep order of requests ""[HTTP 1.1 Pipelining|https://en.wikipedia.org/wiki/HTTP_pipelining] describes a mechanism by which multiple HTTP request can be performed over a single socket. The requirement here is that responses should be send in the same order as requests are being made. Libprocess has some mechanisms built in to deal with pipelining when multiple HTTP requests are made, it is still, however, possible to create a situation in which responses are scrambled respected to the requests arrival. Consider the situation in which there are two libprocess processes, {{processA}} and {{processB}}, each running in a different thread, {{thread2}} and {{thread3}} respectively. The [{{ProcessManager}}|https://github.com/apache/mesos/blob/1d68eed9089659b06a1e710f707818dbcafeec52/3rdparty/libprocess/src/process.cpp#L374] runs in {{thread1}}. {{processA}} is of type {{ProcessA}} which looks roughly as follows: {{processB}} is from type {{ProcessB}} which is just like {{ProcessA}} but routes {{""""bar""""}} instead of {{""""foo""""}}. The situation in which the bug arises is the following: # Two requests, one for {{""""http://server_uri/(1)/foo""""}} and one for {{""""http://server_uri/(2)//bar""""}} are made over the same socket. # The first request arrives to [{{ProcessManager::handle}}|https://github.com/apache/mesos/blob/1d68eed9089659b06a1e710f707818dbcafeec52/3rdparty/libprocess/src/process.cpp#L2202] which is still running in {{thread1}}. This one creates an {{HttpEvent}} and delivers to the handler, in this case {{processA}}. # [{{ProcessManager::deliver}}|https://github.com/apache/mesos/blob/1d68eed9089659b06a1e710f707818dbcafeec52/3rdparty/libprocess/src/process.cpp#L2361] enqueues the HTTP event in to the {{processA}} queue. This happens in {{thread1}}. # The second request arrives to [{{ProcessManager::handle}}|https://github.com/apache/mesos/blob/1d68eed9089659b06a1e710f707818dbcafeec52/3rdparty/libprocess/src/process.cpp#L2202] which is still running in {{thread1}}. Another {{HttpEvent}} is created and delivered to the handler, in this case {{processB}}. # [{{ProcessManager::deliver}}|https://github.com/apache/mesos/blob/1d68eed9089659b06a1e710f707818dbcafeec52/3rdparty/libprocess/src/process.cpp#L2361] enqueues the HTTP event in to the {{processB}} queue. This happens in {{thread1}}. # {{Thread2}} is blocked, so {{processA}} cannot handle the first request, it is stuck in the queue. # {{Thread3}} is idle, so it picks up the request to {{processB}} immediately. # [{{ProcessBase::visit(HttpEvent)}}|https://github.com/apache/mesos/blob/1d68eed9089659b06a1e710f707818dbcafeec52/3rdparty/libprocess/src/process.cpp#L3073] is called in {{thread3}}, this one in turn [dispatches|https://github.com/apache/mesos/blob/1d68eed9089659b06a1e710f707818dbcafeec52/3rdparty/libprocess/src/process.cpp#L3106] the response's future to the {{HttpProxy}} associated with the socket where the request came. At the last point, the bug is evident, the request to {{processB}} will be send before the request to {{processA}} even if the handler takes a long time and the {{processA::bar()}} actually finishes before. The responses are not send in the order the requests are done. h1. 
Reproducer The following is a test which successfully reproduces the issue: {code:title=3rdparty/libprocess/src/tests/http_tests.cpp} #include get1, get2, get3; Latch latch; EXPECT_CALL(*server1.process, get(_)) .WillOnce(DoAll(FutureArg<0>(&get1), InvokeWithoutArgs([&latch]() { latch.await(); }), Return(http::OK(""""1"""")))) .WillOnce(DoAll(FutureArg<0>(&get2), Return(http::OK(""""2"""")))); EXPECT_CALL(*server2.process, get(_)) .WillOnce(DoAll(FutureArg<0>(&get3), Return(http::OK(""""3"""")))); auto url1 = http::URL( """"http"""", server1.process->self().address.ip, server1.process->self().address.port, server1.process->self().id + """"/get""""); auto url2 = http::URL( """"http"""", server1.process->self().address.ip, server1.process->self().address.port, server2.process->self().id + """"/get""""); // Create a connection to the server for HTTP pipelining. Future connect = http::connect(url1); AWAIT_READY(connect); http::Connection connection = connect.get(); http::Request request1; request1.method = """"GET""""; request1.url = url1; request1.keepAlive = true; request1.body = """"1""""; Future response1 = connection.send(request1); http::Request request2 = request1; request2.body = """"2""""; Future response2 = connection.send(request2); http::Request request3; request3.method = """"GET""""; request3.url = url2; request3.keepAlive = true; request3.body = """"3""""; Future response3 = connection.send(request3); // Verify that request1 arrived at server1 and it is the right request. // Now server1 is blocked processing request1 and cannot pick up more events // in the queue. AWAIT_READY(get1); EXPECT_EQ(request1.body, get1->body); // Verify that request3 arrived at server2 and it is the right request. AWAIT_READY(get3); EXPECT_EQ(request3.body, get3->body); // Request2 hasn't been picked up since server1 is still blocked serving // request1. EXPECT_TRUE(get2.isPending()); // Free server1 so it can serve request2. latch.trigger(); // Verify that request2 arrived at server1 and it is the right request. AWAIT_READY(get2); EXPECT_EQ(request2.body, get2->body); // Wait for all responses. AWAIT_READY(response1); AWAIT_READY(response2); AWAIT_READY(response3); // If pipelining works as expected, even though server2 finished processing // its request before server1 even began with request2, the responses should // arrive in the order they were made. 
EXPECT_EQ(request1.body, response1->body); EXPECT_EQ(request2.body, response2->body); EXPECT_EQ(request3.body, response3->body); AWAIT_READY(connection.disconnect()); AWAIT_READY(connection.disconnected()); } {code}"""," class ProcessA : public ProcessBase { public: ProcessA() {} Future foo(const http::Request&) { // … Do something … return http::Ok(); } protected: virtual void initialize() { route(""""/foo"""", None(), &ProcessA::foo); } } #include get1, get2, get3; Latch latch; EXPECT_CALL(*server1.process, get(_)) .WillOnce(DoAll(FutureArg<0>(&get1), InvokeWithoutArgs([&latch]() { latch.await(); }), Return(http::OK(""""1"""")))) .WillOnce(DoAll(FutureArg<0>(&get2), Return(http::OK(""""2"""")))); EXPECT_CALL(*server2.process, get(_)) .WillOnce(DoAll(FutureArg<0>(&get3), Return(http::OK(""""3"""")))); auto url1 = http::URL( """"http"""", server1.process->self().address.ip, server1.process->self().address.port, server1.process->self().id + """"/get""""); auto url2 = http::URL( """"http"""", server1.process->self().address.ip, server1.process->self().address.port, server2.process->self().id + """"/get""""); // Create a connection to the server for HTTP pipelining. Future connect = http::connect(url1); AWAIT_READY(connect); http::Connection connection = connect.get(); http::Request request1; request1.method = """"GET""""; request1.url = url1; request1.keepAlive = true; request1.body = """"1""""; Future response1 = connection.send(request1); http::Request request2 = request1; request2.body = """"2""""; Future response2 = connection.send(request2); http::Request request3; request3.method = """"GET""""; request3.url = url2; request3.keepAlive = true; request3.body = """"3""""; Future response3 = connection.send(request3); // Verify that request1 arrived at server1 and it is the right request. // Now server1 is blocked processing request1 and cannot pick up more events // in the queue. AWAIT_READY(get1); EXPECT_EQ(request1.body, get1->body); // Verify that request3 arrived at server2 and it is the right request. AWAIT_READY(get3); EXPECT_EQ(request3.body, get3->body); // Request2 hasn't been picked up since server1 is still blocked serving // request1. EXPECT_TRUE(get2.isPending()); // Free server1 so it can serve request2. latch.trigger(); // Verify that request2 arrived at server1 and it is the right request. AWAIT_READY(get2); EXPECT_EQ(request2.body, get2->body); // Wait for all responses. AWAIT_READY(response1); AWAIT_READY(response2); AWAIT_READY(response3); // If pipelining works as expected, even though server2 finished processing // its request before server1 even began with request2, the responses should // arrive in the order they were made. EXPECT_EQ(request1.body, response1->body); EXPECT_EQ(request2.body, response2->body); EXPECT_EQ(request3.body, response3->body); AWAIT_READY(connection.disconnect()); AWAIT_READY(connection.disconnected()); } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3716","10/13/2015 12:46:28",3,"Update Allocator interface to support quota ""An allocator should be notified when a quota is being set/updated or removed. Also to support master failover in presence of quota, allocator should be notified about the reregistering agents and allocations towards quota.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3717","10/13/2015 13:03:58",5,"Master recovery in presence of quota ""Quota complicates master failover in several ways. 
The new master should determine if it is possible to satisfy the total quota and notify an operator in case it's not (imagine simultaneous failovers of multiple agents). The new master should hint the allocator how many agents might reconnect in the future to help it decide how to satisfy quota before the majority of agents reconnect.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3718","10/13/2015 13:13:05",5,"Implement Quota support in allocator ""The built-in Hierarchical DRF allocator should support Quota. This includes (but not limited to): adding, updating, removing and satisfying quota; avoiding both overcomitting resources and handing them to non-quota'ed roles in presence of master failover. A [design doc for Quota support in Allocator|https://issues.apache.org/jira/browse/MESOS-2937] provides an overview of a feature set required to be implemented.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3720","10/13/2015 18:49:28",5,"Tests for Quota support in master ""Allocator-agnostic tests for quota support in the master. They can be divided into several groups: * Heuristic check; * Master failover; * Functionality and quota guarantees.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3722","10/13/2015 19:16:47",5,"Prototype quota request authentication ""Quota requests need to be authenticated. This ticket will authenticate quota requests using credentials provided by the `Authorization` field of the HTTP request. This is similar to how authentication is implemented in `Master::Http`.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3723","10/13/2015 19:23:47",5,"Prototype quota request authorization ""When quotas are requested they should authorize their roles. This ticket will authorize quota requests with ACLs. The existing authorization support that has been implemented in MESOS-1342 will be extended to add a `request_quotas` ACL.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3732","10/14/2015 16:48:21",1,"Speed up FaultToleranceTest.FrameworkReregister test ""FaultToleranceTest.FrameworkReregister test takes more than one second to complete: There must be a {{1s}} timeout somewhere which we should mitigate via {{Clock::advance()}}."""," [ RUN ] FaultToleranceTest.FrameworkReregister [ OK ] FaultToleranceTest.FrameworkReregister (1056 ms) ",0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3739","10/15/2015 00:41:11",2,"Mesos does not set Content-Type for 400 Bad Request ""While integrating with the HTTP Scheduler API I encountered the following scenario. The message below was serialized to protobuf and sent as the POST body {code:title=message} call { type: ACKNOWLEDGE, acknowledge: { uuid: , agentID: { value: """"20151012-182734-16777343-5050-8978-S2"""" }, taskID: { value: """"task-1"""" } } } I received the following response {code:title=Response Headers} HTTP/1.1 400 Bad Request Date: Wed, 14 Oct 2015 23:21:36 GMT Content-Length: 74 Failed to validate Scheduler::Call: Expecting 'framework_id' to be present {code} Even though my accept header made no mention of {{text/plain}} the message body returned to me is {{text/plain}}. 
Additionally, there is no {{Content-Type}} header set on the response so I can't even do anything intelligently in my response handler."""," call { type: ACKNOWLEDGE, acknowledge: { uuid: , agentID: { value: """"20151012-182734-16777343-5050-8978-S2"""" }, taskID: { value: """"task-1"""" } } } POST /api/v1/scheduler HTTP/1.1 Content-Type: application/x-protobuf Accept: application/x-protobuf Content-Length: 73 Host: localhost:5050 User-Agent: RxNetty Client HTTP/1.1 400 Bad Request Date: Wed, 14 Oct 2015 23:21:36 GMT Content-Length: 74 Failed to validate Scheduler::Call: Expecting 'framework_id' to be present ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3743","10/15/2015 13:35:55",2,"Provide diagnostic output in agent log when fetching fails ""When fetching fails, the fetcher has written log output to stderr in the task sandbox, but it is not easy to get to. It may even be impossible to get to if one only has the agent log available and no more access to the sandbox. This is for instance the case when looking at output from a CI run. The fetcher actor in the agent detects if the external fetcher program claims to have succeeded or not. When it exits with an error code, we could grab the fetcher log from the stderr file in the sandbox and append it to the agent log. This is similar to this patch: https://reviews.apache.org/r/37813/ The difference is that the output of the latter is triggered by test failures outside the fetcher, whereas what is proposed here is triggering upon failures inside the fetcher.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3748","10/16/2015 00:24:27",1,"HTTP scheduler library does not gracefully parse invalid resource identifiers ""If you pass a nonsense string for """"master"""" into a framework using the C++ HTTP scheduler library, the framework segfaults. 
For example, using the example frameworks: {code:title=Scheduler Driver} build/src/test-framework --master=""""asdf://127.0.0.1:5050"""" Failed to create a master detector for 'asdf://127.0.0.1:5050': Failed to parse 'asdf://127.0.0.1:5050' Results in {code:title=Stack Trace} * thread #2: tid = 0x28b6bb, 0x0000000100ad03ca libmesos-0.26.0.dylib`mesos::v1::scheduler::MesosProcess::initialize(this=0x00000001076031a0) + 42 at scheduler.cpp:213, stop reason = EXC_BAD_ACCESS (code=1, address=0x0) * frame #0: 0x0000000100ad03ca libmesos-0.26.0.dylib`mesos::v1::scheduler::MesosProcess::initialize(this=0x00000001076031a0) + 42 at scheduler.cpp:213 frame #1: 0x0000000100ad05f2 libmesos-0.26.0.dylib`virtual thunk to mesos::v1::scheduler::MesosProcess::initialize(this=0x00000001076031a0) + 34 at scheduler.cpp:210 frame #2: 0x00000001022b60f3 libmesos-0.26.0.dylib`::resume() + 931 at process.cpp:2449 frame #3: 0x00000001022c131c libmesos-0.26.0.dylib`::operator()() + 268 at process.cpp:2174 frame #4: 0x00000001022c0fa2 libmesos-0.26.0.dylib`::__thread_proxy > > > >() [inlined] __invoke<(lambda at ../../../3rdparty/libprocess/src/process.cpp:2158:35) &, const std::__1::atomic &> + 27 at __functional_base:415 frame #5: 0x00000001022c0f87 libmesos-0.26.0.dylib`::__thread_proxy > > > >() [inlined] __apply_functor<(lambda at ../../../3rdparty/libprocess/src/process.cpp:2158:35), std::__1::tuple > >, 0, std::__1::tuple<> > + 55 at functional:2060 frame #6: 0x00000001022c0f50 libmesos-0.26.0.dylib`::__thread_proxy > > > >() [inlined] operator()<> + 41 at functional:2123 frame #7: 0x00000001022c0f27 libmesos-0.26.0.dylib`::__thread_proxy > > > >() [inlined] __invoke > >> + 14 at __functional_base:415 frame #8: 0x00000001022c0f19 libmesos-0.26.0.dylib`::__thread_proxy > > > >() [inlined] __thread_execute > >> + 25 at thread:337 frame #9: 0x00000001022c0f00 libmesos-0.26.0.dylib`::__thread_proxy > > > >() + 368 at thread:347 frame #10: 0x00007fff964c705a libsystem_pthread.dylib`_pthread_body + 131 frame #11: 0x00007fff964c6fd7 libsystem_pthread.dylib`_pthread_start + 176 frame #12: 0x00007fff964c43ed libsystem_pthread.dylib`thread_start + 13 {code}"""," build/src/test-framework --master=""""asdf://127.0.0.1:5050"""" Failed to create a master detector for 'asdf://127.0.0.1:5050': Failed to parse 'asdf://127.0.0.1:5050' export DEFAULT_PRINCIPAL=root build/src/event-call-framework --master=""""asdf://127.0.0.1:5050"""" I1015 16:18:45.432075 2062201600 scheduler.cpp:157] Version: 0.26.0 Segmentation fault: 11 * thread #2: tid = 0x28b6bb, 0x0000000100ad03ca libmesos-0.26.0.dylib`mesos::v1::scheduler::MesosProcess::initialize(this=0x00000001076031a0) + 42 at scheduler.cpp:213, stop reason = EXC_BAD_ACCESS (code=1, address=0x0) * frame #0: 0x0000000100ad03ca libmesos-0.26.0.dylib`mesos::v1::scheduler::MesosProcess::initialize(this=0x00000001076031a0) + 42 at scheduler.cpp:213 frame #1: 0x0000000100ad05f2 libmesos-0.26.0.dylib`virtual thunk to mesos::v1::scheduler::MesosProcess::initialize(this=0x00000001076031a0) + 34 at scheduler.cpp:210 frame #2: 0x00000001022b60f3 libmesos-0.26.0.dylib`::resume() + 931 at process.cpp:2449 frame #3: 0x00000001022c131c libmesos-0.26.0.dylib`::operator()() + 268 at process.cpp:2174 frame #4: 0x00000001022c0fa2 libmesos-0.26.0.dylib`::__thread_proxy > > > >() [inlined] __invoke<(lambda at ../../../3rdparty/libprocess/src/process.cpp:2158:35) &, const std::__1::atomic &> + 27 at __functional_base:415 frame #5: 0x00000001022c0f87 libmesos-0.26.0.dylib`::__thread_proxy > > > >() [inlined] 
__apply_functor<(lambda at ../../../3rdparty/libprocess/src/process.cpp:2158:35), std::__1::tuple > >, 0, std::__1::tuple<> > + 55 at functional:2060 frame #6: 0x00000001022c0f50 libmesos-0.26.0.dylib`::__thread_proxy > > > >() [inlined] operator()<> + 41 at functional:2123 frame #7: 0x00000001022c0f27 libmesos-0.26.0.dylib`::__thread_proxy > > > >() [inlined] __invoke > >> + 14 at __functional_base:415 frame #8: 0x00000001022c0f19 libmesos-0.26.0.dylib`::__thread_proxy > > > >() [inlined] __thread_execute > >> + 25 at thread:337 frame #9: 0x00000001022c0f00 libmesos-0.26.0.dylib`::__thread_proxy > > > >() + 368 at thread:347 frame #10: 0x00007fff964c705a libsystem_pthread.dylib`_pthread_body + 131 frame #11: 0x00007fff964c6fd7 libsystem_pthread.dylib`_pthread_start + 176 frame #12: 0x00007fff964c43ed libsystem_pthread.dylib`thread_start + 13 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3749","10/16/2015 17:46:42",1,"Configuration docs are missing --enable-libevent and --enable-ssl ""The {{\-\-enable-libevent}} and {{\-\-enable-ssl}} config flags are currently not documented in the """"Configuration"""" docs with the rest of the flags. They should be added.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3751","10/16/2015 18:45:36",2,"MESOS_NATIVE_JAVA_LIBRARY not set on MesosContainerize tasks with --executor_environmnent_variables ""When using --executor_environment_variables, and having MESOS_NATIVE_JAVA_LIBRARY in the environment of mesos-slave, the mesos containerizer does not set MESOS_NATIVE_JAVA_LIBRARY itself. Relevant code: https://github.com/apache/mesos/blob/14f7967ef307f3d98e3a4b93d92d6b3a56399b20/src/slave/containerizer/containerizer.cpp#L281 It sees that the variable is in the mesos-slave's environment (os::getenv), rather than checking if it is set in the environment variable set.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3753","10/16/2015 23:27:29",13,"Test the HTTP Scheduler library with SSL enabled ""Currently, the HTTP Scheduler library does not support SSL-enabled Mesos. (You can manually test this by spinning up an SSL-enabled master and attempt to run the event-call framework example against it.) We need to add tests that check the HTTP Scheduler library against SSL-enabled Mesos: * with downgrade support, * with required framework/client-side certifications, * with/without verification of certificates (master-side), * with/without verification of certificates (framework-side), * with a custom certificate authority (CA) These options should be controlled by the same environment variables found on the [SSL user doc|http://mesos.apache.org/documentation/latest/ssl/]. Note: This issue will be broken down into smaller sub-issues as bugs/problems are discovered.""","",0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3756","10/19/2015 11:23:21",13,"Generalized HTTP Authentication Modules ""Libprocess is going to factor out an authentication interface: MESOS-3231 Here we propose that Mesos can provide implementations for this interface as Mesos modules.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3759","10/19/2015 19:08:17",3,"Document messages.proto ""The messages we pass between Mesos components are largely undocumented. 
See this [TODO|https://github.com/apache/mesos/blob/19f14d06bac269b635657960d8ea8b2928b7830c/src/messages/messages.proto#L23].""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3762","10/19/2015 23:43:32",3,"Refactor SSLTest fixture such that MesosTest can use the same helpers. ""In order to write tests that exercise SSL with other components of Mesos, such as the HTTP scheduler library, we need to use the setup/teardown logic found in the {{SSLTest}} fixture. Currently, the test fixtures have separate inheritance structures like this: where {{::testing::Test}} is a gtest class. The plan is the following: # Change {{SSLTest}} to inherit from {{TemporaryDirectoryTest}}. This will require moving the setup (generation of keys and certs) from {{SetUpTestCase}} to {{SetUp}}. At the same time, *some* of the cleanup logic in the SSLTest will not be needed. # Move the logic of generating keys/certs into helpers, so that individual tests can call them when needed, much like {{MesosTest}}. # Write a child class of {{SSLTest}} which has the same functionality as the existing {{SSLTest}}, for use by the existing tests that rely on {{SSLTest}} or the {{RegistryClientTest}}. # Have {{MesosTest}} inherit from {{SSLTest}} (which might be renamed during the refactor). If Mesos is not compiled with {{--enable-ssl}}, then {{SSLTest}} could be {{#ifdef}}'d into any empty class. The resulting structure should be like: """," SSLTest <- ::testing::Test MesosTest <- TemporaryDirectoryTest <- ::testing::Test MesosTest <- SSLTest <- TemporaryDirectoryTest <- ::testing::Test ChildOfSSLTest / ",0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3771","10/20/2015 19:05:36",2,"Mesos JSON API creates invalid JSON due to lack of binary data / non-ASCII handling ""Spark encodes some binary data into the ExecutorInfo.data field. This field is sent as a """"bytes"""" Protobuf value, which can have arbitrary non-UTF8 data. If you have such a field, it seems that it is splatted out into JSON without any regards to proper character encoding: I suspect this is because the HTTP api emits the executorInfo.data directly: I think this may be because the custom JSON processing library in stout seems to not have any idea of what a byte array is. I'm guessing that some implicit conversion makes it get written as a String instead, but: Thank you for any assistance here. 
Our cluster is currently entirely down -- the frameworks cannot handle parsing the invalid JSON produced (it is not even valid utf-8) """," 0006b0b0 2e 73 70 61 72 6b 2e 65 78 65 63 75 74 6f 72 2e |.spark.executor.| 0006b0c0 4d 65 73 6f 73 45 78 65 63 75 74 6f 72 42 61 63 |MesosExecutorBac| 0006b0d0 6b 65 6e 64 22 7d 2c 22 64 61 74 61 22 3a 22 ac |kend""""},""""data"""":"""".| 0006b0e0 ed 5c 75 30 30 30 30 5c 75 30 30 30 35 75 72 5c |.\u0000\u0005ur\| 0006b0f0 75 30 30 30 30 5c 75 30 30 30 66 5b 4c 73 63 61 |u0000\u000f[Lsca| 0006b100 6c 61 2e 54 75 70 6c 65 32 3b 2e cc 5c 75 30 30 |la.Tuple2;..\u00| JSON::Object model(const ExecutorInfo& executorInfo) { JSON::Object object; object.values[""""executor_id""""] = executorInfo.executor_id().value(); object.values[""""name""""] = executorInfo.name(); object.values[""""data""""] = executorInfo.data(); object.values[""""framework_id""""] = executorInfo.framework_id().value(); object.values[""""command""""] = model(executorInfo.command()); object.values[""""resources""""] = model(executorInfo.resources()); return object; } inline std::ostream& operator<<(std::ostream& out, const String& string) { // TODO(benh): This escaping DOES NOT handle unicode, it encodes as ASCII. // See RFC4627 for the JSON string specificiation. return out << picojson::value(string.value).serialize(); } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3785","10/21/2015 15:52:42",5,"Use URI content modification time to trigger fetcher cache updates. ""Instead of using checksums to trigger fetcher cache updates, we can for starters use the content modification time (mtime), which is available for a number of download protocols, e.g. HTTP and HDFS. Proposal: Instead of just fetching the content size, we fetch both size and mtime together. As before, if there is no size, then caching fails and we fall back on direct downloading to the sandbox. Assuming a size is given, we compare the mtime from the fetch URI with the mtime known to the cache. If it differs, we update the cache. (As a defensive measure, a difference in size should also trigger an update.) Not having an mtime available at the fetch URI is simply treated as a unique valid mtime value that differs from all others. This means that when initially there is no mtime, cache content remains valid until there is one. Thereafter, anew lack of an mtime invalidates the cache once. In other words: any change from no mtime to having one or back is the same as encountering a new mtime. Note that this scheme does not require any new protobuf fields. ""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3795","10/23/2015 09:49:53",2,"process::io::write takes parameter as void* which could be const ""In libprocess we have which expects a non-{{const}} {{void*}} for its {{data}} parameter. Under the covers {{data}} appears to be handled as a {{const}} (like one would expect from the signature its inspiration {{::write}}). This function is not used too often, but since it expects a non-{{const}} value for {{data}} automatic conversions to {{void*}} from other pointer types are disabled; instead callers seem cast manually to {{void*}} -- often with C-style casts. We should sync this method's signature with that of {{::write}}. 
In addition to following the expected semantics of {{::write}}, having this work without casts with any pointer value {{data}} would make it easier to interface this with character literals, or raw data ptrs from STL containers (e.g. {{Container::data}}). It would probably also indirectly eliminate temptation to use C-casts."""," Future write(int fd, void* data, size_t size); ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3820","11/02/2015 23:40:15",3,"Test-only libprocess reinitialization ""*Background* Libprocess initialization includes the spawning of a variety of global processes and the creation of the server socket which listens for incoming requests. Some properties of the server socket are configured via environment variables, such as the IP and port or the SSL configuration. In the case of tests, libprocess is initialized once per test binary. This means that testing different configurations (SSL in particular) is cumbersome as a separate process would be needed for every test case. *Proposal* # Add some optional code between some tests like: See [MESOS-3863] for more on {{process::finalize}}."""," // Cleanup all of libprocess's state, as if we're starting anew. process::finalize(); // For tests that need to test SSL connections with the Master: openssl::reinitialize(); process::initialize(); ",1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3831","11/04/2015 22:54:02",3,"Document operator HTTP endpoints ""These are not exhaustively documented; they probably should be. Some endpoints have docs: e.g., {{/reserve}} and {{/unreserve}} are described in the reservation doc page. But it would be good to have a single page that lists all the endpoints and their semantics.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3833","11/05/2015 00:29:42",2,"/help endpoints do not work for nested paths ""Mesos displays the list of all supported endpoints starting at a given path prefix using the {{/help}} suffix, e.g. {{master:5050/help}}. It seems that the {{help}} functionality is broken for URL's having nested paths e.g. {{master:5050/help/master/machine/down}}. The response returned is: {quote} Malformed URL, expecting '/help/id/name/' {quote}""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3849","11/07/2015 12:41:47",1,"Corrected style in Makefiles ""Order of files in Makefiles is not strictly alphabetic""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3851","11/08/2015 07:30:53",2,"Investigate recent crashes in Command Executor ""Post https://reviews.apache.org/r/38900 i.e. updating CommandExecutor to support rootfs. There seem to be some tests showing frequent crashes due to assert violations. {{FetcherCacheTest.SimpleEviction}} failed due to the following log: The reason seems to be a race between the executor receiving a {{RunTaskMessage}} before {{ExecutorRegisteredMessage}} leading to the {{CHECK_SOME(executorInfo)}} failure. 
Link to complete log: https://issues.apache.org/jira/browse/MESOS-2831?focusedCommentId=14995535&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14995535 Another related failure from {{ExamplesTest.PersistentVolumeFramework}} Full logs at: https://builds.apache.org/job/Mesos/1191/COMPILER=gcc,CONFIGURATION=--verbose,OS=centos:7,label_exp=docker%7C%7CHadoop/consoleFull"""," I1107 19:36:46.360908 30657 slave.cpp:1793] Sending queued task '3' to executor ''3' of framework 7d94c7fb-8950-4bcf-80c1-46112292dcd6-0000 at executor(1)@172.17.5.200:33871' I1107 19:36:46.363682 1236 exec.cpp:297] I1107 19:36:46.373569 1245 exec.cpp:210] Executor registered on slave 7d94c7fb-8950-4bcf-80c1-46112292dcd6-S0 @ 0x7f9f5a7db3fa google::LogMessage::Fail() I1107 19:36:46.394081 1245 exec.cpp:222] Executor::registered took 395411ns @ 0x7f9f5a7db359 google::LogMessage::SendToLog() @ 0x7f9f5a7dad6a google::LogMessage::Flush() @ 0x7f9f5a7dda9e google::LogMessageFatal::~LogMessageFatal() @ 0x48d00a _CheckFatal::~_CheckFatal() @ 0x49c99d mesos::internal::CommandExecutorProcess::launchTask() @ 0x4b3dd7 _ZZN7process8dispatchIN5mesos8internal22CommandExecutorProcessEPNS1_14ExecutorDriverERKNS1_8TaskInfoES5_S6_EEvRKNS_3PIDIT_EEMSA_FvT0_T1_ET2_T3_ENKUlPNS_11ProcessBaseEE_clESL_ @ 0x4c470c _ZNSt17_Function_handlerIFvPN7process11ProcessBaseEEZNS0_8dispatchIN5mesos8internal22CommandExecutorProcessEPNS5_14ExecutorDriverERKNS5_8TaskInfoES9_SA_EEvRKNS0_3PIDIT_EEMSE_FvT0_T1_ET2_T3_EUlS2_E_E9_M_invokeERKSt9_Any_dataS2_ @ 0x7f9f5a761b1b std::function<>::operator()() @ 0x7f9f5a749935 process::ProcessBase::visit() @ 0x7f9f5a74d700 process::DispatchEvent::visit() @ 0x48e004 process::ProcessBase::serve() @ 0x7f9f5a745d21 process::ProcessManager::resume() @ 0x7f9f5a742f52 _ZZN7process14ProcessManager12init_threadsEvENKUlRKSt11atomic_boolE_clES3_ @ 0x7f9f5a74cf2c _ZNSt5_BindIFZN7process14ProcessManager12init_threadsEvEUlRKSt11atomic_boolE_St17reference_wrapperIS3_EEE6__callIvIEILm0EEEET_OSt5tupleIIDpT0_EESt12_Index_tupleIIXspT1_EEE @ 0x7f9f5a74cedc _ZNSt5_BindIFZN7process14ProcessManager12init_threadsEvEUlRKSt11atomic_boolE_St17reference_wrapperIS3_EEEclIIEvEET0_DpOT_ @ 0x7f9f5a74ce6e _ZNSt12_Bind_simpleIFSt5_BindIFZN7process14ProcessManager12init_threadsEvEUlRKSt11atomic_boolE_St17reference_wrapperIS4_EEEvEE9_M_invokeIIEEEvSt12_Index_tupleIIXspT_EEE @ 0x7f9f5a74cdc5 _ZNSt12_Bind_simpleIFSt5_BindIFZN7process14ProcessManager12init_threadsEvEUlRKSt11atomic_boolE_St17reference_wrapperIS4_EEEvEEclEv @ 0x7f9f5a74cd5e _ZNSt6thread5_ImplISt12_Bind_simpleIFSt5_BindIFZN7process14ProcessManager12init_threadsEvEUlRKSt11atomic_boolE_St17reference_wrapperIS6_EEEvEEE6_M_runEv @ 0x7f9f5624f1e0 (unknown) @ 0x7f9f564a8df5 start_thread @ 0x7f9f559b71ad __clone I1107 19:36:46.551370 30656 containerizer.cpp:1257] Executor for container '6553a617-6b4a-418d-9759-5681f45ff854' has exited I1107 19:36:46.551429 30656 containerizer.cpp:1074] Destroying container '6553a617-6b4a-418d-9759-5681f45ff854' I1107 19:36:46.553869 30656 containerizer.cpp:1257] Executor for container 'd2c1f924-c92a-453e-82b1-c294d09c4873' has exited @ 0x7f4f71529cbd google::LogMessage::SendToLog() I1107 13:15:09.949987 31573 slave.cpp:2337] Status update manager successfully handled status update acknowledgement (UUID: 721c7316-5580-4636-a83a-098e3bd4ed1f) for task ad90531f-d3d8-43f6-96f2-c81c4548a12d of framework ac4ea54a-7d19-4e41-9ee3-1a761f8e5b0f-0000 @ 0x7f4f715296ce google::LogMessage::Flush() @ 0x7f4f7152c402 
google::LogMessageFatal::~LogMessageFatal() @ 0x48d00a _CheckFatal::~_CheckFatal() @ 0x49c99d mesos::internal::CommandExecutorProcess::launchTask() @ 0x4b3dd7 _ZZN7process8dispatchIN5mesos8internal22CommandExecutorProcessEPNS1_14ExecutorDriverERKNS1_8TaskInfoES5_S6_EEvRKNS_3PIDIT_EEMSA_FvT0_T1_ET2_T3_ENKUlPNS_11ProcessBaseEE_clESL_ @ 0x4c470c _ZNSt17_Function_handlerIFvPN7process11ProcessBaseEEZNS0_8dispatchIN5mesos8internal22CommandExecutorProcessEPNS5_14ExecutorDriverERKNS5_8TaskInfoES9_SA_EEvRKNS0_3PIDIT_EEMSE_FvT0_T1_ET2_T3_EUlS2_E_E9_M_invokeERKSt9_Any_dataS2_ @ 0x7f4f714b047f std::function<>::operator()() @ 0x7f4f71498299 process::ProcessBase::visit() @ 0x7f4f7149c064 process::DispatchEvent::visit() @ 0x48e004 process::ProcessBase::serve() @ 0x7f4f71494685 process::ProcessManager::resume() ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3854","11/09/2015 09:39:16",5,"Finalize design for generalized Authorizer interface ""Finalize the structure the interface and achieve consensus on the design doc proposed in MESOS-2949. https://docs.google.com/document/d/1-XARWJFUq0r_TgRHz_472NvLZNjbqE4G8c2JL44OSMQ/edit""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3861","11/09/2015 18:52:55",3,"Authenticate quota requests ""Quota requests need to be authenticated. This ticket will authenticate quota requests using credentials provided by the {{Authorization}} field of the HTTP request. This is similar to how authentication is implemented in {{Master::Http}}.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3862","11/09/2015 18:56:24",5,"Authorize set quota requests. ""When quotas are requested they should authorize their roles. This ticket will authorize quota requests with ACLs. The existing authorization support that has been implemented in MESOS-1342 will be extended to add a `request_quotas` ACL.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3863","11/09/2015 22:13:04",2,"Investigate the requirements of programmatically re-initializing libprocess ""This issue is for investigating what needs to be added/changed in {{process::finalize}} such that {{process::initialize}} will start on a clean slate. Additional issues will be created once done. Also see [the parent issue|MESOS-3820]. {{process::finalize}} should cover the following components: * {{__s__}} (the server socket) ** {{delete}} should be sufficient. This closes the socket and thereby prevents any further interaction from it. * {{process_manager}} ** Related prior work: [MESOS-3158] ** Cleans up the garbage collector, help, logging, profiler, statistics, route processes (including [this one|https://github.com/apache/mesos/blob/3bda55da1d0b580a1b7de43babfdc0d30fbc87ea/3rdparty/libprocess/src/process.cpp#L963], which currently leaks a pointer). ** Cleans up any other {{spawn}} 'd process. ** Manages the {{EventLoop}}. * {{Clock}} ** The goal here is to clear any timers so that nothing can deference {{process_manager}} while we're finalizing/finalized. It's probably not important to execute any remaining timers, since we're """"shutting down"""" libprocess. This means: *** The clock should be {{paused}} and {{settled}} before the clean up of {{process_manager}}. *** Processes, which might interact with the {{Clock}}, should be cleaned up next. 
*** A new {{Clock::finalize}} method would then clear timers, process-specific clocks, and {{tick}} s; and then {{resume}} the clock. * {{__address__}} (the advertised IP and port) ** Needs to be cleared after {{process_manager}} has been cleaned up. Processes use this to communicate events. If cleared prematurely, {{TerminateEvents}} will not be sent correctly, leading to infinite waits. * {{socket_manager}} ** The idea here is to close all sockets and deallocate any existing {{HttpProxy}} or {{Encoder}} objects. ** All sockets are created via {{__s__}}, so cleaning up the server socket prior will prevent any new activity. * {{mime}} ** This is effectively a static map. ** It should be possible to statically initialize it. * Synchronization atomics {{initialized}} & {{initializing}}. ** Once cleanup is done, these should be reset. *Summary*: * Implement {{Clock::finalize}}. [MESOS-3882] * Implement {{~SocketManager}}. [MESOS-3910] * Make sure the {{MetricsProcess}} and {{ReaperProcess}} are reinitialized. [MESOS-3934] * (Optional) Clean up {{mime}}. * Wrap everything up in {{process::finalize}}.""","",0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3864","11/09/2015 22:21:13",1,"Simplify and/or document the libprocess initialization synchronization logic ""Tracks this [TODO|https://github.com/apache/mesos/blob/3bda55da1d0b580a1b7de43babfdc0d30fbc87ea/3rdparty/libprocess/src/process.cpp#L749]. The [synchronization logic of libprocess|https://github.com/apache/mesos/commit/cd757cf75637c92c438bf4cd22f21ba1b5be702f#diff-128d3b56fc8c9ec0176fdbadcfd11fc2] [predates abstractions|https://github.com/apache/mesos/commit/6c3b107e4e02d5ba0673eb3145d71ec9d256a639#diff-0eebc8689450916990abe080d86c2acb] like {{process::Once}}, which is used in almost all other one-time initialization blocks. The logic should be documented. It can also be simplified (see the [review description|https://reviews.apache.org/r/39949/]). Or it can be replaced with {{process::Once}}.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3873","11/10/2015 16:33:26",3,"Enhance allocator interface with the recovery() method ""There are some scenarios (e.g. quota is set for some roles) when it makes sense to notify an allocator about the recovery. Introduce a method into the allocator interface that allows for this.""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3874","11/10/2015 16:36:42",3,"Investigate recovery for the Hierarchical allocator ""The built-in Hierarchical allocator should implement the recovery (in the presence of quota).""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3875","11/10/2015 16:45:42",3,"Account dynamic reservations towards quota. ""Dynamic reservations—whether allocated or not—should be accounted towards role's quota. 
This requires update in at least two places: * The built-in allocator, which actually satisfies quota; * The sanity check in the master.""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3877","11/10/2015 17:25:06",5,"Draft operator documentation for quota ""Draft an operator guide for quota which describes basic usage of the endpoints and few basic and advanced usage cases.""","",0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3882","11/10/2015 21:54:56",3,"Libprocess: Implement process::Clock::finalize ""Tracks this [TODO|https://github.com/apache/mesos/blob/aa0cd7ed4edf1184cbc592b5caa2429a8373e813/3rdparty/libprocess/src/process.cpp#L974-L975]. The {{Clock}} is initialized with a callback that, among other things, will dereference the global {{process_manager}} object. When libprocess is shutting down, the {{process_manager}} is cleaned up. Between cleanup and termination of libprocess, there is some chance that a {{Timer}} will time out and result in dereferencing {{process_manager}}. *Proposal* * Implement {{Clock::finalize}}. This would clear: ** existing timers ** process-specific clocks ** ticks * Change {{process::finalize}}. *# Resume the clock. (The clock is only paused during some tests.) When the clock is not paused, the callback does not dereference {{process_manager}}. *# Clean up {{process_manager}}. This terminates all the processes that would potentially interact with {{Clock}}. *# Call {{Clock::finalize}}.""","",0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3884","11/11/2015 09:57:47",1,"Corrected style in hierarchical allocator ""The built-in allocator code has some style issues (namespaces in the .cpp file, unfortunate formatting) which should be corrected for readability.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3899","11/11/2015 14:11:38",1,"Wrong syntax and inconsistent formatting of JSON examples in flag documentation ""The JSON examples in the documentation of the commandline flags ({{mesos-master.sh --help}} and {{mesos-slave.sh --help}}) don't have a consistent formatting. Furthermore, some examples aren't even compliant JSON because they have trailing commas were they shouldn't.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3900","11/11/2015 18:54:54",3,"Enable mesos-reviewbot project on jenkins to use docker ""As a first step to adding capability for building multiple configurations on reviewbot, we need to change the build scripts to use docker. ""","",0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3905","11/12/2015 15:43:28",1,"Five new docker-related slave flags are not covered by the configuration documentation. 
""These flags were added to """"slave/flags.cpp"""", but are not mentioned in """"docs/configuration.md"""": add(&Flags::docker_auth_server, """"docker_auth_server"""", """"Docker authentication server"""", """"auth.docker.io""""); add(&Flags::docker_auth_server_port, """"docker_auth_server_port"""", """"Docker authentication server port"""", """"443""""); add(&Flags::docker_puller_timeout_secs, """"docker_puller_timeout"""", """"Timeout value in seconds for pulling images from Docker registry"""", """"60""""); add(&Flags::docker_registry, """"docker_registry"""", """"Default Docker image registry server host"""", """"registry-1.docker.io""""); add(&Flags::docker_registry_port, """"docker_registry_port"""", """"Default Docker registry server port"""", """"443""""); ""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3909","11/13/2015 00:11:31",3,"isolator module headers depend on picojson headers ""When trying to build an isolator module, stout headers end up depending on {{picojson.hpp}} which is not installed. """," In file included from /opt/mesos/include/mesos/module/isolator.hpp:25: In file included from /opt/mesos/include/mesos/slave/isolator.hpp:30: In file included from /opt/mesos/include/process/dispatch.hpp:22: In file included from /opt/mesos/include/process/process.hpp:26: In file included from /opt/mesos/include/process/event.hpp:21: In file included from /opt/mesos/include/process/http.hpp:39: /opt/mesos/include/stout/json.hpp:23:10: fatal error: 'picojson.h' file not found #include ^ 8 warnings and 1 error generated. ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3910","11/13/2015 00:33:16",5,"Libprocess: Implement cleanup of the SocketManager in process::finalize ""The {{socket_manager}} and {{process_manager}} are intricately tied together. Currently, only the {{process_manager}} is cleaned up by {{process::finalize}}. To clean up the {{socket_manager}}, we must close all sockets and deallocate any existing {{HttpProxy}} or {{Encoder}} objects. And we should prevent further objects from being created/tracked by the {{socket_manager}}. *Proposal* # Clean up all processes other than {{gc}}. This will clear all links and delete all {{HttpProxy}} s while {{socket_manager}} still exists. # Close all sockets via {{SocketManager::close}}. All of {{socket_manager}} 's state is cleaned up via {{SocketManager::close}}, including termination of {{HttpProxy}} (termination is idempotent, meaning that killing {{HttpProxy}} s via {{process_manager}} is safe). # At this point, {{socket_manager}} should be empty and only the {{gc}} process should be running. (Since we're finalizing, assume there are no threads trying to spawn processes.) {{socket_manager}} can be deleted. # {{gc}} can be deleted. This is currently a leaked pointer, so we'll also need to track and delete that. # {{process_manager}} should be devoid of processes, so we can proceed with cleanup (join threads, stop the {{EventLoop}}, etc).""","",0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3911","11/13/2015 09:04:10",1,"Add a `--force` flag to disable sanity check in quota ""There are use cases when an operator may want to disable the sanity check for quota endpoints (MESOS-3074), even if this renders the cluster under quota. 
For example, an operator sets quota before adding more agents in order to make sure that no non-quota allocations from new agents are made. ""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3912","11/13/2015 09:56:40",3,"Rescind offers in order to satisfy quota ""When a quota request comes in, we may need to rescind a certain amount of outstanding offers in order to satisfy it. Because resources are allocated in the allocator, there can be a race between rescinding and allocating. This race makes it hard to determine the exact amount of offers that should be rescinded in the master.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3923","11/13/2015 20:05:26",5,"Implement AuthN handling in Master for the Scheduler endpoint ""If authentication(AuthN) is enabled on a master, frameworks attempting to use the HTTP Scheduler API can't register. Authorization(AuthZ) is already supported for HTTP based frameworks."""," $ cat /tmp/subscribe-943257503176798091.bin | http --print=HhBb --stream --pretty=colors --auth verification:password1 POST :5050/api/v1/scheduler Accept:application/x-protobuf Content-Type:application/x-protobuf POST /api/v1/scheduler HTTP/1.1 Connection: keep-alive Content-Type: application/x-protobuf Accept-Encoding: gzip, deflate Accept: application/x-protobuf Content-Length: 126 User-Agent: HTTPie/0.9.0 Host: localhost:5050 Authorization: Basic dmVyaWZpY2F0aW9uOnBhc3N3b3JkMQ== +-----------------------------------------+ | NOTE: binary data not shown in terminal | +-----------------------------------------+ HTTP/1.1 401 Unauthorized Date: Fri, 13 Nov 2015 20:00:45 GMT WWW-authenticate: Basic realm=""""Mesos master"""" Content-Length: 65 HTTP schedulers are not supported when authentication is required ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3934","11/16/2015 21:12:06",3,"Libprocess: Unify the initialization of the MetricsProcess and ReaperProcess ""Related to this [TODO|https://github.com/apache/mesos/blob/aa0cd7ed4edf1184cbc592b5caa2429a8373e813/3rdparty/libprocess/src/process.cpp#L949-L950]. The {{MetricsProcess}} and {{ReaperProcess}} are global processes (singletons) which are initialized upon first use. The two processes could be initialized alongside the {{gc}}, {{help}}, {{logging}}, {{profiler}}, and {{system}} (statistics) processes inside {{process::initialize}}. 
This is also necessary for libprocess re-initialization.""","",0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3936","11/17/2015 06:42:32",5,"Document possible task state transitions for framework authors ""We should document the possible ways in which the state of a task can evolve over time; what happens when an agent is partitioned from the master; and more generally, how we recommend that framework authors develop fault-tolerant schedulers and do task state reconciliation.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3939","11/17/2015 17:15:30",2,"ubsan error in net::IP::create(sockaddr const&): misaligned address ""Running ubsan from GCC 5.2 on the current Mesos unit tests yields this, among other problems: """," /mesos/3rdparty/libprocess/3rdparty/stout/include/stout/ip.hpp:230:56: runtime error: reference binding to misaligned address 0x00000199629c for type 'const struct sockaddr_storage', which requires 8 byte alignment 0x00000199629c: note: pointer points here 00 00 00 00 02 00 00 00 ff ff ff 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ^ #0 0x5950cb in net::IP::create(sockaddr const&) (/home/vagrant/build-mesos-ubsan/3rdparty/libprocess/3rdparty/stout-tests+0x5950cb) #1 0x5970cd in net::IPNetwork::fromLinkDevice(std::__cxx11::basic_string, std::allocator > const&, int) (/home/vagrant/build-mesos-ubsan/3rdparty/libprocess/3rdparty/stout-tests+0x5970cd) #2 0x58e006 in NetTest_LinkDevice_Test::TestBody() (/home/vagrant/build-mesos-ubsan/3rdparty/libprocess/3rdparty/stout-tests+0x58e006) #3 0x85abd5 in void testing::internal::HandleSehExceptionsInMethodIfSupported(testing::Test*, void (testing::Test::*)(), char const*) (/home/vagrant/build-mesos-ubsan/3rdparty/libprocess/3rdparty/stout-tests+0x85abd5) #4 0x848abc in void testing::internal::HandleExceptionsInMethodIfSupported(testing::Test*, void (testing::Test::*)(), char const*) (/home/vagrant/build-mesos-ubsan/3rdparty/libprocess/3rdparty/stout-tests+0x848abc) #5 0x7e2755 in testing::Test::Run() (/home/vagrant/build-mesos-ubsan/3rdparty/libprocess/3rdparty/stout-tests+0x7e2755) #6 0x7e44a0 in testing::TestInfo::Run() (/home/vagrant/build-mesos-ubsan/3rdparty/libprocess/3rdparty/stout-tests+0x7e44a0) #7 0x7e5ffa in testing::TestCase::Run() (/home/vagrant/build-mesos-ubsan/3rdparty/libprocess/3rdparty/stout-tests+0x7e5ffa) #8 0x7ffe21 in testing::internal::UnitTestImpl::RunAllTests() (/home/vagrant/build-mesos-ubsan/3rdparty/libprocess/3rdparty/stout-tests+0x7ffe21) #9 0x85d7a5 in bool testing::internal::HandleSehExceptionsInMethodIfSupported(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) (/home/vagrant/build-mesos-ubsan/3rdparty/libprocess/3rdparty/stout-tests+0x85d7a5) #10 0x84b37a in bool testing::internal::HandleExceptionsInMethodIfSupported(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) (/home/vagrant/build-mesos-ubsan/3rdparty/libprocess/3rdparty/stout-tests+0x84b37a) #11 0x7f8a4a in testing::UnitTest::Run() (/home/vagrant/build-mesos-ubsan/3rdparty/libprocess/3rdparty/stout-tests+0x7f8a4a) #12 0x608a96 in RUN_ALL_TESTS() (/home/vagrant/build-mesos-ubsan/3rdparty/libprocess/3rdparty/stout-tests+0x608a96) #13 0x60896b in main (/home/vagrant/build-mesos-ubsan/3rdparty/libprocess/3rdparty/stout-tests+0x60896b) #14 0x7fd0f0c7fa3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x20a3f) #15 0x4145c8 in 
_start (/home/vagrant/build-mesos-ubsan/3rdparty/libprocess/3rdparty/stout-tests+0x4145c8) ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3949","11/18/2015 17:03:18",3,"User CGroup Isolation tests fail on Centos 6. ""UserCgroupIsolatorTest/0.ROOT_CGROUPS_UserCgroup and UserCgroupIsolatorTest/1.ROOT_CGROUPS_UserCgroup fail on CentOS 6.6 with similar output when libevent and SSL are enabled. """," sudo ./bin/mesos-tests.sh --gtest_filter=""""UserCgroupIsolatorTest/0.ROOT_CGROUPS_UserCgroup"""" --verbose [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from UserCgroupIsolatorTest/0, where TypeParam = mesos::internal::slave::CgroupsMemIsolatorProcess userdel: user 'mesos.test.unprivileged.user' does not exist [ RUN ] UserCgroupIsolatorTest/0.ROOT_CGROUPS_UserCgroup I1118 16:53:35.273717 30249 mem.cpp:605] Started listening for OOM events for container 867a829e-4a26-43f5-86e0-938bf1f47688 I1118 16:53:35.274538 30249 mem.cpp:725] Started listening on low memory pressure events for container 867a829e-4a26-43f5-86e0-938bf1f47688 I1118 16:53:35.275164 30249 mem.cpp:725] Started listening on medium memory pressure events for container 867a829e-4a26-43f5-86e0-938bf1f47688 I1118 16:53:35.275784 30249 mem.cpp:725] Started listening on critical memory pressure events for container 867a829e-4a26-43f5-86e0-938bf1f47688 I1118 16:53:35.276448 30249 mem.cpp:356] Updated 'memory.soft_limit_in_bytes' to 1GB for container 867a829e-4a26-43f5-86e0-938bf1f47688 I1118 16:53:35.277331 30249 mem.cpp:391] Updated 'memory.limit_in_bytes' to 1GB for container 867a829e-4a26-43f5-86e0-938bf1f47688 -bash: /sys/fs/cgroup/memory/mesos/867a829e-4a26-43f5-86e0-938bf1f47688/cgroup.procs: No such file or directory mkdir: cannot create directory `/sys/fs/cgroup/memory/mesos/867a829e-4a26-43f5-86e0-938bf1f47688/user': No such file or directory ../../src/tests/containerizer/isolator_tests.cpp:1307: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'mkdir """" + path::join(flags.cgroups_hierarchy, userCgroup) + """"'"""") Actual: 256 Expected: 0 -bash: /sys/fs/cgroup/memory/mesos/867a829e-4a26-43f5-86e0-938bf1f47688/user/cgroup.procs: No such file or directory ../../src/tests/containerizer/isolator_tests.cpp:1316: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'echo $$ >"""" + path::join(flags.cgroups_hierarchy, userCgroup, """"cgroup.procs"""") + """"'"""") Actual: 256 Expected: 0 [ FAILED ] UserCgroupIsolatorTest/0.ROOT_CGROUPS_UserCgroup, where TypeParam = mesos::internal::slave::CgroupsMemIsolatorProcess (149 ms) sudo ./bin/mesos-tests.sh --gtest_filter=""""UserCgroupIsolatorTest/1.ROOT_CGROUPS_UserCgroup"""" --verbose [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. 
[----------] 1 test from UserCgroupIsolatorTest/1, where TypeParam = mesos::internal::slave::CgroupsCpushareIsolatorProcess userdel: user 'mesos.test.unprivileged.user' does not exist [ RUN ] UserCgroupIsolatorTest/1.ROOT_CGROUPS_UserCgroup I1118 17:01:00.550706 30357 cpushare.cpp:392] Updated 'cpu.shares' to 1024 (cpus 1) for container e57f4343-1a97-4b44-b347-803be47ace80 -bash: /sys/fs/cgroup/cpuacct/mesos/e57f4343-1a97-4b44-b347-803be47ace80/cgroup.procs: No such file or directory mkdir: cannot create directory `/sys/fs/cgroup/cpuacct/mesos/e57f4343-1a97-4b44-b347-803be47ace80/user': No such file or directory ../../src/tests/containerizer/isolator_tests.cpp:1307: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'mkdir """" + path::join(flags.cgroups_hierarchy, userCgroup) + """"'"""") Actual: 256 Expected: 0 -bash: /sys/fs/cgroup/cpuacct/mesos/e57f4343-1a97-4b44-b347-803be47ace80/user/cgroup.procs: No such file or directory ../../src/tests/containerizer/isolator_tests.cpp:1316: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'echo $$ >"""" + path::join(flags.cgroups_hierarchy, userCgroup, """"cgroup.procs"""") + """"'"""") Actual: 256 Expected: 0 -bash: /sys/fs/cgroup/cpu/mesos/e57f4343-1a97-4b44-b347-803be47ace80/cgroup.procs: No such file or directory mkdir: cannot create directory `/sys/fs/cgroup/cpu/mesos/e57f4343-1a97-4b44-b347-803be47ace80/user': No such file or directory ../../src/tests/containerizer/isolator_tests.cpp:1307: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'mkdir """" + path::join(flags.cgroups_hierarchy, userCgroup) + """"'"""") Actual: 256 Expected: 0 -bash: /sys/fs/cgroup/cpu/mesos/e57f4343-1a97-4b44-b347-803be47ace80/user/cgroup.procs: No such file or directory ../../src/tests/containerizer/isolator_tests.cpp:1316: Failure Value of: os::system( """"su - """" + UNPRIVILEGED_USERNAME + """" -c 'echo $$ >"""" + path::join(flags.cgroups_hierarchy, userCgroup, """"cgroup.procs"""") + """"'"""") Actual: 256 Expected: 0 [ FAILED ] UserCgroupIsolatorTest/1.ROOT_CGROUPS_UserCgroup, where TypeParam = mesos::internal::slave::CgroupsCpushareIsolatorProcess (116 ms) ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3960","11/19/2015 18:23:26",3,"Standardize quota endpoints ""To be consistent with other operator endpoints, require a single JSON object in the request as opposed to key-value pairs encoded in a string.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3964","11/20/2015 11:25:52",2,"LimitedCpuIsolatorTest.ROOT_CGROUPS_Cfs and LimitedCpuIsolatorTest.ROOT_CGROUPS_Cfs_Big_Quota fail on Debian 8. ""sudo ./bin/mesos-test.sh --gtest_filter=""""LimitedCpuIsolatorTest.ROOT_CGROUPS_Cfs"""" """," ... F1119 14:34:52.514742 30706 isolator_tests.cpp:455] CHECK_SOME(isolator): Failed to find 'cpu.cfs_quota_us'. Your kernel might be too old to use the CFS cgroups feature. ",0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3965","11/20/2015 11:32:36",3,"Ensure resources in `QuotaInfo` protobuf do not contain `role` ""{{QuotaInfo}} protobuf currently stores per-role quotas, including {{Resource}} objects. These resources are neither statically nor dynamically reserved, hence they may not contain {{role}} field. 
We should ensure this field is unset, as well as update validation routine for {{QuotaInfo}}""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3976","11/20/2015 20:01:15",3,"C++ HTTP Scheduler Library does not work with SSL enabled ""The C++ HTTP scheduler library does not work against Mesos when SSL is enabled (without downgrade). The fix should be simple: * The library should detect if SSL is enabled. * If SSL is enabled, connections should be made with HTTPS instead of HTTP.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3979","11/23/2015 00:24:28",3,"Replace `QuotaInfo` with `Quota` in allocator interface ""After introduction of C++ wrapper `Quota` for `QuotaInfo`, all allocator methods using `QuotaInfo` should be updated.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3981","11/23/2015 09:48:47",3,"Implement recovery in the Hierarchical allocator ""The built-in Hierarchical allocator should implement the recovery (in the presence of quota).""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3983","11/23/2015 09:59:41",3,"Tests for quota request validation ""Tests should include: * JSON validation; * Absence of irrelevant fields; * Semantic validation.""","",0,0,0,1,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-3994","11/23/2015 19:09:52",3,"Refactor registry client/puller to avoid JSON and struct. ""We should get rid of all JSON and struct for message passing as function returned type. By using the methods provided by spec.hpp to refactor all unnecessary JSON message and struct in registry client and registry puller. Also, remove all redundant check in registry client that are already checked by spec validation. 
""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4004","11/24/2015 23:24:52",3,"Support default entrypoint and command runtime config in Mesos containerizer ""We need to use the entrypoint and command runtime configuration returned from image to be used in Mesos containerizer.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4005","11/24/2015 23:25:53",2,"Support workdir runtime configuration from image ""We need to support workdir runtime configuration returned from image such as Dockerfile.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4009","11/25/2015 10:48:13",1,"RegistryClientTest.SimpleRegistryPuller doesn't compile with GCC 5.1.1 ""GCC 5.1.1 has {{-Werror=sign-compare}} in {{-Wall}} and stumbles over a comparison between signed and unsigned int in {{provisioner_docker_tests.cpp}}.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4013","11/25/2015 14:35:56",5,"Introduce status endpoint for quota ""This endpoint is for querying quota status via the GET method.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4014","11/25/2015 14:38:32",3,"Introduce remove endpoint for quota ""This endpoint is for removing quotas via the DELETE method.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4021","11/26/2015 12:09:34",1,"Remove quota from Registry for quota remove request ""When a remove quota requests hits the endpoint and passes validation, quota should be removed from the registry before the allocator is notified about the change.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4046","12/02/2015 17:54:32",3,"Enable `Env` specified in docker image can be returned from docker pull ""Currently docker pull only return an image structure, which only contains entrypoint info. We have docker inspect as a subprocess inside docker pull, which contains many other useful information of a docker image. 
We should be able to support returning environment variables information from the image.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4047","12/02/2015 19:19:09",1,"MemoryPressureMesosTest.CGROUPS_ROOT_SlaveRecovery is flaky ""{code:title=Output from passed test} [----------] 1 test from MemoryPressureMesosTest 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.000430889 s, 2.4 GB/s [ RUN ] MemoryPressureMesosTest.CGROUPS_ROOT_SlaveRecovery I1202 11:09:14.319327 5062 exec.cpp:134] Version: 0.27.0 I1202 11:09:14.333317 5079 exec.cpp:208] Executor registered on slave bea15b35-9aa1-4b57-96fb-29b5f70638ac-S0 Registered executor on ubuntu Starting task 4e62294c-cfcf-4a13-b699-c6a4b7ac5162 sh -c 'while true; do dd count=512 bs=1M if=/dev/zero of=./temp; done' Forked command at 5085 I1202 11:09:14.391739 5077 exec.cpp:254] Received reconnect request from slave bea15b35-9aa1-4b57-96fb-29b5f70638ac-S0 I1202 11:09:14.398598 5082 exec.cpp:231] Executor re-registered on slave bea15b35-9aa1-4b57-96fb-29b5f70638ac-S0 Re-registered executor on ubuntu Shutting down Sending SIGTERM to process tree at pid 5085 Killing the following process trees: [ -+- 5085 sh -c while true; do dd count=512 bs=1M if=/dev/zero of=./temp; done \--- 5086 dd count=512 bs=1M if=/dev/zero of=./temp ] [ OK ] MemoryPressureMesosTest.CGROUPS_ROOT_SlaveRecovery (1096 ms) Notice that in the failed test, the executor is asked to shutdown when it tries to reconnect to the agent."""," [----------] 1 test from MemoryPressureMesosTest 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.000430889 s, 2.4 GB/s [ RUN ] MemoryPressureMesosTest.CGROUPS_ROOT_SlaveRecovery I1202 11:09:14.319327 5062 exec.cpp:134] Version: 0.27.0 I1202 11:09:14.333317 5079 exec.cpp:208] Executor registered on slave bea15b35-9aa1-4b57-96fb-29b5f70638ac-S0 Registered executor on ubuntu Starting task 4e62294c-cfcf-4a13-b699-c6a4b7ac5162 sh -c 'while true; do dd count=512 bs=1M if=/dev/zero of=./temp; done' Forked command at 5085 I1202 11:09:14.391739 5077 exec.cpp:254] Received reconnect request from slave bea15b35-9aa1-4b57-96fb-29b5f70638ac-S0 I1202 11:09:14.398598 5082 exec.cpp:231] Executor re-registered on slave bea15b35-9aa1-4b57-96fb-29b5f70638ac-S0 Re-registered executor on ubuntu Shutting down Sending SIGTERM to process tree at pid 5085 Killing the following process trees: [ -+- 5085 sh -c while true; do dd count=512 bs=1M if=/dev/zero of=./temp; done \--- 5086 dd count=512 bs=1M if=/dev/zero of=./temp ] [ OK ] MemoryPressureMesosTest.CGROUPS_ROOT_SlaveRecovery (1096 ms) [----------] 1 test from MemoryPressureMesosTest 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.000404489 s, 2.6 GB/s [ RUN ] MemoryPressureMesosTest.CGROUPS_ROOT_SlaveRecovery I1202 11:09:15.509950 5109 exec.cpp:134] Version: 0.27.0 I1202 11:09:15.568183 5123 exec.cpp:208] Executor registered on slave 88734acc-718e-45b0-95b9-d8f07cea8a9e-S0 Registered executor on ubuntu Starting task 14b6bab9-9f60-4130-bdc4-44efba262bc6 Forked command at 5132 sh -c 'while true; do dd count=512 bs=1M if=/dev/zero of=./temp; done' I1202 11:09:15.665498 5129 exec.cpp:254] Received reconnect request from slave 88734acc-718e-45b0-95b9-d8f07cea8a9e-S0 I1202 11:09:15.670995 5123 exec.cpp:381] Executor asked to shutdown Shutting down Sending SIGTERM to process tree at pid 5132 ../../src/tests/containerizer/memory_pressure_tests.cpp:283: Failure (usage).failure(): Unknown container: 
ebe90e15-72fa-4519-837b-62f43052c913 *** Aborted at 1449083355 (unix time) try """"date -d @1449083355"""" if you are using GNU date *** ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4058","12/03/2015 16:39:16",1,"Do not use `Resource.role` for resources in quota request. ""To be consistent with other operator endpoints and to adhere to the principal of least surprise, move role from each {{Resource}} in quota set request to the request itself. {{Resource.role}} is used for reserved resources. Since quota is not a direct reservation request, to avoid confusion we shall not reuse this field for communicating the role for which quota should be reserved. Food for thought: Shall we try to keep internal storage protobufs as close as possible to operator's JSON to provide some sort of a schema or decouple those two for the sake of flexibility?""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4066","12/03/2015 22:43:31",3,"Agent should not return partial state when a request is made to /state endpoint during recovery. ""Currently when a user is hitting /state.json on the agent, it may return partial state if the agent has failed over and is recovering. There is currently no clear way to tell if this is the case when looking at a response, so the user may incorrectly interpret the agent as being empty of tasks. We could consider exposing the 'state' enum of the agent in the endpoint: This may be a bit tricky to maintain as far as backwards-compatibility of the endpoint, if we were to alter this enum. Exposing this would allow users to be more informed about the state of the agent."""," enum State { RECOVERING, // Slave is doing recovery. DISCONNECTED, // Slave is not connected to the master. RUNNING, // Slave has (re-)registered. TERMINATING, // Slave is shutting down. } state; ",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4069","12/04/2015 19:32:42",8,"libevent_ssl_socket assertion fails ""Have been seeing the following socket receive error frequently: In this case this was a HTTP get over SSL. The url being: https://dseasb33srnrn.cloudfront.net:443/registry-v2/docker/registry/v2/blobs/sha256/44/44be94a95984bb47dc3a193f59bf8c04d5e877160b745b119278f38753a6f58f/data?Expires=1449259252&Signature=Q4CQdr1LbxsiYyVebmetrx~lqDgQfHVkGxpbMM3PoISn6r07DXIzBX6~tl1iZx9uXdfr~5awH8Kxwh-y8b0dTV3mLTZAVlneZlHbhBAX9qbYMd180-QvUvrFezwOlSmX4B3idvo-zK0CarUu3Ev1hbJz5y3olwe2ZC~RXHEwzkQ_&Key-Pair-Id=APKAJECH5M7VWIS5YZ6Q *Steps to reproduce:* 1. Run master 2. Run slave from your build directory as as: 3. 
Run mesos-execute from your build directory as : """," F1204 11:12:47.301839 54104 libevent_ssl_socket.cpp:245] Check failed: length > 0 *** Check failure stack trace: *** @ 0x7f73227fe5a6 google::LogMessage::Fail() @ 0x7f73227fe4f2 google::LogMessage::SendToLog() @ 0x7f73227fdef4 google::LogMessage::Flush() @ 0x7f7322800e08 google::LogMessageFatal::~LogMessageFatal() @ 0x7f73227b93e2 process::network::LibeventSSLSocketImpl::recv_callback() @ 0x7f73227b9182 process::network::LibeventSSLSocketImpl::recv_callback() @ 0x7f731cbc75cc bufferevent_run_deferred_callbacks_locked @ 0x7f731cbbdc5d event_base_loop @ 0x7f73227d9ded process::EventLoop::run() @ 0x7f73227a3101 _ZNSt12_Bind_simpleIFPFvvEvEE9_M_invokeIJEEEvSt12_Index_tupleIJXspT_EEE @ 0x7f73227a305b std::_Bind_simple<>::operator()() @ 0x7f73227a2ff4 std::thread::_Impl<>::_M_run() @ 0x7f731e0d1a40 (unknown) @ 0x7f731de0a182 start_thread @ 0x7f731db3730d (unknown) @ (nil) (unknown) GLOG_v=1;SSL_ENABLED=1;SSL_KEY_FILE=;SSL_CERT_FILE=;sudo -E ./bin/mesos-slave.sh \ --master=127.0.0.1:5050 \ --executor_registration_timeout=5mins \ --containerizers=mesos \ --isolation=filesystem/linux \ --image_providers=DOCKER \ --docker_puller_timeout=600 \ --launcher_dir=$MESOS_BUILD_DIR/src/.libs \ --switch_user=""""false"""" \ --docker_puller=""""registry"""" ./src/mesos-execute \ --master=127.0.0.1:5050 \ --command=""""uname -a"""" \ --name=test \ --docker_image=ubuntu ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4087","12/07/2015 20:53:36",5,"Introduce a module for logging executor/task output ""Existing executor/task logs are logged to files in their sandbox directory, with some nuances based on which containerizer is used (see background section in linked document). A logger for executor/task logs has the following requirements: * The logger is given a command to run and must handle the stdout/stderr of the command. * The handling of stdout/stderr must be resilient across agent failover. Logging should not stop if the agent fails. * Logs should be readable, presumably via the web UI, or via some other module-specific UI.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4088","12/07/2015 21:01:42",2,"Modularize existing plain-file logging for executor/task logs launched with the Mesos Containerizer ""Once a module for executor/task output logging has been introduced, the default module will mirror the existing behavior. Executor/task stdout/stderr is piped into files within the executor's sandbox directory. The files are exposed in the web UI, via the {{/files}} endpoint.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4090","12/07/2015 22:13:46",5,"Create light-weight executor only and scheduler only mesos eggs ""Currently, when running tasks in docker containers, if the executor uses the mesos.native python library, the execution environment inside the container (OS, native libs, etc) must match the execution environment outside the container fairly closely in order to load the mesos.so library. The solution here can be to introduce a much lighter weight python egg, mesos.executor, which only includes code (and dependencies) needed to create and run an MesosExecutorDriver. 
Executors can then use this native library instead of mesos.native.""","",0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4099","12/08/2015 20:40:56",1,"parallel make tests does not build all test targets ""When inside 3rdparty/libprocess: Running {{make -j8 tests}} from a clean build does not yield the {{libprocess-tests}} binary. Running it a subsequent time triggers more compilation and ends up yielding the {{libprocess-tests}} binary. This suggests the {{test}} target is not being built correctly.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4102","12/09/2015 01:30:43",5,"Quota doesn't allocate resources on slave joining. ""See attached patch. {{framework1}} is not allocated any resources, despite the fact that the resources on {{agent2}} can safely be allocated to it without risk of violating {{quota1}}. If I understand the intended quota behavior correctly, this doesn't seem intended. Note that if the framework is added _after_ the slaves are added, the resources on {{agent2}} are allocated to {{framework1}}.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4107","12/10/2015 00:58:36",1,"`os::strerror_r` breaks the Windows build ""`os::strerror_r` does not exist on Windows.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4108","12/10/2015 01:14:32",5,"Implement `os::mkdtemp` for Windows ""Used basically exclusively for testing, this insecure and otherwise-not-quite-suitable-for-prod function needs to work to run what will eventually become the FS tests.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4109","12/10/2015 01:55:08",1,"HTTPConnectionTest.ClosingResponse is flaky ""Output of the test: """," [ RUN ] HTTPConnectionTest.ClosingResponse I1210 01:20:27.048532 26671 process.cpp:3077] Handling HTTP event for process '(22)' with path: '/(22)/get' ../../../3rdparty/libprocess/src/tests/http_tests.cpp:919: Failure Actual function call count doesn't match EXPECT_CALL(*http.process, get(_))... Expected: to be called twice Actual: called once - unsatisfied and active [ FAILED ] HTTPConnectionTest.ClosingResponse (43 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4110","12/10/2015 02:14:08",5,"Implement `WindowsError` to correspond with `ErrnoError`. ""In the C standard library, `errno` records the last error on a thread. You can pretty-print it with `strerror`. In Stout, we report these errors with `ErrnoError`. The Windows API has something similar, called `GetLastError()`. The way to pretty-print this is hilariously unintuitive and terrible, so in this case it is actually very beneficial to wrap it with something similar to `ErrnoError`, maybe called `WindowsError`.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4112","12/10/2015 12:05:52",2,"Clean up libprocess gtest macros ""This ticket is regarding the libprocess gtest helpers in {{3rdparty/libprocess/include/process/gtest.hpp}}. 
The pattern in this file seems to be a set of macros: * {{AWAIT_ASSERT__FOR}} * {{AWAIT_ASSERT_}} -- default of 15 seconds * {{AWAIT_\_FOR}} -- alias for {{AWAIT_ASSERT__FOR}} * {{AWAIT_}} -- alias for {{AWAIT_ASSERT_}} * {{AWAIT_EXPECT__FOR}} * {{AWAIT_EXPECT_}} -- default of 15 seconds (1) {{AWAIT_EQ_FOR}} should be added for completeness. (2) In {{gtest}}, we've got {{EXPECT_EQ}} as well as the {{bool}}-specific versions: {{EXPECT_TRUE}} and {{EXPECT_FALSE}}. We should adopt this pattern in these helpers as well. Keeping the pattern above in mind, the following are missing: * {{AWAIT_ASSERT_TRUE_FOR}} * {{AWAIT_ASSERT_TRUE}} * {{AWAIT_ASSERT_FALSE_FOR}} * {{AWAIT_ASSERT_FALSE}} * {{AWAIT_EXPECT_TRUE_FOR}} * {{AWAIT_EXPECT_FALSE_FOR}} (3) There are HTTP response related macros at the bottom of the file, e.g. {{AWAIT_EXPECT_RESPONSE_STATUS_EQ}}, however these are missing their {{ASSERT}} counterparts. -(4) The reason for (3) presumably is because we reach for {{EXPECT}} over {{ASSERT}} in general due to the test suite crashing behavior of {{ASSERT}}. If this is the case, it would be worthwhile considering whether macros such as {{AWAIT_READY}} should alias {{AWAIT_EXPECT_READY}} rather than {{AWAIT_ASSERT_READY}}.- (5) There are a few more missing macros, given {{AWAIT_EQ_FOR}} and {{AWAIT_EQ}} which aliases to {{AWAIT_ASSERT_EQ_FOR}} and {{AWAIT_ASSERT_EQ}} respectively, we should also add {{AWAIT_TRUE_FOR}}, {{AWAIT_TRUE}}, {{AWAIT_FALSE_FOR}}, and {{AWAIT_FALSE}} as well.""","",0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4128","12/11/2015 08:40:07",3,"Refactor sorter factories in allocator and improve comments around them. ""For clarity we want to refactor the factory section in the allocator and explain the purpose (and necessity) of all sorters.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4130","12/11/2015 09:44:39",1,"Document how the fetcher can reach across a proxy connection. ""The fetcher uses libcurl for downloading content from HTTP, HTTPS, etc. There is no source code in the pertinent parts of """"net.hpp"""" that deals with proxy settings. However, libcurl automatically picks up certain environment variables and adjusts its settings accordingly. See """"man libcurl-tutorial"""" for details. See section """"Proxies"""", subsection """"Environment Variables"""". If you follow this recipe in your Mesos agent startup script, you can use a proxy. We should document this in the fetcher (cache) doc (http://mesos.apache.org/documentation/latest/fetcher/). ""","",0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4136","12/11/2015 23:40:22",3,"Add a ContainerLogger module that restrains log sizes ""One of the major problems this logger module aims to solve is overflowing executor/task log files. Log files are simply written to disk, and are not managed other than via occasional garbage collection by the agent process (and this only deals with terminated executors). We should add a {{ContainerLogger}} module that truncates logs as it reaches a configurable maximum size. Additionally, we should determine if the web UI's {{pailer}} needs to be changed to deal with logs that are not append-only. 
This will be a non-default module which will also serve as an example for how to implement the module.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4137","12/11/2015 23:51:28",3,"Modularize plain-file logging for executor/task logs launched with the Docker Containerizer ""Adding a hook inside the Docker containerizer is slightly more involved than the Mesos containerizer. Docker executors/tasks perform plain-file logging in different places depending on whether the agent is in a Docker container itself || Agent || Code || | Not in container | {{DockerContainerizerProcess::launchExecutorProcess}} | | In container | {{Docker::run}} in a {{mesos-docker-executor}} process | This means a {{ContainerLogger}} will need to be loaded or hooked into the {{mesos-docker-executor}}. Or we will need to change how piping in done in {{mesos-docker-executor}}.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4143","12/13/2015 18:27:31",2,"Reserve/UnReserve Dynamic Reservation Endpoints allow reservations on non-existing roles ""When working with Dynamic reservations via the /reserve and /unreserve endpoints, it is possible to reserve resources for roles that have not been specified via the --roles flag on the master. However, these roles are not usable because the roles have not been defined, nor are they added to the list of roles available. Per the mailing list, changing roles after the fact is not possible at this time. (That may be another JIRA), more importantly, the /reserve and /unreserve end points should not allow reservation of roles not specified by --roles. ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4150","12/14/2015 20:32:54",3,"Implement container logger module metadata recovery ""The {{ContainerLoggers}} are intended to be isolated from agent failover, in the same way that executors do not crash when the agent process crashes. For default {{ContainerLogger}} s, like the {{SandboxContainerLogger}} and the (tentatively named) {{TruncatingSandboxContainerLogger}}, the log files are exposed during agent recovery regardless. For non-default {{ContainerLogger}} s, the recovery of executor metadata may be necessary to rebuild endpoints that expose the logs. This can be implemented as part of {{Containerizer::recover}}.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4154","12/15/2015 05:37:25",2,"Rename shutdown_frameworks to teardown_frameworks ""The mesos is now using teardown framework to shutdown a framework but the acls are still using shutdown_framework, it is better to rename shutdown_framework to teardown_framework for acl to keep consistent. This is a post review request for https://reviews.apache.org/r/40829/""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4183","12/16/2015 15:37:08",3,"Move operator<< definitions to .cpp files and include in .hpp where possible. ""We often include complex headers like {{}} in """".hpp"""" files to define {{operator<<()}} inline (e.g. """"mesos/authorizer/authorizer.hpp""""). 
Instead, we can move definitions to corresponding """".cpp"""" files and replace stream headers with {{iosfwd}}, for example, this is partially done for {{URI}} in """"mesos/uri/uri.hpp"""".""","",0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4186","12/17/2015 02:31:01",2,"Serialize docker v1 image spec as protobuf ""Currently we only support v2 docker manifest serialization method. When we read docker image spec locally from disk, we should be able to parse v1 docker manifest as protobuf, which will make it easier to gather runtime config and other necessary info.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4190","12/17/2015 11:19:24",3,"Create a Design Doc for dynamic weights. ""A short design doc for dynamic weights, it will focus on /weights API and the changes to the allocator API.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4192","12/17/2015 19:59:07",3,"Add documentation for API Versioning ""Currently, we don't have any documentation for: - How Mesos implements API versioning ? - How are protobufs versioned and how does mesos handle them internally ? - What do contributors need to do when they make a change to a external user facing protobuf ? The relevant design doc: https://docs.google.com/document/d/1-iQjo6778H_fU_1Zi_Yk6szg8qj-wqYgVgnx7u3h6OU/edit#heading=h.2gkbjz6amn7b ""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4200","12/18/2015 19:30:34",2,"Test case(s) for weights + allocation behavior ""As far as I can see, we currently have NO test cases for behavior when weights are defined.""","",0,0,0,1,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4206","12/18/2015 23:59:21",3,"Write new logging-related documentation ""This should include: * Default logging behavior for master, agent, framework, executor, task. * Master/agent: ** A summary of log-related flags. ** {{glog}} specific options. * Separation of master/agent logs from container logs. * The {{ContainerLogger}} module.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4207","12/19/2015 00:22:14",2,"Add an example bug due to a lack of defer() to the defer() documentation ""In the past, some bugs have been introduced into the codebase due to a lack of {{defer()}} where it should have been used. It would be useful to add an example of this to the {{defer()}} documentation.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4209","12/19/2015 18:48:31",3,"Document ""how to program with dynamic reservations and persistent volumes"" ""Specifically, some of the gotchas around: * Retrying reservation attempts after a timeout * Fuzzy-matching resources to determine whether a reservation/PV is successful * Represent client state as a state machine and repeatedly move """"toward"""" successful terminate stats Should also point to persistent volume example framework. We should also ask Gabriel and others (Arango?) who have built frameworks with PVs/DRs for feedback.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4222","12/21/2015 19:22:41",3,"Document containerizer from user perspective. ""Add documentation that covers: * Purpose of containerizers from a use case perspective. 
* What purpose does each containerizer (mesos. docker, compose) serve. * What criteria could be used to choose a containerizer.""","",0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4225","12/21/2015 19:34:59",2,"Exposed docker/appc image manifest to mesos containerizer. ""Collect docker image manifest from disk(which contains all runtime configurations), and pass it back to provisioner, so that mesos containerizer can grab all necessary info from provisioner.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4226","12/21/2015 19:38:35",1,"Enable passing docker image environment variables runtime config to provisioner ""Collect environment variables runtime config information from a docker image, and save as a map. Pass it back to provisioner, and handling environment variables merge issue.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4227","12/21/2015 19:41:17",1,"Enable passing docker image cmd runtime config to provisioner ""Cmd is the command to run when starting a container. We should be able to collect Cmd config information from a docker image, and pass it back to provisioner.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4229","12/21/2015 19:53:25",3,"Docker containers left running on disk after reviewbot builds ""The Mesos Reviewbot builds recently failed due to Docker containers being left running on the disk, eventually leading to a full disk: https://issues.apache.org/jira/browse/INFRA-10984 These containers should be automatically cleaned up to avoid this problem in the future.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4241","12/22/2015 21:20:48",3,"Consolidate docker store slave flags ""Currently there are too many slave flags for configuring the docker store/puller. We can remove the following flags: docker_auth_server_port docker_local_archives_dir docker_registry_port docker_puller And consolidate them into the existing flags.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4261","12/30/2015 08:26:01",3,"Remove docker auth server flag ""We currently use a configured docker auth server from a slave flag to get token auth for docker registry. However this doesn't work for private registries as docker registry supports sending down the correct auth server to contact. We should remove docker auth server flag completely and ask the docker registry for auth server.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4262","12/30/2015 16:07:23",5,"Enable net_cls subsytem in cgroup infrastructure ""Currently the control group infrastructure within mesos supports only the memory and CPU subsystems. We need to enhance this infrastructure to support the net_cls subsystem as well. Details of the net_cls subsystem and its use-cases can be found here: https://www.kernel.org/doc/Documentation/cgroups/net_cls.txt Enabling the net_cls will allow us to provide operators to, potentially, regulate framework traffic on a per-container basis. 
""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4272","01/04/2016 09:20:47",1,"DurationTest.Arithmetic performs inexact float calculation in test ""{{DurationTest.Arithmetic}} does a calculation with not exactly representable floating point values and also performs an equality check, Here neither the value {{3.3}} nor {{0.33}} cannot be represented exactly as a floating point number so the check might fail incorrectly (as it does e.g. when compiling and executing the test under 32-bit on Debian8). Instead we should just use exactly representable values to make sure the test will succeed as long as the implementation behaves as expected."""," EXPECT_EQ(Duration::create(3.3).get(), Seconds(10) * 0.33); ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4273","01/04/2016 09:38:03",1,"Replace variadic List constructor with one taking a initializer_list ""{{List}} provides a variadic constructor currently implemented with some preprocessor magic. Given that we already require C++11 we can replace that one with a much simpler one just taking a {{std::initializer_list}}. This would change the invocations, This addresses an existing {{TODO}}. """," auto l1 = List(1, 2, 3); // now auto l2 = List({1, 2, 3}); // proposed ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4275","01/04/2016 10:15:58",2,"Duration uses fixed-width types inconsistently ""The implementation of the {{Duration}} class correctly uses fixed-width types (here {{int64_t}}) for portability internally, but uses {{long}} types in a few places (in particular {{LLONG_MIN}} and {{LLONG_MAX}}). This is inconsistent on 64-bit platforms, and probably incorrect on 32-bit as there {{long}} is 32 bit wide. Additionally, the longer {{Duration}} types ({{Minutes}}, {{Hours}}, {{Days}}, and {{Weeks}}) construct from {{int32_t}}, while shorter ones take {{int64_t}}. Probably as a left-over this is matched with a redundant {{Duration}} constructor taking an {{int32_t}} value where the other one taking an {{int64_t}} value would be sufficient. It should be safe to just construct from {{int64_t}} in all places.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4279","01/04/2016 15:57:28",5,"Docker executor truncates task's output when the task is killed. ""I'm implementing a graceful restarts of our mesos-marathon-docker setup and I came to a following issue: (it was already discussed on https://github.com/mesosphere/marathon/issues/2876 and guys form mesosphere got to a point that its probably a docker containerizer problem...) 
To sum it up: When i deploy simple python script to all mesos-slaves: {code} #!/usr/bin/python from time import sleep import signal import sys import datetime def sigterm_handler(_signo, _stack_frame): print """"got %i"""" % _signo print datetime.datetime.now().time() sys.stdout.flush() sleep(2) print datetime.datetime.now().time() print """"ending"""" sys.stdout.flush() sys.exit(0) signal.signal(signal.SIGTERM, sigterm_handler) signal.signal(signal.SIGINT, sigterm_handler) try: print """"Hello"""" i = 0 while True: i += 1 print datetime.datetime.now().time() print """"Iteration #%i"""" % i sys.stdout.flush() sleep(1) finally: print """"Goodbye"""" {code} and I run it through Marathon like {code:javascript} data = { args: [""""/tmp/script.py""""], instances: 1, cpus: 0.1, mem: 256, id: """"marathon-test-api"""" } {code} During the app restart I get expected result - the task receives sigterm and dies peacefully (during my script-specified 2 seconds period) But when i wrap this python script in a docker: {code} FROM node:4.2 RUN mkdir /app ADD . /app WORKDIR /app ENTRYPOINT [] {code} and run appropriate application by Marathon: {code:javascript} data = { args: [""""./script.py""""], container: { type: """"DOCKER"""", docker: { image: """"bydga/marathon-test-api"""" }, forcePullImage: yes }, cpus: 0.1, mem: 256, instances: 1, id: """"marathon-test-api"""" } {code} The task during restart (issued from marathon) dies immediately without having a chance to do any cleanup. """," #!/usr/bin/python from time import sleep import signal import sys import datetime def sigterm_handler(_signo, _stack_frame): print """"got %i"""" % _signo print datetime.datetime.now().time() sys.stdout.flush() sleep(2) print datetime.datetime.now().time() print """"ending"""" sys.stdout.flush() sys.exit(0) signal.signal(signal.SIGTERM, sigterm_handler) signal.signal(signal.SIGINT, sigterm_handler) try: print """"Hello"""" i = 0 while True: i += 1 print datetime.datetime.now().time() print """"Iteration #%i"""" % i sys.stdout.flush() sleep(1) finally: print """"Goodbye"""" data = { args: [""""/tmp/script.py""""], instances: 1, cpus: 0.1, mem: 256, id: """"marathon-test-api"""" } FROM node:4.2 RUN mkdir /app ADD . /app WORKDIR /app ENTRYPOINT [] data = { args: [""""./script.py""""], container: { type: """"DOCKER"""", docker: { image: """"bydga/marathon-test-api"""" }, forcePullImage: yes }, cpus: 0.1, mem: 256, instances: 1, id: """"marathon-test-api"""" } ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4282","01/04/2016 17:16:19",2,"Update isolator prepare function to use ContainerLaunchInfo ""Currently we have the isolator's prepare function returning ContainerPrepareInfo protobuf. We should enable ContainerLaunchInfo (contains environment variables, namespaces, etc.) to be returned which will be used by Mesos containerize to launch containers. 
By doing this (ContainerPrepareInfo -> ContainerLaunchInfo), we can select any necessary information and passing then to launcher.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4284","01/04/2016 19:35:13",8,"Draft design doc for multi-role frameworks ""Create a document that describes the problems with having only single-role frameworks and proposes an MVP solution and implementation approach.""","",0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4285","01/04/2016 20:13:26",3,"Mesos command task doesn't support volumes with image ""Currently volumes are stripped when an image is specified running a command task with Mesos containerizer. ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4289","01/04/2016 22:42:04",5,"Design doc for simple appc image discovery ""Create a design document describing the following: - Model and abstraction of the Discoverer - Workflow of the discovery process ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4292","01/05/2016 11:29:34",3,"Tests for quota with implicit roles. ""With the introduction of implicit roles (MESOS-3988), we should make sure quota can be set for an inactive role (unknown to the master) and maybe transition it to the active state.""","",0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4294","01/05/2016 19:48:35",1,"Protobuf parse should support parsing JSON object containing JSON Null. ""(This bug was exposed by MESOS-4184, when serializing docker v1 image manifest as protobuf). Currently protobuf::parse returns failures when parsing any JSON containing JSON::Null. If we have any protobuf field set as `JSON::Null`, any other non-repeated field cannot capture their value. For example, assuming we have a protobuf message: If there exists any field containing JSON::Null, like below: When we do protobuf::parse, it would return the following failure: """," message Nested { optional string str = 1; repeated string json_null = 2; } { \""""str\"""": \""""message\"""", \""""json_null\"""": null } Failure parse: Not expecting a JSON null ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4295","01/05/2016 20:05:12",3,"Change documentation links to ""*.md"" ""Right now, links either use the form or . We should probably switch to using the latter form consistently -- it previews better on Github, and it will make it easier to have multiple versions of the docs on the website at once in the future.""","[label](/documentation/latest/foo/)[label](foo.md)",0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4300","01/06/2016 16:12:48",3,"Add AuthN and AuthZ to maintenance endpoints. ""Maintenance endpoints are currently only restricted by firewall settings. 
They should also support authentication/authorization like other HTTP endpoints.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4301","01/06/2016 16:21:14",1,"Accepting an inverse offer prints misleading logs ""Whenever a scheduler accepts an inverse offer, Mesos will print a line like this in the master logs: Inverse offers should not trigger this warning."""," W1125 10:05:53.155109 29362 master.cpp:2897] ACCEPT call used invalid offers '[ 932f7d7b-f2d4-42c7-9391-222c19b9d35b-O2 ]': Offer 932f7d7b-f2d4-42c7-9391-222c19b9d35b-O2 is no longer valid ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4304","01/07/2016 02:23:23",1,"hdfs operations fail due to prepended / on path for non-hdfs hadoop clients. ""This bug was resolved for the hdfs protocol for MESOS-3602 but since the process checks for the """"hdfs"""" protocol at the beginning of the URI, the fix does not extend itself to non-hdfs hadoop clients. After a brief chat with [~jieyu], it was recommended to fix the current hdfs client code because the new hadoop fetcher plugin is slated to use it."""," I0107 01:22:01.259490 17678 logging.cpp:172] INFO level logging started! I0107 01:22:01.259856 17678 fetcher.cpp:422] Fetcher Info: {""""cache_directory"""":""""\/tmp\/mesos\/fetch\/slaves\/530dda5a-481a-4117-8154-3aee637d3b38-S3\/root"""",""""items"""":[{""""action"""":""""BYPASS_CACHE"""",""""uri"""":{""""extract"""":true,""""value"""":""""maprfs:\/\/\/mesos\/storm-mesos-0.9.3.tgz""""}},{""""action"""":""""BYPASS_CACHE"""",""""uri"""":{""""extract"""":true,""""value"""":""""http:\/\/s0121.stag.urbanairship.com:36373\/conf\/storm.yaml""""}}],""""sandbox_directory"""":""""\/mnt\/data\/mesos\/slaves\/530dda5a-481a-4117-8154-3aee637d3b38-S3\/frameworks\/530dda5a-481a-4117-8154-3aee637d3b38-0000\/executors\/word-count-1-1452129714\/runs\/4443d5ac-d034-49b3-bf12-08fb9b0d92d0"""",""""user"""":""""root""""} I0107 01:22:01.262171 17678 fetcher.cpp:377] Fetching URI 'maprfs:///mesos/storm-mesos-0.9.3.tgz' I0107 01:22:01.262212 17678 fetcher.cpp:248] Fetching directly into the sandbox directory I0107 01:22:01.262243 17678 fetcher.cpp:185] Fetching URI 'maprfs:///mesos/storm-mesos-0.9.3.tgz' I0107 01:22:01.671777 17678 fetcher.cpp:110] Downloading resource with Hadoop client from 'maprfs:///mesos/storm-mesos-0.9.3.tgz' to '/mnt/data/mesos/slaves/530dda5a-481a-4117-8154-3aee637d3b38-S3/frameworks/530dda5a-481a-4117-8154-3aee637d3b38-0000/executors/word-count-1-1452129714/runs/4443d5ac-d034-49b3-bf12-08fb9b0d92d0/storm-mesos-0.9.3.tgz' copyToLocal: java.net.URISyntaxException: Expected scheme-specific part at index 7: maprfs: Usage: java FsShell [-copyToLocal [-ignoreCrc] [-crc] ] E0107 01:22:02.435556 17678 shell.hpp:90] Command 'hadoop fs -copyToLocal '/maprfs:///mesos/storm-mesos-0.9.3.tgz' '/mnt/data/mesos/slaves/530dda5a-481a-4117-8154-3aee637d3b38-S3/frameworks/530dda5a-481a-4117-8154-3aee637d3b38-0000/executors/word-count-1-1452129714/runs/4443d5ac-d034-49b3-bf12-08fb9b0d92d0/storm-mesos-0.9.3.tgz'' failed; this is the output: Failed to fetch 'maprfs:///mesos/storm-mesos-0.9.3.tgz': HDFS copyToLocal failed: Failed to execute 'hadoop fs -copyToLocal '/maprfs:///mesos/storm-mesos-0.9.3.tgz' '/mnt/data/mesos/slaves/530dda5a-481a-4117-8154-3aee637d3b38-S3/frameworks/530dda5a-481a-4117-8154-3aee637d3b38-0000/executors/word-count-1-1452129714/runs/4443d5ac-d034-49b3-bf12-08fb9b0d92d0/storm-mesos-0.9.3.tgz''; the command was either not 
found or exited with a non-zero exit status: 255 Failed to synchronize with slave (it's probably exited) ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4311","01/08/2016 02:04:10",1,"Protobuf parse should pass error messages when parsing nested JSON. ""Currently when protobuf::parse handles nested JSON objects, it cannot pass any error message out. We should enable showing those error messages.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4329","01/11/2016 12:38:15",1,"SlaveTest.LaunchTaskInfoWithContainerInfo cannot be execute in isolation ""Executing {{SlaveTest.LaunchTaskInfoWithContainerInfo}} from {{468b8ec}} under OS X 10.10.5 in isolation fails due to missing cleanup, """," % ./bin/mesos-tests.sh --gtest_filter=SlaveTest.LaunchTaskInfoWithContainerInfo Source directory: /ABC/DEF/src/mesos Build directory: /ABC/DEF/src/mesos/build ------------------------------------------------------------- We cannot run any Docker tests because: Docker tests not supported on non-Linux systems ------------------------------------------------------------- /usr/bin/nc /usr/bin/curl Note: Google Test filter = SlaveTest.LaunchTaskInfoWithContainerInfo-HealthCheckTest.ROOT_DOCKER_DockerHealthyTask:HealthCheckTest.ROOT_DOCKER_DockerHealthStatusChange:HierarchicalAllocator_BENCHMARK_Test.DeclineOffers:HookTest.ROOT_DOCKER_VerifySlavePreLaunchDockerHook:SlaveTest.ROOT_RunTaskWithCommandInfoWithoutUser:SlaveTest.DISABLED_ROOT_RunTaskWithCommandInfoWithUser:DockerContainerizerTest.ROOT_DOCKER_Launch:DockerContainerizerTest.ROOT_DOCKER_Kill:DockerContainerizerTest.ROOT_DOCKER_Usage:DockerContainerizerTest.ROOT_DOCKER_Recover:DockerContainerizerTest.ROOT_DOCKER_SkipRecoverNonDocker:DockerContainerizerTest.ROOT_DOCKER_Logs:DockerContainerizerTest.ROOT_DOCKER_Default_CMD:DockerContainerizerTest.ROOT_DOCKER_Default_CMD_Override:DockerContainerizerTest.ROOT_DOCKER_Default_CMD_Args:DockerContainerizerTest.ROOT_DOCKER_SlaveRecoveryTaskContainer:DockerContainerizerTest.DISABLED_ROOT_DOCKER_SlaveRecoveryExecutorContainer:DockerContainerizerTest.ROOT_DOCKER_NC_PortMapping:DockerContainerizerTest.ROOT_DOCKER_LaunchSandboxWithColon:DockerContainerizerTest.ROOT_DOCKER_DestroyWhileFetching:DockerContainerizerTest.ROOT_DOCKER_DestroyWhilePulling:DockerContainerizerTest.ROOT_DOCKER_ExecutorCleanupWhenLaunchFailed:DockerContainerizerTest.ROOT_DOCKER_FetchFailure:DockerContainerizerTest.ROOT_DOCKER_DockerPullFailure:DockerContainerizerTest.ROOT_DOCKER_DockerInspectDiscard:DockerTest.ROOT_DOCKER_interface:DockerTest.ROOT_DOCKER_parsing_version:DockerTest.ROOT_DOCKER_CheckCommandWithShell:DockerTest.ROOT_DOCKER_CheckPortResource:DockerTest.ROOT_DOCKER_CancelPull:DockerTest.ROOT_DOCKER_MountRelative:DockerTest.ROOT_DOCKER_MountAbsolute:CopyBackendTest.ROOT_CopyBackend:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/0:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/1:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/2:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/3:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/4:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/5:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/6:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/7:SlaveAndFrameworkC
ount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/8:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/9:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/10:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/11:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/12:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/13:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/14:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/15:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/16:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/17:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/18:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/19:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/20:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/21:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/22:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/23:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/24:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/25:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/26:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/27:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/28:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/29:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/30:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/31:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/32:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/33:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/34:SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.AddAndUpdateSlave/35:SlaveCount/Registrar_BENCHMARK_Test.Performance/0:SlaveCount/Registrar_BENCHMARK_Test.Performance/1:SlaveCount/Registrar_BENCHMARK_Test.Performance/2:SlaveCount/Registrar_BENCHMARK_Test.Performance/3 [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from SlaveTest [ RUN ] SlaveTest.LaunchTaskInfoWithContainerInfo [ OK ] SlaveTest.LaunchTaskInfoWithContainerInfo (79 ms) [----------] 1 test from SlaveTest (79 ms total) [----------] Global test environment tear-down ../../src/tests/environment.cpp:569: Failure Failed Tests completed with child processes remaining: -+- 54487 /ABC/DEF/src/mesos/build/src/.libs/mesos-tests --gtest_filter=SlaveTest.LaunchTaskInfoWithContainerInfo \--- 54503 /bin/sh /ABC/DEF/src/mesos/build/src/mesos-containerizer launch --command={""""shell"""":true,""""value"""":""""\/ABC\/DEF\/src\/mesos\/build\/src\/mesos-executor""""} --commands={""""commands"""":[]} --directory=/tmp --help=false --pipe_read=10 --pipe_write=13 --user=test [==========] 1 test from 1 test case ran. (87 ms total) [ PASSED ] 1 test. 
[ FAILED ] 0 tests, listed below: 0 FAILED TESTS ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4333","01/11/2016 20:08:06",2,"Refactor Appc provisioner tests ""Current tests can be refactored so that we can reuse some common tasks like test image creation. This will benefit future tests like appc image puller tests.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4336","01/11/2016 23:46:50",1,"Document supported file types for archive extraction by fetcher ""The Mesos fetcher extracts specified URIs if requested to do so by the scheduler. However, the documentation at http://mesos.apache.org/documentation/latest/fetcher/ doesn't list the file types /extensions that will be extracted by the fetcher. [The relevant code|https://github.com/apache/mesos/blob/master/src/launcher/fetcher.cpp#L63] specifies an exhaustive list of extensions that will be extracted, the documentation should be updated to match.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4338","01/12/2016 01:37:50",5,"Create utilities for common shell commands used. ""We spawn shell for command line utilities like tar, untar, sha256 etc. Would be great for resuse if we can create a common utilities class/file for all these utilities. ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4344","01/12/2016 17:00:32",1,"Allow operators to assign net_cls major handles to mesos agents ""The net_cls cgroup associates a 16-bit major and 16-bit minor network handle to packets originating from tasks associated with a specific net_cls cgroup. In mesos we need to give the operator the ability to fix the 16-bit major handle used in an agent (the minor handle will be allocated by the agent. See MESOS-4345). Fixing the parent handle on the agent allows operators to install default firewall rules using the parent handle to enforce a default policy (say DENY ALL) for all container traffic till the container is allocated a minor handle. A simple way to achieve this requirement is to pass the major handle as a flag to the agent at startup. ""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4345","01/12/2016 17:09:56",3,"Implement a network-handle manager for net_cls cgroup subsystem ""As part of implementing the net_cls cgroup isolator we need a mechanism to manage the minor handles that will be allocated to containers when they are associated with a net_cls cgroup. The network-handle manager needs to provide the following functionality: a) During normal operation keep track of the free and allocated network handles. There can be a total of 64K such network handles. b) On startup, learn the allocated network handle by walking the net_cls cgroup tree for mesos and build a map of free network handles available to the agent. ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4347","01/12/2016 21:08:10",1,"GMock warning in ReservationTest.ACLMultipleOperations "" Seems to occur non-deterministically for me, maybe once per 50 runs or so. OSX 10.10"""," [ RUN ] ReservationTest.ACLMultipleOperations GMOCK WARNING: Uninteresting mock function call - returning directly. 
Function call: shutdown(0x7fa2a311b300) Stack trace: [ OK ] ReservationTest.ACLMultipleOperations (174 ms) [----------] 1 test from ReservationTest (174 ms total) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4348","01/12/2016 21:13:18",1,"GMock warning in HookTest.VerifySlaveRunTaskHook, HookTest.VerifySlaveTaskStatusDecorator "" Occurs non-deterministically for me. OSX 10.10."""," [ RUN ] HookTest.VerifySlaveRunTaskHook GMOCK WARNING: Uninteresting mock function call - returning directly. Function call: shutdown(0x7ff079cb2420) Stack trace: [ OK ] HookTest.VerifySlaveRunTaskHook (51 ms) [ RUN ] HookTest.VerifySlaveTaskStatusDecorator GMOCK WARNING: Uninteresting mock function call - returning directly. Function call: shutdown(0x7ff079cbb790) Stack trace: [ OK ] HookTest.VerifySlaveTaskStatusDecorator (54 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4349","01/12/2016 21:16:30",1,"GMock warning in SlaveTest.ContainerUpdatedBeforeTaskReachesExecutor "" Occurs non-deterministically for me on OSX 10.10, perhaps one run in ten."""," [ RUN ] SlaveTest.ContainerUpdatedBeforeTaskReachesExecutor GMOCK WARNING: Uninteresting mock function call - returning directly. Function call: shutdown(0x7fe189cae850) Stack trace: [ OK ] SlaveTest.ContainerUpdatedBeforeTaskReachesExecutor (51 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4353","01/13/2016 13:17:13",1,"Limit the number of processes created by libprocess ""Currently libprocess will create {{max(8, number of CPU cores)}} processes during the initialization, see https://github.com/apache/mesos/blob/0.26.0/3rdparty/libprocess/src/process.cpp#L2146 for details. This should be OK for a normal machine which has no much cores (e.g., 16, 32), but for a powerful machine which may have a large number of cores (e.g., an IBM Power machine may have 192 cores), this will cause too much worker threads which are not necessary. And since libprocess is widely used in Mesos (master, agent, scheduler, executor), it may also cause some performance issue. For example, when user creates a Docker container via Mesos in a Mesos agent which is running on a powerful machine with 192 cores, the DockerContainerizer in Mesos agent will create a dedicated executor for the container, and there will be 192 worker threads in that executor. And if user creates 1000 Docker containers in that machine, then there will be 1000 executors, i.e., 1000 * 192 worker threads which is a large number and may thrash the OS. ""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4357","01/13/2016 19:07:36",1,"GMock warning in RoleTest.ImplicitRoleStaticReservation """""," [ RUN ] RoleTest.ImplicitRoleStaticReservation GMOCK WARNING: Uninteresting mock function call - returning directly. Function call: shutdown(0x7fe37a4752f0) Stack trace: [ OK ] RoleTest.ImplicitRoleStaticReservation (52 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4358","01/13/2016 20:04:07",2,"Expose net_cls network handles in agent's state endpoint ""We need to expose net_cls network handles, associated with containers, to operators and network utilities that would use these network handles to enforce network policy. 
In order to achieve the above we need to add a new field in the `NetworkInfo` protobuf (say NetHandles) and update this field when a container gets assigned to a net_cls cgroup. The `ContainerStatus` protobuf already has the `NetworkInfo` protobuf as a nested message, and the `ContainerStatus` itself is exposed to operators as part of TaskInfo (for tasks associated with the container) in an agent's state.json. ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4363","01/14/2016 12:24:20",1,"Add a roles field to FrameworkInfo ""To represent multiple roles per framework a new repeated string field for roles is needed.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4364","01/14/2016 12:25:43",5,"Add roles validation code to master ""A {{FrameworkInfo}} can only have one of role or roles. A natural location for this appears to be under {{validation::operation::validate}}.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4365","01/14/2016 12:27:34",3,"Add internal migration from role to roles to master ""If only the {{role}} field is given, add it as single entry to {{roles}}. Add a note to {{CHANGELOG}}/release notes on deprecation of the existing {{role}} field. File a JIRA issue for removal of that migration code once the deprecation cycle is over. ""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4367","01/14/2016 12:33:57",5,"Add tracking of the role a Resource was offered for ""If a framework can have multiple roles, we need a way to identify for which of the framework's role a resource was offered for (e.g., for resource recovery and reconciliation).""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4368","01/14/2016 12:35:06",3,"Make HierarchicalAllocatorProcess set a Resource's active role during allocation ""The concrete implementation here depends on the implementation strategy used to solve MESOS-4367.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4376","01/14/2016 18:19:55",2,"Document semantics of `slaveLost` ""We should clarify the semantics of this callback: * Is it always invoked, or just a hint? * Can a slave ever come back from `slaveLost`? * What happens to persistent resources on a lost slave? The new HA framework development guide might be a good place to put (some of?) this information. ""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4377","01/14/2016 18:21:22",1,"Document units associated with resource types ""We should document the units associated with memory and disk resources.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4381","01/14/2016 19:37:20",3,"Improve upgrade compatibility documentation. ""Investigate and document upgrade compatibility for 0.27 release.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4383","01/14/2016 21:09:16",2,"Support docker runtime configuration env var from image. 
""We need to support env var configuration returned from docker image in mesos containerizer.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4385","01/14/2016 21:32:03",2,"Offers and InverseOffers cannot be accepted in the same ACCEPT call ""*Problem* * In {{Master::accept}}, {{validation::offer::validate}} returns an error when an {{InverseOffer}} is included in the list of {{OfferIDs}} in an {{ACCEPT}} call. * If an {{Offer}} is part of the same {{ACCEPT}}, the master sees {{error.isSome()}} and returns a {{TASK_LOST}} for normal offers. (https://github.com/apache/mesos/blob/fafbdca610d0a150b9fa9cb62d1c63cb7a6fdaf3/src/master/master.cpp#L3117) Here's a regression test: https://reviews.apache.org/r/42092/ *Proprosal* The question is whether we want to allow the mixing of {{Offers}} and {{InverseOffers}}. Arguments for mixing: * The design/structure of the maintenance originally intended to overload {{ACCEPT}} and {{DECLINE}} to take inverse offers. * Enforcing non-mixing may require breaking changes to {{scheduler.proto}}. Arguments against mixing: * Some semantics are difficult to explain. What does it mean to supply {{InverseOffers}} with {{Offer::Operations}}? What about {{DECLINE}} with {{Offers}} and {{InverseOffers}}, including a """"reason""""? * What happens if we presumably add a third type of offer? * Does it make sense to {{TASK_LOST}} valid normal offers if {{InverseOffers}} are invalid?""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4410","01/15/2016 23:34:28",3,"Introduce protobuf for quota set request. ""To document quota request JSON schema and simplify request processing, introduce a {{QuotaRequest}} protobuf wrapper.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4411","01/15/2016 23:43:54",3,"Traverse all roles for quota allocation. ""There might be a bug in how resources are allocated to multiple quota'ed roles if one role's quota is met. We need to investigate this behavior.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4417","01/17/2016 08:43:44",3,"Prevent allocator from crashing on successful recovery. ""There might be a bug that may crash the master as pointed out by [~bmahler] in https://reviews.apache.org/r/42222/: """," It looks like if we trip the resume call in addSlave, this delayed resume will crash the master due to the CHECK(paused) that currently resides in resume. ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4421","01/18/2016 22:34:19",3,"Document that /reserve, /create-volumes endpoints can return misleading ""success"" ""The docs for the {{/reserve}} endpoint say: This is not true: the master returns {{200}} when the request has been validated and a {{CheckpointResourcesMessage}} has been sent to the agent, but the master does not attempt to verify that the message has been received or that the agent successfully checkpointed. Same behavior applies to {{/unreserve}}, {{/create-volumes}}, and {{/destroy-volumes}}. We should _either_: 1. Accurately document what {{200}} return code means. 2. Change the implementation to wait for the agent's next checkpoint to succeed (and to include the effect of the operation) before returning success to the HTTP client."""," 200 OK: Success (the requested resources have been reserved). 
",0,0,0,1,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4425","01/19/2016 02:07:26",3,"Introduce filtering test abstractions for HTTP events to libprocess ""We need a test abstraction for {{HttpEvent}} similar to the already existing one's for {{DispatchEvent}}, {{MessageEvent}} in libprocess. The abstraction can look similar in semantics to the already existing {{FUTURE_DISPATCH}}/{{FUTURE_MESSAGE}}.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4434","01/19/2016 22:01:09",3,"Install 3rdparty package boost, glog, protobuf and picojson when installing Mesos ""Mesos modules depend on having these packages installed with the exact version as Mesos was compiled with.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4435","01/20/2016 00:05:30",3,"Update `Master::Http::stateSummary` to use `jsonify`. ""Update {{state-summary}} to use {{jsonify}} to stay consistent with {{state}} HTTP endpoint.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4437","01/20/2016 00:44:45",1,"Disable the test RegistryClientTest.BadTokenServerAddress. ""As we are retiring registry client, disable this test which looks flaky.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4438","01/20/2016 01:06:44",1,"Add 'dependency' message to 'AppcImageManifest' protobuf. ""AppcImageManifest protobuf currently lacks 'dependencies' which is necessary for image discovery.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4439","01/20/2016 01:10:56",1,"Fix appc CachedImage image validation ""Currently image validation is done assuming that the image's filename will have digest (SHA-512) information. This is not part of the spec (https://github.com/appc/spec/blob/master/spec/discovery.md). The spec specifies the tuple as unique identifier for discovering an image. ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4452","01/21/2016 22:07:25",2,"Improve documentation around roles, principals, authz, and reservations ""* What is the difference between a role and a principal? * Why do some ACL entities reference """"roles"""" but others reference """"principals""""? In a typical organization, what real-world entities would my roles vs. principals map to? The ACL documentation could use more information about the motivation of ACLs and examples of configuring ACLs to meet real-world security policies. * We should give some examples of making reservations when the role and principal are different, and why you would want to do that * We should add an example to the ACL page that includes setting ACLs for reservations and/or persistent volumes""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4454","01/22/2016 17:51:59",2,"Create common sha512 compute utility function. ""Add common utility function for computing digests. Start with `sha512` since its immediately needed by appc image fetcher. ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4487","01/24/2016 19:07:08",2,"Introduce status() interface in `Containerizer` ""In the Containerizer, during container isolation, the isolators end up modifying the state of the containers. 
Examples would be IP address allocation to a container by the 'network isolator, or net_cls handle allocation by the cgroup/net_cls isolator. Often times the state of the container, needs to be exposed to operators through the state.json end-point. For e.g. operators or frameworks might want to know the IP-address configured on a particular container, or the net_cls handle associated with a container to configure the right TC rules. However, at present, there is no clean interface for the slave to retrieve the state of a container from the Containerizer for any of the launched containers. Thus, we need to introduce a `status` interface in the `Containerizer` base class, in order for the slave to expose container state information in its state.json. ""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4488","01/24/2016 19:23:18",1,"Define a CgroupInfo protobuf to expose cgroup isolator configuration. ""Within `MesosContainerizer` we have an isolator associated with each linux cgroup subsystem. The isolators apply subsystem specific configuration on the containers before launching the containers. For e.g cgroup/net_cls isolator applies net_cls handles, cgroup/mem isolator applies memory quotas, cgroups/cpu-share isolator configures cpu shares. Currently, there is no message structure defined to capture the configuration information of the container, for each cgroup isolator that has been applied to the container. We therefore need to define a protobuf that can capture the cgroup configuration of each cgroup isolator that has been applied to the container. This protobuf will be filled in by the cgroup isolator and will be stored as part of `ContainerConfig` in the containerizer. ""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4489","01/24/2016 19:33:09",1,"The `cgroups/net_cls` isolator needs to expose handles in the ContainerStatus ""The `cgroup/net_cls` isolator is responsible for allocating network handles to containers launched within a net_cls cgroup. The `cgroup/net_cls` isolator needs to expose these handles to the containerizer as part of the `ContainerStatus` when the containerizer queries the status() method of the isolator. The information itself will go as part of a `CgroupInfo` protobuf that will be defined as part of MESOS-4488 . ""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4490","01/24/2016 19:42:20",3,"Get container status information in slave. ""As part of MESOS-4487 an interface will be introduce into the `Containerizer` to allow agents to retrieve container state information. The agent needs to use this interface to retrieve container state information during status updates from the executor. The container state information can be then use by the agent to expose various isolator specific configuration (for e.g., IP address allocated by network isolators, net_cls handles allocated by `cgroups/net_cls` isolator), that has been applied to the container, in the state.json endpoint. ""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4505","01/25/2016 23:50:52",3,"Hierarchical allocator performance is slow due to Quota ""Since we do not strip the non-scalar resources during the resource arithmetic for quota, the performance can degrade significantly, as currently resource arithmetic is expensive. 
One approach to resolving this is to filter the resources we use to perform this arithmetic to only use scalars. This is valid as quota can currently only be set for scalar resource types.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4512","01/26/2016 16:22:49",3,"Render quota status consistently with other endpoints. ""Currently quota status endpoint returns a collection of {{QuotaInfo}} protos converted to JSON. An example response looks like this: Presence of some fields, e.g. """"role"""", is misleading. To address this issue and make the output more informative, we should probably introduce a {{model()}} function for {{QuotaStatus}}."""," { """"infos"""": [ { """"role"""": """"role1"""", """"guarantee"""": [ { """"name"""": """"cpus"""", """"role"""": """"*"""", """"type"""": """"SCALAR"""", """"scalar"""": { """"value"""": 12 } }, { """"name"""": """"mem"""", """"role"""": """"*"""", """"type"""": """"SCALAR"""", """"scalar"""": { """"value"""": 6144 } } ] } ] } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4517","01/26/2016 21:35:48",3,"Introduce docker runtime isolator. ""Currently docker image default configuration are included in `ProvisionInfo`. We should grab necessary config from `ProvisionInfo` into `ContainerInfo`, and handle all these runtime informations inside of docker runtime isolator. Return a `ContainerLaunchInfo` containing `working_dir`, `env` and merged `commandInfo`, etc.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4520","01/26/2016 21:53:02",1,"Introduce a status() interface for isolators ""While launching a container mesos isolators end up configuring/modifying various properties of the container. For e.g., cgroup isolators (mem, cpu, net_cls) configure/change the properties associated with their respective subsystems before launching a container. Similary network isolator (net-modules, port mapping) configure the IP address and ports associated with a container. Currently, there are not interface in the isolator to extract the run time state of these properties for a given container. Therefore a status() method needs to be implemented in the isolators to allow the containerizer to extract the container status information from the isolator. ""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4526","01/27/2016 04:55:27",1,"Include the allocated portion of reserved resources in the role sorter for DRF. ""Reserved resources should be accounted for fairness calculation whether they are allocated or not, since they model a long or forever running task. That is, the effect of reserving resources is equivalent to launching a task in that the resources that make up the reservation are not available to other roles as non-revocable. In the short-term, we should at least account for the allocated portion of the reservation.""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4527","01/27/2016 05:03:30",5,"Roles can exceed limit allocation via reservations. ""Since unallocated reservations are not accounted towards the guarantee (which today is also a limit), we might exceed the limit.""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4528","01/27/2016 05:07:28",2,"Account for reserved resources in the quota guarantee check. 
""Reserved resources should be accounted for in the quota guarantee check so that frameworks cannot continually reserve resources to pull them out of the quota pool.""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4529","01/27/2016 05:15:14",2,"Update the allocator to not offer unreserved resources beyond quota. ""Eventually, we will want to offer unreserved resources as revocable beyond the role's quota. Rather than offering non-revocable resources beyond the role's quota's guarantee, in the short term, we choose to not offer resources beyond a role's quota.""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4530","01/27/2016 14:40:13",1,"NetClsIsolatorTest.ROOT_CGROUPS_NetClsIsolate is flaky ""While running the command One eventually gets the following output: """," sudo ./bin/mesos-tests.sh --gtest_filter=""""-CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_Listen:CgroupsAnyHierarchyMemoryPressureTest.ROOT_IncreaseRSS"""" --gtest_repeat=10 --gtest_break_on_failure [ RUN ] NetClsIsolatorTest.ROOT_CGROUPS_NetClsIsolate ../../src/tests/containerizer/isolator_tests.cpp:870: Failure containerizer: Could not create isolator 'cgroups/net_cls': Unexpected subsystems found attached to the hierarchy /sys/fs/cgroup/net_cls,net_prio [ FAILED ] NetClsIsolatorTest.ROOT_CGROUPS_NetClsIsolate (75 ms) ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4535","01/27/2016 20:12:33",1,"Logrotate ContainerLogger may not handle FD ownership correctly ""One of the patches for [MESOS-4136] introduced the {{FDType::OWNED}} enum for {{Subprocess::IO::FD}}. The way the logrotate module uses this is slightly incorrect: # The module starts a subprocess with an output {{Subprocess::PIPE()}}. # That pipe's FD is passed into another subprocess via {{Subprocess::IO::FD(pipe, IO::OWNED)}}. # When the second subprocess starts, the pipe's FD is closed in the parent. # When the first subprocess terminates, the existing code will try to close the pipe again. This effectively closes a random FD.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4542","01/28/2016 11:13:59",3,"MasterQuotaTest.AvailableResourcesAfterRescinding is flaky. ""Can be reproduced by running {{GLOG_v=1 GTEST_FILTER=""""MasterQuotaTest.AvailableResourcesAfterRescinding"""" ./bin/mesos-tests.sh --gtest_shuffle --gtest_break_on_failure --gtest_repeat=1000 --verbose}}. h5. Verbose log from a bad run: h5. 
Verbose log from a good run: """," [ RUN ] MasterQuotaTest.AvailableResourcesAfterRescinding I0128 12:20:27.568657 2080858880 resources.cpp:564] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000] Trying semicolon-delimited string format instead I0128 12:20:27.570142 2080858880 resources.cpp:564] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000] Trying semicolon-delimited string format instead I0128 12:20:27.583225 2080858880 leveldb.cpp:174] Opened db in 6241us I0128 12:20:27.584353 2080858880 leveldb.cpp:181] Compacted db in 1026us I0128 12:20:27.584429 2080858880 leveldb.cpp:196] Created db iterator in 12us I0128 12:20:27.584442 2080858880 leveldb.cpp:202] Seeked to beginning of db in 7us I0128 12:20:27.584453 2080858880 leveldb.cpp:271] Iterated through 0 keys in the db in 6us I0128 12:20:27.584475 2080858880 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0128 12:20:27.584918 300445696 recover.cpp:447] Starting replica recovery I0128 12:20:27.585113 300445696 recover.cpp:473] Replica is in EMPTY status I0128 12:20:27.585916 297226240 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (18274)@192.168.178.24:51278 I0128 12:20:27.586086 297762816 recover.cpp:193] Received a recover response from a replica in EMPTY status I0128 12:20:27.586449 297226240 recover.cpp:564] Updating replica status to STARTING I0128 12:20:27.587204 300445696 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 624us I0128 12:20:27.587242 300445696 replica.cpp:320] Persisted replica status to STARTING I0128 12:20:27.587376 299372544 recover.cpp:473] Replica is in STARTING status I0128 12:20:27.588050 300982272 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (18275)@192.168.178.24:51278 I0128 12:20:27.588235 300445696 recover.cpp:193] Received a recover response from a replica in STARTING status I0128 12:20:27.588572 297762816 recover.cpp:564] Updating replica status to VOTING I0128 12:20:27.588850 297226240 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 140us I0128 12:20:27.588879 297226240 replica.cpp:320] Persisted replica status to VOTING I0128 12:20:27.588975 299909120 recover.cpp:578] Successfully joined the Paxos group I0128 12:20:27.589154 299909120 recover.cpp:462] Recover process terminated I0128 12:20:27.599486 298835968 master.cpp:374] Master 531344bd-56f4-4e4f-8f6f-a6a9d36058c7 (alexr.fritz.box) started on 192.168.178.24:51278 I0128 12:20:27.599520 298835968 master.cpp:376] Flags at startup: --acls="""""""" --allocation_interval=""""50ms"""" --allocator=""""HierarchicalDRF"""" --authenticate=""""true"""" --authenticate_http=""""true"""" --authenticate_slaves=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/private/tmp/NlzPSo/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_slave_ping_timeouts=""""5"""" --quiet=""""false"""" --recovery_slave_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""25secs"""" --registry_strict=""""true"""" --roles=""""role1,role2"""" 
--root_submissions=""""true"""" --slave_ping_timeout=""""15secs"""" --slave_reregister_timeout=""""10mins"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/private/tmp/NlzPSo/master"""" --zk_session_timeout=""""10secs"""" I0128 12:20:27.599753 298835968 master.cpp:421] Master only allowing authenticated frameworks to register I0128 12:20:27.599769 298835968 master.cpp:426] Master only allowing authenticated slaves to register I0128 12:20:27.599781 298835968 credentials.hpp:35] Loading credentials for authentication from '/private/tmp/NlzPSo/credentials' I0128 12:20:27.600082 298835968 master.cpp:466] Using default 'crammd5' authenticator I0128 12:20:27.600163 298835968 master.cpp:535] Using default 'basic' HTTP authenticator I0128 12:20:27.600327 298835968 master.cpp:569] Authorization enabled W0128 12:20:27.600345 298835968 master.cpp:629] The '--roles' flag is deprecated. This flag will be removed in the future. See the Mesos 0.27 upgrade notes for more information I0128 12:20:27.600497 297762816 whitelist_watcher.cpp:77] No whitelist given I0128 12:20:27.600503 297226240 hierarchical.cpp:144] Initialized hierarchical allocator process I0128 12:20:27.601965 297226240 master.cpp:1710] The newly elected leader is master@192.168.178.24:51278 with id 531344bd-56f4-4e4f-8f6f-a6a9d36058c7 I0128 12:20:27.601995 297226240 master.cpp:1723] Elected as the leading master! I0128 12:20:27.602007 297226240 master.cpp:1468] Recovering from registrar I0128 12:20:27.602083 300445696 registrar.cpp:307] Recovering registrar I0128 12:20:27.602460 297226240 log.cpp:659] Attempting to start the writer I0128 12:20:27.603514 299909120 replica.cpp:493] Replica received implicit promise request from (18277)@192.168.178.24:51278 with proposal 1 I0128 12:20:27.603734 299909120 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 205us I0128 12:20:27.603768 299909120 replica.cpp:342] Persisted promised to 1 I0128 12:20:27.604194 299909120 coordinator.cpp:238] Coordinator attempting to fill missing positions I0128 12:20:27.605311 299372544 replica.cpp:388] Replica received explicit promise request from (18278)@192.168.178.24:51278 for position 0 with proposal 2 I0128 12:20:27.605468 299372544 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 133us I0128 12:20:27.605494 299372544 replica.cpp:712] Persisted action at 0 I0128 12:20:27.606441 298835968 replica.cpp:537] Replica received write request for position 0 from (18279)@192.168.178.24:51278 I0128 12:20:27.606492 298835968 leveldb.cpp:436] Reading position from leveldb took 29us I0128 12:20:27.606665 298835968 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 151us I0128 12:20:27.606688 298835968 replica.cpp:712] Persisted action at 0 I0128 12:20:27.607244 297226240 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 I0128 12:20:27.607409 297226240 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 152us I0128 12:20:27.607441 297226240 replica.cpp:712] Persisted action at 0 I0128 12:20:27.607457 297226240 replica.cpp:697] Replica learned NOP action at position 0 I0128 12:20:27.607853 297226240 log.cpp:675] Writer started with ending position 0 I0128 12:20:27.608649 299372544 leveldb.cpp:436] Reading position from leveldb took 158us I0128 12:20:27.609539 298835968 registrar.cpp:340] Successfully fetched the registry (0B) in 7.426816ms I0128 12:20:27.609763 298835968 registrar.cpp:439] Applied 1 operations in 54us; attempting to 
update the 'registry' I0128 12:20:27.610216 300982272 log.cpp:683] Attempting to append 186 bytes to the log I0128 12:20:27.610297 298835968 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 I0128 12:20:27.611016 299909120 replica.cpp:537] Replica received write request for position 1 from (18280)@192.168.178.24:51278 I0128 12:20:27.611188 299909120 leveldb.cpp:341] Persisting action (205 bytes) to leveldb took 153us I0128 12:20:27.611222 299909120 replica.cpp:712] Persisted action at 1 I0128 12:20:27.611843 299909120 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 I0128 12:20:27.612004 299909120 leveldb.cpp:341] Persisting action (207 bytes) to leveldb took 147us I0128 12:20:27.612035 299909120 replica.cpp:712] Persisted action at 1 I0128 12:20:27.612052 299909120 replica.cpp:697] Replica learned APPEND action at position 1 I0128 12:20:27.612742 300982272 registrar.cpp:484] Successfully updated the 'registry' in 2.924032ms I0128 12:20:27.612846 300982272 registrar.cpp:370] Successfully recovered registrar I0128 12:20:27.612936 298835968 log.cpp:702] Attempting to truncate the log to 1 I0128 12:20:27.613005 297762816 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 I0128 12:20:27.613323 298299392 master.cpp:1520] Recovered 0 slaves from the Registry (147B) ; allowing 10mins for slaves to re-register I0128 12:20:27.613364 298835968 hierarchical.cpp:171] Skipping recovery of hierarchical allocator: nothing to recover I0128 12:20:27.613966 300445696 replica.cpp:537] Replica received write request for position 2 from (18281)@192.168.178.24:51278 I0128 12:20:27.614131 300445696 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 151us I0128 12:20:27.614166 300445696 replica.cpp:712] Persisted action at 2 I0128 12:20:27.614660 299372544 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 I0128 12:20:27.614828 299372544 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 158us I0128 12:20:27.614876 299372544 leveldb.cpp:399] Deleting ~1 keys from leveldb took 28us I0128 12:20:27.614898 299372544 replica.cpp:712] Persisted action at 2 I0128 12:20:27.614915 299372544 replica.cpp:697] Replica learned TRUNCATE action at position 2 I0128 12:20:27.625591 2080858880 containerizer.cpp:143] Using isolation: posix/cpu,posix/mem,filesystem/posix I0128 12:20:27.629758 298299392 slave.cpp:192] Slave started on 871)@192.168.178.24:51278 I0128 12:20:27.629791 298299392 slave.cpp:193] Flags at startup: --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticatee=""""crammd5"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_gS9Qcf/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_auth_server=""""https://auth.docker.io"""" --docker_kill_orphans=""""true"""" --docker_puller_timeout=""""60"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_gS9Qcf/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" 
--gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/Users/alex/Projects/mesos/build/default/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;mem:1024;disk:1024;ports:[31000-32000]"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --version=""""false"""" --work_dir=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_gS9Qcf"""" I0128 12:20:27.630067 298299392 credentials.hpp:83] Loading credential for authentication from '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_gS9Qcf/credential' I0128 12:20:27.630223 298299392 slave.cpp:323] Slave using credential for: test-principal I0128 12:20:27.630360 298299392 resources.cpp:564] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000] Trying semicolon-delimited string format instead I0128 12:20:27.630818 298299392 slave.cpp:463] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0128 12:20:27.630869 298299392 slave.cpp:471] Slave attributes: [ ] I0128 12:20:27.630882 298299392 slave.cpp:476] Slave hostname: alexr.fritz.box I0128 12:20:27.631352 300982272 state.cpp:58] Recovering state from '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_gS9Qcf/meta' I0128 12:20:27.631515 299909120 status_update_manager.cpp:200] Recovering status update manager I0128 12:20:27.631702 298835968 containerizer.cpp:390] Recovering containerizer I0128 12:20:27.632589 297226240 provisioner.cpp:245] Provisioner recovery complete I0128 12:20:27.632807 298835968 slave.cpp:4495] Finished recovery I0128 12:20:27.633539 298835968 slave.cpp:4667] Querying resource estimator for oversubscribable resources I0128 12:20:27.633752 300445696 status_update_manager.cpp:174] Pausing sending status updates I0128 12:20:27.633754 298835968 slave.cpp:795] New master detected at master@192.168.178.24:51278 I0128 12:20:27.633806 298835968 slave.cpp:858] Authenticating with master master@192.168.178.24:51278 I0128 12:20:27.633824 298835968 slave.cpp:863] Using default CRAM-MD5 authenticatee I0128 12:20:27.633903 298835968 slave.cpp:831] Detecting new master I0128 12:20:27.633913 299372544 authenticatee.cpp:121] Creating new client SASL connection I0128 12:20:27.634016 298835968 slave.cpp:4681] Received oversubscribable resources from the resource estimator I0128 12:20:27.634076 297226240 master.cpp:5521] Authenticating slave(871)@192.168.178.24:51278 I0128 12:20:27.634130 299372544 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(1741)@192.168.178.24:51278 I0128 12:20:27.634255 297226240 authenticator.cpp:98] Creating new server SASL connection I0128 12:20:27.634348 300982272 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0128 12:20:27.634367 300982272 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0128 12:20:27.634454 298835968 authenticator.cpp:203] Received SASL authentication start I0128 12:20:27.634515 298835968 authenticator.cpp:325] Authentication requires more steps I0128 12:20:27.634572 298835968 
authenticatee.cpp:258] Received SASL authentication step I0128 12:20:27.634706 297226240 authenticator.cpp:231] Received SASL authentication step I0128 12:20:27.634757 297226240 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0128 12:20:27.634771 297226240 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0128 12:20:27.634793 297226240 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0128 12:20:27.634809 297226240 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0128 12:20:27.634819 297226240 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.634827 297226240 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.634893 297226240 authenticator.cpp:317] Authentication success I0128 12:20:27.634958 298835968 authenticatee.cpp:298] Authentication success I0128 12:20:27.635030 298299392 master.cpp:5551] Successfully authenticated principal 'test-principal' at slave(871)@192.168.178.24:51278 I0128 12:20:27.635079 300445696 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(1741)@192.168.178.24:51278 I0128 12:20:27.635195 299372544 slave.cpp:926] Successfully authenticated with master master@192.168.178.24:51278 I0128 12:20:27.635273 299372544 slave.cpp:1320] Will retry registration in 5.823453ms if necessary I0128 12:20:27.635365 299909120 master.cpp:4235] Registering slave at slave(871)@192.168.178.24:51278 (alexr.fritz.box) with id 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S0 I0128 12:20:27.635542 297762816 registrar.cpp:439] Applied 1 operations in 41us; attempting to update the 'registry' I0128 12:20:27.635889 299372544 log.cpp:683] Attempting to append 358 bytes to the log I0128 12:20:27.636011 298299392 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 I0128 12:20:27.636693 300982272 replica.cpp:537] Replica received write request for position 3 from (18295)@192.168.178.24:51278 I0128 12:20:27.636860 300982272 leveldb.cpp:341] Persisting action (377 bytes) to leveldb took 139us I0128 12:20:27.636885 300982272 replica.cpp:712] Persisted action at 3 I0128 12:20:27.637380 299909120 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 I0128 12:20:27.637547 299909120 leveldb.cpp:341] Persisting action (379 bytes) to leveldb took 132us I0128 12:20:27.637573 299909120 replica.cpp:712] Persisted action at 3 I0128 12:20:27.637589 299909120 replica.cpp:697] Replica learned APPEND action at position 3 I0128 12:20:27.638362 298835968 registrar.cpp:484] Successfully updated the 'registry' in 2.77504ms I0128 12:20:27.638589 300445696 log.cpp:702] Attempting to truncate the log to 3 I0128 12:20:27.638684 298299392 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 I0128 12:20:27.638825 300445696 slave.cpp:3435] Received ping from slave-observer(871)@192.168.178.24:51278 I0128 12:20:27.639081 300982272 hierarchical.cpp:473] Added slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S0 (alexr.fritz.box) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) I0128 
12:20:27.639117 299909120 master.cpp:4303] Registered slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S0 at slave(871)@192.168.178.24:51278 (alexr.fritz.box) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0128 12:20:27.639165 300982272 hierarchical.cpp:1403] No resources available to allocate! I0128 12:20:27.639168 297226240 slave.cpp:970] Registered with master master@192.168.178.24:51278; given slave ID 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S0 I0128 12:20:27.639189 297226240 fetcher.cpp:81] Clearing fetcher cache I0128 12:20:27.639183 300982272 hierarchical.cpp:1116] Performed allocation for slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S0 in 77us I0128 12:20:27.639348 297762816 status_update_manager.cpp:181] Resuming sending status updates I0128 12:20:27.639519 298835968 replica.cpp:537] Replica received write request for position 4 from (18296)@192.168.178.24:51278 I0128 12:20:27.639678 298835968 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 142us I0128 12:20:27.639708 298835968 replica.cpp:712] Persisted action at 4 I0128 12:20:27.640115 300982272 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 I0128 12:20:27.640276 300982272 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 137us I0128 12:20:27.640312 300982272 leveldb.cpp:399] Deleting ~2 keys from leveldb took 21us I0128 12:20:27.640326 300982272 replica.cpp:712] Persisted action at 4 I0128 12:20:27.640336 300982272 replica.cpp:697] Replica learned TRUNCATE action at position 4 I0128 12:20:27.642145 297226240 slave.cpp:993] Checkpointing SlaveInfo to '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_gS9Qcf/meta/slaves/531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S0/slave.info' I0128 12:20:27.643354 297226240 slave.cpp:1029] Forwarding total oversubscribed resources I0128 12:20:27.643458 300445696 master.cpp:4644] Received update of slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S0 at slave(871)@192.168.178.24:51278 (alexr.fritz.box) with total oversubscribed resources I0128 12:20:27.643710 298299392 hierarchical.cpp:531] Slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S0 (alexr.fritz.box) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) I0128 12:20:27.643769 298299392 hierarchical.cpp:1403] No resources available to allocate! 
I0128 12:20:27.643805 298299392 hierarchical.cpp:1116] Performed allocation for slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S0 in 78us I0128 12:20:27.644645 2080858880 containerizer.cpp:143] Using isolation: posix/cpu,posix/mem,filesystem/posix I0128 12:20:27.649093 297226240 slave.cpp:192] Slave started on 872)@192.168.178.24:51278 I0128 12:20:27.649138 297226240 slave.cpp:193] Flags at startup: --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticatee=""""crammd5"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_6ycfWv/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_auth_server=""""https://auth.docker.io"""" --docker_kill_orphans=""""true"""" --docker_puller_timeout=""""60"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_6ycfWv/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/Users/alex/Projects/mesos/build/default/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;mem:1024;disk:1024;ports:[31000-32000]"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --version=""""false"""" --work_dir=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_6ycfWv"""" I0128 12:20:27.649353 297226240 credentials.hpp:83] Loading credential for authentication from '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_6ycfWv/credential' I0128 12:20:27.649451 297226240 slave.cpp:323] Slave using credential for: test-principal I0128 12:20:27.649569 297226240 resources.cpp:564] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000] Trying semicolon-delimited string format instead I0128 12:20:27.650039 297226240 slave.cpp:463] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0128 12:20:27.650085 297226240 slave.cpp:471] Slave attributes: [ ] I0128 12:20:27.650096 297226240 slave.cpp:476] Slave hostname: alexr.fritz.box I0128 12:20:27.650509 299909120 state.cpp:58] Recovering state from '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_6ycfWv/meta' I0128 12:20:27.650699 298299392 status_update_manager.cpp:200] Recovering status update manager I0128 12:20:27.650701 300445696 hierarchical.cpp:1403] No resources available to allocate! 
I0128 12:20:27.650738 300445696 hierarchical.cpp:1096] Performed allocation for 1 slaves in 101us I0128 12:20:27.650887 297226240 containerizer.cpp:390] Recovering containerizer I0128 12:20:27.651747 299909120 provisioner.cpp:245] Provisioner recovery complete I0128 12:20:27.651974 300982272 slave.cpp:4495] Finished recovery I0128 12:20:27.653733 300982272 slave.cpp:4667] Querying resource estimator for oversubscribable resources I0128 12:20:27.653928 300982272 slave.cpp:795] New master detected at master@192.168.178.24:51278 I0128 12:20:27.653928 299372544 status_update_manager.cpp:174] Pausing sending status updates I0128 12:20:27.653975 300982272 slave.cpp:858] Authenticating with master master@192.168.178.24:51278 I0128 12:20:27.653991 300982272 slave.cpp:863] Using default CRAM-MD5 authenticatee I0128 12:20:27.654091 300982272 slave.cpp:831] Detecting new master I0128 12:20:27.654098 297226240 authenticatee.cpp:121] Creating new client SASL connection I0128 12:20:27.654216 300982272 slave.cpp:4681] Received oversubscribable resources from the resource estimator I0128 12:20:27.654276 297762816 master.cpp:5521] Authenticating slave(872)@192.168.178.24:51278 I0128 12:20:27.654350 299909120 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(1742)@192.168.178.24:51278 I0128 12:20:27.654498 298299392 authenticator.cpp:98] Creating new server SASL connection I0128 12:20:27.654602 300982272 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0128 12:20:27.654625 300982272 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0128 12:20:27.654700 299909120 authenticator.cpp:203] Received SASL authentication start I0128 12:20:27.654752 299909120 authenticator.cpp:325] Authentication requires more steps I0128 12:20:27.654819 299909120 authenticatee.cpp:258] Received SASL authentication step I0128 12:20:27.654940 299372544 authenticator.cpp:231] Received SASL authentication step I0128 12:20:27.654965 299372544 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0128 12:20:27.654978 299372544 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0128 12:20:27.654997 299372544 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0128 12:20:27.655012 299372544 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0128 12:20:27.655024 299372544 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.655031 299372544 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.655047 299372544 authenticator.cpp:317] Authentication success I0128 12:20:27.655143 299909120 authenticatee.cpp:298] Authentication success I0128 12:20:27.655120 297762816 master.cpp:5551] Successfully authenticated principal 'test-principal' at slave(872)@192.168.178.24:51278 I0128 12:20:27.655163 299372544 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(1742)@192.168.178.24:51278 I0128 12:20:27.655326 300445696 slave.cpp:926] Successfully authenticated with master master@192.168.178.24:51278 I0128 12:20:27.655465 300445696 slave.cpp:1320] 
Will retry registration in 13.985296ms if necessary I0128 12:20:27.655565 299909120 master.cpp:4235] Registering slave at slave(872)@192.168.178.24:51278 (alexr.fritz.box) with id 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S1 I0128 12:20:27.655823 300982272 registrar.cpp:439] Applied 1 operations in 64us; attempting to update the 'registry' I0128 12:20:27.656354 297226240 log.cpp:683] Attempting to append 527 bytes to the log I0128 12:20:27.656429 300445696 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 5 I0128 12:20:27.657187 300445696 replica.cpp:537] Replica received write request for position 5 from (18310)@192.168.178.24:51278 I0128 12:20:27.657429 300445696 leveldb.cpp:341] Persisting action (546 bytes) to leveldb took 224us I0128 12:20:27.657464 300445696 replica.cpp:712] Persisted action at 5 I0128 12:20:27.658007 300445696 replica.cpp:691] Replica received learned notice for position 5 from @0.0.0.0:0 I0128 12:20:27.658190 300445696 leveldb.cpp:341] Persisting action (548 bytes) to leveldb took 170us I0128 12:20:27.658223 300445696 replica.cpp:712] Persisted action at 5 I0128 12:20:27.658239 300445696 replica.cpp:697] Replica learned APPEND action at position 5 I0128 12:20:27.659104 300982272 registrar.cpp:484] Successfully updated the 'registry' in 3.227904ms I0128 12:20:27.659373 298835968 log.cpp:702] Attempting to truncate the log to 5 I0128 12:20:27.659446 298299392 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 6 I0128 12:20:27.659636 300982272 slave.cpp:3435] Received ping from slave-observer(872)@192.168.178.24:51278 I0128 12:20:27.659855 297226240 hierarchical.cpp:473] Added slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S1 (alexr.fritz.box) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) I0128 12:20:27.659960 297226240 hierarchical.cpp:1403] No resources available to allocate! 
I0128 12:20:27.659936 297762816 master.cpp:4303] Registered slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S1 at slave(872)@192.168.178.24:51278 (alexr.fritz.box) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0128 12:20:27.659981 297226240 hierarchical.cpp:1116] Performed allocation for slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S1 in 80us I0128 12:20:27.659986 299909120 slave.cpp:970] Registered with master master@192.168.178.24:51278; given slave ID 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S1 I0128 12:20:27.660013 299909120 fetcher.cpp:81] Clearing fetcher cache I0128 12:20:27.660092 297226240 replica.cpp:537] Replica received write request for position 6 from (18311)@192.168.178.24:51278 I0128 12:20:27.660246 297226240 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 131us I0128 12:20:27.660270 297226240 replica.cpp:712] Persisted action at 6 I0128 12:20:27.660454 300445696 status_update_manager.cpp:181] Resuming sending status updates I0128 12:20:27.660742 299372544 replica.cpp:691] Replica received learned notice for position 6 from @0.0.0.0:0 I0128 12:20:27.660924 299372544 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 209us I0128 12:20:27.661015 299372544 leveldb.cpp:399] Deleting ~2 keys from leveldb took 37us I0128 12:20:27.661039 299372544 replica.cpp:712] Persisted action at 6 I0128 12:20:27.661061 299372544 replica.cpp:697] Replica learned TRUNCATE action at position 6 I0128 12:20:27.661752 299909120 slave.cpp:993] Checkpointing SlaveInfo to '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_6ycfWv/meta/slaves/531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S1/slave.info' I0128 12:20:27.662113 299909120 slave.cpp:1029] Forwarding total oversubscribed resources I0128 12:20:27.662199 297762816 master.cpp:4644] Received update of slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S1 at slave(872)@192.168.178.24:51278 (alexr.fritz.box) with total oversubscribed resources I0128 12:20:27.662508 297762816 hierarchical.cpp:531] Slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S1 (alexr.fritz.box) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) I0128 12:20:27.662577 297762816 hierarchical.cpp:1403] No resources available to allocate! 
I0128 12:20:27.662590 297762816 hierarchical.cpp:1116] Performed allocation for slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S1 in 51us I0128 12:20:27.663261 2080858880 containerizer.cpp:143] Using isolation: posix/cpu,posix/mem,filesystem/posix I0128 12:20:27.669075 299372544 slave.cpp:192] Slave started on 873)@192.168.178.24:51278 I0128 12:20:27.669107 299372544 slave.cpp:193] Flags at startup: --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticatee=""""crammd5"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_eAr35P/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_auth_server=""""https://auth.docker.io"""" --docker_kill_orphans=""""true"""" --docker_puller_timeout=""""60"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_eAr35P/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/Users/alex/Projects/mesos/build/default/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;mem:1024;disk:1024;ports:[31000-32000]"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --version=""""false"""" --work_dir=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_eAr35P"""" I0128 12:20:27.669395 299372544 credentials.hpp:83] Loading credential for authentication from '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_eAr35P/credential' I0128 12:20:27.669497 299372544 slave.cpp:323] Slave using credential for: test-principal I0128 12:20:27.669631 299372544 resources.cpp:564] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000] Trying semicolon-delimited string format instead I0128 12:20:27.670105 299372544 slave.cpp:463] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0128 12:20:27.670146 299372544 slave.cpp:471] Slave attributes: [ ] I0128 12:20:27.670155 299372544 slave.cpp:476] Slave hostname: alexr.fritz.box I0128 12:20:27.670567 298835968 state.cpp:58] Recovering state from '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_eAr35P/meta' I0128 12:20:27.670752 300445696 status_update_manager.cpp:200] Recovering status update manager I0128 12:20:27.670913 300445696 containerizer.cpp:390] Recovering containerizer I0128 12:20:27.671908 297226240 provisioner.cpp:245] Provisioner recovery complete I0128 12:20:27.672124 298835968 slave.cpp:4495] Finished recovery I0128 12:20:27.673331 298835968 slave.cpp:4667] Querying resource estimator for oversubscribable resources I0128 12:20:27.673528 297762816 
status_update_manager.cpp:174] Pausing sending status updates I0128 12:20:27.673527 298835968 slave.cpp:795] New master detected at master@192.168.178.24:51278 I0128 12:20:27.673573 298835968 slave.cpp:858] Authenticating with master master@192.168.178.24:51278 I0128 12:20:27.673599 298835968 slave.cpp:863] Using default CRAM-MD5 authenticatee I0128 12:20:27.673671 298835968 slave.cpp:831] Detecting new master I0128 12:20:27.673686 298299392 authenticatee.cpp:121] Creating new client SASL connection I0128 12:20:27.673797 298835968 slave.cpp:4681] Received oversubscribable resources from the resource estimator I0128 12:20:27.673849 300982272 master.cpp:5521] Authenticating slave(873)@192.168.178.24:51278 I0128 12:20:27.673909 300445696 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(1743)@192.168.178.24:51278 I0128 12:20:27.674026 297226240 authenticator.cpp:98] Creating new server SASL connection I0128 12:20:27.674127 299909120 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0128 12:20:27.674154 299909120 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0128 12:20:27.674247 297762816 authenticator.cpp:203] Received SASL authentication start I0128 12:20:27.674293 297762816 authenticator.cpp:325] Authentication requires more steps I0128 12:20:27.674357 299909120 authenticatee.cpp:258] Received SASL authentication step I0128 12:20:27.674450 299909120 authenticator.cpp:231] Received SASL authentication step I0128 12:20:27.674471 299909120 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0128 12:20:27.674484 299909120 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0128 12:20:27.674505 299909120 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0128 12:20:27.674522 299909120 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0128 12:20:27.674535 299909120 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.674545 299909120 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.674558 299909120 authenticator.cpp:317] Authentication success I0128 12:20:27.674621 297762816 authenticatee.cpp:298] Authentication success I0128 12:20:27.674667 299372544 master.cpp:5551] Successfully authenticated principal 'test-principal' at slave(873)@192.168.178.24:51278 I0128 12:20:27.674734 298835968 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(1743)@192.168.178.24:51278 I0128 12:20:27.674832 298299392 slave.cpp:926] Successfully authenticated with master master@192.168.178.24:51278 I0128 12:20:27.674908 298299392 slave.cpp:1320] Will retry registration in 13.363505ms if necessary I0128 12:20:27.674993 297226240 master.cpp:4235] Registering slave at slave(873)@192.168.178.24:51278 (alexr.fritz.box) with id 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S2 I0128 12:20:27.675194 299372544 registrar.cpp:439] Applied 1 operations in 52us; attempting to update the 'registry' I0128 12:20:27.675604 300445696 log.cpp:683] Attempting to append 696 bytes to the log I0128 12:20:27.675693 297762816 
coordinator.cpp:348] Coordinator attempting to write APPEND action at position 7 I0128 12:20:27.676316 297226240 replica.cpp:537] Replica received write request for position 7 from (18325)@192.168.178.24:51278 I0128 12:20:27.676472 297226240 leveldb.cpp:341] Persisting action (715 bytes) to leveldb took 146us I0128 12:20:27.676506 297226240 replica.cpp:712] Persisted action at 7 I0128 12:20:27.677014 300982272 replica.cpp:691] Replica received learned notice for position 7 from @0.0.0.0:0 I0128 12:20:27.677176 300982272 leveldb.cpp:341] Persisting action (717 bytes) to leveldb took 138us I0128 12:20:27.677201 300982272 replica.cpp:712] Persisted action at 7 I0128 12:20:27.677211 300982272 replica.cpp:697] Replica learned APPEND action at position 7 I0128 12:20:27.678407 299909120 registrar.cpp:484] Successfully updated the 'registry' in 3.181056ms I0128 12:20:27.678652 300982272 log.cpp:702] Attempting to truncate the log to 7 I0128 12:20:27.678741 297762816 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 8 I0128 12:20:27.678907 300445696 slave.cpp:3435] Received ping from slave-observer(873)@192.168.178.24:51278 I0128 12:20:27.679098 297762816 hierarchical.cpp:473] Added slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S2 (alexr.fritz.box) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) I0128 12:20:27.679177 297762816 hierarchical.cpp:1403] No resources available to allocate! I0128 12:20:27.679195 297762816 hierarchical.cpp:1116] Performed allocation for slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S2 in 73us I0128 12:20:27.679214 299372544 slave.cpp:970] Registered with master master@192.168.178.24:51278; given slave ID 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S2 I0128 12:20:27.679186 298299392 master.cpp:4303] Registered slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S2 at slave(873)@192.168.178.24:51278 (alexr.fritz.box) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0128 12:20:27.679239 299372544 fetcher.cpp:81] Clearing fetcher cache I0128 12:20:27.679404 298835968 status_update_manager.cpp:181] Resuming sending status updates I0128 12:20:27.679461 300445696 replica.cpp:537] Replica received write request for position 8 from (18326)@192.168.178.24:51278 I0128 12:20:27.679652 300445696 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 182us I0128 12:20:27.679687 300445696 replica.cpp:712] Persisted action at 8 I0128 12:20:27.679913 299372544 slave.cpp:993] Checkpointing SlaveInfo to '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_eAr35P/meta/slaves/531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S2/slave.info' I0128 12:20:27.680150 298835968 replica.cpp:691] Replica received learned notice for position 8 from @0.0.0.0:0 I0128 12:20:27.680279 299372544 slave.cpp:1029] Forwarding total oversubscribed resources I0128 12:20:27.680302 298835968 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 140us I0128 12:20:27.680351 298835968 leveldb.cpp:399] Deleting ~2 keys from leveldb took 28us I0128 12:20:27.680371 298835968 replica.cpp:712] Persisted action at 8 I0128 12:20:27.680377 299372544 master.cpp:4644] Received update of slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S2 at slave(873)@192.168.178.24:51278 (alexr.fritz.box) with total oversubscribed resources I0128 12:20:27.680387 298835968 replica.cpp:697] Replica learned TRUNCATE action at position 8 I0128 12:20:27.680680 299909120 hierarchical.cpp:531] Slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S2 (alexr.fritz.box) updated with oversubscribed resources 
(total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) I0128 12:20:27.680749 299909120 hierarchical.cpp:1403] No resources available to allocate! I0128 12:20:27.680768 299909120 hierarchical.cpp:1116] Performed allocation for slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S2 in 60us I0128 12:20:27.682180 2080858880 sched.cpp:222] Version: 0.28.0 I0128 12:20:27.682505 298299392 sched.cpp:326] New master detected at master@192.168.178.24:51278 I0128 12:20:27.682551 298299392 sched.cpp:382] Authenticating with master master@192.168.178.24:51278 I0128 12:20:27.682566 298299392 sched.cpp:389] Using default CRAM-MD5 authenticatee I0128 12:20:27.682714 300982272 authenticatee.cpp:121] Creating new client SASL connection I0128 12:20:27.682870 300445696 master.cpp:5521] Authenticating scheduler-05b9ac89-54ac-4d24-84e7-bb9cedfa77c4@192.168.178.24:51278 I0128 12:20:27.682965 298835968 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(1744)@192.168.178.24:51278 I0128 12:20:27.683362 297762816 authenticator.cpp:98] Creating new server SASL connection I0128 12:20:27.683498 297226240 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0128 12:20:27.683526 297226240 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0128 12:20:27.683636 298299392 authenticator.cpp:203] Received SASL authentication start I0128 12:20:27.683687 298299392 authenticator.cpp:325] Authentication requires more steps I0128 12:20:27.683758 297226240 authenticatee.cpp:258] Received SASL authentication step I0128 12:20:27.683857 297226240 authenticator.cpp:231] Received SASL authentication step I0128 12:20:27.683877 297226240 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0128 12:20:27.683897 297226240 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0128 12:20:27.683917 297226240 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0128 12:20:27.683930 297226240 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0128 12:20:27.683940 297226240 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.683948 297226240 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.683962 297226240 authenticator.cpp:317] Authentication success I0128 12:20:27.684010 299909120 authenticatee.cpp:298] Authentication success I0128 12:20:27.684079 300445696 master.cpp:5551] Successfully authenticated principal 'test-principal' at scheduler-05b9ac89-54ac-4d24-84e7-bb9cedfa77c4@192.168.178.24:51278 I0128 12:20:27.684172 300982272 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(1744)@192.168.178.24:51278 I0128 12:20:27.684288 298835968 sched.cpp:471] Successfully authenticated with master master@192.168.178.24:51278 I0128 12:20:27.684306 298835968 sched.cpp:780] Sending SUBSCRIBE call to master@192.168.178.24:51278 I0128 12:20:27.684378 298835968 sched.cpp:813] Will retry registration in 1.868624245secs if necessary I0128 12:20:27.684437 299909120 master.cpp:2278] Received SUBSCRIBE call for framework 'framework(18327)' at 
scheduler-05b9ac89-54ac-4d24-84e7-bb9cedfa77c4@192.168.178.24:51278 W0128 12:20:27.684456 299909120 master.cpp:2285] Framework at scheduler-05b9ac89-54ac-4d24-84e7-bb9cedfa77c4@192.168.178.24:51278 (authenticated as 'test-principal') does not set 'principal' in FrameworkInfo I0128 12:20:27.684471 299909120 master.cpp:1749] Authorizing framework principal '' to receive offers for role 'role1' I0128 12:20:27.684675 300445696 master.cpp:2349] Subscribing framework framework(18327) with checkpointing disabled and capabilities [ ] I0128 12:20:27.684921 297226240 hierarchical.cpp:265] Added framework framework(18327) I0128 12:20:27.685066 299909120 sched.cpp:707] Framework registered with framework(18327) I0128 12:20:27.685122 299909120 sched.cpp:721] Scheduler::registered took 48us W0128 12:20:27.685184 299372544 slave.cpp:2236] Ignoring updating pid for framework framework(18327) because it does not exist W0128 12:20:27.685223 299909120 slave.cpp:2236] Ignoring updating pid for framework framework(18327) because it does not exist W0128 12:20:27.685281 297762816 slave.cpp:2236] Ignoring updating pid for framework framework(18327) because it does not exist I0128 12:20:27.685915 297226240 hierarchical.cpp:1498] No inverse offers to send out! I0128 12:20:27.685945 297226240 hierarchical.cpp:1096] Performed allocation for 3 slaves in 1015us I0128 12:20:27.686295 299372544 master.cpp:5350] Sending 3 offers to framework framework(18327) (framework(18327)) at scheduler-05b9ac89-54ac-4d24-84e7-bb9cedfa77c4@192.168.178.24:51278 I0128 12:20:27.686750 298299392 sched.cpp:877] Scheduler::resourceOffers took 161us I0128 12:20:27.688932 2080858880 sched.cpp:222] Version: 0.28.0 I0128 12:20:27.689265 298299392 sched.cpp:326] New master detected at master@192.168.178.24:51278 I0128 12:20:27.689319 298299392 sched.cpp:382] Authenticating with master master@192.168.178.24:51278 I0128 12:20:27.689333 298299392 sched.cpp:389] Using default CRAM-MD5 authenticatee I0128 12:20:27.689450 300445696 authenticatee.cpp:121] Creating new client SASL connection I0128 12:20:27.689577 300982272 master.cpp:5521] Authenticating scheduler-7953b576-d913-4dce-993d-375e2fb34aba@192.168.178.24:51278 I0128 12:20:27.689654 298835968 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(1745)@192.168.178.24:51278 I0128 12:20:27.689810 299909120 authenticator.cpp:98] Creating new server SASL connection I0128 12:20:27.689904 300982272 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0128 12:20:27.689931 300982272 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0128 12:20:27.690032 297762816 authenticator.cpp:203] Received SASL authentication start I0128 12:20:27.690073 297762816 authenticator.cpp:325] Authentication requires more steps I0128 12:20:27.690166 298299392 authenticatee.cpp:258] Received SASL authentication step I0128 12:20:27.690271 298835968 authenticator.cpp:231] Received SASL authentication step I0128 12:20:27.690291 298835968 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0128 12:20:27.690304 298835968 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0128 12:20:27.690325 298835968 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0128 12:20:27.690340 298835968 auxprop.cpp:107] Request to lookup properties for user: 
'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0128 12:20:27.690351 298835968 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.690382 298835968 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.690393 298835968 authenticator.cpp:317] Authentication success I0128 12:20:27.690402 2080858880 sched.cpp:222] Version: 0.28.0 I0128 12:20:27.690454 299909120 authenticatee.cpp:298] Authentication success I0128 12:20:27.690512 299372544 master.cpp:5551] Successfully authenticated principal 'test-principal' at scheduler-7953b576-d913-4dce-993d-375e2fb34aba@192.168.178.24:51278 I0128 12:20:27.690639 300982272 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(1745)@192.168.178.24:51278 I0128 12:20:27.690727 299909120 sched.cpp:471] Successfully authenticated with master master@192.168.178.24:51278 I0128 12:20:27.690744 299909120 sched.cpp:780] Sending SUBSCRIBE call to master@192.168.178.24:51278 I0128 12:20:27.690832 300445696 sched.cpp:326] New master detected at master@192.168.178.24:51278 I0128 12:20:27.690878 299909120 sched.cpp:813] Will retry registration in 370.645806ms if necessary I0128 12:20:27.690896 300445696 sched.cpp:382] Authenticating with master master@192.168.178.24:51278 I0128 12:20:27.690909 300445696 sched.cpp:389] Using default CRAM-MD5 authenticatee I0128 12:20:27.690932 300982272 master.cpp:2278] Received SUBSCRIBE call for framework 'framework(18328)' at scheduler-7953b576-d913-4dce-993d-375e2fb34aba@192.168.178.24:51278 W0128 12:20:27.690953 300982272 master.cpp:2285] Framework at scheduler-7953b576-d913-4dce-993d-375e2fb34aba@192.168.178.24:51278 (authenticated as 'test-principal') does not set 'principal' in FrameworkInfo I0128 12:20:27.690979 300982272 master.cpp:1749] Authorizing framework principal '' to receive offers for role 'role2' I0128 12:20:27.691057 299372544 authenticatee.cpp:121] Creating new client SASL connection I0128 12:20:27.691160 300982272 master.cpp:5521] Authenticating scheduler-1ed5257a-cda6-4cef-aaf1-87934636893e@192.168.178.24:51278 I0128 12:20:27.691257 298299392 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(1746)@192.168.178.24:51278 I0128 12:20:27.691284 300982272 master.cpp:2349] Subscribing framework framework(18328) with checkpointing disabled and capabilities [ ] I0128 12:20:27.691396 297762816 authenticator.cpp:98] Creating new server SASL connection I0128 12:20:27.691516 300445696 hierarchical.cpp:265] Added framework framework(18328) I0128 12:20:27.691515 299372544 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0128 12:20:27.691553 299372544 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0128 12:20:27.691562 298299392 sched.cpp:707] Framework registered with framework(18328) I0128 12:20:27.691599 300445696 hierarchical.cpp:1403] No resources available to allocate! W0128 12:20:27.691620 297226240 slave.cpp:2236] Ignoring updating pid for framework framework(18328) because it does not exist I0128 12:20:27.691627 300445696 hierarchical.cpp:1498] No inverse offers to send out! 
I0128 12:20:27.691629 298835968 authenticator.cpp:203] Received SASL authentication start W0128 12:20:27.691661 299909120 slave.cpp:2236] Ignoring updating pid for framework framework(18328) because it does not exist I0128 12:20:27.691656 300445696 hierarchical.cpp:1096] Performed allocation for 3 slaves in 128us I0128 12:20:27.691653 298299392 sched.cpp:721] Scheduler::registered took 86us I0128 12:20:27.691680 298835968 authenticator.cpp:325] Authentication requires more steps W0128 12:20:27.691718 297762816 slave.cpp:2236] Ignoring updating pid for framework framework(18328) because it does not exist I0128 12:20:27.691797 298299392 authenticatee.cpp:258] Received SASL authentication step I0128 12:20:27.691896 299372544 authenticator.cpp:231] Received SASL authentication step I0128 12:20:27.691917 299372544 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0128 12:20:27.691929 299372544 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0128 12:20:27.691947 299372544 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0128 12:20:27.691962 299372544 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0128 12:20:27.691973 299372544 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.691982 299372544 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.691993 299372544 authenticator.cpp:317] Authentication success I0128 12:20:27.692075 299909120 authenticatee.cpp:298] Authentication success I0128 12:20:27.692142 298299392 master.cpp:5551] Successfully authenticated principal 'test-principal' at scheduler-1ed5257a-cda6-4cef-aaf1-87934636893e@192.168.178.24:51278 I0128 12:20:27.692203 298835968 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(1746)@192.168.178.24:51278 I0128 12:20:27.692313 297762816 sched.cpp:471] Successfully authenticated with master master@192.168.178.24:51278 I0128 12:20:27.692330 297762816 sched.cpp:780] Sending SUBSCRIBE call to master@192.168.178.24:51278 I0128 12:20:27.692392 297762816 sched.cpp:813] Will retry registration in 284.450047ms if necessary I0128 12:20:27.692456 299909120 master.cpp:2278] Received SUBSCRIBE call for framework 'framework(18329)' at scheduler-1ed5257a-cda6-4cef-aaf1-87934636893e@192.168.178.24:51278 W0128 12:20:27.692473 299909120 master.cpp:2285] Framework at scheduler-1ed5257a-cda6-4cef-aaf1-87934636893e@192.168.178.24:51278 (authenticated as 'test-principal') does not set 'principal' in FrameworkInfo I0128 12:20:27.692486 299909120 master.cpp:1749] Authorizing framework principal '' to receive offers for role 'role2' I0128 12:20:27.692632 297762816 master.cpp:2349] Subscribing framework framework(18329) with checkpointing disabled and capabilities [ ] I0128 12:20:27.692858 298299392 hierarchical.cpp:265] Added framework framework(18329) I0128 12:20:27.692924 299372544 sched.cpp:707] Framework registered with framework(18329) W0128 12:20:27.692945 297226240 slave.cpp:2236] Ignoring updating pid for framework framework(18329) because it does not exist I0128 12:20:27.692952 298299392 hierarchical.cpp:1403] No 
resources available to allocate! W0128 12:20:27.692972 297762816 slave.cpp:2236] Ignoring updating pid for framework framework(18329) because it does not exist I0128 12:20:27.692982 298299392 hierarchical.cpp:1498] No inverse offers to send out! I0128 12:20:27.692986 299372544 sched.cpp:721] Scheduler::registered took 52us W0128 12:20:27.693003 297226240 slave.cpp:2236] Ignoring updating pid for framework framework(18329) because it does not exist I0128 12:20:27.693001 298299392 hierarchical.cpp:1096] Performed allocation for 3 slaves in 124us I0128 12:20:27.693220 2080858880 resources.cpp:564] Parsing resources as JSON failed: cpus:1;mem:512 Trying semicolon-delimited string format instead I0128 12:20:27.694814 297762816 process.cpp:3141] Handling HTTP event for process 'master' with path: '/master/quota' I0128 12:20:27.695245 299372544 http.cpp:503] HTTP POST for /master/quota from 192.168.178.24:51641 I0128 12:20:27.695274 299372544 quota_handler.cpp:211] Setting quota from request: '{""""force"""":false,""""guarantee"""":[{""""name"""":""""cpus"""",""""role"""":""""*"""",""""scalar"""":{""""value"""":1.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""role"""":""""*"""",""""scalar"""":{""""value"""":512.0},""""type"""":""""SCALAR""""}],""""role"""":""""role2""""}' I0128 12:20:27.695569 299372544 quota_handler.cpp:446] Authorizing principal 'test-principal' to request quota for role 'role2' I0128 12:20:27.695719 299372544 quota_handler.cpp:70] Performing capacity heuristic check for a set quota request I0128 12:20:27.695996 297226240 registrar.cpp:439] Applied 1 operations in 77us; attempting to update the 'registry' I0128 12:20:27.696362 300445696 log.cpp:683] Attempting to append 770 bytes to the log I0128 12:20:27.696458 299372544 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 9 I0128 12:20:27.697157 297226240 replica.cpp:537] Replica received write request for position 9 from (18332)@192.168.178.24:51278 I0128 12:20:27.697350 297226240 leveldb.cpp:341] Persisting action (789 bytes) to leveldb took 182us I0128 12:20:27.697382 297226240 replica.cpp:712] Persisted action at 9 I0128 12:20:27.697803 297762816 replica.cpp:691] Replica received learned notice for position 9 from @0.0.0.0:0 I0128 12:20:27.697944 297762816 leveldb.cpp:341] Persisting action (791 bytes) to leveldb took 127us I0128 12:20:27.697968 297762816 replica.cpp:712] Persisted action at 9 I0128 12:20:27.697978 297762816 replica.cpp:697] Replica learned APPEND action at position 9 I0128 12:20:27.699586 297762816 registrar.cpp:484] Successfully updated the 'registry' in 3.547904ms I0128 12:20:27.699905 300982272 log.cpp:702] Attempting to truncate the log to 9 I0128 12:20:27.700006 298835968 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 10 I0128 12:20:27.699996 297762816 hierarchical.cpp:1024] Set quota cpus(*):1; mem(*):512 for role 'role2' I0128 12:20:27.700466 300982272 sched.cpp:903] Rescinded offer 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-O0 I0128 12:20:27.700542 300982272 sched.cpp:914] Scheduler::offerRescinded took 59us I0128 12:20:27.700582 297762816 hierarchical.cpp:1403] No resources available to allocate! I0128 12:20:27.700615 297762816 hierarchical.cpp:1498] No inverse offers to send out! 
I0128 12:20:27.700635 297762816 hierarchical.cpp:1096] Performed allocation for 3 slaves in 607us I0128 12:20:27.700886 297762816 hierarchical.cpp:892] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S2 from framework framework(18327) I0128 12:20:27.701004 299372544 sched.cpp:903] Rescinded offer 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-O1 I0128 12:20:27.701046 299372544 sched.cpp:914] Scheduler::offerRescinded took 29us I0128 12:20:27.701098 298299392 replica.cpp:537] Replica received write request for position 10 from (18333)@192.168.178.24:51278 I0128 12:20:27.701093 297762816 hierarchical.cpp:892] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S1 from framework framework(18327) I0128 12:20:27.701261 298299392 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 237us I0128 12:20:27.701382 298299392 replica.cpp:712] Persisted action at 10 I0128 12:20:27.702061 299909120 replica.cpp:691] Replica received learned notice for position 10 from @0.0.0.0:0 I0128 12:20:27.702244 299909120 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 171us I0128 12:20:27.702301 299909120 leveldb.cpp:399] Deleting ~2 keys from leveldb took 34us I0128 12:20:27.702322 299909120 replica.cpp:712] Persisted action at 10 I0128 12:20:27.702339 299909120 replica.cpp:697] Replica learned TRUNCATE action at position 10 I0128 12:20:27.702792 297762816 hierarchical.cpp:1498] No inverse offers to send out! I0128 12:20:27.702831 297762816 hierarchical.cpp:1096] Performed allocation for 3 slaves in 1644us I0128 12:20:27.703033 299909120 master.cpp:5350] Sending 1 offers to framework framework(18327) (framework(18327)) at scheduler-05b9ac89-54ac-4d24-84e7-bb9cedfa77c4@192.168.178.24:51278 I0128 12:20:27.703428 299909120 master.cpp:5350] Sending 1 offers to framework framework(18328) (framework(18328)) at scheduler-7953b576-d913-4dce-993d-375e2fb34aba@192.168.178.24:51278 GMOCK WARNING: Uninteresting mock function call - returning directly. Function call: resourceOffers(0x7fff5bdd4c88, @0x111e86250 { 144-byte object <90-06 1B-0B 01-00 00-00 00-00 00-00 00-00 00-00 00-A9 61-FB FD-7F 00-00 70-A9 61-FB FD-7F 00-00 B0-A9 61-FB FD-7F 00-00 F0-B0 61-FB FD-7F 00-00 10-B1 61-FB FD-7F 00-00../../../src/tests/master_quota_tests.cpp:899: Failure Mock function called more times than expected - returning directly. Function call: resourceOffers(0x7fff5bdd4dd0, @0x111c7a250 { 144-byte object <90-06 1B-0B 01-00 00-00 00-00 00-00 00-00 00-00 30-C1 C4-FC FD-7F 00-00 80-C1 4A-FB FD-7F 00-00 F0-BD 48-FB FD-7F 00-00 E0-64 4A-FB FD-7F 00-00 D0-56 C0-FC FD-7F 00-00 C0-94 C8-FC FD-7F 00-00 ... 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 FD-7F 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 1F-00 00-00> }) Expected: to be called once Actual: called twice - over-saturated and active I0128 12:20:27.704130 297762816 master.cpp:1025] Master terminating 7*** Aborted at 1453980027 (unix time) try """"date -d @1453980027"""" if you are using GNU date *** 0-D6 60-FB FD-7F 00-00 ... 
00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 FD-7F 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 1F-00 00-00> }) Stack trace: I0128 12:20:27.704699 299372544 hierarchical.cpp:505] Removed slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S2 I0128 12:20:27.705009 297226240 hierarchical.cpp:505] Removed slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S1 I0128 12:20:27.705271 300445696 sched.cpp:877] Scheduler::resourceOffers took 1623us I0128 12:20:27.705306 299909120 hierarchical.cpp:505] Removed slave 531344bd-56f4-4e4f-8f6f-a6a9d36058c7-S0 I0128 12:20:27.705575 298835968 hierarchical.cpp:326] Removed framework framework(18329) I0128 12:20:27.705803 298835968 hierarchical.cpp:326] Removed framework framework(18328) I0128 12:20:27.705878 300445696 slave.cpp:3481] master@192.168.178.24:51278 exited W0128 12:20:27.705901 300445696 slave.cpp:3484] Master disconnected! Waiting for a new master to be elected I0128 12:20:27.705904 300982272 slave.cpp:3481] master@192.168.178.24:51278 exited W0128 12:20:27.705921 300982272 slave.cpp:3484] Master disconnected! Waiting for a new master to be elected I0128 12:20:27.705955 300445696 slave.cpp:3481] master@192.168.178.24:51278 exited W0128 12:20:27.705971 300445696 slave.cpp:3484] Master disconnected! Waiting for a new master to be elected I0128 12:20:27.706085 298835968 hierarchical.cpp:326] Removed framework framework(18327) I0128 12:20:27.709558 299372544 slave.cpp:667] Slave terminating I0128 12:20:27.712924 299372544 slave.cpp:667] Slave terminating I0128 12:20:27.716169 299909120 slave.cpp:667] Slave terminating PC: @ 0x1054638f5 testing::UnitTest::AddTestPartResult() *** SIGSEGV (@0x0) received by PID 98630 (TID 0x111c7b000) stack trace: *** @ 0x7fff91018f1a _sigtramp @ 0x7fff93077187 malloc @ 0x1054630a7 testing::internal::AssertHelper::operator=() @ 0x1054cebcb testing::internal::GoogleTestFailureReporter::ReportFailure() @ 0x103f20058 testing::internal::Expect() @ 0x1054a2d86 testing::internal::UntypedFunctionMockerBase::UntypedInvokeWith() @ 0x104b3ec18 testing::internal::FunctionMockerBase<>::InvokeWith() @ 0x104b3ebdb testing::internal::FunctionMocker<>::Invoke() @ 0x104b068dd mesos::internal::tests::MockScheduler::resourceOffers() I0128 12:20:27.756392 299909120 hierarchical.cpp:1403] No resources available to allocate! I0128 12:20:27.756412 299909120 hierarchical.cpp:1096] Performed allocation for 0 slaves in 81us I0128 12:20:27.808101 297226240 hierarchical.cpp:1403] No resources available to allocate! I0128 12:20:27.808130 297226240 hierarchical.cpp:1096] Performed allocation for 0 slaves in 99us @ 0x108d500fb mesos::internal::SchedulerProcess::resourceOffers() @ 0x108d78d0f ProtobufProcess<>::handler2<>() @ 0x108d7ba6b _ZNSt3__110__function6__funcINS_6__bindIPFvPN5mesos8internal16SchedulerProcessEMS5_FvRKN7process4UPIDERKNS_6vectorINS3_5OfferENS_9allocatorISC_EEEERKNSB_INS_12basic_stringIcNS_11char_traitsIcEENSD_IcEEEENSD_ISM_EEEEEMNS4_21ResourceOffersMessageEKFRKN6google8protobuf16RepeatedPtrFieldISC_EEvEMST_KFRKNSW_ISM_EEvESA_RKSM_EJRS6_RSS_RS11_RS16_RNS_12placeholders4__phILi1EEERNS1G_ILi2EEEEEENSD_IS1L_EEFvSA_S18_EEclESA_S18_ @ 0x1080af494 std::__1::function<>::operator()() @ 0x108d4932e ProtobufProcess<>::visit() I0128 12:20:27.858326 300445696 hierarchical.cpp:1403] No resources available to allocate! 
I0128 12:20:27.858350 300445696 hierarchical.cpp:1096] Performed allocation for 0 slaves in 99us @ 0x108d49477 ProtobufProcess<>::visit() @ 0x108b099be process::MessageEvent::visit() @ 0x108097df1 process::ProcessBase::serve() @ 0x10a86d4d0 process::ProcessManager::resume() @ 0x10a87d8af process::ProcessManager::init_threads()::$_1::operator()() @ 0x10a87d522 _ZNSt3__114__thread_proxyINS_5tupleIJNS_6__bindIZN7process14ProcessManager12init_threadsEvE3$_1JNS_17reference_wrapperIKNS_6atomicIbEEEEEEEEEEEEPvSD_ @ 0x7fff8f415268 _pthread_body @ 0x7fff8f4151e5 _pthread_start @ 0x7fff8f41341d thread_start zsh: segmentation fault GLOG_v=1 GTEST_FILTER=""""MasterQuotaTest.AvailableResourcesAfterRescinding"""" [ RUN ] MasterQuotaTest.AvailableResourcesAfterRescinding I0128 12:20:27.320648 2080858880 resources.cpp:564] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000] Trying semicolon-delimited string format instead I0128 12:20:27.321002 2080858880 resources.cpp:564] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000] Trying semicolon-delimited string format instead I0128 12:20:27.327103 2080858880 leveldb.cpp:174] Opened db in 3111us I0128 12:20:27.327848 2080858880 leveldb.cpp:181] Compacted db in 700us I0128 12:20:27.327899 2080858880 leveldb.cpp:196] Created db iterator in 17us I0128 12:20:27.327922 2080858880 leveldb.cpp:202] Seeked to beginning of db in 11us I0128 12:20:27.327940 2080858880 leveldb.cpp:271] Iterated through 0 keys in the db in 11us I0128 12:20:27.327973 2080858880 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0128 12:20:27.328737 298299392 recover.cpp:447] Starting replica recovery I0128 12:20:27.329010 298299392 recover.cpp:473] Replica is in EMPTY status I0128 12:20:27.330369 299909120 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (18211)@192.168.178.24:51278 I0128 12:20:27.330749 297226240 recover.cpp:193] Received a recover response from a replica in EMPTY status I0128 12:20:27.331166 297762816 recover.cpp:564] Updating replica status to STARTING I0128 12:20:27.331825 300445696 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 437us I0128 12:20:27.331897 300445696 replica.cpp:320] Persisted replica status to STARTING I0128 12:20:27.332077 300445696 recover.cpp:473] Replica is in STARTING status I0128 12:20:27.332474 298299392 master.cpp:374] Master bd6f3479-18eb-478d-8a08-a41364ecbd05 (alexr.fritz.box) started on 192.168.178.24:51278 I0128 12:20:27.332520 298299392 master.cpp:376] Flags at startup: --acls="""""""" --allocation_interval=""""50ms"""" --allocator=""""HierarchicalDRF"""" --authenticate=""""true"""" --authenticate_http=""""true"""" --authenticate_slaves=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/private/tmp/rkmzt7/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_slave_ping_timeouts=""""5"""" --quiet=""""false"""" --recovery_slave_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""25secs"""" --registry_strict=""""true"""" --roles=""""role1,role2"""" --root_submissions=""""true"""" 
--slave_ping_timeout=""""15secs"""" --slave_reregister_timeout=""""10mins"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/private/tmp/rkmzt7/master"""" --zk_session_timeout=""""10secs"""" I0128 12:20:27.332871 298299392 master.cpp:421] Master only allowing authenticated frameworks to register I0128 12:20:27.332898 298299392 master.cpp:426] Master only allowing authenticated slaves to register I0128 12:20:27.332957 298299392 credentials.hpp:35] Loading credentials for authentication from '/private/tmp/rkmzt7/credentials' I0128 12:20:27.333255 300982272 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (18212)@192.168.178.24:51278 I0128 12:20:27.333359 298299392 master.cpp:466] Using default 'crammd5' authenticator I0128 12:20:27.333549 298299392 master.cpp:535] Using default 'basic' HTTP authenticator I0128 12:20:27.333550 299909120 recover.cpp:193] Received a recover response from a replica in STARTING status I0128 12:20:27.333890 298299392 master.cpp:569] Authorization enabled W0128 12:20:27.333930 298299392 master.cpp:629] The '--roles' flag is deprecated. This flag will be removed in the future. See the Mesos 0.27 upgrade notes for more information I0128 12:20:27.334220 298835968 recover.cpp:564] Updating replica status to VOTING I0128 12:20:27.334226 300445696 whitelist_watcher.cpp:77] No whitelist given I0128 12:20:27.334276 300982272 hierarchical.cpp:144] Initialized hierarchical allocator process I0128 12:20:27.334724 300982272 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 242us I0128 12:20:27.334772 300982272 replica.cpp:320] Persisted replica status to VOTING I0128 12:20:27.334895 298835968 recover.cpp:578] Successfully joined the Paxos group I0128 12:20:27.335198 298835968 recover.cpp:462] Recover process terminated I0128 12:20:27.336282 298299392 master.cpp:1710] The newly elected leader is master@192.168.178.24:51278 with id bd6f3479-18eb-478d-8a08-a41364ecbd05 I0128 12:20:27.336321 298299392 master.cpp:1723] Elected as the leading master! 
I0128 12:20:27.336340 298299392 master.cpp:1468] Recovering from registrar I0128 12:20:27.336454 300982272 registrar.cpp:307] Recovering registrar I0128 12:20:27.336956 299909120 log.cpp:659] Attempting to start the writer I0128 12:20:27.338927 298299392 replica.cpp:493] Replica received implicit promise request from (18214)@192.168.178.24:51278 with proposal 1 I0128 12:20:27.339247 298299392 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 291us I0128 12:20:27.339287 298299392 replica.cpp:342] Persisted promised to 1 I0128 12:20:27.340066 297226240 coordinator.cpp:238] Coordinator attempting to fill missing positions I0128 12:20:27.341961 297226240 replica.cpp:388] Replica received explicit promise request from (18215)@192.168.178.24:51278 for position 0 with proposal 2 I0128 12:20:27.342239 297226240 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 247us I0128 12:20:27.342278 297226240 replica.cpp:712] Persisted action at 0 I0128 12:20:27.343778 299372544 replica.cpp:537] Replica received write request for position 0 from (18216)@192.168.178.24:51278 I0128 12:20:27.343852 299372544 leveldb.cpp:436] Reading position from leveldb took 42us I0128 12:20:27.344118 299372544 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 238us I0128 12:20:27.344156 299372544 replica.cpp:712] Persisted action at 0 I0128 12:20:27.344981 297762816 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 I0128 12:20:27.345227 297762816 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 224us I0128 12:20:27.345264 297762816 replica.cpp:712] Persisted action at 0 I0128 12:20:27.345284 297762816 replica.cpp:697] Replica learned NOP action at position 0 I0128 12:20:27.346266 298835968 log.cpp:675] Writer started with ending position 0 I0128 12:20:27.347331 298835968 leveldb.cpp:436] Reading position from leveldb took 63us I0128 12:20:27.348314 300982272 registrar.cpp:340] Successfully fetched the registry (0B) in 11.819264ms I0128 12:20:27.348456 300982272 registrar.cpp:439] Applied 1 operations in 71us; attempting to update the 'registry' I0128 12:20:27.349019 297226240 log.cpp:683] Attempting to append 186 bytes to the log I0128 12:20:27.349143 300982272 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 I0128 12:20:27.350441 298835968 replica.cpp:537] Replica received write request for position 1 from (18217)@192.168.178.24:51278 I0128 12:20:27.350811 298835968 leveldb.cpp:341] Persisting action (205 bytes) to leveldb took 337us I0128 12:20:27.350846 298835968 replica.cpp:712] Persisted action at 1 I0128 12:20:27.351572 298835968 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 I0128 12:20:27.351801 298835968 leveldb.cpp:341] Persisting action (207 bytes) to leveldb took 212us I0128 12:20:27.351836 298835968 replica.cpp:712] Persisted action at 1 I0128 12:20:27.351851 298835968 replica.cpp:697] Replica learned APPEND action at position 1 I0128 12:20:27.352608 299372544 registrar.cpp:484] Successfully updated the 'registry' in 4.096768ms I0128 12:20:27.352735 299372544 registrar.cpp:370] Successfully recovered registrar I0128 12:20:27.352877 298299392 log.cpp:702] Attempting to truncate the log to 1 I0128 12:20:27.353091 300982272 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 I0128 12:20:27.353314 300445696 master.cpp:1520] Recovered 0 slaves from the Registry (147B) ; allowing 10mins for slaves to re-register I0128 12:20:27.353341 298835968 
hierarchical.cpp:171] Skipping recovery of hierarchical allocator: nothing to recover I0128 12:20:27.354064 297762816 replica.cpp:537] Replica received write request for position 2 from (18218)@192.168.178.24:51278 I0128 12:20:27.354333 297762816 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 242us I0128 12:20:27.354372 297762816 replica.cpp:712] Persisted action at 2 I0128 12:20:27.355080 299909120 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 I0128 12:20:27.355311 299909120 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 214us I0128 12:20:27.355362 299909120 leveldb.cpp:399] Deleting ~1 keys from leveldb took 27us I0128 12:20:27.355382 299909120 replica.cpp:712] Persisted action at 2 I0128 12:20:27.355398 299909120 replica.cpp:697] Replica learned TRUNCATE action at position 2 I0128 12:20:27.366089 2080858880 containerizer.cpp:143] Using isolation: posix/cpu,posix/mem,filesystem/posix I0128 12:20:27.369942 300445696 slave.cpp:192] Slave started on 868)@192.168.178.24:51278 I0128 12:20:27.369989 300445696 slave.cpp:193] Flags at startup: --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticatee=""""crammd5"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_UwHS2p/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_auth_server=""""https://auth.docker.io"""" --docker_kill_orphans=""""true"""" --docker_puller_timeout=""""60"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_UwHS2p/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/Users/alex/Projects/mesos/build/default/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;mem:1024;disk:1024;ports:[31000-32000]"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --version=""""false"""" --work_dir=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_UwHS2p"""" I0128 12:20:27.370420 300445696 credentials.hpp:83] Loading credential for authentication from '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_UwHS2p/credential' I0128 12:20:27.370616 300445696 slave.cpp:323] Slave using credential for: test-principal I0128 12:20:27.370738 300445696 resources.cpp:564] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000] Trying semicolon-delimited string format instead I0128 12:20:27.371325 300445696 slave.cpp:463] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0128 12:20:27.371383 300445696 
slave.cpp:471] Slave attributes: [ ] I0128 12:20:27.371399 300445696 slave.cpp:476] Slave hostname: alexr.fritz.box I0128 12:20:27.371879 298835968 state.cpp:58] Recovering state from '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_UwHS2p/meta' I0128 12:20:27.372112 299909120 status_update_manager.cpp:200] Recovering status update manager I0128 12:20:27.372323 297226240 containerizer.cpp:390] Recovering containerizer I0128 12:20:27.373388 298835968 provisioner.cpp:245] Provisioner recovery complete I0128 12:20:27.373706 297226240 slave.cpp:4495] Finished recovery I0128 12:20:27.374156 297226240 slave.cpp:4667] Querying resource estimator for oversubscribable resources I0128 12:20:27.374402 298299392 status_update_manager.cpp:174] Pausing sending status updates I0128 12:20:27.374408 297226240 slave.cpp:795] New master detected at master@192.168.178.24:51278 I0128 12:20:27.374454 297226240 slave.cpp:858] Authenticating with master master@192.168.178.24:51278 I0128 12:20:27.374469 297226240 slave.cpp:863] Using default CRAM-MD5 authenticatee I0128 12:20:27.374605 297226240 slave.cpp:831] Detecting new master I0128 12:20:27.374637 300982272 authenticatee.cpp:121] Creating new client SASL connection I0128 12:20:27.374800 297226240 slave.cpp:4681] Received oversubscribable resources from the resource estimator I0128 12:20:27.374876 297762816 master.cpp:5521] Authenticating slave(868)@192.168.178.24:51278 I0128 12:20:27.374999 297226240 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(1735)@192.168.178.24:51278 I0128 12:20:27.375211 298835968 authenticator.cpp:98] Creating new server SASL connection I0128 12:20:27.375344 297762816 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0128 12:20:27.375380 297762816 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0128 12:20:27.375537 298835968 authenticator.cpp:203] Received SASL authentication start I0128 12:20:27.375601 298835968 authenticator.cpp:325] Authentication requires more steps I0128 12:20:27.375694 297762816 authenticatee.cpp:258] Received SASL authentication step I0128 12:20:27.375838 299909120 authenticator.cpp:231] Received SASL authentication step I0128 12:20:27.375860 299909120 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0128 12:20:27.375871 299909120 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0128 12:20:27.375890 299909120 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0128 12:20:27.375902 299909120 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0128 12:20:27.375911 299909120 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.375918 299909120 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.375929 299909120 authenticator.cpp:317] Authentication success I0128 12:20:27.376003 297762816 authenticatee.cpp:298] Authentication success I0128 12:20:27.376076 300445696 master.cpp:5551] Successfully authenticated principal 'test-principal' at slave(868)@192.168.178.24:51278 I0128 12:20:27.376180 298299392 
authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(1735)@192.168.178.24:51278 I0128 12:20:27.376325 300982272 slave.cpp:926] Successfully authenticated with master master@192.168.178.24:51278 I0128 12:20:27.376410 300982272 slave.cpp:1320] Will retry registration in 18.436231ms if necessary I0128 12:20:27.376557 297762816 master.cpp:4235] Registering slave at slave(868)@192.168.178.24:51278 (alexr.fritz.box) with id bd6f3479-18eb-478d-8a08-a41364ecbd05-S0 I0128 12:20:27.376785 298299392 registrar.cpp:439] Applied 1 operations in 59us; attempting to update the 'registry' I0128 12:20:27.377285 300445696 log.cpp:683] Attempting to append 358 bytes to the log I0128 12:20:27.377408 297762816 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 I0128 12:20:27.378199 300445696 replica.cpp:537] Replica received write request for position 3 from (18232)@192.168.178.24:51278 I0128 12:20:27.378413 300445696 leveldb.cpp:341] Persisting action (377 bytes) to leveldb took 192us I0128 12:20:27.378446 300445696 replica.cpp:712] Persisted action at 3 I0128 12:20:27.379106 298299392 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 I0128 12:20:27.379331 298299392 leveldb.cpp:341] Persisting action (379 bytes) to leveldb took 220us I0128 12:20:27.379406 298299392 replica.cpp:712] Persisted action at 3 I0128 12:20:27.379429 298299392 replica.cpp:697] Replica learned APPEND action at position 3 I0128 12:20:27.380581 299372544 registrar.cpp:484] Successfully updated the 'registry' in 3.747328ms I0128 12:20:27.381187 300445696 log.cpp:702] Attempting to truncate the log to 3 I0128 12:20:27.381304 299909120 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 I0128 12:20:27.381430 300445696 slave.cpp:3435] Received ping from slave-observer(868)@192.168.178.24:51278 I0128 12:20:27.381759 297762816 hierarchical.cpp:473] Added slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S0 (alexr.fritz.box) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) I0128 12:20:27.381801 297226240 slave.cpp:970] Registered with master master@192.168.178.24:51278; given slave ID bd6f3479-18eb-478d-8a08-a41364ecbd05-S0 I0128 12:20:27.381765 298835968 master.cpp:4303] Registered slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S0 at slave(868)@192.168.178.24:51278 (alexr.fritz.box) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0128 12:20:27.381829 297226240 fetcher.cpp:81] Clearing fetcher cache I0128 12:20:27.381908 297762816 hierarchical.cpp:1403] No resources available to allocate! 
I0128 12:20:27.381950 297762816 hierarchical.cpp:1116] Performed allocation for slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S0 in 156us I0128 12:20:27.382125 299372544 status_update_manager.cpp:181] Resuming sending status updates I0128 12:20:27.382279 300982272 replica.cpp:537] Replica received write request for position 4 from (18233)@192.168.178.24:51278 I0128 12:20:27.382540 300982272 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 240us I0128 12:20:27.382585 300982272 replica.cpp:712] Persisted action at 4 I0128 12:20:27.382804 297226240 slave.cpp:993] Checkpointing SlaveInfo to '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_UwHS2p/meta/slaves/bd6f3479-18eb-478d-8a08-a41364ecbd05-S0/slave.info' I0128 12:20:27.384191 298299392 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 I0128 12:20:27.384424 298299392 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 235us I0128 12:20:27.384517 298299392 leveldb.cpp:399] Deleting ~2 keys from leveldb took 52us I0128 12:20:27.384557 298299392 replica.cpp:712] Persisted action at 4 I0128 12:20:27.384578 298299392 replica.cpp:697] Replica learned TRUNCATE action at position 4 I0128 12:20:27.384601 299372544 hierarchical.cpp:1403] No resources available to allocate! I0128 12:20:27.384624 299372544 hierarchical.cpp:1096] Performed allocation for 1 slaves in 124us I0128 12:20:27.385365 297226240 slave.cpp:1029] Forwarding total oversubscribed resources I0128 12:20:27.385526 297226240 master.cpp:4644] Received update of slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S0 at slave(868)@192.168.178.24:51278 (alexr.fritz.box) with total oversubscribed resources I0128 12:20:27.386008 297226240 hierarchical.cpp:531] Slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S0 (alexr.fritz.box) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) I0128 12:20:27.386137 297226240 hierarchical.cpp:1403] No resources available to allocate! 
I0128 12:20:27.386155 297226240 hierarchical.cpp:1116] Performed allocation for slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S0 in 91us I0128 12:20:27.386893 2080858880 containerizer.cpp:143] Using isolation: posix/cpu,posix/mem,filesystem/posix I0128 12:20:27.413722 299909120 slave.cpp:192] Slave started on 869)@192.168.178.24:51278 I0128 12:20:27.413766 299909120 slave.cpp:193] Flags at startup: --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticatee=""""crammd5"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_TNjwCO/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_auth_server=""""https://auth.docker.io"""" --docker_kill_orphans=""""true"""" --docker_puller_timeout=""""60"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_TNjwCO/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/Users/alex/Projects/mesos/build/default/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;mem:1024;disk:1024;ports:[31000-32000]"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --version=""""false"""" --work_dir=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_TNjwCO"""" I0128 12:20:27.414484 299909120 credentials.hpp:83] Loading credential for authentication from '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_TNjwCO/credential' I0128 12:20:27.414650 299909120 slave.cpp:323] Slave using credential for: test-principal I0128 12:20:27.416365 299909120 resources.cpp:564] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000] Trying semicolon-delimited string format instead I0128 12:20:27.418332 299909120 slave.cpp:463] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0128 12:20:27.418475 299909120 slave.cpp:471] Slave attributes: [ ] I0128 12:20:27.418500 299909120 slave.cpp:476] Slave hostname: alexr.fritz.box I0128 12:20:27.419270 298835968 state.cpp:58] Recovering state from '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_TNjwCO/meta' I0128 12:20:27.420717 298299392 status_update_manager.cpp:200] Recovering status update manager I0128 12:20:27.420953 299909120 containerizer.cpp:390] Recovering containerizer I0128 12:20:27.421883 298835968 provisioner.cpp:245] Provisioner recovery complete I0128 12:20:27.422145 299909120 slave.cpp:4495] Finished recovery I0128 12:20:27.422695 299909120 slave.cpp:4667] Querying resource estimator for oversubscribable resources I0128 12:20:27.422929 299909120 
slave.cpp:795] New master detected at master@192.168.178.24:51278 I0128 12:20:27.422935 299372544 status_update_manager.cpp:174] Pausing sending status updates I0128 12:20:27.422971 299909120 slave.cpp:858] Authenticating with master master@192.168.178.24:51278 I0128 12:20:27.422986 299909120 slave.cpp:863] Using default CRAM-MD5 authenticatee I0128 12:20:27.423073 299909120 slave.cpp:831] Detecting new master I0128 12:20:27.423151 298835968 authenticatee.cpp:121] Creating new client SASL connection I0128 12:20:27.423187 299909120 slave.cpp:4681] Received oversubscribable resources from the resource estimator I0128 12:20:27.423333 299909120 master.cpp:5521] Authenticating slave(869)@192.168.178.24:51278 I0128 12:20:27.423431 299909120 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(1736)@192.168.178.24:51278 I0128 12:20:27.423530 297762816 authenticator.cpp:98] Creating new server SASL connection I0128 12:20:27.423630 299909120 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0128 12:20:27.423657 299909120 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0128 12:20:27.423738 299909120 authenticator.cpp:203] Received SASL authentication start I0128 12:20:27.423781 299909120 authenticator.cpp:325] Authentication requires more steps I0128 12:20:27.423833 299909120 authenticatee.cpp:258] Received SASL authentication step I0128 12:20:27.423897 299909120 authenticator.cpp:231] Received SASL authentication step I0128 12:20:27.423918 299909120 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0128 12:20:27.423931 299909120 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0128 12:20:27.423956 299909120 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0128 12:20:27.423992 299909120 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0128 12:20:27.424006 299909120 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.424016 299909120 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.424031 299909120 authenticator.cpp:317] Authentication success I0128 12:20:27.424095 297762816 authenticatee.cpp:298] Authentication success I0128 12:20:27.424125 299909120 master.cpp:5551] Successfully authenticated principal 'test-principal' at slave(869)@192.168.178.24:51278 I0128 12:20:27.424180 299909120 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(1736)@192.168.178.24:51278 I0128 12:20:27.424351 297762816 slave.cpp:926] Successfully authenticated with master master@192.168.178.24:51278 I0128 12:20:27.424444 297762816 slave.cpp:1320] Will retry registration in 10.561574ms if necessary I0128 12:20:27.424577 300982272 master.cpp:4235] Registering slave at slave(869)@192.168.178.24:51278 (alexr.fritz.box) with id bd6f3479-18eb-478d-8a08-a41364ecbd05-S1 I0128 12:20:27.424949 298835968 registrar.cpp:439] Applied 1 operations in 86us; attempting to update the 'registry' I0128 12:20:27.425549 297226240 log.cpp:683] Attempting to append 527 bytes to the log I0128 12:20:27.425662 298835968 
coordinator.cpp:348] Coordinator attempting to write APPEND action at position 5 I0128 12:20:27.426563 300982272 replica.cpp:537] Replica received write request for position 5 from (18247)@192.168.178.24:51278 I0128 12:20:27.426820 300982272 leveldb.cpp:341] Persisting action (546 bytes) to leveldb took 215us I0128 12:20:27.426857 300982272 replica.cpp:712] Persisted action at 5 I0128 12:20:27.427487 297226240 replica.cpp:691] Replica received learned notice for position 5 from @0.0.0.0:0 I0128 12:20:27.427820 297226240 leveldb.cpp:341] Persisting action (548 bytes) to leveldb took 197us I0128 12:20:27.427855 297226240 replica.cpp:712] Persisted action at 5 I0128 12:20:27.427940 297226240 replica.cpp:697] Replica learned APPEND action at position 5 I0128 12:20:27.432288 299372544 registrar.cpp:484] Successfully updated the 'registry' in 7.28704ms I0128 12:20:27.432765 299372544 log.cpp:702] Attempting to truncate the log to 5 I0128 12:20:27.433254 299372544 master.cpp:4303] Registered slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S1 at slave(869)@192.168.178.24:51278 (alexr.fritz.box) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0128 12:20:27.433334 299372544 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 6 I0128 12:20:27.433584 299372544 hierarchical.cpp:473] Added slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S1 (alexr.fritz.box) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) I0128 12:20:27.433667 299372544 hierarchical.cpp:1403] No resources available to allocate! I0128 12:20:27.433686 299372544 hierarchical.cpp:1116] Performed allocation for slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S1 in 78us I0128 12:20:27.433753 299372544 slave.cpp:970] Registered with master master@192.168.178.24:51278; given slave ID bd6f3479-18eb-478d-8a08-a41364ecbd05-S1 I0128 12:20:27.433770 299372544 fetcher.cpp:81] Clearing fetcher cache I0128 12:20:27.434334 298299392 status_update_manager.cpp:181] Resuming sending status updates I0128 12:20:27.435118 298835968 hierarchical.cpp:1403] No resources available to allocate! 
I0128 12:20:27.435144 298835968 hierarchical.cpp:1096] Performed allocation for 2 slaves in 146us I0128 12:20:27.435294 297226240 replica.cpp:537] Replica received write request for position 6 from (18248)@192.168.178.24:51278 I0128 12:20:27.435482 297226240 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 173us I0128 12:20:27.435516 297226240 replica.cpp:712] Persisted action at 6 I0128 12:20:27.435883 297226240 replica.cpp:691] Replica received learned notice for position 6 from @0.0.0.0:0 I0128 12:20:27.436043 299372544 slave.cpp:993] Checkpointing SlaveInfo to '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_TNjwCO/meta/slaves/bd6f3479-18eb-478d-8a08-a41364ecbd05-S1/slave.info' I0128 12:20:27.436040 297226240 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 148us I0128 12:20:27.436091 297226240 leveldb.cpp:399] Deleting ~2 keys from leveldb took 29us I0128 12:20:27.436112 297226240 replica.cpp:712] Persisted action at 6 I0128 12:20:27.436153 297226240 replica.cpp:697] Replica learned TRUNCATE action at position 6 I0128 12:20:27.436594 299372544 slave.cpp:1029] Forwarding total oversubscribed resources I0128 12:20:27.436656 299372544 slave.cpp:3435] Received ping from slave-observer(869)@192.168.178.24:51278 I0128 12:20:27.436753 299372544 master.cpp:4644] Received update of slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S1 at slave(869)@192.168.178.24:51278 (alexr.fritz.box) with total oversubscribed resources I0128 12:20:27.437094 299372544 hierarchical.cpp:531] Slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S1 (alexr.fritz.box) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) I0128 12:20:27.437599 299372544 hierarchical.cpp:1403] No resources available to allocate! 
I0128 12:20:27.437620 299372544 hierarchical.cpp:1116] Performed allocation for slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S1 in 499us I0128 12:20:27.438273 2080858880 containerizer.cpp:143] Using isolation: posix/cpu,posix/mem,filesystem/posix I0128 12:20:27.441591 299372544 slave.cpp:192] Slave started on 870)@192.168.178.24:51278 I0128 12:20:27.441622 299372544 slave.cpp:193] Flags at startup: --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticatee=""""crammd5"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_iHhziB/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_auth_server=""""https://auth.docker.io"""" --docker_kill_orphans=""""true"""" --docker_puller_timeout=""""60"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_iHhziB/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/Users/alex/Projects/mesos/build/default/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;mem:1024;disk:1024;ports:[31000-32000]"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --version=""""false"""" --work_dir=""""/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_iHhziB"""" I0128 12:20:27.442208 299372544 credentials.hpp:83] Loading credential for authentication from '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_iHhziB/credential' I0128 12:20:27.442344 299372544 slave.cpp:323] Slave using credential for: test-principal I0128 12:20:27.442428 299372544 resources.cpp:564] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000] Trying semicolon-delimited string format instead I0128 12:20:27.443301 299372544 slave.cpp:463] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0128 12:20:27.443349 299372544 slave.cpp:471] Slave attributes: [ ] I0128 12:20:27.443361 299372544 slave.cpp:476] Slave hostname: alexr.fritz.box I0128 12:20:27.443904 298835968 state.cpp:58] Recovering state from '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_iHhziB/meta' I0128 12:20:27.444108 297226240 status_update_manager.cpp:200] Recovering status update manager I0128 12:20:27.444362 300982272 containerizer.cpp:390] Recovering containerizer I0128 12:20:27.446213 297226240 provisioner.cpp:245] Provisioner recovery complete I0128 12:20:27.446542 297226240 slave.cpp:4495] Finished recovery I0128 12:20:27.447430 297226240 slave.cpp:4667] Querying resource estimator for oversubscribable resources I0128 12:20:27.447684 300445696 
status_update_manager.cpp:174] Pausing sending status updates I0128 12:20:27.447726 297226240 slave.cpp:795] New master detected at master@192.168.178.24:51278 I0128 12:20:27.447760 297226240 slave.cpp:858] Authenticating with master master@192.168.178.24:51278 I0128 12:20:27.447773 297226240 slave.cpp:863] Using default CRAM-MD5 authenticatee I0128 12:20:27.448114 297762816 authenticatee.cpp:121] Creating new client SASL connection I0128 12:20:27.448084 297226240 slave.cpp:831] Detecting new master I0128 12:20:27.448473 297226240 slave.cpp:4681] Received oversubscribable resources from the resource estimator I0128 12:20:27.448545 300982272 master.cpp:5521] Authenticating slave(870)@192.168.178.24:51278 I0128 12:20:27.449328 297762816 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(1737)@192.168.178.24:51278 I0128 12:20:27.449581 298299392 authenticator.cpp:98] Creating new server SASL connection I0128 12:20:27.449713 300982272 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0128 12:20:27.449743 300982272 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0128 12:20:27.449923 297762816 authenticator.cpp:203] Received SASL authentication start I0128 12:20:27.449990 297762816 authenticator.cpp:325] Authentication requires more steps I0128 12:20:27.450142 300445696 authenticatee.cpp:258] Received SASL authentication step I0128 12:20:27.450389 300982272 authenticator.cpp:231] Received SASL authentication step I0128 12:20:27.450413 300982272 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0128 12:20:27.450489 300982272 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0128 12:20:27.450513 300982272 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0128 12:20:27.450531 300982272 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0128 12:20:27.450558 300982272 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.451807 300982272 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.451854 300982272 authenticator.cpp:317] Authentication success I0128 12:20:27.451932 298299392 authenticatee.cpp:298] Authentication success I0128 12:20:27.452075 298835968 master.cpp:5551] Successfully authenticated principal 'test-principal' at slave(870)@192.168.178.24:51278 I0128 12:20:27.452178 297762816 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(1737)@192.168.178.24:51278 I0128 12:20:27.452294 298299392 slave.cpp:926] Successfully authenticated with master master@192.168.178.24:51278 I0128 12:20:27.452376 298299392 slave.cpp:1320] Will retry registration in 15.028781ms if necessary I0128 12:20:27.452481 297226240 master.cpp:4235] Registering slave at slave(870)@192.168.178.24:51278 (alexr.fritz.box) with id bd6f3479-18eb-478d-8a08-a41364ecbd05-S2 I0128 12:20:27.452783 298299392 registrar.cpp:439] Applied 1 operations in 86us; attempting to update the 'registry' I0128 12:20:27.455883 298299392 log.cpp:683] Attempting to append 696 bytes to the log I0128 12:20:27.455989 299909120 
coordinator.cpp:348] Coordinator attempting to write APPEND action at position 7 I0128 12:20:27.456845 300445696 replica.cpp:537] Replica received write request for position 7 from (18262)@192.168.178.24:51278 I0128 12:20:27.457120 300445696 leveldb.cpp:341] Persisting action (715 bytes) to leveldb took 276us I0128 12:20:27.457268 300445696 replica.cpp:712] Persisted action at 7 I0128 12:20:27.459020 298835968 replica.cpp:691] Replica received learned notice for position 7 from @0.0.0.0:0 I0128 12:20:27.459255 298835968 leveldb.cpp:341] Persisting action (717 bytes) to leveldb took 210us I0128 12:20:27.459290 298835968 replica.cpp:712] Persisted action at 7 I0128 12:20:27.459307 298835968 replica.cpp:697] Replica learned APPEND action at position 7 I0128 12:20:27.467411 299372544 registrar.cpp:484] Successfully updated the 'registry' in 12.083968ms I0128 12:20:27.467475 300982272 log.cpp:702] Attempting to truncate the log to 7 I0128 12:20:27.467695 297226240 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 8 I0128 12:20:27.467697 300982272 slave.cpp:1320] Will retry registration in 16.052253ms if necessary I0128 12:20:27.468121 297762816 slave.cpp:3435] Received ping from slave-observer(870)@192.168.178.24:51278 I0128 12:20:27.468437 299909120 master.cpp:4303] Registered slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S2 at slave(870)@192.168.178.24:51278 (alexr.fritz.box) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0128 12:20:27.468613 297226240 replica.cpp:537] Replica received write request for position 8 from (18263)@192.168.178.24:51278 I0128 12:20:27.468813 298299392 hierarchical.cpp:473] Added slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S2 (alexr.fritz.box) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) I0128 12:20:27.469081 297226240 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 230us I0128 12:20:27.469113 297226240 replica.cpp:712] Persisted action at 8 I0128 12:20:27.469173 299909120 master.cpp:4205] Slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S2 at slave(870)@192.168.178.24:51278 (alexr.fritz.box) already registered, resending acknowledgement I0128 12:20:27.469702 298299392 hierarchical.cpp:1403] No resources available to allocate! 
I0128 12:20:27.470820 298299392 hierarchical.cpp:1116] Performed allocation for slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S2 in 1920us I0128 12:20:27.471000 299372544 slave.cpp:970] Registered with master master@192.168.178.24:51278; given slave ID bd6f3479-18eb-478d-8a08-a41364ecbd05-S2 I0128 12:20:27.471022 299372544 fetcher.cpp:81] Clearing fetcher cache I0128 12:20:27.471740 299372544 slave.cpp:993] Checkpointing SlaveInfo to '/tmp/MasterQuotaTest_AvailableResourcesAfterRescinding_iHhziB/meta/slaves/bd6f3479-18eb-478d-8a08-a41364ecbd05-S2/slave.info' I0128 12:20:27.471943 297762816 status_update_manager.cpp:181] Resuming sending status updates I0128 12:20:27.472684 298835968 replica.cpp:691] Replica received learned notice for position 8 from @0.0.0.0:0 I0128 12:20:27.472903 298835968 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 201us I0128 12:20:27.472951 298835968 leveldb.cpp:399] Deleting ~2 keys from leveldb took 27us I0128 12:20:27.472970 298835968 replica.cpp:712] Persisted action at 8 I0128 12:20:27.473085 299372544 slave.cpp:1029] Forwarding total oversubscribed resources I0128 12:20:27.473274 298835968 replica.cpp:697] Replica learned TRUNCATE action at position 8 W0128 12:20:27.473520 299372544 slave.cpp:1015] Already registered with master master@192.168.178.24:51278 I0128 12:20:27.473541 299372544 slave.cpp:1029] Forwarding total oversubscribed resources I0128 12:20:27.473604 297226240 master.cpp:4644] Received update of slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S2 at slave(870)@192.168.178.24:51278 (alexr.fritz.box) with total oversubscribed resources I0128 12:20:27.473798 297226240 master.cpp:4644] Received update of slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S2 at slave(870)@192.168.178.24:51278 (alexr.fritz.box) with total oversubscribed resources I0128 12:20:27.473970 300982272 hierarchical.cpp:531] Slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S2 (alexr.fritz.box) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) I0128 12:20:27.474051 300982272 hierarchical.cpp:1403] No resources available to allocate! I0128 12:20:27.474071 300982272 hierarchical.cpp:1116] Performed allocation for slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S2 in 74us I0128 12:20:27.474261 300982272 hierarchical.cpp:531] Slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S2 (alexr.fritz.box) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) I0128 12:20:27.474347 300982272 hierarchical.cpp:1403] No resources available to allocate! 
I0128 12:20:27.474370 300982272 hierarchical.cpp:1116] Performed allocation for slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S2 in 69us I0128 12:20:27.476686 2080858880 sched.cpp:222] Version: 0.28.0 I0128 12:20:27.477182 299372544 sched.cpp:326] New master detected at master@192.168.178.24:51278 I0128 12:20:27.477264 299372544 sched.cpp:382] Authenticating with master master@192.168.178.24:51278 I0128 12:20:27.477284 299372544 sched.cpp:389] Using default CRAM-MD5 authenticatee I0128 12:20:27.477462 298835968 authenticatee.cpp:121] Creating new client SASL connection I0128 12:20:27.477645 298299392 master.cpp:5521] Authenticating scheduler-5f6db5d6-b088-4d13-a14c-94734bb39bf7@192.168.178.24:51278 I0128 12:20:27.477763 299909120 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(1738)@192.168.178.24:51278 I0128 12:20:27.477926 298299392 authenticator.cpp:98] Creating new server SASL connection I0128 12:20:27.478040 300982272 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0128 12:20:27.478076 300982272 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0128 12:20:27.478152 298299392 authenticator.cpp:203] Received SASL authentication start I0128 12:20:27.478196 298299392 authenticator.cpp:325] Authentication requires more steps I0128 12:20:27.478247 298299392 authenticatee.cpp:258] Received SASL authentication step I0128 12:20:27.478337 299909120 authenticator.cpp:231] Received SASL authentication step I0128 12:20:27.478356 299909120 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0128 12:20:27.478369 299909120 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0128 12:20:27.478392 299909120 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0128 12:20:27.478462 299909120 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0128 12:20:27.478478 299909120 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.478487 299909120 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.478502 299909120 authenticator.cpp:317] Authentication success I0128 12:20:27.478562 300982272 authenticatee.cpp:298] Authentication success I0128 12:20:27.478618 297226240 master.cpp:5551] Successfully authenticated principal 'test-principal' at scheduler-5f6db5d6-b088-4d13-a14c-94734bb39bf7@192.168.178.24:51278 I0128 12:20:27.478713 298835968 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(1738)@192.168.178.24:51278 I0128 12:20:27.478826 297762816 sched.cpp:471] Successfully authenticated with master master@192.168.178.24:51278 I0128 12:20:27.478871 297762816 sched.cpp:780] Sending SUBSCRIBE call to master@192.168.178.24:51278 I0128 12:20:27.478971 297762816 sched.cpp:813] Will retry registration in 250.550205ms if necessary I0128 12:20:27.479032 298835968 master.cpp:2278] Received SUBSCRIBE call for framework 'framework(18264)' at scheduler-5f6db5d6-b088-4d13-a14c-94734bb39bf7@192.168.178.24:51278 W0128 12:20:27.479054 298835968 master.cpp:2285] Framework at 
scheduler-5f6db5d6-b088-4d13-a14c-94734bb39bf7@192.168.178.24:51278 (authenticated as 'test-principal') does not set 'principal' in FrameworkInfo I0128 12:20:27.479069 298835968 master.cpp:1749] Authorizing framework principal '' to receive offers for role 'role1' I0128 12:20:27.479228 298835968 master.cpp:2349] Subscribing framework framework(18264) with checkpointing disabled and capabilities [ ] I0128 12:20:27.479449 300445696 hierarchical.cpp:265] Added framework framework(18264) I0128 12:20:27.479570 298299392 sched.cpp:707] Framework registered with framework(18264) I0128 12:20:27.479625 298299392 sched.cpp:721] Scheduler::registered took 45us W0128 12:20:27.479655 297226240 slave.cpp:2236] Ignoring updating pid for framework framework(18264) because it does not exist W0128 12:20:27.479701 297762816 slave.cpp:2236] Ignoring updating pid for framework framework(18264) because it does not exist W0128 12:20:27.479756 297226240 slave.cpp:2236] Ignoring updating pid for framework framework(18264) because it does not exist I0128 12:20:27.480363 300445696 hierarchical.cpp:1498] No inverse offers to send out! I0128 12:20:27.480406 300445696 hierarchical.cpp:1096] Performed allocation for 3 slaves in 948us I0128 12:20:27.480887 300982272 master.cpp:5350] Sending 3 offers to framework framework(18264) (framework(18264)) at scheduler-5f6db5d6-b088-4d13-a14c-94734bb39bf7@192.168.178.24:51278 I0128 12:20:27.481411 299909120 sched.cpp:877] Scheduler::resourceOffers took 180us I0128 12:20:27.485344 300445696 hierarchical.cpp:1403] No resources available to allocate! I0128 12:20:27.485376 300445696 hierarchical.cpp:1498] No inverse offers to send out! I0128 12:20:27.485397 300445696 hierarchical.cpp:1096] Performed allocation for 3 slaves in 135us I0128 12:20:27.486169 2080858880 sched.cpp:222] Version: 0.28.0 I0128 12:20:27.486497 300445696 sched.cpp:326] New master detected at master@192.168.178.24:51278 I0128 12:20:27.486548 300445696 sched.cpp:382] Authenticating with master master@192.168.178.24:51278 I0128 12:20:27.486562 300445696 sched.cpp:389] Using default CRAM-MD5 authenticatee I0128 12:20:27.486661 299909120 authenticatee.cpp:121] Creating new client SASL connection I0128 12:20:27.486810 297226240 master.cpp:5521] Authenticating scheduler-ee0b23b6-ac62-4d64-86ce-daf6a4e30f92@192.168.178.24:51278 I0128 12:20:27.486898 300982272 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(1739)@192.168.178.24:51278 I0128 12:20:27.487023 299909120 authenticator.cpp:98] Creating new server SASL connection I0128 12:20:27.487139 297762816 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0128 12:20:27.487190 297762816 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0128 12:20:27.487296 300445696 authenticator.cpp:203] Received SASL authentication start I0128 12:20:27.487432 300445696 authenticator.cpp:325] Authentication requires more steps I0128 12:20:27.487490 300982272 authenticatee.cpp:258] Received SASL authentication step I0128 12:20:27.488359 299909120 authenticator.cpp:231] Received SASL authentication step I0128 12:20:27.488391 299909120 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0128 12:20:27.488405 299909120 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0128 12:20:27.488428 299909120 auxprop.cpp:179] Looking up 
auxiliary property '*cmusaslsecretCRAM-MD5' I0128 12:20:27.488445 299909120 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0128 12:20:27.488471 299909120 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.488481 299909120 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.488507 2080858880 sched.cpp:222] Version: 0.28.0 I0128 12:20:27.488694 299909120 authenticator.cpp:317] Authentication success I0128 12:20:27.488770 300445696 authenticatee.cpp:298] Authentication success I0128 12:20:27.488823 298299392 master.cpp:5551] Successfully authenticated principal 'test-principal' at scheduler-ee0b23b6-ac62-4d64-86ce-daf6a4e30f92@192.168.178.24:51278 I0128 12:20:27.488911 298835968 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(1739)@192.168.178.24:51278 I0128 12:20:27.489013 298299392 sched.cpp:326] New master detected at master@192.168.178.24:51278 I0128 12:20:27.489037 297762816 sched.cpp:471] Successfully authenticated with master master@192.168.178.24:51278 I0128 12:20:27.489061 297762816 sched.cpp:780] Sending SUBSCRIBE call to master@192.168.178.24:51278 I0128 12:20:27.489102 298299392 sched.cpp:382] Authenticating with master master@192.168.178.24:51278 I0128 12:20:27.489122 298299392 sched.cpp:389] Using default CRAM-MD5 authenticatee I0128 12:20:27.489151 297762816 sched.cpp:813] Will retry registration in 955.991763ms if necessary I0128 12:20:27.489215 299909120 master.cpp:2278] Received SUBSCRIBE call for framework 'framework(18265)' at scheduler-ee0b23b6-ac62-4d64-86ce-daf6a4e30f92@192.168.178.24:51278 W0128 12:20:27.489248 299909120 master.cpp:2285] Framework at scheduler-ee0b23b6-ac62-4d64-86ce-daf6a4e30f92@192.168.178.24:51278 (authenticated as 'test-principal') does not set 'principal' in FrameworkInfo I0128 12:20:27.489266 299909120 master.cpp:1749] Authorizing framework principal '' to receive offers for role 'role2' I0128 12:20:27.489315 297762816 authenticatee.cpp:121] Creating new client SASL connection I0128 12:20:27.489439 299909120 master.cpp:5521] Authenticating scheduler-a5c3a776-b3e5-42dc-a9fa-63686dca3249@192.168.178.24:51278 I0128 12:20:27.489513 299372544 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(1740)@192.168.178.24:51278 I0128 12:20:27.489545 299909120 master.cpp:2349] Subscribing framework framework(18265) with checkpointing disabled and capabilities [ ] I0128 12:20:27.489681 298835968 authenticator.cpp:98] Creating new server SASL connection I0128 12:20:27.489807 298299392 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0128 12:20:27.489835 298299392 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0128 12:20:27.489869 299372544 hierarchical.cpp:265] Added framework framework(18265) I0128 12:20:27.489958 298299392 authenticator.cpp:203] Received SASL authentication start I0128 12:20:27.489969 297762816 sched.cpp:707] Framework registered with framework(18265) I0128 12:20:27.489982 299372544 hierarchical.cpp:1403] No resources available to allocate! I0128 12:20:27.490000 298299392 authenticator.cpp:325] Authentication requires more steps I0128 12:20:27.490015 299372544 hierarchical.cpp:1498] No inverse offers to send out! 
I0128 12:20:27.490034 299372544 hierarchical.cpp:1096] Performed allocation for 3 slaves in 151us I0128 12:20:27.490067 297762816 sched.cpp:721] Scheduler::registered took 92us W0128 12:20:27.490088 300445696 slave.cpp:2236] Ignoring updating pid for framework framework(18265) because it does not exist I0128 12:20:27.490102 297226240 authenticatee.cpp:258] Received SASL authentication step W0128 12:20:27.490145 298299392 slave.cpp:2236] Ignoring updating pid for framework framework(18265) because it does not exist I0128 12:20:27.490262 300445696 authenticator.cpp:231] Received SASL authentication step W0128 12:20:27.490279 298299392 slave.cpp:2236] Ignoring updating pid for framework framework(18265) because it does not exist I0128 12:20:27.490291 300445696 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0128 12:20:27.490304 300445696 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0128 12:20:27.490324 300445696 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0128 12:20:27.490341 300445696 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'alexr.fritz.box' server FQDN: 'alexr.fritz.box' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0128 12:20:27.490352 300445696 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.490362 300445696 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0128 12:20:27.490375 300445696 authenticator.cpp:317] Authentication success I0128 12:20:27.490464 299372544 authenticatee.cpp:298] Authentication success I0128 12:20:27.490517 297226240 master.cpp:5551] Successfully authenticated principal 'test-principal' at scheduler-a5c3a776-b3e5-42dc-a9fa-63686dca3249@192.168.178.24:51278 I0128 12:20:27.490598 299909120 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(1740)@192.168.178.24:51278 I0128 12:20:27.490722 298299392 sched.cpp:471] Successfully authenticated with master master@192.168.178.24:51278 I0128 12:20:27.490741 298299392 sched.cpp:780] Sending SUBSCRIBE call to master@192.168.178.24:51278 I0128 12:20:27.490811 298299392 sched.cpp:813] Will retry registration in 922.212316ms if necessary I0128 12:20:27.490871 300982272 master.cpp:2278] Received SUBSCRIBE call for framework 'framework(18266)' at scheduler-a5c3a776-b3e5-42dc-a9fa-63686dca3249@192.168.178.24:51278 W0128 12:20:27.490893 300982272 master.cpp:2285] Framework at scheduler-a5c3a776-b3e5-42dc-a9fa-63686dca3249@192.168.178.24:51278 (authenticated as 'test-principal') does not set 'principal' in FrameworkInfo I0128 12:20:27.490907 300982272 master.cpp:1749] Authorizing framework principal '' to receive offers for role 'role2' I0128 12:20:27.491086 299909120 master.cpp:2349] Subscribing framework framework(18266) with checkpointing disabled and capabilities [ ] I0128 12:20:27.491291 297226240 hierarchical.cpp:265] Added framework framework(18266) I0128 12:20:27.491345 300445696 sched.cpp:707] Framework registered with framework(18266) I0128 12:20:27.491374 297226240 hierarchical.cpp:1403] No resources available to allocate! 
W0128 12:20:27.491379 300982272 slave.cpp:2236] Ignoring updating pid for framework framework(18266) because it does not exist I0128 12:20:27.491402 297226240 hierarchical.cpp:1498] No inverse offers to send out! I0128 12:20:27.491418 297226240 hierarchical.cpp:1096] Performed allocation for 3 slaves in 116us I0128 12:20:27.491427 300445696 sched.cpp:721] Scheduler::registered took 73us W0128 12:20:27.491439 299372544 slave.cpp:2236] Ignoring updating pid for framework framework(18266) because it does not exist W0128 12:20:27.491523 297762816 slave.cpp:2236] Ignoring updating pid for framework framework(18266) because it does not exist I0128 12:20:27.491674 2080858880 resources.cpp:564] Parsing resources as JSON failed: cpus:1;mem:512 Trying semicolon-delimited string format instead I0128 12:20:27.493041 297226240 process.cpp:3141] Handling HTTP event for process 'master' with path: '/master/quota' I0128 12:20:27.493536 299909120 http.cpp:503] HTTP POST for /master/quota from 192.168.178.24:51640 I0128 12:20:27.493561 299909120 quota_handler.cpp:211] Setting quota from request: '{""""force"""":false,""""guarantee"""":[{""""name"""":""""cpus"""",""""role"""":""""*"""",""""scalar"""":{""""value"""":1.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""role"""":""""*"""",""""scalar"""":{""""value"""":512.0},""""type"""":""""SCALAR""""}],""""role"""":""""role2""""}' I0128 12:20:27.493954 299909120 quota_handler.cpp:446] Authorizing principal 'test-principal' to request quota for role 'role2' I0128 12:20:27.494176 299909120 quota_handler.cpp:70] Performing capacity heuristic check for a set quota request I0128 12:20:27.494499 300445696 registrar.cpp:439] Applied 1 operations in 80us; attempting to update the 'registry' I0128 12:20:27.494951 297226240 log.cpp:683] Attempting to append 770 bytes to the log I0128 12:20:27.495055 298299392 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 9 I0128 12:20:27.495833 297226240 replica.cpp:537] Replica received write request for position 9 from (18269)@192.168.178.24:51278 I0128 12:20:27.496049 297226240 leveldb.cpp:341] Persisting action (789 bytes) to leveldb took 197us I0128 12:20:27.496085 297226240 replica.cpp:712] Persisted action at 9 I0128 12:20:27.496620 298835968 replica.cpp:691] Replica received learned notice for position 9 from @0.0.0.0:0 I0128 12:20:27.496810 298835968 leveldb.cpp:341] Persisting action (791 bytes) to leveldb took 177us I0128 12:20:27.496841 298835968 replica.cpp:712] Persisted action at 9 I0128 12:20:27.496857 298835968 replica.cpp:697] Replica learned APPEND action at position 9 I0128 12:20:27.498211 300445696 registrar.cpp:484] Successfully updated the 'registry' in 3.668992ms I0128 12:20:27.498492 300982272 log.cpp:702] Attempting to truncate the log to 9 I0128 12:20:27.498584 298835968 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 10 I0128 12:20:27.498679 298299392 hierarchical.cpp:1024] Set quota cpus(*):1; mem(*):512 for role 'role2' I0128 12:20:27.499006 300982272 sched.cpp:903] Rescinded offer bd6f3479-18eb-478d-8a08-a41364ecbd05-O2 I0128 12:20:27.499060 300982272 sched.cpp:914] Scheduler::offerRescinded took 34us I0128 12:20:27.499174 298299392 hierarchical.cpp:1403] No resources available to allocate! I0128 12:20:27.499207 298299392 hierarchical.cpp:1498] No inverse offers to send out! 
I0128 12:20:27.499222 299372544 replica.cpp:537] Replica received write request for position 10 from (18270)@192.168.178.24:51278 I0128 12:20:27.499229 298299392 hierarchical.cpp:1096] Performed allocation for 3 slaves in 516us I0128 12:20:27.499322 300445696 sched.cpp:903] Rescinded offer bd6f3479-18eb-478d-8a08-a41364ecbd05-O1 I0128 12:20:27.499373 300445696 sched.cpp:914] Scheduler::offerRescinded took 35us I0128 12:20:27.499429 299372544 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 194us I0128 12:20:27.499469 299372544 replica.cpp:712] Persisted action at 10 I0128 12:20:27.499477 298299392 hierarchical.cpp:892] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S2 from framework framework(18264) I0128 12:20:27.499708 298299392 hierarchical.cpp:892] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S1 from framework framework(18264) I0128 12:20:27.500056 299909120 replica.cpp:691] Replica received learned notice for position 10 from @0.0.0.0:0 I0128 12:20:27.500241 299909120 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 175us I0128 12:20:27.500306 299909120 leveldb.cpp:399] Deleting ~2 keys from leveldb took 34us I0128 12:20:27.500331 299909120 replica.cpp:712] Persisted action at 10 I0128 12:20:27.500358 299909120 replica.cpp:697] Replica learned TRUNCATE action at position 10 I0128 12:20:27.501042 297226240 master.cpp:1025] Master terminating I0128 12:20:27.501637 298835968 hierarchical.cpp:505] Removed slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S2 I0128 12:20:27.501849 300982272 hierarchical.cpp:505] Removed slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S1 I0128 12:20:27.502151 300982272 hierarchical.cpp:505] Removed slave bd6f3479-18eb-478d-8a08-a41364ecbd05-S0 I0128 12:20:27.502595 300982272 hierarchical.cpp:326] Removed framework framework(18266) I0128 12:20:27.502773 299909120 hierarchical.cpp:326] Removed framework framework(18265) I0128 12:20:27.503046 299909120 hierarchical.cpp:326] Removed framework framework(18264) I0128 12:20:27.503330 299372544 slave.cpp:3481] master@192.168.178.24:51278 exited W0128 12:20:27.503353 299372544 slave.cpp:3484] Master disconnected! Waiting for a new master to be elected I0128 12:20:27.503422 299372544 slave.cpp:3481] master@192.168.178.24:51278 exited W0128 12:20:27.503446 299372544 slave.cpp:3484] Master disconnected! Waiting for a new master to be elected I0128 12:20:27.503530 299372544 slave.cpp:3481] master@192.168.178.24:51278 exited W0128 12:20:27.503548 299372544 slave.cpp:3484] Master disconnected! 
Waiting for a new master to be elected I0128 12:20:27.512778 2080858880 slave.cpp:667] Slave terminating I0128 12:20:27.515347 2080858880 slave.cpp:667] Slave terminating I0128 12:20:27.517685 2080858880 slave.cpp:667] Slave terminating [ OK ] MasterQuotaTest.AvailableResourcesAfterRescinding (205 ms) ",0,0,1,0,0,0,0,0,0,1,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4546","01/28/2016 21:24:55",3,"Mesos Agents needs to re-resolve hosts in zk string on leader change / failure to connect ""Sample Mesos Agent log: https://gist.github.com/brndnmtthws/fb846fa988487250a809 Note, zookeeper has a function to change the list of servers at runtime: https://github.com/apache/zookeeper/blob/735ea78909e67c648a4978c8d31d63964986af73/src/c/src/zookeeper.c#L1207-L1232 This comes up when using an AWS AutoScalingGroup for managing the set of masters. The agent when it comes up the first time, resolves the zk:// string. Once all the hosts that were in the original string fail (Each fails, is replaced by a new machine, which has the same DNS name), the agent just keeps spinning in an internal loop, never re-resolving the DNS names. Two solutions I see are 1. Update the list of servers / re-resolve 2. Have the agent detect it hasn't connected recently, and kill itself (Which will force a re-resolution when the agent starts back up)""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4562","01/29/2016 23:20:21",2,"Mesos UI shows wrong count for ""started"" tasks ""The task started field shows the number of tasks in state """"TASKS_STARTING"""" as opposed to those in """"TASK_RUNNING"""" state.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4564","01/30/2016 02:11:11",2,"Separate Appc protobuf messages to its own file. ""It would be cleaner to keep the Appc protobuf messages separate from other mesos messages.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4566","01/31/2016 00:05:32",1,"Avoid unnecessary temporary `std::string` constructions and copies in `jsonify`. ""A few of the critical code paths in {{jsonify}} involve unnecessary temporary string construction and copies (inherited from the {{JSON::*}}). For example, {{strings::trim}} is used to remove trailing 0s from printing {{double}}. We print {{double}} a lot, and therefore constructing a temporary {{std::string}} on printing of every double is extremely costly. This ticket captures the work involved in avoiding them.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4573","02/01/2016 16:28:19",5,"Design doc for scheduler HTTP Stream IDs ""This ticket is for the design of HTTP stream IDs, for use with HTTP schedulers. These IDs allow Mesos to distinguish between different instances of HTTP framework schedulers.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4576","02/01/2016 21:12:13",2,"Introduce a stout helper for ""which"" ""We may want to add a helper to {{stout/os.hpp}} that will natively emulate the functionality of the Linux utility {{which}}. i.e. 
This helper may be useful: * for test filters in {{src/tests/environment.cpp}} * a few tests in {{src/tests/containerizer/port_mapping_tests.cpp}} * the {{sha512}} utility in {{src/common/command_utils.cpp}} * as runtime checks in the {{LogrotateContainerLogger}} * etc."""," Option which(const string& command) { Option path = os::getenv(""""PATH""""); // Loop through path and return the first one which os::exists(...). return None(); } ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4584","02/03/2016 00:12:50",2,"Update Rakefile for mesos site generation ""The stuff in site/ directory needs some updates to make it easier to generate updates for mesos.apache.org site.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4590","02/03/2016 20:10:17",2,"Add test case for reservations with same role, different principals ""We don't have a test case that covers $SUBJECT; we probably should.""","",0,0,0,1,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4596","02/04/2016 18:32:45",2,"Add common Appc spec utilities. "" Add common utility functions such as : - validating image information against actual data in the image directory. - getting list of dependencies at depth 1 for an image. - getting image path simple image discovery. ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4604","02/05/2016 09:24:18",2,"ROOT_DOCKER_DockerHealthyTask is flaky. ""Log from Teamcity that is running {{sudo ./bin/mesos-tests.sh}} on AWS EC2 instances: Happens with Ubuntu 15.04, CentOS 6, CentOS 7 _quite_ often. """," [18:27:14][Step 8/8] [----------] 8 tests from HealthCheckTest [18:27:14][Step 8/8] [ RUN ] HealthCheckTest.HealthyTask [18:27:17][Step 8/8] [ OK ] HealthCheckTest.HealthyTask (2222 ms) [18:27:17][Step 8/8] [ RUN ] HealthCheckTest.ROOT_DOCKER_DockerHealthyTask [18:27:36][Step 8/8] ../../src/tests/health_check_tests.cpp:388: Failure [18:27:36][Step 8/8] Failed to wait 15secs for termination [18:27:36][Step 8/8] F0204 18:27:35.981302 23085 logging.cpp:64] RAW: Pure virtual method called [18:27:36][Step 8/8] @ 0x7f7077055e1c google::LogMessage::Fail() [18:27:36][Step 8/8] @ 0x7f707705ba6f google::RawLog__() [18:27:36][Step 8/8] @ 0x7f70760f76c9 __cxa_pure_virtual [18:27:36][Step 8/8] @ 0xa9423c mesos::internal::tests::Cluster::Slaves::shutdown() [18:27:36][Step 8/8] @ 0x1074e45 mesos::internal::tests::MesosTest::ShutdownSlaves() [18:27:36][Step 8/8] @ 0x1074de4 mesos::internal::tests::MesosTest::Shutdown() [18:27:36][Step 8/8] @ 0x1070ec7 mesos::internal::tests::MesosTest::TearDown() [18:27:36][Step 8/8] @ 0x16eb7b2 testing::internal::HandleSehExceptionsInMethodIfSupported<>() [18:27:36][Step 8/8] @ 0x16e61a9 testing::internal::HandleExceptionsInMethodIfSupported<>() [18:27:36][Step 8/8] @ 0x16c56aa testing::Test::Run() [18:27:36][Step 8/8] @ 0x16c5e89 testing::TestInfo::Run() [18:27:36][Step 8/8] @ 0x16c650a testing::TestCase::Run() [18:27:36][Step 8/8] @ 0x16cd1f6 testing::internal::UnitTestImpl::RunAllTests() [18:27:36][Step 8/8] @ 0x16ec513 testing::internal::HandleSehExceptionsInMethodIfSupported<>() [18:27:36][Step 8/8] @ 0x16e6df1 testing::internal::HandleExceptionsInMethodIfSupported<>() [18:27:36][Step 8/8] @ 0x16cbe26 testing::UnitTest::Run() [18:27:36][Step 8/8] @ 0xe54c84 RUN_ALL_TESTS() [18:27:36][Step 8/8] @ 0xe54867 main [18:27:36][Step 8/8] @ 0x7f7071560a40 (unknown) 
[18:27:36][Step 8/8] @ 0x9b52d9 _start [18:27:36][Step 8/8] Aborted (core dumped) [18:27:36][Step 8/8] Process exited with code 134 ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4611","02/05/2016 20:55:37",5,"Passing a lambda to dispatch() always matches the template returning void ""The following idiom does not currently compile: This seems non-intuitive because the following template exists for dispatch: However, lambdas cannot be implicitly cast to a corresponding std::function type. To make this work, you have to explicitly type the lambda before passing it to dispatch. We should add template support to allow lambdas to be passed to dispatch() without explicit typing. """," Future initialized = dispatch(pid, [] () -> Nothing { return Nothing(); }); template Future dispatch(const UPID& pid, const std::function& f) { std::shared_ptr> promise(new Promise()); std::shared_ptr> f_( new std::function( [=](ProcessBase*) { promise->set(f()); })); internal::dispatch(pid, f_); return promise->future(); } std::function f = []() { return Nothing(); }; Future initialized = dispatch(pid, f); ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4614","02/06/2016 00:56:24",3,"SlaveRecoveryTest/0.CleanupHTTPExecutor is flaky ""Just saw this failure on the ASF CI: """," [ RUN ] SlaveRecoveryTest/0.CleanupHTTPExecutor I0206 00:22:44.791671 2824 leveldb.cpp:174] Opened db in 2.539372ms I0206 00:22:44.792459 2824 leveldb.cpp:181] Compacted db in 740473ns I0206 00:22:44.792510 2824 leveldb.cpp:196] Created db iterator in 24164ns I0206 00:22:44.792532 2824 leveldb.cpp:202] Seeked to beginning of db in 1831ns I0206 00:22:44.792548 2824 leveldb.cpp:271] Iterated through 0 keys in the db in 342ns I0206 00:22:44.792605 2824 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0206 00:22:44.793256 2847 recover.cpp:447] Starting replica recovery I0206 00:22:44.793480 2847 recover.cpp:473] Replica is in EMPTY status I0206 00:22:44.794538 2847 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (9472)@172.17.0.2:43484 I0206 00:22:44.795040 2848 recover.cpp:193] Received a recover response from a replica in EMPTY status I0206 00:22:44.795644 2848 recover.cpp:564] Updating replica status to STARTING I0206 00:22:44.796519 2850 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 752810ns I0206 00:22:44.796545 2850 replica.cpp:320] Persisted replica status to STARTING I0206 00:22:44.796725 2848 recover.cpp:473] Replica is in STARTING status I0206 00:22:44.797828 2857 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (9473)@172.17.0.2:43484 I0206 00:22:44.798355 2850 recover.cpp:193] Received a recover response from a replica in STARTING status I0206 00:22:44.799193 2850 recover.cpp:564] Updating replica status to VOTING I0206 00:22:44.799583 2855 master.cpp:376] Master 0b206a40-a9c3-4d44-a5bd-8032d60a32ca (6632562f1ade) started on 172.17.0.2:43484 I0206 00:22:44.799609 2855 master.cpp:378] Flags at startup: --acls="""""""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate=""""true"""" --authenticate_http=""""true"""" --authenticate_slaves=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/n2FxQV/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" 
--http_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_slave_ping_timeouts=""""5"""" --quiet=""""false"""" --recovery_slave_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --slave_ping_timeout=""""15secs"""" --slave_reregister_timeout=""""10mins"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/mesos/mesos-0.28.0/_inst/share/mesos/webui"""" --work_dir=""""/tmp/n2FxQV/master"""" --zk_session_timeout=""""10secs"""" I0206 00:22:44.799991 2855 master.cpp:423] Master only allowing authenticated frameworks to register I0206 00:22:44.800009 2855 master.cpp:428] Master only allowing authenticated slaves to register I0206 00:22:44.800020 2855 credentials.hpp:35] Loading credentials for authentication from '/tmp/n2FxQV/credentials' I0206 00:22:44.800245 2850 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 679345ns I0206 00:22:44.800370 2850 replica.cpp:320] Persisted replica status to VOTING I0206 00:22:44.800397 2855 master.cpp:468] Using default 'crammd5' authenticator I0206 00:22:44.800693 2855 master.cpp:537] Using default 'basic' HTTP authenticator I0206 00:22:44.800815 2855 master.cpp:571] Authorization enabled I0206 00:22:44.801216 2850 recover.cpp:578] Successfully joined the Paxos group I0206 00:22:44.801604 2850 recover.cpp:462] Recover process terminated I0206 00:22:44.801759 2856 whitelist_watcher.cpp:77] No whitelist given I0206 00:22:44.801725 2847 hierarchical.cpp:144] Initialized hierarchical allocator process I0206 00:22:44.803982 2855 master.cpp:1712] The newly elected leader is master@172.17.0.2:43484 with id 0b206a40-a9c3-4d44-a5bd-8032d60a32ca I0206 00:22:44.804026 2855 master.cpp:1725] Elected as the leading master! 
I0206 00:22:44.804059 2855 master.cpp:1470] Recovering from registrar I0206 00:22:44.804424 2855 registrar.cpp:307] Recovering registrar I0206 00:22:44.805202 2855 log.cpp:659] Attempting to start the writer I0206 00:22:44.806782 2856 replica.cpp:493] Replica received implicit promise request from (9475)@172.17.0.2:43484 with proposal 1 I0206 00:22:44.807368 2856 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 547939ns I0206 00:22:44.807395 2856 replica.cpp:342] Persisted promised to 1 I0206 00:22:44.808375 2856 coordinator.cpp:238] Coordinator attempting to fill missing positions I0206 00:22:44.809460 2848 replica.cpp:388] Replica received explicit promise request from (9476)@172.17.0.2:43484 for position 0 with proposal 2 I0206 00:22:44.809929 2848 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 427561ns I0206 00:22:44.809967 2848 replica.cpp:712] Persisted action at 0 I0206 00:22:44.811035 2850 replica.cpp:537] Replica received write request for position 0 from (9477)@172.17.0.2:43484 I0206 00:22:44.811149 2850 leveldb.cpp:436] Reading position from leveldb took 36452ns I0206 00:22:44.811532 2850 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 318924ns I0206 00:22:44.811615 2850 replica.cpp:712] Persisted action at 0 I0206 00:22:44.812532 2850 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 I0206 00:22:44.813117 2850 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 476530ns I0206 00:22:44.813143 2850 replica.cpp:712] Persisted action at 0 I0206 00:22:44.813166 2850 replica.cpp:697] Replica learned NOP action at position 0 I0206 00:22:44.813984 2848 log.cpp:675] Writer started with ending position 0 I0206 00:22:44.815549 2848 leveldb.cpp:436] Reading position from leveldb took 31800ns I0206 00:22:44.817061 2848 registrar.cpp:340] Successfully fetched the registry (0B) in 12.591104ms I0206 00:22:44.817319 2848 registrar.cpp:439] Applied 1 operations in 63480ns; attempting to update the 'registry' I0206 00:22:44.818780 2845 log.cpp:683] Attempting to append 170 bytes to the log I0206 00:22:44.818981 2845 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 I0206 00:22:44.819941 2845 replica.cpp:537] Replica received write request for position 1 from (9478)@172.17.0.2:43484 I0206 00:22:44.820582 2845 leveldb.cpp:341] Persisting action (189 bytes) to leveldb took 600949ns I0206 00:22:44.820608 2845 replica.cpp:712] Persisted action at 1 I0206 00:22:44.821552 2845 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 I0206 00:22:44.821934 2845 leveldb.cpp:341] Persisting action (191 bytes) to leveldb took 352813ns I0206 00:22:44.821960 2845 replica.cpp:712] Persisted action at 1 I0206 00:22:44.821979 2845 replica.cpp:697] Replica learned APPEND action at position 1 I0206 00:22:44.823447 2845 registrar.cpp:484] Successfully updated the 'registry' in 5.987072ms I0206 00:22:44.823580 2845 registrar.cpp:370] Successfully recovered registrar I0206 00:22:44.823833 2845 log.cpp:702] Attempting to truncate the log to 1 I0206 00:22:44.824203 2845 master.cpp:1522] Recovered 0 slaves from the Registry (131B) ; allowing 10mins for slaves to re-register I0206 00:22:44.824291 2845 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 I0206 00:22:44.824645 2845 hierarchical.cpp:171] Skipping recovery of hierarchical allocator: nothing to recover I0206 00:22:44.825222 2850 replica.cpp:537] Replica received write request for position 2 from 
(9479)@172.17.0.2:43484 I0206 00:22:44.825742 2850 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 481617ns I0206 00:22:44.825772 2850 replica.cpp:712] Persisted action at 2 I0206 00:22:44.826748 2852 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 I0206 00:22:44.827368 2852 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 588591ns I0206 00:22:44.827432 2852 leveldb.cpp:399] Deleting ~1 keys from leveldb took 33059ns I0206 00:22:44.827450 2852 replica.cpp:712] Persisted action at 2 I0206 00:22:44.827468 2852 replica.cpp:697] Replica learned TRUNCATE action at position 2 I0206 00:22:44.838011 2824 containerizer.cpp:149] Using isolation: posix/cpu,posix/mem,filesystem/posix W0206 00:22:44.838873 2824 backend.cpp:48] Failed to create 'bind' backend: BindBackend requires root privileges I0206 00:22:44.843785 2857 slave.cpp:193] Slave started on 172.17.0.2:43484 I0206 00:22:44.843819 2857 slave.cpp:194] Flags at startup: --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_auth_server=""""https://auth.docker.io"""" --docker_kill_orphans=""""true"""" --docker_puller_timeout=""""60"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mesos/mesos-0.28.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw"""" I0206 00:22:44.844292 2857 credentials.hpp:83] Loading credential for authentication from '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/credential' I0206 00:22:44.844518 2857 slave.cpp:324] Slave using credential for: test-principal I0206 00:22:44.844696 2857 resources.cpp:564] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000] Trying semicolon-delimited string format instead I0206 
00:22:44.845243 2857 slave.cpp:464] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0206 00:22:44.845326 2857 slave.cpp:472] Slave attributes: [ ] I0206 00:22:44.845342 2857 slave.cpp:477] Slave hostname: 6632562f1ade I0206 00:22:44.845953 2824 sched.cpp:222] Version: 0.28.0 I0206 00:22:44.846853 2848 sched.cpp:326] New master detected at master@172.17.0.2:43484 I0206 00:22:44.846936 2848 sched.cpp:382] Authenticating with master master@172.17.0.2:43484 I0206 00:22:44.846958 2848 sched.cpp:389] Using default CRAM-MD5 authenticatee I0206 00:22:44.847692 2858 state.cpp:58] Recovering state from '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/meta' I0206 00:22:44.848108 2850 status_update_manager.cpp:200] Recovering status update manager I0206 00:22:44.848325 2852 containerizer.cpp:397] Recovering containerizer I0206 00:22:44.848603 2845 authenticatee.cpp:121] Creating new client SASL connection I0206 00:22:44.849719 2845 master.cpp:5523] Authenticating scheduler-63899759-d7fc-42b2-8371-57484f352895@172.17.0.2:43484 I0206 00:22:44.850052 2852 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(662)@172.17.0.2:43484 I0206 00:22:44.850227 2854 provisioner.cpp:245] Provisioner recovery complete I0206 00:22:44.850410 2852 authenticator.cpp:98] Creating new server SASL connection I0206 00:22:44.850692 2852 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0206 00:22:44.850720 2852 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0206 00:22:44.850805 2852 authenticator.cpp:203] Received SASL authentication start I0206 00:22:44.850862 2852 authenticator.cpp:325] Authentication requires more steps I0206 00:22:44.850939 2852 authenticatee.cpp:258] Received SASL authentication step I0206 00:22:44.851027 2852 authenticator.cpp:231] Received SASL authentication step I0206 00:22:44.851052 2852 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '6632562f1ade' server FQDN: '6632562f1ade' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0206 00:22:44.851063 2852 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0206 00:22:44.851102 2852 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0206 00:22:44.851121 2852 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '6632562f1ade' server FQDN: '6632562f1ade' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0206 00:22:44.851130 2852 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0206 00:22:44.851136 2852 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0206 00:22:44.851150 2852 authenticator.cpp:317] Authentication success I0206 00:22:44.851219 2850 authenticatee.cpp:298] Authentication success I0206 00:22:44.851310 2850 master.cpp:5553] Successfully authenticated principal 'test-principal' at scheduler-63899759-d7fc-42b2-8371-57484f352895@172.17.0.2:43484 I0206 00:22:44.851485 2849 slave.cpp:4496] Finished recovery I0206 00:22:44.852154 2843 sched.cpp:471] Successfully authenticated with master master@172.17.0.2:43484 I0206 00:22:44.852175 2843 sched.cpp:776] Sending SUBSCRIBE call to master@172.17.0.2:43484 I0206 00:22:44.852262 2843 sched.cpp:809] Will retry registration in 939.183679ms if necessary I0206 00:22:44.852375 2844 master.cpp:2280] Received 
SUBSCRIBE call for framework 'default' at scheduler-63899759-d7fc-42b2-8371-57484f352895@172.17.0.2:43484 I0206 00:22:44.852448 2844 master.cpp:1751] Authorizing framework principal 'test-principal' to receive offers for role '*' I0206 00:22:44.852699 2852 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(662)@172.17.0.2:43484 I0206 00:22:44.852782 2844 master.cpp:2351] Subscribing framework default with checkpointing enabled and capabilities [ ] I0206 00:22:44.853056 2849 slave.cpp:4668] Querying resource estimator for oversubscribable resources I0206 00:22:44.853421 2856 hierarchical.cpp:265] Added framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 I0206 00:22:44.853513 2856 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:44.853582 2844 sched.cpp:703] Framework registered with 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 I0206 00:22:44.853613 2852 slave.cpp:4682] Received oversubscribable resources from the resource estimator I0206 00:22:44.853663 2844 sched.cpp:717] Scheduler::registered took 53762ns I0206 00:22:44.853899 2843 slave.cpp:796] New master detected at master@172.17.0.2:43484 I0206 00:22:44.853955 2854 status_update_manager.cpp:174] Pausing sending status updates I0206 00:22:44.853997 2856 hierarchical.cpp:1498] No inverse offers to send out! I0206 00:22:44.853960 2843 slave.cpp:859] Authenticating with master master@172.17.0.2:43484 I0206 00:22:44.854035 2843 slave.cpp:864] Using default CRAM-MD5 authenticatee I0206 00:22:44.854030 2856 hierarchical.cpp:1096] Performed allocation for 0 slaves in 581355ns I0206 00:22:44.854182 2843 slave.cpp:832] Detecting new master I0206 00:22:44.854277 2854 authenticatee.cpp:121] Creating new client SASL connection I0206 00:22:44.854517 2843 master.cpp:5523] Authenticating slave@172.17.0.2:43484 I0206 00:22:44.854603 2854 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(663)@172.17.0.2:43484 I0206 00:22:44.854836 2855 authenticator.cpp:98] Creating new server SASL connection I0206 00:22:44.855013 2852 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0206 00:22:44.855044 2852 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0206 00:22:44.855139 2855 authenticator.cpp:203] Received SASL authentication start I0206 00:22:44.855186 2855 authenticator.cpp:325] Authentication requires more steps I0206 00:22:44.855263 2855 authenticatee.cpp:258] Received SASL authentication step I0206 00:22:44.855352 2855 authenticator.cpp:231] Received SASL authentication step I0206 00:22:44.855381 2855 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '6632562f1ade' server FQDN: '6632562f1ade' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0206 00:22:44.855389 2855 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0206 00:22:44.855419 2855 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0206 00:22:44.855438 2855 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '6632562f1ade' server FQDN: '6632562f1ade' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0206 00:22:44.855448 2855 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0206 00:22:44.855453 2855 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0206 00:22:44.855464 2855 
authenticator.cpp:317] Authentication success I0206 00:22:44.855540 2851 authenticatee.cpp:298] Authentication success I0206 00:22:44.855721 2851 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(663)@172.17.0.2:43484 I0206 00:22:44.855832 2852 slave.cpp:927] Successfully authenticated with master master@172.17.0.2:43484 I0206 00:22:44.855615 2855 master.cpp:5553] Successfully authenticated principal 'test-principal' at slave@172.17.0.2:43484 I0206 00:22:44.855973 2852 slave.cpp:1321] Will retry registration in 9.327708ms if necessary I0206 00:22:44.856145 2854 master.cpp:4237] Registering slave at slave@172.17.0.2:43484 (6632562f1ade) with id 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 I0206 00:22:44.856598 2851 registrar.cpp:439] Applied 1 operations in 59112ns; attempting to update the 'registry' I0206 00:22:44.857403 2851 log.cpp:683] Attempting to append 339 bytes to the log I0206 00:22:44.857525 2855 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 I0206 00:22:44.858482 2844 replica.cpp:537] Replica received write request for position 3 from (9493)@172.17.0.2:43484 I0206 00:22:44.858755 2844 leveldb.cpp:341] Persisting action (358 bytes) to leveldb took 228484ns I0206 00:22:44.858855 2844 replica.cpp:712] Persisted action at 3 I0206 00:22:44.859751 2852 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 I0206 00:22:44.860332 2852 leveldb.cpp:341] Persisting action (360 bytes) to leveldb took 549638ns I0206 00:22:44.860358 2852 replica.cpp:712] Persisted action at 3 I0206 00:22:44.860411 2852 replica.cpp:697] Replica learned APPEND action at position 3 I0206 00:22:44.862709 2856 registrar.cpp:484] Successfully updated the 'registry' in 6.020864ms I0206 00:22:44.863106 2850 log.cpp:702] Attempting to truncate the log to 3 I0206 00:22:44.863358 2850 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 I0206 00:22:44.864321 2850 slave.cpp:3436] Received ping from slave-observer(288)@172.17.0.2:43484 I0206 00:22:44.864706 2849 hierarchical.cpp:473] Added slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 (6632562f1ade) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) I0206 00:22:44.864716 2843 replica.cpp:537] Replica received write request for position 4 from (9494)@172.17.0.2:43484 I0206 00:22:44.865309 2843 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 410199ns I0206 00:22:44.865337 2843 replica.cpp:712] Persisted action at 4 I0206 00:22:44.866092 2849 hierarchical.cpp:1498] No inverse offers to send out! 
I0206 00:22:44.866132 2848 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 I0206 00:22:44.866137 2849 hierarchical.cpp:1116] Performed allocation for slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 in 1.30657ms I0206 00:22:44.866497 2856 master.cpp:4305] Registered slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 at slave@172.17.0.2:43484 (6632562f1ade) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0206 00:22:44.866564 2843 slave.cpp:1321] Will retry registration in 32.803438ms if necessary I0206 00:22:44.866690 2843 slave.cpp:971] Registered with master master@172.17.0.2:43484; given slave ID 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 I0206 00:22:44.866716 2843 fetcher.cpp:81] Clearing fetcher cache I0206 00:22:44.867066 2856 master.cpp:5352] Sending 1 offers to framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 (default) at scheduler-63899759-d7fc-42b2-8371-57484f352895@172.17.0.2:43484 I0206 00:22:44.867105 2843 slave.cpp:994] Checkpointing SlaveInfo to '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/meta/slaves/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0/slave.info' I0206 00:22:44.867347 2856 master.cpp:4207] Slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 at slave@172.17.0.2:43484 (6632562f1ade) already registered, resending acknowledgement I0206 00:22:44.867441 2856 status_update_manager.cpp:181] Resuming sending status updates I0206 00:22:44.867465 2843 slave.cpp:1030] Forwarding total oversubscribed resources W0206 00:22:44.867547 2843 slave.cpp:1016] Already registered with master master@172.17.0.2:43484 I0206 00:22:44.867574 2843 slave.cpp:1030] Forwarding total oversubscribed resources I0206 00:22:44.867710 2843 master.cpp:4646] Received update of slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 at slave@172.17.0.2:43484 (6632562f1ade) with total oversubscribed resources I0206 00:22:44.867951 2856 sched.cpp:873] Scheduler::resourceOffers took 133371ns I0206 00:22:44.867961 2843 master.cpp:4646] Received update of slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 at slave@172.17.0.2:43484 (6632562f1ade) with total oversubscribed resources I0206 00:22:44.868484 2856 hierarchical.cpp:531] Slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 (6632562f1ade) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I0206 00:22:44.868599 2848 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 2.418545ms I0206 00:22:44.868700 2848 leveldb.cpp:399] Deleting ~2 keys from leveldb took 54053ns I0206 00:22:44.868751 2848 replica.cpp:712] Persisted action at 4 I0206 00:22:44.868811 2848 replica.cpp:697] Replica learned TRUNCATE action at position 4 I0206 00:22:44.869241 2856 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:44.869287 2856 hierarchical.cpp:1498] No inverse offers to send out! I0206 00:22:44.869321 2856 hierarchical.cpp:1116] Performed allocation for slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 in 782848ns I0206 00:22:44.869840 2856 hierarchical.cpp:531] Slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 (6632562f1ade) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I0206 00:22:44.869985 2856 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:44.870028 2856 hierarchical.cpp:1498] No inverse offers to send out! 
I0206 00:22:44.870053 2856 hierarchical.cpp:1116] Performed allocation for slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 in 160104ns I0206 00:22:44.871824 2853 master.cpp:3138] Processing ACCEPT call for offers: [ 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-O0 ] on slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 at slave@172.17.0.2:43484 (6632562f1ade) for framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 (default) at scheduler-63899759-d7fc-42b2-8371-57484f352895@172.17.0.2:43484 I0206 00:22:44.871868 2853 master.cpp:2825] Authorizing framework principal 'test-principal' to launch task 1 as user 'mesos' W0206 00:22:44.873613 2843 validation.cpp:404] Executor http for task 1 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W0206 00:22:44.873667 2843 validation.cpp:416] Executor http for task 1 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. I0206 00:22:44.874035 2843 master.hpp:176] Adding task 1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 (6632562f1ade) I0206 00:22:44.874223 2843 master.cpp:3623] Launching task 1 of framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 (default) at scheduler-63899759-d7fc-42b2-8371-57484f352895@172.17.0.2:43484 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 at slave@172.17.0.2:43484 (6632562f1ade) I0206 00:22:44.874802 2843 slave.cpp:1361] Got assigned task 1 for framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 I0206 00:22:44.874966 2843 slave.cpp:5202] Checkpointing FrameworkInfo to '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/meta/slaves/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0/frameworks/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000/framework.info' I0206 00:22:44.875440 2843 slave.cpp:5213] Checkpointing framework pid 'scheduler-63899759-d7fc-42b2-8371-57484f352895@172.17.0.2:43484' to '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/meta/slaves/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0/frameworks/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000/framework.pid' I0206 00:22:44.876106 2843 slave.cpp:1480] Launching task 1 for framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 I0206 00:22:44.876644 2843 paths.cpp:474] Trying to chown '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/slaves/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0/frameworks/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000/executors/http/runs/fd4649a4-1c82-4eda-b663-b568b6110d17' to user 'mesos' I0206 00:22:44.884089 2843 slave.cpp:5654] Checkpointing ExecutorInfo to '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/meta/slaves/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0/frameworks/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000/executors/http/executor.info' I0206 00:22:44.900928 2843 slave.cpp:5282] Launching executor http of framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 with resources in work directory '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/slaves/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0/frameworks/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000/executors/http/runs/fd4649a4-1c82-4eda-b663-b568b6110d17' I0206 00:22:44.901449 2853 containerizer.cpp:656] Starting container 'fd4649a4-1c82-4eda-b663-b568b6110d17' for executor 'http' of framework '0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000' I0206 00:22:44.901561 2843 slave.cpp:5677] Checkpointing TaskInfo to 
'/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/meta/slaves/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0/frameworks/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000/executors/http/runs/fd4649a4-1c82-4eda-b663-b568b6110d17/tasks/1/task.info' I0206 00:22:44.902060 2843 slave.cpp:1698] Queuing task '1' for executor 'http' of framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 I0206 00:22:44.902207 2843 slave.cpp:749] Successfully attached file '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/slaves/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0/frameworks/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000/executors/http/runs/fd4649a4-1c82-4eda-b663-b568b6110d17' I0206 00:22:44.907027 2850 launcher.cpp:132] Forked child with pid '8875' for container 'fd4649a4-1c82-4eda-b663-b568b6110d17' I0206 00:22:44.907229 2850 containerizer.cpp:1094] Checkpointing executor's forked pid 8875 to '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/meta/slaves/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0/frameworks/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000/executors/http/runs/fd4649a4-1c82-4eda-b663-b568b6110d17/pids/forked.pid' WARNING: Logging before InitGoogleLogging() is written to STDERR I0206 00:22:45.080060 8875 process.cpp:991] libprocess is initialized on 172.17.0.2:49724 for 16 cpus I0206 00:22:45.082499 8875 logging.cpp:193] Logging to STDERR I0206 00:22:45.082862 8875 executor.cpp:172] Version: 0.28.0 I0206 00:22:45.087201 8903 executor.cpp:316] Connected with the agent I0206 00:22:45.802878 2858 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:45.802969 2858 hierarchical.cpp:1498] No inverse offers to send out! I0206 00:22:45.803014 2858 hierarchical.cpp:1096] Performed allocation for 1 slaves in 424120ns 2016-02-06 00:22:45,982:2824(0x7fd8c5ffb700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:40712] zk retcode=-4, errno=111(Connection refused): server refused to accept the client W0206 00:22:46.588022 2854 group.cpp:503] Timed out waiting to connect to ZooKeeper. 
Forcing ZooKeeper session (sessionId=0) expiration I0206 00:22:46.588969 2854 group.cpp:519] ZooKeeper session expired 2016-02-06 00:22:46,589:2824(0x7fd9fefd1700):ZOO_INFO@zookeeper_close@2522: Freeing zookeeper resources for sessionId=0 2016-02-06 00:22:46,589:2824(0x7fda03fdb700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2016-02-06 00:22:46,589:2824(0x7fda03fdb700):ZOO_INFO@log_env@716: Client environment:host.name=6632562f1ade 2016-02-06 00:22:46,589:2824(0x7fda03fdb700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2016-02-06 00:22:46,589:2824(0x7fda03fdb700):ZOO_INFO@log_env@724: Client environment:os.arch=3.13.0-36-lowlatency 2016-02-06 00:22:46,589:2824(0x7fda03fdb700):ZOO_INFO@log_env@725: Client environment:os.version=#63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 2016-02-06 00:22:46,589:2824(0x7fda03fdb700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2016-02-06 00:22:46,589:2824(0x7fda03fdb700):ZOO_INFO@log_env@741: Client environment:user.home=/home/mesos 2016-02-06 00:22:46,589:2824(0x7fda03fdb700):ZOO_INFO@log_env@753: Client environment:user.dir=/tmp/n2FxQV 2016-02-06 00:22:46,590:2824(0x7fda03fdb700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:40712 sessionTimeout=10000 watcher=0x7fda10e9e520 sessionId=0 sessionPasswd= context=0x7fd9d401bc10 flags=0 2016-02-06 00:22:46,590:2824(0x7fd8c67fc700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:40712] zk retcode=-4, errno=111(Connection refused): server refused to accept the client I0206 00:22:46.804400 2844 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:46.804481 2844 hierarchical.cpp:1498] No inverse offers to send out! I0206 00:22:46.804514 2844 hierarchical.cpp:1096] Performed allocation for 1 slaves in 347954ns I0206 00:22:47.805842 2847 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:47.805934 2847 hierarchical.cpp:1498] No inverse offers to send out! I0206 00:22:47.805980 2847 hierarchical.cpp:1096] Performed allocation for 1 slaves in 415449ns I0206 00:22:48.807723 2851 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:48.807814 2851 hierarchical.cpp:1498] No inverse offers to send out! I0206 00:22:48.807857 2851 hierarchical.cpp:1096] Performed allocation for 1 slaves in 442104ns I0206 00:22:49.808733 2848 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:49.808816 2848 hierarchical.cpp:1498] No inverse offers to send out! I0206 00:22:49.808856 2848 hierarchical.cpp:1096] Performed allocation for 1 slaves in 384959ns 2016-02-06 00:22:49,926:2824(0x7fd8c67fc700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:40712] zk retcode=-4, errno=111(Connection refused): server refused to accept the client I0206 00:22:50.810307 2847 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:50.810400 2847 hierarchical.cpp:1498] No inverse offers to send out! I0206 00:22:50.810443 2847 hierarchical.cpp:1096] Performed allocation for 1 slaves in 389572ns I0206 00:22:51.811586 2849 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:51.811681 2849 hierarchical.cpp:1498] No inverse offers to send out! I0206 00:22:51.811722 2849 hierarchical.cpp:1096] Performed allocation for 1 slaves in 404450ns I0206 00:22:52.812860 2851 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:52.812944 2851 hierarchical.cpp:1498] No inverse offers to send out! 
I0206 00:22:52.812981 2851 hierarchical.cpp:1096] Performed allocation for 1 slaves in 359671ns 2016-02-06 00:22:53,263:2824(0x7fd8c67fc700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:40712] zk retcode=-4, errno=111(Connection refused): server refused to accept the client I0206 00:22:53.814512 2847 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:53.814599 2847 hierarchical.cpp:1498] No inverse offers to send out! I0206 00:22:53.814651 2847 hierarchical.cpp:1096] Performed allocation for 1 slaves in 386669ns I0206 00:22:54.815238 2852 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:54.815321 2852 hierarchical.cpp:1498] No inverse offers to send out! I0206 00:22:54.815356 2852 hierarchical.cpp:1096] Performed allocation for 1 slaves in 376235ns I0206 00:22:55.816453 2846 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:55.816550 2846 hierarchical.cpp:1498] No inverse offers to send out! I0206 00:22:55.816596 2846 hierarchical.cpp:1096] Performed allocation for 1 slaves in 416350ns W0206 00:22:56.592408 2849 group.cpp:503] Timed out waiting to connect to ZooKeeper. Forcing ZooKeeper session (sessionId=0) expiration I0206 00:22:56.593480 2849 group.cpp:519] ZooKeeper session expired 2016-02-06 00:22:56,593:2824(0x7fda017d6700):ZOO_INFO@zookeeper_close@2522: Freeing zookeeper resources for sessionId=0 2016-02-06 00:22:56,594:2824(0x7fda007d4700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5 2016-02-06 00:22:56,594:2824(0x7fda007d4700):ZOO_INFO@log_env@716: Client environment:host.name=6632562f1ade 2016-02-06 00:22:56,594:2824(0x7fda007d4700):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2016-02-06 00:22:56,594:2824(0x7fda007d4700):ZOO_INFO@log_env@724: Client environment:os.arch=3.13.0-36-lowlatency 2016-02-06 00:22:56,594:2824(0x7fda007d4700):ZOO_INFO@log_env@725: Client environment:os.version=#63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 2016-02-06 00:22:56,594:2824(0x7fda007d4700):ZOO_INFO@log_env@733: Client environment:user.name=(null) 2016-02-06 00:22:56,594:2824(0x7fda007d4700):ZOO_INFO@log_env@741: Client environment:user.home=/home/mesos 2016-02-06 00:22:56,594:2824(0x7fda007d4700):ZOO_INFO@log_env@753: Client environment:user.dir=/tmp/n2FxQV 2016-02-06 00:22:56,594:2824(0x7fda007d4700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:40712 sessionTimeout=10000 watcher=0x7fda10e9e520 sessionId=0 sessionPasswd= context=0x7fd9e401f350 flags=0 2016-02-06 00:22:56,595:2824(0x7fd8c5ffb700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [127.0.0.1:40712] zk retcode=-4, errno=111(Connection refused): server refused to accept the client I0206 00:22:56.817683 2848 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:56.817766 2848 hierarchical.cpp:1498] No inverse offers to send out! I0206 00:22:56.817803 2848 hierarchical.cpp:1096] Performed allocation for 1 slaves in 374115ns I0206 00:22:57.818447 2844 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:57.818526 2844 hierarchical.cpp:1498] No inverse offers to send out! I0206 00:22:57.818562 2844 hierarchical.cpp:1096] Performed allocation for 1 slaves in 344545ns I0206 00:22:58.819828 2851 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:58.819914 2851 hierarchical.cpp:1498] No inverse offers to send out! 
I0206 00:22:58.819957 2851 hierarchical.cpp:1096] Performed allocation for 1 slaves in 376948ns I0206 00:22:59.820874 2848 hierarchical.cpp:1403] No resources available to allocate! I0206 00:22:59.820957 2848 hierarchical.cpp:1498] No inverse offers to send out! I0206 00:22:59.820991 2848 hierarchical.cpp:1096] Performed allocation for 1 slaves in 344192ns I0206 00:22:59.854698 2845 slave.cpp:4668] Querying resource estimator for oversubscribable resources I0206 00:22:59.854991 2845 slave.cpp:4682] Received oversubscribable resources from the resource estimator I0206 00:22:59.864612 2857 slave.cpp:3436] Received ping from slave-observer(288)@172.17.0.2:43484 ../../src/tests/slave_recovery_tests.cpp:1105: Failure Failed to wait 15secs for updateCall1 I0206 00:22:59.876358 2852 master.cpp:1213] Framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 (default) at scheduler-63899759-d7fc-42b2-8371-57484f352895@172.17.0.2:43484 disconnected I0206 00:22:59.876410 2852 master.cpp:2576] Disconnecting framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 (default) at scheduler-63899759-d7fc-42b2-8371-57484f352895@172.17.0.2:43484 I0206 00:22:59.876456 2852 master.cpp:2600] Deactivating framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 (default) at scheduler-63899759-d7fc-42b2-8371-57484f352895@172.17.0.2:43484 I0206 00:22:59.876569 2852 master.cpp:1237] Giving framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 (default) at scheduler-63899759-d7fc-42b2-8371-57484f352895@172.17.0.2:43484 0ns to failover I0206 00:22:59.876981 2844 hierarchical.cpp:375] Deactivated framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 I0206 00:22:59.877049 2844 master.cpp:5204] Framework failover timeout, removing framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 (default) at scheduler-63899759-d7fc-42b2-8371-57484f352895@172.17.0.2:43484 I0206 00:22:59.877075 2844 master.cpp:5935] Removing framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 (default) at scheduler-63899759-d7fc-42b2-8371-57484f352895@172.17.0.2:43484 I0206 00:22:59.877276 2844 master.cpp:6447] Updating the state of task 1 of framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 (latest state: TASK_KILLED, status update state: TASK_KILLED) I0206 00:22:59.878051 2844 master.cpp:6513] Removing task 1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 on slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 at slave@172.17.0.2:43484 (6632562f1ade) I0206 00:22:59.878433 2844 master.cpp:6542] Removing executor 'http' with resources of framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 on slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 at slave@172.17.0.2:43484 (6632562f1ade) I0206 00:22:59.878667 2852 slave.cpp:2079] Asked to shut down framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 by master@172.17.0.2:43484 I0206 00:22:59.878733 2852 slave.cpp:2104] Shutting down framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 I0206 00:22:59.878806 2852 slave.cpp:4129] Shutting down executor 'http' of framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 W0206 00:22:59.878834 2852 slave.hpp:655] Unable to send event to executor 'http' of framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000: unknown connection type I0206 00:22:59.879550 2844 master.cpp:1027] Master terminating I0206 00:22:59.879703 2854 hierarchical.cpp:892] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on slave 
0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 from framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 I0206 00:22:59.879947 2854 hierarchical.cpp:326] Removed framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 I0206 00:22:59.880306 2854 hierarchical.cpp:505] Removed slave 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0 I0206 00:22:59.880666 2852 slave.cpp:3482] master@172.17.0.2:43484 exited W0206 00:22:59.880695 2852 slave.cpp:3485] Master disconnected! Waiting for a new master to be elected I0206 00:22:59.885498 2857 containerizer.cpp:1318] Destroying container 'fd4649a4-1c82-4eda-b663-b568b6110d17' I0206 00:22:59.904532 2858 containerizer.cpp:1534] Executor for container 'fd4649a4-1c82-4eda-b663-b568b6110d17' has exited I0206 00:22:59.907024 2858 provisioner.cpp:306] Ignoring destroy request for unknown container fd4649a4-1c82-4eda-b663-b568b6110d17 I0206 00:22:59.907428 2858 slave.cpp:3817] Executor 'http' of framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 terminated with signal Killed I0206 00:22:59.907538 2858 slave.cpp:3921] Cleaning up executor 'http' of framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 I0206 00:22:59.908213 2858 slave.cpp:4009] Cleaning up framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 I0206 00:22:59.908555 2858 gc.cpp:54] Scheduling '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/slaves/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0/frameworks/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000/executors/http/runs/fd4649a4-1c82-4eda-b663-b568b6110d17' for gc 6.99998949252444days in the future I0206 00:22:59.908720 2858 gc.cpp:54] Scheduling '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/slaves/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0/frameworks/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000/executors/http' for gc 6.99998949082074days in the future I0206 00:22:59.908807 2858 gc.cpp:54] Scheduling '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/meta/slaves/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0/frameworks/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000/executors/http/runs/fd4649a4-1c82-4eda-b663-b568b6110d17' for gc 6.99998948980444days in the future I0206 00:22:59.908927 2858 gc.cpp:54] Scheduling '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/meta/slaves/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0/frameworks/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000/executors/http' for gc 6.99998948890074days in the future I0206 00:22:59.909009 2858 gc.cpp:54] Scheduling '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/slaves/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0/frameworks/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000' for gc 6.99998948710518days in the future I0206 00:22:59.909121 2858 gc.cpp:54] Scheduling '/tmp/SlaveRecoveryTest_0_CleanupHTTPExecutor_kAXwvw/meta/slaves/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-S0/frameworks/0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000' for gc 6.99998948630815days in the future I0206 00:22:59.909211 2858 status_update_manager.cpp:282] Closing status update streams for framework 0b206a40-a9c3-4d44-a5bd-8032d60a32ca-0000 I0206 00:22:59.910423 2853 slave.cpp:668] Slave terminating ../../3rdparty/libprocess/include/process/gmock.hpp:425: Failure Actual function call count doesn't match EXPECT_CALL(filter->mock, filter(testing::A()))... 
Expected args: union http matcher (72-byte object , UPDATE, 1-byte object <1B>, 16-byte object <1B-F4 34-01 00-00 00-00 00-00 00-00 DA-7F 00-00>) Expected: to be called once Actual: never called - unsatisfied and active ../../3rdparty/libprocess/include/process/gmock.hpp:425: Failure Actual function call count doesn't match EXPECT_CALL(filter->mock, filter(testing::A()))... Expected args: union http matcher (72-byte object , UPDATE, 1-byte object <1B>, 16-byte object <1B-F4 34-01 00-00 00-00 00-00 00-00 DA-7F 00-00>) Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] SlaveRecoveryTest/0.CleanupHTTPExecutor, where TypeParam = mesos::internal::slave::MesosContainerizer (15126 ms) ",0,0,1,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4615","02/06/2016 01:43:12",1,"ContainerLoggerTest.DefaultToSandbox is flaky ""Just saw this failure on the ASF CI: """," [ RUN ] ContainerLoggerTest.DefaultToSandbox I0206 01:25:03.766458 2824 leveldb.cpp:174] Opened db in 72.979786ms I0206 01:25:03.811712 2824 leveldb.cpp:181] Compacted db in 45.162067ms I0206 01:25:03.811810 2824 leveldb.cpp:196] Created db iterator in 26090ns I0206 01:25:03.811828 2824 leveldb.cpp:202] Seeked to beginning of db in 3173ns I0206 01:25:03.811839 2824 leveldb.cpp:271] Iterated through 0 keys in the db in 497ns I0206 01:25:03.811900 2824 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0206 01:25:03.812785 2849 recover.cpp:447] Starting replica recovery I0206 01:25:03.813043 2849 recover.cpp:473] Replica is in EMPTY status I0206 01:25:03.814668 2854 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (371)@172.17.0.8:37843 I0206 01:25:03.815210 2849 recover.cpp:193] Received a recover response from a replica in EMPTY status I0206 01:25:03.815732 2854 recover.cpp:564] Updating replica status to STARTING I0206 01:25:03.819664 2857 master.cpp:376] Master 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de (74ef606c4063) started on 172.17.0.8:37843 I0206 01:25:03.819703 2857 master.cpp:378] Flags at startup: --acls="""""""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate=""""true"""" --authenticate_http=""""true"""" --authenticate_slaves=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/h5vu5I/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_slave_ping_timeouts=""""5"""" --quiet=""""false"""" --recovery_slave_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --slave_ping_timeout=""""15secs"""" --slave_reregister_timeout=""""10mins"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/mesos/mesos-0.28.0/_inst/share/mesos/webui"""" --work_dir=""""/tmp/h5vu5I/master"""" --zk_session_timeout=""""10secs"""" I0206 01:25:03.820241 2857 master.cpp:423] Master only allowing authenticated frameworks to register I0206 01:25:03.820257 2857 master.cpp:428] Master only allowing authenticated slaves to register I0206 01:25:03.820269 2857 
credentials.hpp:35] Loading credentials for authentication from '/tmp/h5vu5I/credentials' I0206 01:25:03.821110 2857 master.cpp:468] Using default 'crammd5' authenticator I0206 01:25:03.821311 2857 master.cpp:537] Using default 'basic' HTTP authenticator I0206 01:25:03.821636 2857 master.cpp:571] Authorization enabled I0206 01:25:03.821979 2846 hierarchical.cpp:144] Initialized hierarchical allocator process I0206 01:25:03.822057 2846 whitelist_watcher.cpp:77] No whitelist given I0206 01:25:03.825460 2847 master.cpp:1712] The newly elected leader is master@172.17.0.8:37843 with id 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de I0206 01:25:03.825512 2847 master.cpp:1725] Elected as the leading master! I0206 01:25:03.825533 2847 master.cpp:1470] Recovering from registrar I0206 01:25:03.825835 2847 registrar.cpp:307] Recovering registrar I0206 01:25:03.848212 2854 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 32.226093ms I0206 01:25:03.848299 2854 replica.cpp:320] Persisted replica status to STARTING I0206 01:25:03.848702 2854 recover.cpp:473] Replica is in STARTING status I0206 01:25:03.850728 2858 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (373)@172.17.0.8:37843 I0206 01:25:03.851230 2854 recover.cpp:193] Received a recover response from a replica in STARTING status I0206 01:25:03.852018 2854 recover.cpp:564] Updating replica status to VOTING I0206 01:25:03.881681 2854 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 29.184163ms I0206 01:25:03.881772 2854 replica.cpp:320] Persisted replica status to VOTING I0206 01:25:03.882058 2854 recover.cpp:578] Successfully joined the Paxos group I0206 01:25:03.882258 2854 recover.cpp:462] Recover process terminated I0206 01:25:03.883076 2854 log.cpp:659] Attempting to start the writer I0206 01:25:03.885040 2854 replica.cpp:493] Replica received implicit promise request from (374)@172.17.0.8:37843 with proposal 1 I0206 01:25:03.915132 2854 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 29.980589ms I0206 01:25:03.915215 2854 replica.cpp:342] Persisted promised to 1 I0206 01:25:03.916038 2856 coordinator.cpp:238] Coordinator attempting to fill missing positions I0206 01:25:03.917659 2856 replica.cpp:388] Replica received explicit promise request from (375)@172.17.0.8:37843 for position 0 with proposal 2 I0206 01:25:03.948698 2856 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 30.974607ms I0206 01:25:03.948786 2856 replica.cpp:712] Persisted action at 0 I0206 01:25:03.950920 2849 replica.cpp:537] Replica received write request for position 0 from (376)@172.17.0.8:37843 I0206 01:25:03.951011 2849 leveldb.cpp:436] Reading position from leveldb took 44263ns I0206 01:25:03.982026 2849 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 30.947321ms I0206 01:25:03.982225 2849 replica.cpp:712] Persisted action at 0 I0206 01:25:03.983867 2849 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 I0206 01:25:04.015499 2849 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 30.957888ms I0206 01:25:04.015591 2849 replica.cpp:712] Persisted action at 0 I0206 01:25:04.015682 2849 replica.cpp:697] Replica learned NOP action at position 0 I0206 01:25:04.016666 2849 log.cpp:675] Writer started with ending position 0 I0206 01:25:04.017881 2855 leveldb.cpp:436] Reading position from leveldb took 56779ns I0206 01:25:04.018934 2852 registrar.cpp:340] Successfully fetched the registry (0B) in 193.048064ms I0206 01:25:04.019076 2852 
registrar.cpp:439] Applied 1 operations in 38180ns; attempting to update the 'registry' I0206 01:25:04.020100 2844 log.cpp:683] Attempting to append 170 bytes to the log I0206 01:25:04.020288 2855 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 I0206 01:25:04.021323 2844 replica.cpp:537] Replica received write request for position 1 from (377)@172.17.0.8:37843 I0206 01:25:04.054726 2844 leveldb.cpp:341] Persisting action (189 bytes) to leveldb took 33.309419ms I0206 01:25:04.054818 2844 replica.cpp:712] Persisted action at 1 I0206 01:25:04.055933 2844 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 I0206 01:25:04.088142 2844 leveldb.cpp:341] Persisting action (191 bytes) to leveldb took 32.116643ms I0206 01:25:04.088230 2844 replica.cpp:712] Persisted action at 1 I0206 01:25:04.088265 2844 replica.cpp:697] Replica learned APPEND action at position 1 I0206 01:25:04.090070 2856 registrar.cpp:484] Successfully updated the 'registry' in 70.90816ms I0206 01:25:04.090338 2851 log.cpp:702] Attempting to truncate the log to 1 I0206 01:25:04.090358 2856 registrar.cpp:370] Successfully recovered registrar I0206 01:25:04.090507 2847 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 I0206 01:25:04.090867 2858 master.cpp:1522] Recovered 0 slaves from the Registry (131B) ; allowing 10mins for slaves to re-register I0206 01:25:04.091449 2858 hierarchical.cpp:171] Skipping recovery of hierarchical allocator: nothing to recover I0206 01:25:04.092280 2857 replica.cpp:537] Replica received write request for position 2 from (378)@172.17.0.8:37843 I0206 01:25:04.125702 2857 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 33.192265ms I0206 01:25:04.125804 2857 replica.cpp:712] Persisted action at 2 I0206 01:25:04.127400 2857 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 I0206 01:25:04.157727 2857 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 30.268594ms I0206 01:25:04.157905 2857 leveldb.cpp:399] Deleting ~1 keys from leveldb took 88436ns I0206 01:25:04.157941 2857 replica.cpp:712] Persisted action at 2 I0206 01:25:04.157984 2857 replica.cpp:697] Replica learned TRUNCATE action at position 2 I0206 01:25:04.166174 2824 containerizer.cpp:149] Using isolation: posix/cpu,posix/mem,filesystem/posix W0206 01:25:04.166954 2824 backend.cpp:48] Failed to create 'bind' backend: BindBackend requires root privileges I0206 01:25:04.172008 2844 slave.cpp:193] Slave started on 9)@172.17.0.8:37843 I0206 01:25:04.172046 2844 slave.cpp:194] Flags at startup: --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/ContainerLoggerTest_DefaultToSandbox_FMaKSw/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_auth_server=""""https://auth.docker.io"""" --docker_kill_orphans=""""true"""" --docker_puller_timeout=""""60"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" 
--executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/ContainerLoggerTest_DefaultToSandbox_FMaKSw/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mesos/mesos-0.28.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/ContainerLoggerTest_DefaultToSandbox_FMaKSw"""" I0206 01:25:04.172569 2844 credentials.hpp:83] Loading credential for authentication from '/tmp/ContainerLoggerTest_DefaultToSandbox_FMaKSw/credential' I0206 01:25:04.172886 2844 slave.cpp:324] Slave using credential for: test-principal I0206 01:25:04.173141 2844 resources.cpp:564] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000] Trying semicolon-delimited string format instead I0206 01:25:04.173620 2844 slave.cpp:464] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0206 01:25:04.173686 2844 slave.cpp:472] Slave attributes: [ ] I0206 01:25:04.173702 2844 slave.cpp:477] Slave hostname: 74ef606c4063 I0206 01:25:04.174816 2847 state.cpp:58] Recovering state from '/tmp/ContainerLoggerTest_DefaultToSandbox_FMaKSw/meta' I0206 01:25:04.175441 2847 status_update_manager.cpp:200] Recovering status update manager I0206 01:25:04.175678 2858 containerizer.cpp:397] Recovering containerizer I0206 01:25:04.177573 2858 provisioner.cpp:245] Provisioner recovery complete I0206 01:25:04.178231 2847 slave.cpp:4496] Finished recovery I0206 01:25:04.178834 2847 slave.cpp:4668] Querying resource estimator for oversubscribable resources I0206 01:25:04.179405 2847 slave.cpp:796] New master detected at master@172.17.0.8:37843 I0206 01:25:04.179500 2847 slave.cpp:859] Authenticating with master master@172.17.0.8:37843 I0206 01:25:04.179525 2847 slave.cpp:864] Using default CRAM-MD5 authenticatee I0206 01:25:04.179656 2858 status_update_manager.cpp:174] Pausing sending status updates I0206 01:25:04.179798 2847 slave.cpp:832] Detecting new master I0206 01:25:04.179891 2852 authenticatee.cpp:121] Creating new client SASL connection I0206 01:25:04.179916 2847 slave.cpp:4682] Received oversubscribable resources from the resource estimator I0206 01:25:04.180286 2847 master.cpp:5523] Authenticating slave(9)@172.17.0.8:37843 I0206 01:25:04.180569 2847 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(32)@172.17.0.8:37843 I0206 01:25:04.181000 2847 authenticator.cpp:98] Creating new server SASL connection I0206 01:25:04.181315 2847 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0206 01:25:04.181387 2847 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0206 01:25:04.181562 
2847 authenticator.cpp:203] Received SASL authentication start I0206 01:25:04.181648 2847 authenticator.cpp:325] Authentication requires more steps I0206 01:25:04.181843 2847 authenticatee.cpp:258] Received SASL authentication step I0206 01:25:04.182034 2853 authenticator.cpp:231] Received SASL authentication step I0206 01:25:04.182071 2853 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '74ef606c4063' server FQDN: '74ef606c4063' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0206 01:25:04.182093 2853 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0206 01:25:04.182145 2853 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0206 01:25:04.182173 2853 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '74ef606c4063' server FQDN: '74ef606c4063' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0206 01:25:04.182185 2853 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0206 01:25:04.182193 2853 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0206 01:25:04.182211 2853 authenticator.cpp:317] Authentication success I0206 01:25:04.182333 2849 authenticatee.cpp:298] Authentication success I0206 01:25:04.182422 2853 master.cpp:5553] Successfully authenticated principal 'test-principal' at slave(9)@172.17.0.8:37843 I0206 01:25:04.182510 2853 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(32)@172.17.0.8:37843 I0206 01:25:04.182945 2849 slave.cpp:927] Successfully authenticated with master master@172.17.0.8:37843 I0206 01:25:04.183178 2849 slave.cpp:1321] Will retry registration in 9.87937ms if necessary I0206 01:25:04.183466 2852 master.cpp:4237] Registering slave at slave(9)@172.17.0.8:37843 (74ef606c4063) with id 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 I0206 01:25:04.184039 2845 registrar.cpp:439] Applied 1 operations in 89453ns; attempting to update the 'registry' I0206 01:25:04.185288 2856 log.cpp:683] Attempting to append 339 bytes to the log I0206 01:25:04.185672 2850 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 I0206 01:25:04.186674 2846 replica.cpp:537] Replica received write request for position 3 from (392)@172.17.0.8:37843 I0206 01:25:04.195863 2856 slave.cpp:1321] Will retry registration in 11.038094ms if necessary I0206 01:25:04.196233 2856 master.cpp:4225] Ignoring register slave message from slave(9)@172.17.0.8:37843 (74ef606c4063) as admission is already in progress I0206 01:25:04.208094 2856 slave.cpp:1321] Will retry registration in 27.881223ms if necessary I0206 01:25:04.208472 2856 master.cpp:4225] Ignoring register slave message from slave(9)@172.17.0.8:37843 (74ef606c4063) as admission is already in progress I0206 01:25:04.216698 2846 leveldb.cpp:341] Persisting action (358 bytes) to leveldb took 29.961291ms I0206 01:25:04.216789 2846 replica.cpp:712] Persisted action at 3 I0206 01:25:04.218246 2845 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 I0206 01:25:04.237861 2846 slave.cpp:1321] Will retry registration in 1.006941ms if necessary I0206 01:25:04.238221 2846 master.cpp:4225] Ignoring register slave message from slave(9)@172.17.0.8:37843 (74ef606c4063) as admission is already in progress I0206 01:25:04.239858 2856 slave.cpp:1321] Will retry registration in 167.305686ms if 
necessary I0206 01:25:04.240044 2856 master.cpp:4225] Ignoring register slave message from slave(9)@172.17.0.8:37843 (74ef606c4063) as admission is already in progress I0206 01:25:04.241482 2845 leveldb.cpp:341] Persisting action (360 bytes) to leveldb took 23.193162ms I0206 01:25:04.241524 2845 replica.cpp:712] Persisted action at 3 I0206 01:25:04.241557 2845 replica.cpp:697] Replica learned APPEND action at position 3 I0206 01:25:04.243746 2844 registrar.cpp:484] Successfully updated the 'registry' in 59.587072ms I0206 01:25:04.244210 2857 log.cpp:702] Attempting to truncate the log to 3 I0206 01:25:04.244344 2845 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 I0206 01:25:04.244597 2856 master.cpp:4305] Registered slave 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 at slave(9)@172.17.0.8:37843 (74ef606c4063) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0206 01:25:04.244746 2843 slave.cpp:3436] Received ping from slave-observer(8)@172.17.0.8:37843 I0206 01:25:04.244976 2845 hierarchical.cpp:473] Added slave 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 (74ef606c4063) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) I0206 01:25:04.245072 2843 slave.cpp:971] Registered with master master@172.17.0.8:37843; given slave ID 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 I0206 01:25:04.245121 2843 fetcher.cpp:81] Clearing fetcher cache I0206 01:25:04.245146 2845 hierarchical.cpp:1403] No resources available to allocate! I0206 01:25:04.245178 2845 hierarchical.cpp:1116] Performed allocation for slave 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 in 159744ns I0206 01:25:04.245465 2846 status_update_manager.cpp:181] Resuming sending status updates I0206 01:25:04.245776 2843 slave.cpp:994] Checkpointing SlaveInfo to '/tmp/ContainerLoggerTest_DefaultToSandbox_FMaKSw/meta/slaves/914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0/slave.info' I0206 01:25:04.245745 2846 replica.cpp:537] Replica received write request for position 4 from (393)@172.17.0.8:37843 I0206 01:25:04.246273 2843 slave.cpp:1030] Forwarding total oversubscribed resources I0206 01:25:04.246507 2850 master.cpp:4646] Received update of slave 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 at slave(9)@172.17.0.8:37843 (74ef606c4063) with total oversubscribed resources I0206 01:25:04.247180 2824 sched.cpp:222] Version: 0.28.0 I0206 01:25:04.247155 2850 hierarchical.cpp:531] Slave 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 (74ef606c4063) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) I0206 01:25:04.247357 2850 hierarchical.cpp:1403] No resources available to allocate! 
I0206 01:25:04.247406 2850 hierarchical.cpp:1116] Performed allocation for slave 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 in 183250ns I0206 01:25:04.247938 2854 sched.cpp:326] New master detected at master@172.17.0.8:37843 I0206 01:25:04.248157 2854 sched.cpp:382] Authenticating with master master@172.17.0.8:37843 I0206 01:25:04.248265 2854 sched.cpp:389] Using default CRAM-MD5 authenticatee I0206 01:25:04.248769 2854 authenticatee.cpp:121] Creating new client SASL connection I0206 01:25:04.249311 2854 master.cpp:5523] Authenticating scheduler-f50aad75-78d0-4d9f-b1a4-488d5ab932d6@172.17.0.8:37843 I0206 01:25:04.249646 2854 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(33)@172.17.0.8:37843 I0206 01:25:04.250114 2854 authenticator.cpp:98] Creating new server SASL connection I0206 01:25:04.250453 2854 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0206 01:25:04.250525 2854 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0206 01:25:04.250814 2853 authenticator.cpp:203] Received SASL authentication start I0206 01:25:04.250881 2853 authenticator.cpp:325] Authentication requires more steps I0206 01:25:04.250982 2853 authenticatee.cpp:258] Received SASL authentication step I0206 01:25:04.251092 2853 authenticator.cpp:231] Received SASL authentication step I0206 01:25:04.251128 2853 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '74ef606c4063' server FQDN: '74ef606c4063' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0206 01:25:04.251144 2853 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0206 01:25:04.251200 2853 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0206 01:25:04.251242 2853 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '74ef606c4063' server FQDN: '74ef606c4063' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0206 01:25:04.251260 2853 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0206 01:25:04.251269 2853 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0206 01:25:04.251288 2853 authenticator.cpp:317] Authentication success I0206 01:25:04.251471 2853 authenticatee.cpp:298] Authentication success I0206 01:25:04.251574 2853 master.cpp:5553] Successfully authenticated principal 'test-principal' at scheduler-f50aad75-78d0-4d9f-b1a4-488d5ab932d6@172.17.0.8:37843 I0206 01:25:04.251669 2851 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(33)@172.17.0.8:37843 I0206 01:25:04.252162 2854 sched.cpp:471] Successfully authenticated with master master@172.17.0.8:37843 I0206 01:25:04.252188 2854 sched.cpp:776] Sending SUBSCRIBE call to master@172.17.0.8:37843 I0206 01:25:04.252286 2854 sched.cpp:809] Will retry registration in 1.575999657secs if necessary I0206 01:25:04.252583 2853 master.cpp:2280] Received SUBSCRIBE call for framework 'default' at scheduler-f50aad75-78d0-4d9f-b1a4-488d5ab932d6@172.17.0.8:37843 I0206 01:25:04.252694 2853 master.cpp:1751] Authorizing framework principal 'test-principal' to receive offers for role '*' I0206 01:25:04.253110 2853 master.cpp:2351] Subscribing framework default with checkpointing disabled and capabilities [ ] I0206 01:25:04.253703 2843 hierarchical.cpp:265] Added framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 
01:25:04.255300 2843 hierarchical.cpp:1498] No inverse offers to send out! I0206 01:25:04.255367 2843 hierarchical.cpp:1096] Performed allocation for 1 slaves in 1.621522ms I0206 01:25:04.255820 2844 sched.cpp:703] Framework registered with 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:04.256006 2844 sched.cpp:717] Scheduler::registered took 105156ns I0206 01:25:04.256572 2853 master.cpp:5352] Sending 1 offers to framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 (default) at scheduler-f50aad75-78d0-4d9f-b1a4-488d5ab932d6@172.17.0.8:37843 I0206 01:25:04.257524 2853 sched.cpp:873] Scheduler::resourceOffers took 173470ns I0206 01:25:04.260818 2855 master.cpp:3138] Processing ACCEPT call for offers: [ 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-O0 ] on slave 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 at slave(9)@172.17.0.8:37843 (74ef606c4063) for framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 (default) at scheduler-f50aad75-78d0-4d9f-b1a4-488d5ab932d6@172.17.0.8:37843 I0206 01:25:04.260968 2855 master.cpp:2825] Authorizing framework principal 'test-principal' to launch task 0e7267ed-c5ed-4914-9042-5970b2aaec1c as user 'mesos' I0206 01:25:04.264458 2844 master.hpp:176] Adding task 0e7267ed-c5ed-4914-9042-5970b2aaec1c with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 (74ef606c4063) I0206 01:25:04.264796 2844 master.cpp:3623] Launching task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 (default) at scheduler-f50aad75-78d0-4d9f-b1a4-488d5ab932d6@172.17.0.8:37843 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 at slave(9)@172.17.0.8:37843 (74ef606c4063) I0206 01:25:04.265341 2855 slave.cpp:1361] Got assigned task 0e7267ed-c5ed-4914-9042-5970b2aaec1c for framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:04.265941 2855 resources.cpp:564] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0206 01:25:04.267323 2855 slave.cpp:1480] Launching task 0e7267ed-c5ed-4914-9042-5970b2aaec1c for framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:04.267627 2855 resources.cpp:564] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0206 01:25:04.268705 2855 paths.cpp:474] Trying to chown '/tmp/ContainerLoggerTest_DefaultToSandbox_FMaKSw/slaves/914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0/frameworks/914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000/executors/0e7267ed-c5ed-4914-9042-5970b2aaec1c/runs/5c952202-44cf-427a-8452-0f501140a4b7' to user 'mesos' I0206 01:25:04.274116 2855 slave.cpp:5282] Launching executor 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/tmp/ContainerLoggerTest_DefaultToSandbox_FMaKSw/slaves/914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0/frameworks/914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000/executors/0e7267ed-c5ed-4914-9042-5970b2aaec1c/runs/5c952202-44cf-427a-8452-0f501140a4b7' I0206 01:25:04.275185 2844 containerizer.cpp:656] Starting container '5c952202-44cf-427a-8452-0f501140a4b7' for executor '0e7267ed-c5ed-4914-9042-5970b2aaec1c' of framework '914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000' I0206 01:25:04.275311 2846 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 29.403837ms I0206 01:25:04.275390 2846 replica.cpp:712] Persisted action at 4 I0206 01:25:04.275511 2855 
slave.cpp:1698] Queuing task '0e7267ed-c5ed-4914-9042-5970b2aaec1c' for executor '0e7267ed-c5ed-4914-9042-5970b2aaec1c' of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:04.275832 2855 slave.cpp:749] Successfully attached file '/tmp/ContainerLoggerTest_DefaultToSandbox_FMaKSw/slaves/914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0/frameworks/914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000/executors/0e7267ed-c5ed-4914-9042-5970b2aaec1c/runs/5c952202-44cf-427a-8452-0f501140a4b7' I0206 01:25:04.276707 2855 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 I0206 01:25:04.284708 2844 launcher.cpp:132] Forked child with pid '2872' for container '5c952202-44cf-427a-8452-0f501140a4b7' I0206 01:25:04.301365 2855 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 24.497489ms I0206 01:25:04.301528 2855 leveldb.cpp:399] Deleting ~2 keys from leveldb took 92156ns I0206 01:25:04.301563 2855 replica.cpp:712] Persisted action at 4 I0206 01:25:04.301640 2855 replica.cpp:697] Replica learned TRUNCATE action at position 4 I0206 01:25:04.823314 2854 hierarchical.cpp:1403] No resources available to allocate! I0206 01:25:04.823387 2854 hierarchical.cpp:1498] No inverse offers to send out! I0206 01:25:04.823420 2854 hierarchical.cpp:1096] Performed allocation for 1 slaves in 327509ns I0206 01:25:05.825943 2850 hierarchical.cpp:1403] No resources available to allocate! I0206 01:25:05.826027 2850 hierarchical.cpp:1498] No inverse offers to send out! I0206 01:25:05.826066 2850 hierarchical.cpp:1096] Performed allocation for 1 slaves in 362856ns I0206 01:25:06.827154 2857 hierarchical.cpp:1403] No resources available to allocate! I0206 01:25:06.827235 2857 hierarchical.cpp:1498] No inverse offers to send out! I0206 01:25:06.827275 2857 hierarchical.cpp:1096] Performed allocation for 1 slaves in 328221ns I0206 01:25:07.828547 2843 hierarchical.cpp:1403] No resources available to allocate! I0206 01:25:07.828753 2843 hierarchical.cpp:1498] No inverse offers to send out! I0206 01:25:07.828907 2843 hierarchical.cpp:1096] Performed allocation for 1 slaves in 624979ns I0206 01:25:08.829737 2855 hierarchical.cpp:1403] No resources available to allocate! I0206 01:25:08.829918 2855 hierarchical.cpp:1498] No inverse offers to send out! I0206 01:25:08.830070 2855 hierarchical.cpp:1096] Performed allocation for 1 slaves in 596793ns I0206 01:25:09.831233 2856 hierarchical.cpp:1403] No resources available to allocate! I0206 01:25:09.831316 2856 hierarchical.cpp:1498] No inverse offers to send out! I0206 01:25:09.831352 2856 hierarchical.cpp:1096] Performed allocation for 1 slaves in 353864ns I0206 01:25:10.832953 2849 hierarchical.cpp:1403] No resources available to allocate! I0206 01:25:10.833307 2849 hierarchical.cpp:1498] No inverse offers to send out! I0206 01:25:10.833411 2849 hierarchical.cpp:1096] Performed allocation for 1 slaves in 731864ns I0206 01:25:11.834967 2847 hierarchical.cpp:1403] No resources available to allocate! I0206 01:25:11.835149 2847 hierarchical.cpp:1498] No inverse offers to send out! 
I0206 01:25:11.835294 2847 hierarchical.cpp:1096] Performed allocation for 1 slaves in 586988ns I0206 01:25:12.174247 2853 slave.cpp:2643] Got registration for executor '0e7267ed-c5ed-4914-9042-5970b2aaec1c' of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 from executor(1)@172.17.0.8:43659 I0206 01:25:12.179061 2844 slave.cpp:1863] Sending queued task '0e7267ed-c5ed-4914-9042-5970b2aaec1c' to executor '0e7267ed-c5ed-4914-9042-5970b2aaec1c' of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 at executor(1)@172.17.0.8:43659 I0206 01:25:12.194753 2858 slave.cpp:3002] Handling status update TASK_RUNNING (UUID: 9d924a5b-76ab-4886-8091-7af3428ff179) for task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 from executor(1)@172.17.0.8:43659 I0206 01:25:12.195852 2858 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 9d924a5b-76ab-4886-8091-7af3428ff179) for task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:12.196094 2858 status_update_manager.cpp:497] Creating StatusUpdate stream for task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:12.197000 2858 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: 9d924a5b-76ab-4886-8091-7af3428ff179) for task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 to the slave I0206 01:25:12.197739 2855 slave.cpp:3354] Forwarding the update TASK_RUNNING (UUID: 9d924a5b-76ab-4886-8091-7af3428ff179) for task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 to master@172.17.0.8:37843 I0206 01:25:12.198442 2855 master.cpp:4791] Status update TASK_RUNNING (UUID: 9d924a5b-76ab-4886-8091-7af3428ff179) for task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 from slave 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 at slave(9)@172.17.0.8:37843 (74ef606c4063) I0206 01:25:12.198673 2855 master.cpp:4839] Forwarding status update TASK_RUNNING (UUID: 9d924a5b-76ab-4886-8091-7af3428ff179) for task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:12.199038 2855 master.cpp:6447] Updating the state of task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0206 01:25:12.199581 2854 sched.cpp:981] Scheduler::statusUpdate took 159022ns I0206 01:25:12.200568 2854 master.cpp:3949] Processing ACKNOWLEDGE call 9d924a5b-76ab-4886-8091-7af3428ff179 for task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 (default) at scheduler-f50aad75-78d0-4d9f-b1a4-488d5ab932d6@172.17.0.8:37843 on slave 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 I0206 01:25:12.201513 2858 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 9d924a5b-76ab-4886-8091-7af3428ff179) for task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 ../../src/tests/container_logger_tests.cpp:350: Failure Value of: strings::contains(stdout.get(), """"Hello World!"""") Actual: false Expected: true I0206 01:25:12.201702 2824 sched.cpp:1903] Asked to stop the driver I0206 01:25:12.202831 2848 sched.cpp:1143] Stopping framework '914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000' I0206 01:25:12.203284 2848 master.cpp:5923] Processing TEARDOWN call for framework 
914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 (default) at scheduler-f50aad75-78d0-4d9f-b1a4-488d5ab932d6@172.17.0.8:37843 I0206 01:25:12.203321 2848 master.cpp:5935] Removing framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 (default) at scheduler-f50aad75-78d0-4d9f-b1a4-488d5ab932d6@172.17.0.8:37843 I0206 01:25:12.201762 2854 slave.cpp:3248] Status update manager successfully handled status update TASK_RUNNING (UUID: 9d924a5b-76ab-4886-8091-7af3428ff179) for task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:12.203384 2854 slave.cpp:3264] Sending acknowledgement for status update TASK_RUNNING (UUID: 9d924a5b-76ab-4886-8091-7af3428ff179) for task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 to executor(1)@172.17.0.8:43659 I0206 01:25:12.204712 2843 hierarchical.cpp:375] Deactivated framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:12.204953 2848 master.cpp:6447] Updating the state of task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 (latest state: TASK_KILLED, status update state: TASK_KILLED) I0206 01:25:12.205885 2854 slave.cpp:2412] Status update manager successfully handled status update acknowledgement (UUID: 9d924a5b-76ab-4886-8091-7af3428ff179) for task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:12.206082 2854 slave.cpp:2079] Asked to shut down framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 by master@172.17.0.8:37843 I0206 01:25:12.206125 2854 slave.cpp:2104] Shutting down framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:12.206331 2854 slave.cpp:4129] Shutting down executor '0e7267ed-c5ed-4914-9042-5970b2aaec1c' of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 at executor(1)@172.17.0.8:43659 I0206 01:25:12.206408 2843 hierarchical.cpp:892] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on slave 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 from framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:12.207352 2848 master.cpp:6513] Removing task 0e7267ed-c5ed-4914-9042-5970b2aaec1c with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 on slave 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 at slave(9)@172.17.0.8:37843 (74ef606c4063) I0206 01:25:12.208258 2848 master.cpp:1027] Master terminating I0206 01:25:12.208703 2857 hierarchical.cpp:326] Removed framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:12.209658 2857 hierarchical.cpp:505] Removed slave 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0 I0206 01:25:12.212208 2848 slave.cpp:3482] master@172.17.0.8:37843 exited W0206 01:25:12.212261 2848 slave.cpp:3485] Master disconnected! 
Waiting for a new master to be elected I0206 01:25:12.224596 2854 containerizer.cpp:1318] Destroying container '5c952202-44cf-427a-8452-0f501140a4b7' I0206 01:25:12.241466 2852 slave.cpp:3482] executor(1)@172.17.0.8:43659 exited I0206 01:25:12.250931 2856 containerizer.cpp:1534] Executor for container '5c952202-44cf-427a-8452-0f501140a4b7' has exited I0206 01:25:12.253350 2850 provisioner.cpp:306] Ignoring destroy request for unknown container 5c952202-44cf-427a-8452-0f501140a4b7 I0206 01:25:12.253885 2850 slave.cpp:3817] Executor '0e7267ed-c5ed-4914-9042-5970b2aaec1c' of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 terminated with signal Killed I0206 01:25:12.254125 2850 slave.cpp:3921] Cleaning up executor '0e7267ed-c5ed-4914-9042-5970b2aaec1c' of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 at executor(1)@172.17.0.8:43659 I0206 01:25:12.254545 2847 gc.cpp:54] Scheduling '/tmp/ContainerLoggerTest_DefaultToSandbox_FMaKSw/slaves/914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0/frameworks/914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000/executors/0e7267ed-c5ed-4914-9042-5970b2aaec1c/runs/5c952202-44cf-427a-8452-0f501140a4b7' for gc 6.99999705530074days in the future I0206 01:25:12.254803 2850 slave.cpp:4009] Cleaning up framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:12.254822 2847 gc.cpp:54] Scheduling '/tmp/ContainerLoggerTest_DefaultToSandbox_FMaKSw/slaves/914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0/frameworks/914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000/executors/0e7267ed-c5ed-4914-9042-5970b2aaec1c' for gc 6.99999705202667days in the future I0206 01:25:12.255084 2857 status_update_manager.cpp:282] Closing status update streams for framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:12.255143 2856 gc.cpp:54] Scheduling '/tmp/ContainerLoggerTest_DefaultToSandbox_FMaKSw/slaves/914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-S0/frameworks/914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000' for gc 6.99999704808days in the future I0206 01:25:12.255190 2857 status_update_manager.cpp:528] Cleaning up status update stream for task 0e7267ed-c5ed-4914-9042-5970b2aaec1c of framework 914b62f9-95f6-4c57-a7e3-9b06e2c1c8de-0000 I0206 01:25:12.255192 2850 slave.cpp:668] Slave terminating [ FAILED ] ContainerLoggerTest.DefaultToSandbox (8566 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4619","02/08/2016 00:39:36",1,"Remove markdown files from doxygen pages ""The doxygen html pages corresponding to doc/* markdown files are redundant and have broken links. They don't serve any reasonable purpose in doxygen site.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4622","02/08/2016 22:39:29",1,"Update configuration.md with `--cgroups_net_cls_primary_handle` agent flag. ""As part of the net_cls epic, we introduce an agent flag called `--cgroup_net_cls_primary_handle` . We need to update configuration.md with the corresponding help string. ""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4623","02/09/2016 09:43:13",3,"Add a stub Nvidia GPU isolator. ""We'll first wire up a skeleton Nvidia GPU isolator, which needs to be guarded by a configure flag due to the dependency on NVML.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4624","02/09/2016 09:47:55",1,"Add allocation metrics for ""gpus"" resources. 
""Allocation metrics are currently hard-coded to include only {{\[""""cpus"""", """"mem"""", """"disk""""\]}} resources. We'll need to add """"gpus"""" to the list to start, possibly following up on the TODO to remove the hard-coding. See: https://github.com/apache/mesos/blob/0.27.0/src/master/metrics.cpp#L266-L269 https://github.com/apache/mesos/blob/0.27.0/src/slave/metrics.cpp#L123-L126 ""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4625","02/09/2016 09:52:27",5,"Implement Nvidia GPU isolation w/o filesystem isolation enabled. ""The Nvidia GPU isolator will need to use the device cgroup to restrict access to GPU resources, and will need to recover this information after agent failover. For now this will require that the operator specifies the GPU devices via a flag. To handle filesystem isolation requires that we provide mechanisms for operators to inject volumes with the necessary libraries into all containers using GPU resources, we'll tackle this in a separate ticket.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4626","02/09/2016 09:59:44",13,"Support Nvidia GPUs with filesystem isolation enabled in mesos containerizer. ""When filesystem isolation is enabled in the mesos containerizer, containers that use Nvidia GPU resources need access to GPU libraries residing on the host. We'll need to provide a means for operators to inject the necessary volumes into *all* containers that use """"gpus"""" resources. See the nvidia-docker project for more details: [nvidia-docker/tools/src/nvidia/volumes.go|https://github.com/NVIDIA/nvidia-docker/blob/fda10b2d27bf5578cc5337c23877f827e4d1ed77/tools/src/nvidia/volumes.go#L50-L103]""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4657","02/11/2016 15:32:51",1,"Add LOG(INFO) in `cgroups/net_cls` for debugging allocation of net_cls handles. ""We need to add LOG(INFO) during the prepare phase of `cgroups/net_cls` for debugging management of `net_cls` handles within the isolator. ""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4667","02/12/2016 19:26:51",3,"Expose persistent volume information in HTTP endpoints ""The per-slave {{reserved_resources}} information returned by {{/state}} does not seem to include information about persistent volumes. This makes it hard for operators to use the {{/destroy-volumes}} endpoint.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4669","02/12/2016 22:10:23",2,"Add common compression utility ""We need GZIP uncompress utility for Appc image fetching functionality. The images are tar + gzip'ed and they needs to be first uncompressed so that we can compute sha 512 checksum on it.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4671","02/13/2016 01:59:47",1,"Status updates from executor can be forwarded out of order by the Agent. ""Previously, all status update messages from the executor were forwarded by the agent to the master in the order that they had been received. 
However, that seems to be no longer valid due to a recently introduced change in the agent: This can sometimes lead to status updates being sent out of order depending on the order the {{Future}} is fulfilled from the call to {{status(...)}}."""," // Before sending update, we need to retrieve the container status. containerizer->status(executor->containerId) .onAny(defer(self(), &Slave::_statusUpdate, update, pid, executor->id, lambda::_1)); ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4674","02/15/2016 16:37:06",3,"Linux filesystem isolator tests are flaky. ""LinuxFilesystemIsolatorTest.ROOT_ImageInVolumeWithRootFilesystem sometimes fails on CentOS 7 with this kind of output: LinuxFilesystemIsolatorTest.ROOT_MultipleContainers often has this output: Whether SSL is configured makes no difference. This test may also fail on other platforms, but more rarely. """," ../../src/tests/containerizer/filesystem_isolator_tests.cpp:1054: Failure Failed to wait 2mins for launch ../../src/tests/containerizer/filesystem_isolator_tests.cpp:1138: Failure Failed to wait 1mins for launch1 ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4683","02/16/2016 07:07:19",2,"Document docker runtime isolator. ""Should include the following information: *What features are currently supported in docker runtime isolator. *How to use the docker runtime isolator (user manual). *Compare the different semantics v.s. docker containerizer, and explain why.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4684","02/16/2016 07:15:15",3,"Create base docker image for test suite. ""This should be widely used for unified containerizer testing. Should basically include: *at least one layer. *repositories. For each layer: *root file system as a layer tar ball. *docker image json (manifest). 
*docker version.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4687","02/16/2016 18:13:36",5,"Implement reliable floating point for scalar resources ""Design doc: https://docs.google.com/document/d/14qLxjZsfIpfynbx0USLJR0GELSq8hdZJUWw6kaY_DXc/edit?usp=sharing""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4695","02/17/2016 19:55:29",1,"SlaveTest.StateEndpoint is flaky "" Even though this test does {{Clock::pause()}} before starting the agent, there's a possibility that a numified-stringified double to not equal itself, even after rounding to the nearest int."""," [ RUN ] SlaveTest.StateEndpoint ../../src/tests/slave_tests.cpp:1220: Failure Value of: state.values[""""start_time""""].as().as() Actual: 1458159086 Expected: static_cast(Clock::now().secs()) Which is: 1458159085 [ FAILED ] SlaveTest.StateEndpoint (193 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4702","02/18/2016 01:28:15",1,"Document default value of ""offer_timeout"" ""There isn't a default value (i.e., offers do not timeout by default), but we should clarify this in {{flags.cpp}} and {{configuration.md}}.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4703","02/18/2016 03:50:35",1,"Make Stout configuration modular and consumable by downstream (e.g., libprocess and agent) ""Stout configuration is replicated in at least 3 configuration files -- stout itself, libprocess, and agent. More will follow in the future. We should make a StoutConfigure.cmake that can be included by any package downstream.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4713","02/18/2016 22:19:28",2,"ReviewBot should not fail hard if there are circular dependencies in a review chain ""Instead of failing hard, ReviewBot should post an error to the review that a circular dependency is detected.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4714","02/18/2016 22:39:02",2,"""make DESTDIR= install"" broken ""There is a missing '$(DESTDIR)' prefix in the install-data-hook that causes DESTDIR builds to be broken.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4720","02/19/2016 12:03:48",2,"Add allocator metrics for total vs offered/allocated resources. ""Exposing the current allocation breakdown as seen by the allocator will allow us to correlated the corresponding metrics in the master with what the allocator sees. We should expose at least allocated or available, and total.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4721","02/19/2016 12:04:46",1,"Expose allocation algorithm latency via a metric. ""The allocation algorithm has grown to become fairly expensive, gaining visibility into its latency enables monitoring and alerting. 
Similar allocator timing-related information is already exposed in the log, but should also be exposed via an endpoint.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4722","02/19/2016 12:05:32",1,"Add allocator metric for number of active offer filters ""To diagnose scenarios where frameworks unexpectedly do not receive offers information on currently active filters are needed.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4723","02/19/2016 12:06:22",2,"Add allocator metric for currently satisfied quotas ""We currently expose information on set quotas via dedicated quota endpoints. To diagnose allocator problems one additionally needs information about used quotas.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4731","02/22/2016 01:32:13",3,"Update /frameworks to use jsonify ""This should let us remove the duplicated code in {{http.cpp}} between {{model(Framework)}} and {{json(Full)}}.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4747","02/23/2016 19:40:33",1,"ContainerLoggerTest.MesosContainerizerRecover cannot be executed in isolation ""Some cleanup of spawned processes is missing in {{ContainerLoggerTest.MesosContainerizerRecover}} so that when the test is run in isolation the global teardown might find lingering processes. Observered on OS X with clang-trunk and an unoptimized build. """," [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from ContainerLoggerTest [ RUN ] ContainerLoggerTest.MesosContainerizerRecover [ OK ] ContainerLoggerTest.MesosContainerizerRecover (13 ms) [----------] 1 test from ContainerLoggerTest (13 ms total) [----------] Global test environment tear-down ../../src/tests/environment.cpp:728: Failure Failed Tests completed with child processes remaining: -+- 7112 /SOME/PATH/src/mesos/build/src/.libs/mesos-tests --gtest_filter=ContainerLoggerTest.MesosContainerizerRecover \--- 7130 (sh) [==========] 1 test from 1 test case ran. (23 ms total) [ PASSED ] 1 test. [ FAILED ] 0 tests, listed below: 0 FAILED TESTS ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4748","02/23/2016 19:50:23",3,"Add Appc image fetcher tests. ""Mesos now has support for fetching Appc images. Add tests that verifies the new component.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4750","02/23/2016 21:04:52",2,"Document: Mesos Executor expects all SSL_* environment variables to be set ""I was trying to run Docker containers in a fully SSL-ized Mesos cluster but ran into problems because the executor was failing with a """"Failed to shutdown socket with fd 10: Transport endpoint is not connected"""". My understanding of why this is happening is because the executor was trying to report its status to Mesos slave over HTTPS, but doesnt have the appropriate certs/env setup inside the executor. (Thanks to mslackbot/joseph for helping me figure this out on #mesos) It turns out, the executor expects all SSL_* variables to be set inside `CommandInfo.environment` which gets picked up by the executor to successfully reports its status to the slave. This part of __executor needing all the SSL_* variables to be set in its environment__ is missing in the Mesos SSL transitioning guide. 
I request you to please add this vital information to the doc.""","",0,0,0,0,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4754","02/24/2016 08:49:32",2,"The ""executors"" field is exposed under a backwards incompatible schema. ""In 0.26.0, the master's {{/state}} endpoint generated the following: In 0.27.1, the {{ExecutorInfo}} is mistakenly exposed in the raw protobuf schema: This is a backwards incompatible API change."""," { /* ... */ """"frameworks"""": [ { /* ... */ """"executors"""": [ { """"command"""": { """"argv"""": [], """"uris"""": [], """"value"""": """"/Users/mpark/Projects/mesos/build/opt/src/long-lived-executor"""" }, """"executor_id"""": """"default"""", """"framework_id"""": """"0ea528a9-64ba-417f-98ea-9c4b8d418db6-0000"""", """"name"""": """"Long Lived Executor (C++)"""", """"resources"""": { """"cpus"""": 0, """"disk"""": 0, """"mem"""": 0 }, """"slave_id"""": """"8a513678-03a1-4cb5-9279-c3c0c591f1d8-S0"""" } ], /* ... */ } ] /* ... */ } { /* ... */ """"frameworks"""": [ { /* ... */ """"executors"""": [ { """"command"""": { """"shell"""": true, """"value"""": """"/Users/mpark/Projects/mesos/build/opt/src/long-lived-executor"""" }, """"executor_id"""": { """"value"""": """"default"""" }, """"framework_id"""": { """"value"""": """"368a5a49-480b-41f6-a13b-24a69c92a72e-0000"""" }, """"name"""": """"Long Lived Executor (C++)"""", """"slave_id"""": """"8a513678-03a1-4cb5-9279-c3c0c591f1d8-S0"""", """"source"""": """"cpp_long_lived_framework"""" } ], /* ... */ } ] /* ... */ } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4768","02/24/2016 22:57:24",1,"MasterMaintenanceTest.InverseOffers is flaky ""[MESOS-4169] significantly sped up this test, but also surfaced some more flakiness. This can be fixed in the same way as [MESOS-4059]. 
Verbose logs from ASF Centos7 build: """," [ RUN ] MasterMaintenanceTest.InverseOffers I0224 22:35:53.714018 1948 leveldb.cpp:174] Opened db in 2.034387ms I0224 22:35:53.714663 1948 leveldb.cpp:181] Compacted db in 608839ns I0224 22:35:53.714709 1948 leveldb.cpp:196] Created db iterator in 19043ns I0224 22:35:53.714844 1948 leveldb.cpp:202] Seeked to beginning of db in 2330ns I0224 22:35:53.714956 1948 leveldb.cpp:271] Iterated through 0 keys in the db in 518ns I0224 22:35:53.715092 1948 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0224 22:35:53.715646 1968 recover.cpp:447] Starting replica recovery I0224 22:35:53.715915 1981 recover.cpp:473] Replica is in EMPTY status I0224 22:35:53.717067 1972 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (4533)@172.17.0.1:36678 I0224 22:35:53.717445 1981 recover.cpp:193] Received a recover response from a replica in EMPTY status I0224 22:35:53.717888 1978 recover.cpp:564] Updating replica status to STARTING I0224 22:35:53.718585 1979 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 525061ns I0224 22:35:53.718618 1979 replica.cpp:320] Persisted replica status to STARTING I0224 22:35:53.718827 1982 recover.cpp:473] Replica is in STARTING status I0224 22:35:53.719728 1969 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (4534)@172.17.0.1:36678 I0224 22:35:53.719974 1971 recover.cpp:193] Received a recover response from a replica in STARTING status I0224 22:35:53.720369 1970 recover.cpp:564] Updating replica status to VOTING I0224 22:35:53.720789 1982 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 322308ns I0224 22:35:53.720823 1982 replica.cpp:320] Persisted replica status to VOTING I0224 22:35:53.720968 1982 recover.cpp:578] Successfully joined the Paxos group I0224 22:35:53.721101 1982 recover.cpp:462] Recover process terminated I0224 22:35:53.721698 1982 master.cpp:376] Master aab18b61-7811-4c43-a672-d1a63818c880 (4db5fa128d2d) started on 172.17.0.1:36678 I0224 22:35:53.721719 1982 master.cpp:378] Flags at startup: --acls="""""""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate=""""false"""" --authenticate_http=""""true"""" --authenticate_slaves=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/MjbcWP/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_slave_ping_timeouts=""""5"""" --quiet=""""false"""" --recovery_slave_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --slave_ping_timeout=""""15secs"""" --slave_reregister_timeout=""""10mins"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/mesos/mesos-0.28.0/_inst/share/mesos/webui"""" --work_dir=""""/tmp/MjbcWP/master"""" --zk_session_timeout=""""10secs"""" I0224 22:35:53.722039 1982 master.cpp:425] Master allowing unauthenticated frameworks to register I0224 22:35:53.722053 1982 master.cpp:428] Master only allowing authenticated slaves to register I0224 22:35:53.722061 1982 credentials.hpp:35] 
Loading credentials for authentication from '/tmp/MjbcWP/credentials' I0224 22:35:53.722394 1982 master.cpp:468] Using default 'crammd5' authenticator I0224 22:35:53.722525 1982 master.cpp:537] Using default 'basic' HTTP authenticator I0224 22:35:53.722661 1982 master.cpp:571] Authorization enabled I0224 22:35:53.722813 1968 hierarchical.cpp:144] Initialized hierarchical allocator process I0224 22:35:53.722846 1980 whitelist_watcher.cpp:77] No whitelist given I0224 22:35:53.724957 1977 master.cpp:1712] The newly elected leader is master@172.17.0.1:36678 with id aab18b61-7811-4c43-a672-d1a63818c880 I0224 22:35:53.725000 1977 master.cpp:1725] Elected as the leading master! I0224 22:35:53.725023 1977 master.cpp:1470] Recovering from registrar I0224 22:35:53.725306 1967 registrar.cpp:307] Recovering registrar I0224 22:35:53.725808 1977 log.cpp:659] Attempting to start the writer I0224 22:35:53.727145 1973 replica.cpp:493] Replica received implicit promise request from (4536)@172.17.0.1:36678 with proposal 1 I0224 22:35:53.727728 1973 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 424560ns I0224 22:35:53.727828 1973 replica.cpp:342] Persisted promised to 1 I0224 22:35:53.729080 1973 coordinator.cpp:238] Coordinator attempting to fill missing positions I0224 22:35:53.731009 1979 replica.cpp:388] Replica received explicit promise request from (4537)@172.17.0.1:36678 for position 0 with proposal 2 I0224 22:35:53.731580 1979 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 478479ns I0224 22:35:53.731613 1979 replica.cpp:712] Persisted action at 0 I0224 22:35:53.734354 1979 replica.cpp:537] Replica received write request for position 0 from (4538)@172.17.0.1:36678 I0224 22:35:53.734485 1979 leveldb.cpp:436] Reading position from leveldb took 60879ns I0224 22:35:53.735877 1979 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 1.324061ms I0224 22:35:53.735930 1979 replica.cpp:712] Persisted action at 0 I0224 22:35:53.737061 1970 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 I0224 22:35:53.738881 1970 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.772814ms I0224 22:35:53.738939 1970 replica.cpp:712] Persisted action at 0 I0224 22:35:53.738975 1970 replica.cpp:697] Replica learned NOP action at position 0 I0224 22:35:53.740136 1976 log.cpp:675] Writer started with ending position 0 I0224 22:35:53.741750 1976 leveldb.cpp:436] Reading position from leveldb took 74863ns I0224 22:35:53.743479 1976 registrar.cpp:340] Successfully fetched the registry (0B) in 18.11968ms I0224 22:35:53.743755 1976 registrar.cpp:439] Applied 1 operations in 56670ns; attempting to update the 'registry' I0224 22:35:53.745604 1978 log.cpp:683] Attempting to append 170 bytes to the log I0224 22:35:53.745905 1977 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 I0224 22:35:53.746968 1981 replica.cpp:537] Replica received write request for position 1 from (4539)@172.17.0.1:36678 I0224 22:35:53.747480 1981 leveldb.cpp:341] Persisting action (189 bytes) to leveldb took 456947ns I0224 22:35:53.747609 1981 replica.cpp:712] Persisted action at 1 I0224 22:35:53.750448 1981 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 I0224 22:35:53.751158 1981 leveldb.cpp:341] Persisting action (191 bytes) to leveldb took 535163ns I0224 22:35:53.751258 1981 replica.cpp:712] Persisted action at 1 I0224 22:35:53.751389 1981 replica.cpp:697] Replica learned APPEND action at position 1 I0224 
22:35:53.753149 1979 registrar.cpp:484] Successfully updated the 'registry' in 9.228032ms I0224 22:35:53.753324 1979 registrar.cpp:370] Successfully recovered registrar I0224 22:35:53.753593 1979 log.cpp:702] Attempting to truncate the log to 1 I0224 22:35:53.753805 1979 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 I0224 22:35:53.754055 1981 master.cpp:1522] Recovered 0 slaves from the Registry (131B) ; allowing 10mins for slaves to re-register I0224 22:35:53.754349 1979 hierarchical.cpp:171] Skipping recovery of hierarchical allocator: nothing to recover I0224 22:35:53.755764 1977 replica.cpp:537] Replica received write request for position 2 from (4540)@172.17.0.1:36678 I0224 22:35:53.756459 1977 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 488559ns I0224 22:35:53.756561 1977 replica.cpp:712] Persisted action at 2 I0224 22:35:53.757932 1972 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 I0224 22:35:53.758400 1972 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 343827ns I0224 22:35:53.758539 1972 leveldb.cpp:399] Deleting ~1 keys from leveldb took 34231ns I0224 22:35:53.758658 1972 replica.cpp:712] Persisted action at 2 I0224 22:35:53.758782 1972 replica.cpp:697] Replica learned TRUNCATE action at position 2 I0224 22:35:53.778059 1978 slave.cpp:193] Slave started on 115)@172.17.0.1:36678 I0224 22:35:53.778105 1978 slave.cpp:194] Flags at startup: --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/MasterMaintenanceTest_InverseOffers_ywqvFF/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/MasterMaintenanceTest_InverseOffers_ywqvFF/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname=""""maintenance-host"""" --hostname_lookup=""""true"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mesos/mesos-0.28.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" 
--version=""""false"""" --work_dir=""""/tmp/MasterMaintenanceTest_InverseOffers_ywqvFF"""" I0224 22:35:53.778609 1978 credentials.hpp:83] Loading credential for authentication from '/tmp/MasterMaintenanceTest_InverseOffers_ywqvFF/credential' I0224 22:35:53.779175 1978 slave.cpp:324] Slave using credential for: test-principal I0224 22:35:53.779520 1978 resources.cpp:576] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000] Trying semicolon-delimited string format instead I0224 22:35:53.780192 1978 slave.cpp:464] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0224 22:35:53.780362 1978 slave.cpp:472] Slave attributes: [ ] I0224 22:35:53.780483 1978 slave.cpp:477] Slave hostname: maintenance-host I0224 22:35:53.782126 1967 state.cpp:58] Recovering state from '/tmp/MasterMaintenanceTest_InverseOffers_ywqvFF/meta' I0224 22:35:53.782892 1969 status_update_manager.cpp:200] Recovering status update manager I0224 22:35:53.783242 1969 slave.cpp:4565] Finished recovery I0224 22:35:53.784001 1969 slave.cpp:4737] Querying resource estimator for oversubscribable resources I0224 22:35:53.784678 1969 slave.cpp:796] New master detected at master@172.17.0.1:36678 I0224 22:35:53.784874 1967 status_update_manager.cpp:174] Pausing sending status updates I0224 22:35:53.784808 1969 slave.cpp:859] Authenticating with master master@172.17.0.1:36678 I0224 22:35:53.784945 1969 slave.cpp:864] Using default CRAM-MD5 authenticatee I0224 22:35:53.785181 1969 slave.cpp:832] Detecting new master I0224 22:35:53.785326 1969 slave.cpp:4751] Received oversubscribable resources from the resource estimator I0224 22:35:53.785557 1969 authenticatee.cpp:121] Creating new client SASL connection I0224 22:35:53.786227 1969 master.cpp:5526] Authenticating slave(115)@172.17.0.1:36678 I0224 22:35:53.786492 1969 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(298)@172.17.0.1:36678 I0224 22:35:53.786962 1969 authenticator.cpp:98] Creating new server SASL connection I0224 22:35:53.787274 1969 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0224 22:35:53.787308 1969 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0224 22:35:53.787400 1969 authenticator.cpp:203] Received SASL authentication start I0224 22:35:53.787470 1969 authenticator.cpp:325] Authentication requires more steps I0224 22:35:53.787884 1972 authenticatee.cpp:258] Received SASL authentication step I0224 22:35:53.787992 1972 authenticator.cpp:231] Received SASL authentication step I0224 22:35:53.788027 1972 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '4db5fa128d2d' server FQDN: '4db5fa128d2d' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0224 22:35:53.788040 1972 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0224 22:35:53.788090 1972 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0224 22:35:53.788122 1972 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '4db5fa128d2d' server FQDN: '4db5fa128d2d' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0224 22:35:53.788136 1972 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0224 22:35:53.788146 1972 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0224 22:35:53.788164 1972 
authenticator.cpp:317] Authentication success I0224 22:35:53.788331 1972 authenticatee.cpp:298] Authentication success I0224 22:35:53.788439 1972 master.cpp:5556] Successfully authenticated principal 'test-principal' at slave(115)@172.17.0.1:36678 I0224 22:35:53.788529 1972 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(298)@172.17.0.1:36678 I0224 22:35:53.788988 1972 slave.cpp:927] Successfully authenticated with master master@172.17.0.1:36678 I0224 22:35:53.789139 1972 slave.cpp:1321] Will retry registration in 1.535786ms if necessary I0224 22:35:53.789515 1972 master.cpp:4240] Registering slave at slave(115)@172.17.0.1:36678 (maintenance-host) with id aab18b61-7811-4c43-a672-d1a63818c880-S0 I0224 22:35:53.790577 1972 registrar.cpp:439] Applied 1 operations in 78745ns; attempting to update the 'registry' I0224 22:35:53.791128 1971 process.cpp:3141] Handling HTTP event for process 'master' with path: '/master/maintenance/schedule' I0224 22:35:53.791877 1971 http.cpp:501] HTTP POST for /master/maintenance/schedule from 172.17.0.1:45095 I0224 22:35:53.793313 1972 log.cpp:683] Attempting to append 343 bytes to the log I0224 22:35:53.793586 1972 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 I0224 22:35:53.794533 1971 replica.cpp:537] Replica received write request for position 3 from (4547)@172.17.0.1:36678 I0224 22:35:53.794862 1971 leveldb.cpp:341] Persisting action (362 bytes) to leveldb took 283614ns I0224 22:35:53.794893 1971 replica.cpp:712] Persisted action at 3 I0224 22:35:53.796646 1979 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 I0224 22:35:53.797102 1972 slave.cpp:1321] Will retry registration in 17.198963ms if necessary I0224 22:35:53.797186 1979 leveldb.cpp:341] Persisting action (364 bytes) to leveldb took 498502ns I0224 22:35:53.797230 1979 replica.cpp:712] Persisted action at 3 I0224 22:35:53.797260 1979 replica.cpp:697] Replica learned APPEND action at position 3 I0224 22:35:53.797417 1972 master.cpp:4228] Ignoring register slave message from slave(115)@172.17.0.1:36678 (maintenance-host) as admission is already in progress I0224 22:35:53.799119 1978 registrar.cpp:484] Successfully updated the 'registry' in 8.45824ms I0224 22:35:53.799613 1978 registrar.cpp:439] Applied 1 operations in 176193ns; attempting to update the 'registry' I0224 22:35:53.800472 1972 master.cpp:4308] Registered slave aab18b61-7811-4c43-a672-d1a63818c880-S0 at slave(115)@172.17.0.1:36678 (maintenance-host) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0224 22:35:53.800623 1978 log.cpp:702] Attempting to truncate the log to 3 I0224 22:35:53.801255 1969 hierarchical.cpp:473] Added slave aab18b61-7811-4c43-a672-d1a63818c880-S0 (maintenance-host) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) I0224 22:35:53.801301 1978 slave.cpp:971] Registered with master master@172.17.0.1:36678; given slave ID aab18b61-7811-4c43-a672-d1a63818c880-S0 I0224 22:35:53.801331 1978 fetcher.cpp:81] Clearing fetcher cache I0224 22:35:53.801431 1969 hierarchical.cpp:1434] No resources available to allocate! 
I0224 22:35:53.801466 1969 hierarchical.cpp:1147] Performed allocation for slave aab18b61-7811-4c43-a672-d1a63818c880-S0 in 162751ns I0224 22:35:53.801532 1969 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 I0224 22:35:53.801867 1978 slave.cpp:994] Checkpointing SlaveInfo to '/tmp/MasterMaintenanceTest_InverseOffers_ywqvFF/meta/slaves/aab18b61-7811-4c43-a672-d1a63818c880-S0/slave.info' I0224 22:35:53.801877 1969 status_update_manager.cpp:181] Resuming sending status updates I0224 22:35:53.802898 1977 replica.cpp:537] Replica received write request for position 4 from (4548)@172.17.0.1:36678 I0224 22:35:53.803252 1978 slave.cpp:1030] Forwarding total oversubscribed resources I0224 22:35:53.803640 1970 master.cpp:4649] Received update of slave aab18b61-7811-4c43-a672-d1a63818c880-S0 at slave(115)@172.17.0.1:36678 (maintenance-host) with total oversubscribed resources I0224 22:35:53.803858 1977 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 912626ns I0224 22:35:53.803889 1977 replica.cpp:712] Persisted action at 4 I0224 22:35:53.804144 1978 slave.cpp:3482] Received ping from slave-observer(117)@172.17.0.1:36678 I0224 22:35:53.804535 1971 hierarchical.cpp:531] Slave aab18b61-7811-4c43-a672-d1a63818c880-S0 (maintenance-host) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) I0224 22:35:53.804684 1971 hierarchical.cpp:1434] No resources available to allocate! I0224 22:35:53.804714 1971 hierarchical.cpp:1147] Performed allocation for slave aab18b61-7811-4c43-a672-d1a63818c880-S0 in 131453ns I0224 22:35:53.805541 1967 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 I0224 22:35:53.805941 1967 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 366444ns I0224 22:35:53.806015 1967 leveldb.cpp:399] Deleting ~2 keys from leveldb took 42808ns I0224 22:35:53.806041 1967 replica.cpp:712] Persisted action at 4 I0224 22:35:53.806066 1967 replica.cpp:697] Replica learned TRUNCATE action at position 4 I0224 22:35:53.807355 1978 log.cpp:683] Attempting to append 465 bytes to the log I0224 22:35:53.807551 1978 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 5 I0224 22:35:53.809638 1979 replica.cpp:537] Replica received write request for position 5 from (4549)@172.17.0.1:36678 I0224 22:35:53.810858 1979 leveldb.cpp:341] Persisting action (484 bytes) to leveldb took 1.167663ms I0224 22:35:53.810904 1979 replica.cpp:712] Persisted action at 5 I0224 22:35:53.811997 1979 replica.cpp:691] Replica received learned notice for position 5 from @0.0.0.0:0 I0224 22:35:53.812348 1979 leveldb.cpp:341] Persisting action (486 bytes) to leveldb took 318928ns I0224 22:35:53.812376 1979 replica.cpp:712] Persisted action at 5 I0224 22:35:53.812397 1979 replica.cpp:697] Replica learned APPEND action at position 5 I0224 22:35:53.815132 1973 registrar.cpp:484] Successfully updated the 'registry' in 15.437312ms I0224 22:35:53.815491 1976 log.cpp:702] Attempting to truncate the log to 5 I0224 22:35:53.815610 1973 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 6 I0224 22:35:53.815661 1968 master.cpp:4705] Updating unavailability of slave aab18b61-7811-4c43-a672-d1a63818c880-S0 at slave(115)@172.17.0.1:36678 (maintenance-host), starting at 2410.99235909694weeks I0224 22:35:53.815845 1968 master.cpp:4705] Updating unavailability of slave aab18b61-7811-4c43-a672-d1a63818c880-S0 at slave(115)@172.17.0.1:36678 
(maintenance-host), starting at 2410.99235909694weeks I0224 22:35:53.816069 1975 hierarchical.cpp:1434] No resources available to allocate! I0224 22:35:53.816103 1975 hierarchical.cpp:1147] Performed allocation for slave aab18b61-7811-4c43-a672-d1a63818c880-S0 in 175822ns I0224 22:35:53.816272 1975 hierarchical.cpp:1434] No resources available to allocate! I0224 22:35:53.816303 1975 hierarchical.cpp:1147] Performed allocation for slave aab18b61-7811-4c43-a672-d1a63818c880-S0 in 110913ns I0224 22:35:53.817291 1972 replica.cpp:537] Replica received write request for position 6 from (4550)@172.17.0.1:36678 I0224 22:35:53.817908 1972 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 576032ns I0224 22:35:53.817932 1972 replica.cpp:712] Persisted action at 6 I0224 22:35:53.818686 1980 replica.cpp:691] Replica received learned notice for position 6 from @0.0.0.0:0 I0224 22:35:53.819021 1980 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 305298ns I0224 22:35:53.819095 1980 leveldb.cpp:399] Deleting ~2 keys from leveldb took 44332ns I0224 22:35:53.819120 1980 replica.cpp:712] Persisted action at 6 I0224 22:35:53.819162 1980 replica.cpp:697] Replica learned TRUNCATE action at position 6 I0224 22:35:53.820662 1967 process.cpp:3141] Handling HTTP event for process 'master' with path: '/master/maintenance/status' I0224 22:35:53.821190 1976 http.cpp:501] HTTP GET for /master/maintenance/status from 172.17.0.1:45096 I0224 22:35:53.823709 1948 scheduler.cpp:154] Version: 0.28.0 I0224 22:35:53.824424 1972 scheduler.cpp:236] New master detected at master@172.17.0.1:36678 I0224 22:35:53.825402 1982 scheduler.cpp:298] Sending SUBSCRIBE call to master@172.17.0.1:36678 I0224 22:35:53.827201 1978 process.cpp:3141] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler' I0224 22:35:53.827636 1978 http.cpp:501] HTTP POST for /master/api/v1/scheduler from 172.17.0.1:45097 I0224 22:35:53.827922 1978 master.cpp:1974] Received subscription request for HTTP framework 'default' I0224 22:35:53.827991 1978 master.cpp:1751] Authorizing framework principal 'test-principal' to receive offers for role '*' I0224 22:35:53.828418 1982 master.cpp:2065] Subscribing framework 'default' with checkpointing disabled and capabilities [ ] I0224 22:35:53.828943 1968 hierarchical.cpp:265] Added framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:35:53.829124 1982 master.hpp:1657] Sending heartbeat to aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:35:53.829987 1968 hierarchical.cpp:1127] Performed allocation for 1 slaves in 1.011356ms I0224 22:35:53.830204 1982 master.cpp:5355] Sending 1 offers to framework aab18b61-7811-4c43-a672-d1a63818c880-0000 (default) I0224 22:35:53.830801 1982 master.cpp:5445] Sending 1 inverse offers to framework aab18b61-7811-4c43-a672-d1a63818c880-0000 (default) I0224 22:35:53.831132 1969 scheduler.cpp:457] Enqueuing event SUBSCRIBED received from master@172.17.0.1:36678 I0224 22:35:53.832396 1968 scheduler.cpp:457] Enqueuing event HEARTBEAT received from master@172.17.0.1:36678 I0224 22:35:53.833050 1976 master_maintenance_tests.cpp:177] Ignoring HEARTBEAT event I0224 22:35:53.833256 1979 scheduler.cpp:457] Enqueuing event OFFERS received from master@172.17.0.1:36678 I0224 22:35:53.833775 1979 scheduler.cpp:457] Enqueuing event OFFERS received from master@172.17.0.1:36678 I0224 22:35:53.835662 1980 scheduler.cpp:298] Sending ACCEPT call to master@172.17.0.1:36678 I0224 22:35:53.837591 1967 process.cpp:3141] Handling HTTP event for process 'master' 
with path: '/master/api/v1/scheduler' I0224 22:35:53.838021 1967 http.cpp:501] HTTP POST for /master/api/v1/scheduler from 172.17.0.1:45098 I0224 22:35:53.838851 1967 master.cpp:3138] Processing ACCEPT call for offers: [ aab18b61-7811-4c43-a672-d1a63818c880-O0 ] on slave aab18b61-7811-4c43-a672-d1a63818c880-S0 at slave(115)@172.17.0.1:36678 (maintenance-host) for framework aab18b61-7811-4c43-a672-d1a63818c880-0000 (default) I0224 22:35:53.838946 1967 master.cpp:2825] Authorizing framework principal 'test-principal' to launch task 90bcae0c-9d40-40b7-9537-dae7e83479f6 as user 'mesos' W0224 22:35:53.841048 1967 validation.cpp:404] Executor default for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W0224 22:35:53.841101 1967 validation.cpp:416] Executor default for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. I0224 22:35:53.841624 1967 master.hpp:176] Adding task 90bcae0c-9d40-40b7-9537-dae7e83479f6 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave aab18b61-7811-4c43-a672-d1a63818c880-S0 (maintenance-host) I0224 22:35:53.842157 1967 master.cpp:3623] Launching task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 (default) with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave aab18b61-7811-4c43-a672-d1a63818c880-S0 at slave(115)@172.17.0.1:36678 (maintenance-host) I0224 22:35:53.842571 1980 slave.cpp:1361] Got assigned task 90bcae0c-9d40-40b7-9537-dae7e83479f6 for framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:35:53.843122 1980 slave.cpp:1480] Launching task 90bcae0c-9d40-40b7-9537-dae7e83479f6 for framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:35:53.843718 1980 paths.cpp:474] Trying to chown '/tmp/MasterMaintenanceTest_InverseOffers_ywqvFF/slaves/aab18b61-7811-4c43-a672-d1a63818c880-S0/frameworks/aab18b61-7811-4c43-a672-d1a63818c880-0000/executors/default/runs/a5a1e49d-20a8-4796-8ec0-5a1595e76159' to user 'mesos' I0224 22:35:53.852052 1980 slave.cpp:5367] Launching executor default of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 with resources in work directory '/tmp/MasterMaintenanceTest_InverseOffers_ywqvFF/slaves/aab18b61-7811-4c43-a672-d1a63818c880-S0/frameworks/aab18b61-7811-4c43-a672-d1a63818c880-0000/executors/default/runs/a5a1e49d-20a8-4796-8ec0-5a1595e76159' I0224 22:35:53.854452 1980 exec.cpp:143] Version: 0.28.0 I0224 22:35:53.854812 1967 exec.cpp:193] Executor started at: executor(47)@172.17.0.1:36678 with pid 1948 I0224 22:35:53.855108 1980 slave.cpp:1698] Queuing task '90bcae0c-9d40-40b7-9537-dae7e83479f6' for executor 'default' of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:35:53.855264 1980 slave.cpp:749] Successfully attached file '/tmp/MasterMaintenanceTest_InverseOffers_ywqvFF/slaves/aab18b61-7811-4c43-a672-d1a63818c880-S0/frameworks/aab18b61-7811-4c43-a672-d1a63818c880-0000/executors/default/runs/a5a1e49d-20a8-4796-8ec0-5a1595e76159' I0224 22:35:53.855362 1980 slave.cpp:2643] Got registration for executor 'default' of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 from executor(47)@172.17.0.1:36678 I0224 22:35:53.855785 1974 exec.cpp:217] Executor registered on slave aab18b61-7811-4c43-a672-d1a63818c880-S0 I0224 22:35:53.855857 1974 exec.cpp:229] 
Executor::registered took 42512ns I0224 22:35:53.856391 1980 slave.cpp:1863] Sending queued task '90bcae0c-9d40-40b7-9537-dae7e83479f6' to executor 'default' of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 at executor(47)@172.17.0.1:36678 I0224 22:35:53.856720 1974 exec.cpp:304] Executor asked to run task '90bcae0c-9d40-40b7-9537-dae7e83479f6' I0224 22:35:53.856812 1974 exec.cpp:313] Executor::launchTask took 65703ns I0224 22:35:53.856922 1974 exec.cpp:526] Executor sending status update TASK_RUNNING (UUID: 249b169a-6b5f-4776-95c8-c897ba6b3f0b) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:35:53.857378 1980 slave.cpp:3002] Handling status update TASK_RUNNING (UUID: 249b169a-6b5f-4776-95c8-c897ba6b3f0b) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 from executor(47)@172.17.0.1:36678 I0224 22:35:53.858175 1980 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 249b169a-6b5f-4776-95c8-c897ba6b3f0b) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:35:53.858222 1980 status_update_manager.cpp:497] Creating StatusUpdate stream for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:35:53.858687 1980 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: 249b169a-6b5f-4776-95c8-c897ba6b3f0b) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 to the slave I0224 22:35:53.859210 1980 slave.cpp:3400] Forwarding the update TASK_RUNNING (UUID: 249b169a-6b5f-4776-95c8-c897ba6b3f0b) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 to master@172.17.0.1:36678 I0224 22:35:53.859390 1980 slave.cpp:3294] Status update manager successfully handled status update TASK_RUNNING (UUID: 249b169a-6b5f-4776-95c8-c897ba6b3f0b) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:35:53.859436 1980 slave.cpp:3310] Sending acknowledgement for status update TASK_RUNNING (UUID: 249b169a-6b5f-4776-95c8-c897ba6b3f0b) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 to executor(47)@172.17.0.1:36678 I0224 22:35:53.859663 1980 exec.cpp:350] Executor received status update acknowledgement 249b169a-6b5f-4776-95c8-c897ba6b3f0b for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:35:53.859657 1967 master.cpp:4794] Status update TASK_RUNNING (UUID: 249b169a-6b5f-4776-95c8-c897ba6b3f0b) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 from slave aab18b61-7811-4c43-a672-d1a63818c880-S0 at slave(115)@172.17.0.1:36678 (maintenance-host) I0224 22:35:53.859851 1967 master.cpp:4842] Forwarding status update TASK_RUNNING (UUID: 249b169a-6b5f-4776-95c8-c897ba6b3f0b) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:35:53.860587 1967 master.cpp:6450] Updating the state of task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0224 22:35:53.862711 1967 scheduler.cpp:457] Enqueuing event UPDATE received from master@172.17.0.1:36678 I0224 22:35:53.866711 1976 scheduler.cpp:298] Sending ACKNOWLEDGE 
call to master@172.17.0.1:36678 I0224 22:35:53.870667 1972 process.cpp:3141] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler' I0224 22:35:53.871269 1972 http.cpp:501] HTTP POST for /master/api/v1/scheduler from 172.17.0.1:45099 I0224 22:35:53.871459 1972 master.cpp:3952] Processing ACKNOWLEDGE call 249b169a-6b5f-4776-95c8-c897ba6b3f0b for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 (default) on slave aab18b61-7811-4c43-a672-d1a63818c880-S0 I0224 22:35:53.872184 1972 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 249b169a-6b5f-4776-95c8-c897ba6b3f0b) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:35:53.872537 1972 slave.cpp:2412] Status update manager successfully handled status update acknowledgement (UUID: 249b169a-6b5f-4776-95c8-c897ba6b3f0b) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:35:53.874407 1975 scheduler.cpp:298] Sending DECLINE call to master@172.17.0.1:36678 I0224 22:35:53.877537 1979 hierarchical.cpp:1434] No resources available to allocate! I0224 22:35:53.877795 1979 hierarchical.cpp:1127] Performed allocation for 1 slaves in 482441ns I0224 22:35:53.878082 1981 process.cpp:3141] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler' I0224 22:35:53.878675 1978 http.cpp:501] HTTP POST for /master/api/v1/scheduler from 172.17.0.1:45100 I0224 22:35:53.878931 1978 master.cpp:3675] Processing DECLINE call for offers: [ aab18b61-7811-4c43-a672-d1a63818c880-O1 ] for framework aab18b61-7811-4c43-a672-d1a63818c880-0000 (default) ../../src/tests/master_maintenance_tests.cpp:1222: Failure Failed to wait 15secs for event I0224 22:36:08.881649 1948 master.cpp:1027] Master terminating W0224 22:36:08.881925 1948 master.cpp:6502] Removing task 90bcae0c-9d40-40b7-9537-dae7e83479f6 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 on slave aab18b61-7811-4c43-a672-d1a63818c880-S0 at slave(115)@172.17.0.1:36678 (maintenance-host) in non-terminal state TASK_RUNNING I0224 22:36:08.882961 1948 master.cpp:6545] Removing executor 'default' with resources of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 on slave aab18b61-7811-4c43-a672-d1a63818c880-S0 at slave(115)@172.17.0.1:36678 (maintenance-host) I0224 22:36:08.884789 1969 hierarchical.cpp:505] Removed slave aab18b61-7811-4c43-a672-d1a63818c880-S0 I0224 22:36:08.887261 1969 hierarchical.cpp:326] Removed framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:36:08.916983 1976 slave.cpp:3528] master@172.17.0.1:36678 exited W0224 22:36:08.917191 1976 slave.cpp:3531] Master disconnected! 
Waiting for a new master to be elected I0224 22:36:08.934546 1975 slave.cpp:3528] executor(47)@172.17.0.1:36678 exited I0224 22:36:08.934806 1974 slave.cpp:3886] Executor 'default' of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 exited with status 0 I0224 22:36:08.935024 1974 slave.cpp:3002] Handling status update TASK_FAILED (UUID: 77d415df-58bd-4cf5-9c49-6106691d9599) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 from @0.0.0.0:0 I0224 22:36:08.935505 1974 slave.cpp:5677] Terminating task 90bcae0c-9d40-40b7-9537-dae7e83479f6 I0224 22:36:08.936190 1967 status_update_manager.cpp:320] Received status update TASK_FAILED (UUID: 77d415df-58bd-4cf5-9c49-6106691d9599) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:36:08.936368 1967 status_update_manager.cpp:374] Forwarding update TASK_FAILED (UUID: 77d415df-58bd-4cf5-9c49-6106691d9599) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 to the slave I0224 22:36:08.936606 1974 slave.cpp:3400] Forwarding the update TASK_FAILED (UUID: 77d415df-58bd-4cf5-9c49-6106691d9599) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 to master@172.17.0.1:36678 I0224 22:36:08.936779 1974 slave.cpp:3294] Status update manager successfully handled status update TASK_FAILED (UUID: 77d415df-58bd-4cf5-9c49-6106691d9599) for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:36:08.955370 1967 slave.cpp:668] Slave terminating I0224 22:36:08.955499 1967 slave.cpp:2079] Asked to shut down framework aab18b61-7811-4c43-a672-d1a63818c880-0000 by @0.0.0.0:0 I0224 22:36:08.955538 1967 slave.cpp:2104] Shutting down framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:36:08.955606 1967 slave.cpp:3990] Cleaning up executor 'default' of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 at executor(47)@172.17.0.1:36678 I0224 22:36:08.956053 1967 slave.cpp:4078] Cleaning up framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:36:08.956327 1967 gc.cpp:54] Scheduling '/tmp/MasterMaintenanceTest_InverseOffers_ywqvFF/slaves/aab18b61-7811-4c43-a672-d1a63818c880-S0/frameworks/aab18b61-7811-4c43-a672-d1a63818c880-0000/executors/default/runs/a5a1e49d-20a8-4796-8ec0-5a1595e76159' for gc 1.00002336880296weeks in the future I0224 22:36:08.956495 1973 status_update_manager.cpp:282] Closing status update streams for framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:36:08.956524 1967 gc.cpp:54] Scheduling '/tmp/MasterMaintenanceTest_InverseOffers_ywqvFF/slaves/aab18b61-7811-4c43-a672-d1a63818c880-S0/frameworks/aab18b61-7811-4c43-a672-d1a63818c880-0000/executors/default' for gc 1.00002336880296weeks in the future I0224 22:36:08.956549 1973 status_update_manager.cpp:528] Cleaning up status update stream for task 90bcae0c-9d40-40b7-9537-dae7e83479f6 of framework aab18b61-7811-4c43-a672-d1a63818c880-0000 I0224 22:36:08.956619 1967 gc.cpp:54] Scheduling '/tmp/MasterMaintenanceTest_InverseOffers_ywqvFF/slaves/aab18b61-7811-4c43-a672-d1a63818c880-S0/frameworks/aab18b61-7811-4c43-a672-d1a63818c880-0000' for gc 1.00002336880296weeks in the future [ FAILED ] MasterMaintenanceTest.InverseOffers (15258 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4772","02/25/2016 07:00:29",2,"TaskInfo/ExecutorInfo should include fine-grained 
ownership/namespacing ""We need a way to assign fine-grained ownership to tasks/executors so that multi-user frameworks can tell Mesos to associate the task with a user identity (rather than just the framework principal+role). Then, when an HTTP user requests to view the task's sandbox contents, or kill the task, or list all tasks, the authorizer can determine whether to allow/deny/filter the request based on finer-grained, user-level ownership. Some systems may want TaskInfo.owner to represent a group rather than an individual user. That's fine as long as the framework sets the field to the group ID in such a way that a group-aware authorizer can interpret it.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4776","02/25/2016 16:27:12",2,"Libprocess metrics/snapshot endpoint rate limiting should be configurable. ""Currently the {{/metrics/snapshot}} endpoint in libprocess has a [hard-coded|https://github.com/apache/mesos/blob/0.27.1/3rdparty/libprocess/include/process/metrics/metrics.hpp#L52] rate limit of 2 requests per second: This should be configurable via a libprocess environment variable so that users can control this when initializing libprocess."""," MetricsProcess() : ProcessBase(""""metrics""""), limiter(2, Seconds(1)) {} ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4783","02/26/2016 09:13:15",3,"Disable rate limiting of the global metrics endpoint for mesos-tests execution ""Once we can optionally disable rate limiting in the global metrics endpoint with MESOS-4776 we should disable the rate limiting during the execution of mesos-tests. * rate limiting makes it cumbersome to repeatedly hit the endpoint since one would not want to interfere with the rate limiting * rate limiting might incur additional wait time which might slown down tests""","",0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4784","02/26/2016 09:17:05",1,"SlaveTest.MetricsSlaveLaunchErrors test relies on implicit blocking behavior hitting the global metrics endpoint ""The test attempts to observe a change in the {{slave/container_launch_errors}} metric, but does not wait for the triggering action to take place. Currently the test passes since hitting the endpoint blocks for some rate limit-related time which provides under many circumstances enough wait time for the action to take place. ""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4785","02/26/2016 17:41:50",5,"Reorganize ACL subject/object descriptions. ""The authorization documentation would benefit from a reorganization of the ACL subject/object descriptions. 
Instead of simple lists of the available subjects and objects, it would be nice to see a table showing which subject and object is used with each action.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4786","02/26/2016 18:48:13",1,"Example in C++ style guide uses wrong indention for wrapped line "" Here the second line should be indented by two spaces since it is a wrapped assignment; the corresponding rule is laid out in the preceeding paragraph."""," Try long_name = ::protobuf::parse( request); ",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4787","02/26/2016 18:50:01",2,"HTTP endpoint docs should use shorter paths ""My understanding is that the recommended path for the v1 scheduler API is {{/api/v1/scheduler}}, but the HTTP endpoint [docs|http://mesos.apache.org/documentation/latest/endpoints/] for this endpoint list the path as {{/master/api/v1/scheduler}}; the filename of the doc page is also in the {{master}} subdirectory. Similarly, we document the master state endpoint as {{/master/state}}, whereas the preferred name is now just {{/state}}, and so on for most of the other endpoints. Unlike we the V1 API, we might want to consider backward compatibility and document both forms -- not sure. But certainly it seems like we should encourage people to use the shorter paths, not the longer ones.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4790","02/26/2016 21:31:40",1,"Revert external linkage of symbols in master/constants.hpp ""src/master/constants.hpp contains: From commit 232a23b2a2e11f4e905b834aa2a11afe5bf6438a. We should investigate whether this is still necessary on supported compilers; it likely is not."""," // TODO(bmahler): It appears there may be a bug with gcc-4.1.2 in which the // duration constants were not being initialized when having static linkage. // This issue did not manifest in newer gcc's. Specifically, 4.2.1 was ok. // So we've moved these to have external linkage but perhaps in the future // we can revert this. ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4801","02/29/2016 07:01:48",1,"Updated `createFrameworkInfo` for hierarchical_allocator_tests.cpp. ""The function of {{createFrameworkInfo}} in hierarchical_allocator_tests.cpp should be updated by enabling caller can set a framework capability to create a framework which can use revocable resources.""","",0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4806","02/29/2016 08:36:11",2,"LevelDBStateTests write to the current directory ""All {{LevelDBStateTest}} tests write to the current directory. This is bad for a number of reasons, e.g., * should the test fail data might be leaked to random locations, * the test cannot be executed from a write-only directory, or * executing tests from the same suite in parallel (e.g., with {{gtest-parallel}} would race on the existence of the created files, and show bogus behavior. The tests should probably be executed from a temporary directory, e.g., via stout's {{TemporaryDirectoryTest}} fixture.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4807","02/29/2016 08:38:17",1,"IOTest.BufferedRead writes to the current directory ""libprocess's {{IOTest.BufferedRead}} writes to the current directory. 
This is bad for a number of reasons, e.g., * should the test fail data might be leaked to random locations, * the test cannot be executed from a write-only directory, or * executing the same test in parallel would race on the existence of the created file, and show bogus behavior. The test should probably be executed from a temporary directory, e.g., via stout's {{TemporaryDirectoryTest}} fixture.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4810","02/29/2016 10:42:41",3,"ProvisionerDockerPullerTest.ROOT_INTERNET_CURL_ShellCommand fails. "" """," [09:46:46] : [Step 11/11] [ RUN ] ProvisionerDockerRegistryPullerTest.ROOT_INTERNET_CURL_ShellCommand [09:46:46]W: [Step 11/11] I0229 09:46:46.628413 1166 leveldb.cpp:174] Opened db in 4.242882ms [09:46:46]W: [Step 11/11] I0229 09:46:46.629926 1166 leveldb.cpp:181] Compacted db in 1.483621ms [09:46:46]W: [Step 11/11] I0229 09:46:46.629966 1166 leveldb.cpp:196] Created db iterator in 15498ns [09:46:46]W: [Step 11/11] I0229 09:46:46.629977 1166 leveldb.cpp:202] Seeked to beginning of db in 1405ns [09:46:46]W: [Step 11/11] I0229 09:46:46.629984 1166 leveldb.cpp:271] Iterated through 0 keys in the db in 239ns [09:46:46]W: [Step 11/11] I0229 09:46:46.630015 1166 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned [09:46:46]W: [Step 11/11] I0229 09:46:46.630470 1183 recover.cpp:447] Starting replica recovery [09:46:46]W: [Step 11/11] I0229 09:46:46.630702 1180 recover.cpp:473] Replica is in EMPTY status [09:46:46]W: [Step 11/11] I0229 09:46:46.631767 1182 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (14567)@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.632115 1183 recover.cpp:193] Received a recover response from a replica in EMPTY status [09:46:46]W: [Step 11/11] I0229 09:46:46.632450 1186 recover.cpp:564] Updating replica status to STARTING [09:46:46]W: [Step 11/11] I0229 09:46:46.633476 1186 master.cpp:375] Master 3fbb2fb0-4f18-498b-a440-9acbf6923a13 (ip-172-30-2-124.mesosphere.io) started on 172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.633491 1186 master.cpp:377] Flags at startup: --acls="""""""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate=""""true"""" --authenticate_http=""""true"""" --authenticate_slaves=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/4UxXoW/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_slave_ping_timeouts=""""5"""" --quiet=""""false"""" --recovery_slave_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --slave_ping_timeout=""""15secs"""" --slave_reregister_timeout=""""10mins"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/4UxXoW/master"""" --zk_session_timeout=""""10secs"""" [09:46:46]W: [Step 11/11] I0229 09:46:46.633677 1186 master.cpp:422] Master only allowing authenticated frameworks to register [09:46:46]W: [Step 11/11] I0229 
09:46:46.633685 1186 master.cpp:427] Master only allowing authenticated slaves to register [09:46:46]W: [Step 11/11] I0229 09:46:46.633692 1186 credentials.hpp:35] Loading credentials for authentication from '/tmp/4UxXoW/credentials' [09:46:46]W: [Step 11/11] I0229 09:46:46.633851 1183 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.191043ms [09:46:46]W: [Step 11/11] I0229 09:46:46.633873 1183 replica.cpp:320] Persisted replica status to STARTING [09:46:46]W: [Step 11/11] I0229 09:46:46.633894 1186 master.cpp:467] Using default 'crammd5' authenticator [09:46:46]W: [Step 11/11] I0229 09:46:46.634003 1186 master.cpp:536] Using default 'basic' HTTP authenticator [09:46:46]W: [Step 11/11] I0229 09:46:46.634062 1184 recover.cpp:473] Replica is in STARTING status [09:46:46]W: [Step 11/11] I0229 09:46:46.634109 1186 master.cpp:570] Authorization enabled [09:46:46]W: [Step 11/11] I0229 09:46:46.634249 1187 whitelist_watcher.cpp:77] No whitelist given [09:46:46]W: [Step 11/11] I0229 09:46:46.634255 1184 hierarchical.cpp:144] Initialized hierarchical allocator process [09:46:46]W: [Step 11/11] I0229 09:46:46.634884 1187 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (14569)@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.635278 1181 recover.cpp:193] Received a recover response from a replica in STARTING status [09:46:46]W: [Step 11/11] I0229 09:46:46.635742 1187 recover.cpp:564] Updating replica status to VOTING [09:46:46]W: [Step 11/11] I0229 09:46:46.636391 1180 master.cpp:1711] The newly elected leader is master@172.30.2.124:37431 with id 3fbb2fb0-4f18-498b-a440-9acbf6923a13 [09:46:46]W: [Step 11/11] I0229 09:46:46.636415 1180 master.cpp:1724] Elected as the leading master! [09:46:46]W: [Step 11/11] I0229 09:46:46.636430 1180 master.cpp:1469] Recovering from registrar [09:46:46]W: [Step 11/11] I0229 09:46:46.636554 1187 registrar.cpp:307] Recovering registrar [09:46:46]W: [Step 11/11] I0229 09:46:46.637111 1181 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.120322ms [09:46:46]W: [Step 11/11] I0229 09:46:46.637133 1181 replica.cpp:320] Persisted replica status to VOTING [09:46:46]W: [Step 11/11] I0229 09:46:46.637218 1186 recover.cpp:578] Successfully joined the Paxos group [09:46:46]W: [Step 11/11] I0229 09:46:46.637354 1186 recover.cpp:462] Recover process terminated [09:46:46]W: [Step 11/11] I0229 09:46:46.637715 1182 log.cpp:659] Attempting to start the writer [09:46:46]W: [Step 11/11] I0229 09:46:46.638617 1184 replica.cpp:493] Replica received implicit promise request from (14570)@172.30.2.124:37431 with proposal 1 [09:46:46]W: [Step 11/11] I0229 09:46:46.639700 1184 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.057386ms [09:46:46]W: [Step 11/11] I0229 09:46:46.639722 1184 replica.cpp:342] Persisted promised to 1 [09:46:46]W: [Step 11/11] I0229 09:46:46.640251 1184 coordinator.cpp:238] Coordinator attempting to fill missing positions [09:46:46]W: [Step 11/11] I0229 09:46:46.641274 1185 replica.cpp:388] Replica received explicit promise request from (14571)@172.30.2.124:37431 for position 0 with proposal 2 [09:46:46]W: [Step 11/11] I0229 09:46:46.642371 1185 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 1.061574ms [09:46:46]W: [Step 11/11] I0229 09:46:46.642396 1185 replica.cpp:712] Persisted action at 0 [09:46:46]W: [Step 11/11] I0229 09:46:46.643299 1186 replica.cpp:537] Replica received write request for position 0 from (14572)@172.30.2.124:37431 [09:46:46]W: [Step 
11/11] I0229 09:46:46.643349 1186 leveldb.cpp:436] Reading position from leveldb took 21735ns [09:46:46]W: [Step 11/11] I0229 09:46:46.644448 1186 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 1.06671ms [09:46:46]W: [Step 11/11] I0229 09:46:46.644469 1186 replica.cpp:712] Persisted action at 0 [09:46:46]W: [Step 11/11] I0229 09:46:46.645077 1181 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 [09:46:46]W: [Step 11/11] I0229 09:46:46.646174 1181 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.069097ms [09:46:46]W: [Step 11/11] I0229 09:46:46.646198 1181 replica.cpp:712] Persisted action at 0 [09:46:46]W: [Step 11/11] I0229 09:46:46.646211 1181 replica.cpp:697] Replica learned NOP action at position 0 [09:46:46]W: [Step 11/11] I0229 09:46:46.646716 1182 log.cpp:675] Writer started with ending position 0 [09:46:46]W: [Step 11/11] I0229 09:46:46.647538 1183 leveldb.cpp:436] Reading position from leveldb took 21456ns [09:46:46]W: [Step 11/11] I0229 09:46:46.648298 1186 registrar.cpp:340] Successfully fetched the registry (0B) in 11.71072ms [09:46:46]W: [Step 11/11] I0229 09:46:46.648388 1186 registrar.cpp:439] Applied 1 operations in 21138ns; attempting to update the 'registry' [09:46:46]W: [Step 11/11] I0229 09:46:46.648947 1187 log.cpp:683] Attempting to append 210 bytes to the log [09:46:46]W: [Step 11/11] I0229 09:46:46.649050 1183 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 [09:46:46]W: [Step 11/11] I0229 09:46:46.649655 1187 replica.cpp:537] Replica received write request for position 1 from (14573)@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.650725 1187 leveldb.cpp:341] Persisting action (229 bytes) to leveldb took 1.041938ms [09:46:46]W: [Step 11/11] I0229 09:46:46.650748 1187 replica.cpp:712] Persisted action at 1 [09:46:46]W: [Step 11/11] I0229 09:46:46.651198 1181 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 [09:46:46]W: [Step 11/11] I0229 09:46:46.652312 1181 leveldb.cpp:341] Persisting action (231 bytes) to leveldb took 1.092268ms [09:46:46]W: [Step 11/11] I0229 09:46:46.652335 1181 replica.cpp:712] Persisted action at 1 [09:46:46]W: [Step 11/11] I0229 09:46:46.652349 1181 replica.cpp:697] Replica learned APPEND action at position 1 [09:46:46]W: [Step 11/11] I0229 09:46:46.653095 1187 registrar.cpp:484] Successfully updated the 'registry' in 4.664064ms [09:46:46]W: [Step 11/11] I0229 09:46:46.653236 1187 registrar.cpp:370] Successfully recovered registrar [09:46:46]W: [Step 11/11] I0229 09:46:46.653306 1181 log.cpp:702] Attempting to truncate the log to 1 [09:46:46]W: [Step 11/11] I0229 09:46:46.653476 1184 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 [09:46:46]W: [Step 11/11] I0229 09:46:46.653642 1183 master.cpp:1521] Recovered 0 slaves from the Registry (171B) ; allowing 10mins for slaves to re-register [09:46:46]W: [Step 11/11] I0229 09:46:46.653659 1181 hierarchical.cpp:171] Skipping recovery of hierarchical allocator: nothing to recover [09:46:46]W: [Step 11/11] I0229 09:46:46.654270 1181 replica.cpp:537] Replica received write request for position 2 from (14574)@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.655357 1181 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.055267ms [09:46:46]W: [Step 11/11] I0229 09:46:46.655378 1181 replica.cpp:712] Persisted action at 2 [09:46:46]W: [Step 11/11] I0229 09:46:46.655850 1184 replica.cpp:691] Replica received learned 
notice for position 2 from @0.0.0.0:0 [09:46:46]W: [Step 11/11] I0229 09:46:46.657009 1184 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 1.137223ms [09:46:46]W: [Step 11/11] I0229 09:46:46.657059 1184 leveldb.cpp:399] Deleting ~1 keys from leveldb took 26459ns [09:46:46]W: [Step 11/11] I0229 09:46:46.657074 1184 replica.cpp:712] Persisted action at 2 [09:46:46]W: [Step 11/11] I0229 09:46:46.657089 1184 replica.cpp:697] Replica learned TRUNCATE action at position 2 [09:46:46]W: [Step 11/11] I0229 09:46:46.665710 1166 containerizer.cpp:149] Using isolation: docker/runtime,filesystem/linux [09:46:46]W: [Step 11/11] I0229 09:46:46.672399 1166 linux_launcher.cpp:101] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher [09:46:46]W: [Step 11/11] E0229 09:46:46.676822 1166 shell.hpp:93] Command 'hadoop version 2>&1' failed; this is the output: [09:46:46]W: [Step 11/11] sh: hadoop: command not found [09:46:46]W: [Step 11/11] E0229 09:46:46.676851 1166 fetcher.cpp:58] Failed to create URI fetcher plugin 'hadoop': Failed to create HDFS client: Failed to execute 'hadoop version 2>&1'; the command was either not found or exited with a non-zero exit status: 127 [09:46:46]W: [Step 11/11] I0229 09:46:46.678383 1166 linux.cpp:81] Making '/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv' a shared mount [09:46:46]W: [Step 11/11] I0229 09:46:46.687223 1180 slave.cpp:193] Slave started on 422)@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.687248 1180 slave.cpp:194] Flags at startup: --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --image_providers=""""docker"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""docker/runtime,filesystem/linux"""" --launcher_dir=""""/mnt/teamcity/work/4240ba9ddd0997c3/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" 
--sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv"""" [09:46:46]W: [Step 11/11] I0229 09:46:46.687531 1180 credentials.hpp:83] Loading credential for authentication from '/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/credential' [09:46:46]W: [Step 11/11] I0229 09:46:46.687666 1180 slave.cpp:324] Slave using credential for: test-principal [09:46:46]W: [Step 11/11] I0229 09:46:46.687798 1180 resources.cpp:572] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000] [09:46:46]W: [Step 11/11] Trying semicolon-delimited string format instead [09:46:46]W: [Step 11/11] I0229 09:46:46.688151 1180 slave.cpp:464] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [09:46:46]W: [Step 11/11] I0229 09:46:46.688207 1180 slave.cpp:472] Slave attributes: [ ] [09:46:46]W: [Step 11/11] I0229 09:46:46.688217 1180 slave.cpp:477] Slave hostname: ip-172-30-2-124.mesosphere.io [09:46:46]W: [Step 11/11] I0229 09:46:46.689259 1187 state.cpp:58] Recovering state from '/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/meta' [09:46:46]W: [Step 11/11] I0229 09:46:46.689394 1166 sched.cpp:222] Version: 0.28.0 [09:46:46]W: [Step 11/11] I0229 09:46:46.689497 1180 status_update_manager.cpp:200] Recovering status update manager [09:46:46]W: [Step 11/11] I0229 09:46:46.689798 1182 containerizer.cpp:407] Recovering containerizer [09:46:46]W: [Step 11/11] I0229 09:46:46.690021 1186 sched.cpp:326] New master detected at master@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.690146 1186 sched.cpp:382] Authenticating with master master@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.690162 1186 sched.cpp:389] Using default CRAM-MD5 authenticatee [09:46:46]W: [Step 11/11] I0229 09:46:46.690378 1181 authenticatee.cpp:121] Creating new client SASL connection [09:46:46]W: [Step 11/11] I0229 09:46:46.690688 1186 master.cpp:5540] Authenticating scheduler-52603476-875a-49a8-85d4-c98d102cdfab@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.690801 1184 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(877)@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.691025 1181 authenticator.cpp:98] Creating new server SASL connection [09:46:46]W: [Step 11/11] I0229 09:46:46.691314 1180 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 [09:46:46]W: [Step 11/11] I0229 09:46:46.691339 1180 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' [09:46:46]W: [Step 11/11] I0229 09:46:46.691437 1180 authenticator.cpp:203] Received SASL authentication start [09:46:46]W: [Step 11/11] I0229 09:46:46.691490 1180 authenticator.cpp:325] Authentication requires more steps [09:46:46]W: [Step 11/11] I0229 09:46:46.691581 1180 authenticatee.cpp:258] Received SASL authentication step [09:46:46]W: [Step 11/11] I0229 09:46:46.691684 1180 authenticator.cpp:231] Received SASL authentication step [09:46:46]W: [Step 11/11] I0229 09:46:46.691712 1180 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-124.mesosphere.io' server FQDN: 'ip-172-30-2-124.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false 
SASL_AUXPROP_AUTHZID: false [09:46:46]W: [Step 11/11] I0229 09:46:46.691726 1180 auxprop.cpp:179] Looking up auxiliary property '*userPassword' [09:46:46]W: [Step 11/11] I0229 09:46:46.691768 1180 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' [09:46:46]W: [Step 11/11] I0229 09:46:46.691802 1180 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-124.mesosphere.io' server FQDN: 'ip-172-30-2-124.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true [09:46:46]W: [Step 11/11] I0229 09:46:46.691817 1180 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true [09:46:46]W: [Step 11/11] I0229 09:46:46.691829 1180 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true [09:46:46]W: [Step 11/11] I0229 09:46:46.691848 1180 authenticator.cpp:317] Authentication success [09:46:46]W: [Step 11/11] I0229 09:46:46.691944 1186 authenticatee.cpp:298] Authentication success [09:46:46]W: [Step 11/11] I0229 09:46:46.692011 1185 master.cpp:5570] Successfully authenticated principal 'test-principal' at scheduler-52603476-875a-49a8-85d4-c98d102cdfab@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.692056 1187 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(877)@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.692308 1184 sched.cpp:471] Successfully authenticated with master master@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.692325 1184 sched.cpp:776] Sending SUBSCRIBE call to master@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.692399 1184 sched.cpp:809] Will retry registration in 954.231367ms if necessary [09:46:46]W: [Step 11/11] I0229 09:46:46.692505 1183 master.cpp:2279] Received SUBSCRIBE call for framework 'default' at scheduler-52603476-875a-49a8-85d4-c98d102cdfab@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.692553 1183 master.cpp:1750] Authorizing framework principal 'test-principal' to receive offers for role '*' [09:46:46]W: [Step 11/11] I0229 09:46:46.692836 1184 master.cpp:2350] Subscribing framework default with checkpointing disabled and capabilities [ ] [09:46:46]W: [Step 11/11] I0229 09:46:46.692942 1183 metadata_manager.cpp:188] No images to load from disk. Docker provisioner image storage path '/tmp/mesos/store/docker/storedImages' does not exist [09:46:46]W: [Step 11/11] I0229 09:46:46.693208 1180 provisioner.cpp:245] Provisioner recovery complete [09:46:46]W: [Step 11/11] I0229 09:46:46.693295 1186 hierarchical.cpp:265] Added framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:46]W: [Step 11/11] I0229 09:46:46.693357 1186 hierarchical.cpp:1437] No resources available to allocate! [09:46:46]W: [Step 11/11] I0229 09:46:46.693397 1186 hierarchical.cpp:1532] No inverse offers to send out! 
[09:46:46]W: [Step 11/11] I0229 09:46:46.693424 1186 hierarchical.cpp:1130] Performed allocation for 0 slaves in 111679ns [09:46:46]W: [Step 11/11] I0229 09:46:46.693442 1187 sched.cpp:703] Framework registered with 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:46]W: [Step 11/11] I0229 09:46:46.693476 1187 sched.cpp:717] Scheduler::registered took 15735ns [09:46:46]W: [Step 11/11] I0229 09:46:46.693604 1183 slave.cpp:4565] Finished recovery [09:46:46]W: [Step 11/11] I0229 09:46:46.693872 1183 slave.cpp:4737] Querying resource estimator for oversubscribable resources [09:46:46]W: [Step 11/11] I0229 09:46:46.694072 1183 slave.cpp:796] New master detected at master@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.694078 1182 status_update_manager.cpp:174] Pausing sending status updates [09:46:46]W: [Step 11/11] I0229 09:46:46.694133 1183 slave.cpp:859] Authenticating with master master@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.694159 1183 slave.cpp:864] Using default CRAM-MD5 authenticatee [09:46:46]W: [Step 11/11] I0229 09:46:46.694279 1183 slave.cpp:832] Detecting new master [09:46:46]W: [Step 11/11] I0229 09:46:46.694320 1180 authenticatee.cpp:121] Creating new client SASL connection [09:46:46]W: [Step 11/11] I0229 09:46:46.694438 1183 slave.cpp:4751] Received oversubscribable resources from the resource estimator [09:46:46]W: [Step 11/11] I0229 09:46:46.694577 1183 master.cpp:5540] Authenticating slave(422)@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.694659 1181 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(878)@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.694840 1182 authenticator.cpp:98] Creating new server SASL connection [09:46:46]W: [Step 11/11] I0229 09:46:46.695081 1187 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 [09:46:46]W: [Step 11/11] I0229 09:46:46.695109 1187 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' [09:46:46]W: [Step 11/11] I0229 09:46:46.695215 1186 authenticator.cpp:203] Received SASL authentication start [09:46:46]W: [Step 11/11] I0229 09:46:46.695257 1186 authenticator.cpp:325] Authentication requires more steps [09:46:46]W: [Step 11/11] I0229 09:46:46.695322 1186 authenticatee.cpp:258] Received SASL authentication step [09:46:46]W: [Step 11/11] I0229 09:46:46.695423 1185 authenticator.cpp:231] Received SASL authentication step [09:46:46]W: [Step 11/11] I0229 09:46:46.695446 1185 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-124.mesosphere.io' server FQDN: 'ip-172-30-2-124.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false [09:46:46]W: [Step 11/11] I0229 09:46:46.695453 1185 auxprop.cpp:179] Looking up auxiliary property '*userPassword' [09:46:46]W: [Step 11/11] I0229 09:46:46.695477 1185 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' [09:46:46]W: [Step 11/11] I0229 09:46:46.695497 1185 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-124.mesosphere.io' server FQDN: 'ip-172-30-2-124.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true [09:46:46]W: [Step 11/11] I0229 09:46:46.695504 1185 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true [09:46:46]W: [Step 11/11] I0229 09:46:46.695510 1185 auxprop.cpp:129] Skipping auxiliary property 
'*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true [09:46:46]W: [Step 11/11] I0229 09:46:46.695520 1185 authenticator.cpp:317] Authentication success [09:46:46]W: [Step 11/11] I0229 09:46:46.695588 1180 authenticatee.cpp:298] Authentication success [09:46:46]W: [Step 11/11] I0229 09:46:46.695633 1186 master.cpp:5570] Successfully authenticated principal 'test-principal' at slave(422)@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.695675 1180 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(878)@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.695933 1187 slave.cpp:927] Successfully authenticated with master master@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.696039 1187 slave.cpp:1321] Will retry registration in 6.094985ms if necessary [09:46:46]W: [Step 11/11] I0229 09:46:46.696171 1183 master.cpp:4254] Registering slave at slave(422)@172.30.2.124:37431 (ip-172-30-2-124.mesosphere.io) with id 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 [09:46:46]W: [Step 11/11] I0229 09:46:46.696535 1181 registrar.cpp:439] Applied 1 operations in 48295ns; attempting to update the 'registry' [09:46:46]W: [Step 11/11] I0229 09:46:46.697289 1182 log.cpp:683] Attempting to append 396 bytes to the log [09:46:46]W: [Step 11/11] I0229 09:46:46.697402 1183 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 [09:46:46]W: [Step 11/11] I0229 09:46:46.698032 1181 replica.cpp:537] Replica received write request for position 3 from (14593)@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.699445 1181 leveldb.cpp:341] Persisting action (415 bytes) to leveldb took 1.381647ms [09:46:46]W: [Step 11/11] I0229 09:46:46.699467 1181 replica.cpp:712] Persisted action at 3 [09:46:46]W: [Step 11/11] I0229 09:46:46.699934 1181 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 [09:46:46]W: [Step 11/11] I0229 09:46:46.701073 1181 leveldb.cpp:341] Persisting action (417 bytes) to leveldb took 1.117397ms [09:46:46]W: [Step 11/11] I0229 09:46:46.701095 1181 replica.cpp:712] Persisted action at 3 [09:46:46]W: [Step 11/11] I0229 09:46:46.701110 1181 replica.cpp:697] Replica learned APPEND action at position 3 [09:46:46]W: [Step 11/11] I0229 09:46:46.702229 1185 registrar.cpp:484] Successfully updated the 'registry' in 5.643008ms [09:46:46]W: [Step 11/11] I0229 09:46:46.702409 1182 log.cpp:702] Attempting to truncate the log to 3 [09:46:46]W: [Step 11/11] I0229 09:46:46.702441 1180 slave.cpp:1321] Will retry registration in 33.795772ms if necessary [09:46:46]W: [Step 11/11] I0229 09:46:46.702523 1181 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 [09:46:46]W: [Step 11/11] I0229 09:46:46.702775 1182 slave.cpp:3482] Received ping from slave-observer(389)@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.702837 1184 master.cpp:4322] Registered slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 at slave(422)@172.30.2.124:37431 (ip-172-30-2-124.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [09:46:46]W: [Step 11/11] I0229 09:46:46.702922 1182 slave.cpp:971] Registered with master master@172.30.2.124:37431; given slave ID 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 [09:46:46]W: [Step 11/11] I0229 09:46:46.702947 1182 fetcher.cpp:81] Clearing fetcher cache [09:46:46]W: [Step 11/11] I0229 09:46:46.703011 1181 hierarchical.cpp:473] Added slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 (ip-172-30-2-124.mesosphere.io) with cpus(*):2; 
mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) [09:46:46]W: [Step 11/11] I0229 09:46:46.703053 1184 master.cpp:4224] Slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 at slave(422)@172.30.2.124:37431 (ip-172-30-2-124.mesosphere.io) already registered, resending acknowledgement [09:46:46]W: [Step 11/11] I0229 09:46:46.703060 1186 status_update_manager.cpp:181] Resuming sending status updates [09:46:46]W: [Step 11/11] I0229 09:46:46.703213 1184 replica.cpp:537] Replica received write request for position 4 from (14594)@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.703228 1182 slave.cpp:994] Checkpointing SlaveInfo to '/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/meta/slaves/3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0/slave.info' [09:46:46]W: [Step 11/11] I0229 09:46:46.703416 1182 slave.cpp:1030] Forwarding total oversubscribed resources [09:46:46]W: [Step 11/11] W0229 09:46:46.703513 1182 slave.cpp:1016] Already registered with master master@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.703531 1182 slave.cpp:1030] Forwarding total oversubscribed resources [09:46:46]W: [Step 11/11] I0229 09:46:46.703559 1185 master.cpp:4663] Received update of slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 at slave(422)@172.30.2.124:37431 (ip-172-30-2-124.mesosphere.io) with total oversubscribed resources [09:46:46]W: [Step 11/11] I0229 09:46:46.703564 1181 hierarchical.cpp:1532] No inverse offers to send out! [09:46:46]W: [Step 11/11] I0229 09:46:46.703614 1181 hierarchical.cpp:1150] Performed allocation for slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 in 572661ns [09:46:46]W: [Step 11/11] I0229 09:46:46.703939 1185 master.cpp:5369] Sending 1 offers to framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 (default) at scheduler-52603476-875a-49a8-85d4-c98d102cdfab@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.703972 1186 hierarchical.cpp:531] Slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 (ip-172-30-2-124.mesosphere.io) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) [09:46:46]W: [Step 11/11] I0229 09:46:46.704087 1186 hierarchical.cpp:1437] No resources available to allocate! [09:46:46]W: [Step 11/11] I0229 09:46:46.704113 1185 master.cpp:4663] Received update of slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 at slave(422)@172.30.2.124:37431 (ip-172-30-2-124.mesosphere.io) with total oversubscribed resources [09:46:46]W: [Step 11/11] I0229 09:46:46.704123 1186 hierarchical.cpp:1532] No inverse offers to send out! 
[09:46:46]W: [Step 11/11] I0229 09:46:46.704169 1186 hierarchical.cpp:1150] Performed allocation for slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 in 162818ns [09:46:46]W: [Step 11/11] I0229 09:46:46.704421 1184 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.177949ms [09:46:46]W: [Step 11/11] I0229 09:46:46.704442 1187 sched.cpp:873] Scheduler::resourceOffers took 146551ns [09:46:46]W: [Step 11/11] I0229 09:46:46.704452 1184 replica.cpp:712] Persisted action at 4 [09:46:46]W: [Step 11/11] I0229 09:46:46.704747 1166 resources.cpp:572] Parsing resources as JSON failed: cpus:1;mem:128 [09:46:46]W: [Step 11/11] Trying semicolon-delimited string format instead [09:46:46]W: [Step 11/11] I0229 09:46:46.704737 1185 hierarchical.cpp:531] Slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 (ip-172-30-2-124.mesosphere.io) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) [09:46:46]W: [Step 11/11] I0229 09:46:46.704888 1185 hierarchical.cpp:1437] No resources available to allocate! [09:46:46]W: [Step 11/11] I0229 09:46:46.704931 1185 hierarchical.cpp:1532] No inverse offers to send out! [09:46:46]W: [Step 11/11] I0229 09:46:46.704958 1185 hierarchical.cpp:1150] Performed allocation for slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 in 172983ns [09:46:46]W: [Step 11/11] I0229 09:46:46.705059 1185 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 [09:46:46]W: [Step 11/11] I0229 09:46:46.705976 1184 master.cpp:3152] Processing ACCEPT call for offers: [ 3fbb2fb0-4f18-498b-a440-9acbf6923a13-O0 ] on slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 at slave(422)@172.30.2.124:37431 (ip-172-30-2-124.mesosphere.io) for framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 (default) at scheduler-52603476-875a-49a8-85d4-c98d102cdfab@172.30.2.124:37431 [09:46:46]W: [Step 11/11] I0229 09:46:46.706009 1184 master.cpp:2824] Authorizing framework principal 'test-principal' to launch task cd81ece8-93b2-4e8a-a4b0-b566038bf281 as user 'root' [09:46:46]W: [Step 11/11] I0229 09:46:46.706212 1185 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 1.125309ms [09:46:46]W: [Step 11/11] I0229 09:46:46.706269 1185 leveldb.cpp:399] Deleting ~2 keys from leveldb took 32428ns [09:46:46]W: [Step 11/11] I0229 09:46:46.706284 1185 replica.cpp:712] Persisted action at 4 [09:46:46]W: [Step 11/11] I0229 09:46:46.706298 1185 replica.cpp:697] Replica learned TRUNCATE action at position 4 [09:46:46]W: [Step 11/11] I0229 09:46:46.707129 1184 master.hpp:176] Adding task cd81ece8-93b2-4e8a-a4b0-b566038bf281 with resources cpus(*):1; mem(*):128 on slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 (ip-172-30-2-124.mesosphere.io) [09:46:46]W: [Step 11/11] I0229 09:46:46.707231 1184 master.cpp:3637] Launching task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 (default) at scheduler-52603476-875a-49a8-85d4-c98d102cdfab@172.30.2.124:37431 with resources cpus(*):1; mem(*):128 on slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 at slave(422)@172.30.2.124:37431 (ip-172-30-2-124.mesosphere.io) [09:46:46]W: [Step 11/11] I0229 09:46:46.707516 1182 slave.cpp:1361] Got assigned task cd81ece8-93b2-4e8a-a4b0-b566038bf281 for framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:46]W: [Step 11/11] I0229 09:46:46.707669 1182 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 [09:46:46]W: [Step 11/11] Trying semicolon-delimited 
string format instead [09:46:46]W: [Step 11/11] I0229 09:46:46.707772 1183 hierarchical.cpp:890] Recovered cpus(*):1; mem(*):896; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):1; mem(*):128) on slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 from framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:46]W: [Step 11/11] I0229 09:46:46.707814 1183 hierarchical.cpp:927] Framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 filtered slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 for 5secs [09:46:46]W: [Step 11/11] I0229 09:46:46.708055 1182 slave.cpp:1480] Launching task cd81ece8-93b2-4e8a-a4b0-b566038bf281 for framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:46]W: [Step 11/11] I0229 09:46:46.708122 1182 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 [09:46:46]W: [Step 11/11] Trying semicolon-delimited string format instead [09:46:46]W: [Step 11/11] I0229 09:46:46.708601 1182 paths.cpp:474] Trying to chown '/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/slaves/3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0/frameworks/3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000/executors/cd81ece8-93b2-4e8a-a4b0-b566038bf281/runs/7f271a3f-bada-4dfb-a37f-f6c6d7aefd89' to user 'root' [09:46:46]W: [Step 11/11] I0229 09:46:46.713331 1182 slave.cpp:5367] Launching executor cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/slaves/3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0/frameworks/3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000/executors/cd81ece8-93b2-4e8a-a4b0-b566038bf281/runs/7f271a3f-bada-4dfb-a37f-f6c6d7aefd89' [09:46:46]W: [Step 11/11] I0229 09:46:46.713762 1185 containerizer.cpp:666] Starting container '7f271a3f-bada-4dfb-a37f-f6c6d7aefd89' for executor 'cd81ece8-93b2-4e8a-a4b0-b566038bf281' of framework '3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000' [09:46:46]W: [Step 11/11] I0229 09:46:46.713769 1182 slave.cpp:1698] Queuing task 'cd81ece8-93b2-4e8a-a4b0-b566038bf281' for executor 'cd81ece8-93b2-4e8a-a4b0-b566038bf281' of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:46]W: [Step 11/11] I0229 09:46:46.713860 1182 slave.cpp:749] Successfully attached file '/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/slaves/3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0/frameworks/3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000/executors/cd81ece8-93b2-4e8a-a4b0-b566038bf281/runs/7f271a3f-bada-4dfb-a37f-f6c6d7aefd89' [09:46:47]W: [Step 11/11] I0229 09:46:47.601546 1180 registry_puller.cpp:210] The manifest for image 'library/alpine' is '{ [09:46:47]W: [Step 11/11] """"schemaVersion"""": 1, [09:46:47]W: [Step 11/11] """"name"""": """"library/alpine"""", [09:46:47]W: [Step 11/11] """"tag"""": """"latest"""", [09:46:47]W: [Step 11/11] """"architecture"""": """"amd64"""", [09:46:47]W: [Step 11/11] """"fsLayers"""": [ [09:46:47]W: [Step 11/11] { [09:46:47]W: [Step 11/11] """"blobSum"""": """"sha256:ee54741ab35b188477c19fddc30356317b091177966da94c2e9391de49fc7f43"""" [09:46:47]W: [Step 11/11] } [09:46:47]W: [Step 11/11] ], [09:46:47]W: [Step 11/11] """"history"""": [ [09:46:47]W: [Step 11/11] { [09:46:47]W: [Step 11/11] """"v1Compatibility"""": 
""""{\""""id\"""":\""""9d710148acd0066166bf3ce04894072b2f3caed24d0295ae2fa136fb7f602605\"""",\""""created\"""":\""""2016-02-17T15:51:37.348814441Z\"""",\""""container\"""":\""""1c7d9aa5eff83e7f7e563f36c01ba975b90a4a6e17fa6024f4a998f5f0a43b28\"""",\""""container_config\"""":{\""""Hostname\"""":\""""1c7d9aa5eff8\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":null,\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) ADD file:0f9cfb2e848f093649aca9cc67927e4d04a74e150e0d92f4ad18ee583a287bf2 in /\""""],\""""Image\"""":\""""\"""",\""""Volumes\"""":null,\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""OnBuild\"""":null,\""""Labels\"""":null},\""""docker_version\"""":\""""1.9.1\"""",\""""config\"""":{\""""Hostname\"""":\""""1c7d9aa5eff8\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":null,\""""Cmd\"""":null,\""""Image\"""":\""""\"""",\""""Volumes\"""":null,\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""OnBuild\"""":null,\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":4793867}"""" [09:46:47]W: [Step 11/11] } [09:46:47]W: [Step 11/11] ], [09:46:47]W: [Step 11/11] """"signatures"""": [ [09:46:47]W: [Step 11/11] { [09:46:47]W: [Step 11/11] """"header"""": { [09:46:47]W: [Step 11/11] """"jwk"""": { [09:46:47]W: [Step 11/11] """"crv"""": """"P-256"""", [09:46:47]W: [Step 11/11] """"kid"""": """"OOI5:SI3T:LC7D:O7DX:FY6S:IAYW:WDRN:VQEM:BCFL:OIST:Q3LO:GTQQ"""", [09:46:47]W: [Step 11/11] """"kty"""": """"EC"""", [09:46:47]W: [Step 11/11] """"x"""": """"J2N5ePGhlblMI2cdsR6NrAG_xbNC_X7s1HRtk5GXvzM"""", [09:46:47]W: [Step 11/11] """"y"""": """"Idr-tEBjnNnfq6_71aeXBi3Z9ah_rrE209l4wiaohk0"""" [09:46:47]W: [Step 11/11] }, [09:46:47]W: [Step 11/11] """"alg"""": """"ES256"""" [09:46:47]W: [Step 11/11] }, [09:46:47]W: [Step 11/11] """"signature"""": """"gFqNfRROAFCbMmm7sCjaNFjy18vu3IWQUrFQbhCwrpNuNbMc7ImdW636Pz1IrVfGTzalAZftluLsiHcMPU2jBQ"""", [09:46:47]W: [Step 11/11] """"protected"""": """"eyJmb3JtYXRMZW5ndGgiOjEzNzUsImZvcm1hdFRhaWwiOiJDbjAiLCJ0aW1lIjoiMjAxNi0wMi0yM1QxOTowMjowMFoifQ"""" [09:46:47]W: [Step 11/11] } [09:46:47]W: [Step 11/11] ] [09:46:47]W: [Step 11/11] }' [09:46:47]W: [Step 11/11] I0229 09:46:47.601771 1180 registry_puller.cpp:317] Fetching blob 'sha256:ee54741ab35b188477c19fddc30356317b091177966da94c2e9391de49fc7f43' for layer '9d710148acd0066166bf3ce04894072b2f3caed24d0295ae2fa136fb7f602605' of image 'library/alpine' [09:46:47]W: [Step 11/11] I0229 09:46:47.635748 1182 hierarchical.cpp:1623] Filtered offer with cpus(*):1; mem(*):896; disk(*):1024; ports(*):[31000-32000] on slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 for framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:47]W: [Step 11/11] I0229 09:46:47.635797 1182 hierarchical.cpp:1437] No resources available to allocate! [09:46:47]W: [Step 11/11] I0229 09:46:47.635829 1182 hierarchical.cpp:1532] No inverse offers to send out! 
[09:46:47]W: [Step 11/11] I0229 09:46:47.635854 1182 hierarchical.cpp:1130] Performed allocation for 1 slaves in 573296ns [09:46:48]W: [Step 11/11] I0229 09:46:48.299258 1180 provisioner.cpp:285] Provisioning image rootfs '/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/provisioner/containers/7f271a3f-bada-4dfb-a37f-f6c6d7aefd89/backends/copy/rootfses/6f8be9d5-12fe-4b66-9ff3-fda1efbb2519' for container 7f271a3f-bada-4dfb-a37f-f6c6d7aefd89 [09:46:48]W: [Step 11/11] I0229 09:46:48.299828 1181 copy.cpp:127] Copying layer path '/tmp/mesos/store/docker/layers/9d710148acd0066166bf3ce04894072b2f3caed24d0295ae2fa136fb7f602605/rootfs' to rootfs '/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/provisioner/containers/7f271a3f-bada-4dfb-a37f-f6c6d7aefd89/backends/copy/rootfses/6f8be9d5-12fe-4b66-9ff3-fda1efbb2519' [09:46:48]W: [Step 11/11] I0229 09:46:48.410997 1187 linux_launcher.cpp:304] Cloning child process with flags = CLONE_NEWNS [09:46:48]W: [Step 11/11] + /mnt/teamcity/work/4240ba9ddd0997c3/build/src/mesos-containerizer mount --help=false --operation=make-rslave --path=/ [09:46:48]W: [Step 11/11] + grep -E /tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/.+ /proc/self/mountinfo [09:46:48]W: [Step 11/11] + grep -v 7f271a3f-bada-4dfb-a37f-f6c6d7aefd89 [09:46:48]W: [Step 11/11] + cut '-d ' -f5 [09:46:48]W: [Step 11/11] + xargs --no-run-if-empty umount -l [09:46:48]W: [Step 11/11] + mount -n --rbind /tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/provisioner/containers/7f271a3f-bada-4dfb-a37f-f6c6d7aefd89/backends/copy/rootfses/6f8be9d5-12fe-4b66-9ff3-fda1efbb2519 /tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/slaves/3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0/frameworks/3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000/executors/cd81ece8-93b2-4e8a-a4b0-b566038bf281/runs/7f271a3f-bada-4dfb-a37f-f6c6d7aefd89/.rootfs [09:46:48]W: [Step 11/11] WARNING: Logging before InitGoogleLogging() is written to STDERR [09:46:48]W: [Step 11/11] I0229 09:46:48.550132 12320 process.cpp:991] libprocess is initialized on 172.30.2.124:39586 for 8 cpus [09:46:48]W: [Step 11/11] I0229 09:46:48.550712 12320 logging.cpp:193] Logging to STDERR [09:46:48]W: [Step 11/11] I0229 09:46:48.552098 12320 exec.cpp:143] Version: 0.28.0 [09:46:48]W: [Step 11/11] I0229 09:46:48.557407 12370 exec.cpp:193] Executor started at: executor(1)@172.30.2.124:39586 with pid 12320 [09:46:48]W: [Step 11/11] I0229 09:46:48.559065 1180 slave.cpp:2643] Got registration for executor 'cd81ece8-93b2-4e8a-a4b0-b566038bf281' of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 from executor(1)@172.30.2.124:39586 [09:46:48]W: [Step 11/11] I0229 09:46:48.560705 12374 exec.cpp:217] Executor registered on slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 [09:46:48]W: [Step 11/11] I0229 09:46:48.560752 1180 slave.cpp:1863] Sending queued task 'cd81ece8-93b2-4e8a-a4b0-b566038bf281' to executor 'cd81ece8-93b2-4e8a-a4b0-b566038bf281' of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 at executor(1)@172.30.2.124:39586 [09:46:48]W: [Step 11/11] I0229 09:46:48.562134 12374 exec.cpp:229] Executor::registered took 256564ns [09:46:48]W: [Step 11/11] I0229 09:46:48.562368 12374 exec.cpp:304] Executor asked to run task 'cd81ece8-93b2-4e8a-a4b0-b566038bf281' [09:46:48]W: [Step 11/11] I0229 09:46:48.562463 12374 exec.cpp:313] Executor::launchTask took 75896ns [09:46:48] : [Step 11/11] Registered executor on 
ip-172-30-2-124.mesosphere.io [09:46:48] : [Step 11/11] Starting task cd81ece8-93b2-4e8a-a4b0-b566038bf281 [09:46:48] : [Step 11/11] Forked command at 12377 [09:46:48]W: [Step 11/11] I0229 09:46:48.566723 12369 exec.cpp:526] Executor sending status update TASK_RUNNING (UUID: 78b0b15b-22c8-479b-b2ee-bb02a7466964) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48] : [Step 11/11] sh -c 'ls -al /' [09:46:48]W: [Step 11/11] I0229 09:46:48.567494 1187 slave.cpp:3002] Handling status update TASK_RUNNING (UUID: 78b0b15b-22c8-479b-b2ee-bb02a7466964) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 from executor(1)@172.30.2.124:39586 [09:46:48]W: [Step 11/11] Failed to exec: No such file or directory [09:46:48]W: [Step 11/11] I0229 09:46:48.568670 1186 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 78b0b15b-22c8-479b-b2ee-bb02a7466964) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.568704 1186 status_update_manager.cpp:497] Creating StatusUpdate stream for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.569011 1186 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: 78b0b15b-22c8-479b-b2ee-bb02a7466964) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 to the slave [09:46:48]W: [Step 11/11] I0229 09:46:48.569222 1183 slave.cpp:3400] Forwarding the update TASK_RUNNING (UUID: 78b0b15b-22c8-479b-b2ee-bb02a7466964) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 to master@172.30.2.124:37431 [09:46:48]W: [Step 11/11] I0229 09:46:48.569411 1183 slave.cpp:3294] Status update manager successfully handled status update TASK_RUNNING (UUID: 78b0b15b-22c8-479b-b2ee-bb02a7466964) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.569447 1183 slave.cpp:3310] Sending acknowledgement for status update TASK_RUNNING (UUID: 78b0b15b-22c8-479b-b2ee-bb02a7466964) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 to executor(1)@172.30.2.124:39586 [09:46:48]W: [Step 11/11] I0229 09:46:48.569512 1187 master.cpp:4808] Status update TASK_RUNNING (UUID: 78b0b15b-22c8-479b-b2ee-bb02a7466964) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 from slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 at slave(422)@172.30.2.124:37431 (ip-172-30-2-124.mesosphere.io) [09:46:48]W: [Step 11/11] I0229 09:46:48.569546 1187 master.cpp:4856] Forwarding status update TASK_RUNNING (UUID: 78b0b15b-22c8-479b-b2ee-bb02a7466964) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.569679 1187 master.cpp:6464] Updating the state of task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) [09:46:48]W: [Step 11/11] I0229 09:46:48.569871 1184 sched.cpp:981] Scheduler::statusUpdate took 110230ns [09:46:48]W: [Step 11/11] I0229 09:46:48.569912 12374 exec.cpp:350] Executor received status update acknowledgement 
78b0b15b-22c8-479b-b2ee-bb02a7466964 for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.570202 1184 master.cpp:3966] Processing ACKNOWLEDGE call 78b0b15b-22c8-479b-b2ee-bb02a7466964 for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 (default) at scheduler-52603476-875a-49a8-85d4-c98d102cdfab@172.30.2.124:37431 on slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 [09:46:48]W: [Step 11/11] I0229 09:46:48.570435 1186 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 78b0b15b-22c8-479b-b2ee-bb02a7466964) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.570641 1183 slave.cpp:2412] Status update manager successfully handled status update acknowledgement (UUID: 78b0b15b-22c8-479b-b2ee-bb02a7466964) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.636754 1182 hierarchical.cpp:1623] Filtered offer with cpus(*):1; mem(*):896; disk(*):1024; ports(*):[31000-32000] on slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 for framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.636795 1182 hierarchical.cpp:1437] No resources available to allocate! [09:46:48]W: [Step 11/11] I0229 09:46:48.636827 1182 hierarchical.cpp:1532] No inverse offers to send out! [09:46:48]W: [Step 11/11] I0229 09:46:48.636849 1182 hierarchical.cpp:1130] Performed allocation for 1 slaves in 503523ns [09:46:48] : [Step 11/11] Command terminated with signal Aborted (pid: 12377) [09:46:48]W: [Step 11/11] I0229 09:46:48.665805 12369 exec.cpp:526] Executor sending status update TASK_FAILED (UUID: 265863c0-80d5-48a4-ac87-6f0de02ddbcb) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.666326 1184 slave.cpp:3002] Handling status update TASK_FAILED (UUID: 265863c0-80d5-48a4-ac87-6f0de02ddbcb) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 from executor(1)@172.30.2.124:39586 [09:46:48]W: [Step 11/11] I0229 09:46:48.667079 1183 slave.cpp:5677] Terminating task cd81ece8-93b2-4e8a-a4b0-b566038bf281 [09:46:48]W: [Step 11/11] I0229 09:46:48.667944 1180 status_update_manager.cpp:320] Received status update TASK_FAILED (UUID: 265863c0-80d5-48a4-ac87-6f0de02ddbcb) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.668077 1180 status_update_manager.cpp:374] Forwarding update TASK_FAILED (UUID: 265863c0-80d5-48a4-ac87-6f0de02ddbcb) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 to the slave [09:46:48]W: [Step 11/11] I0229 09:46:48.668303 1182 slave.cpp:3400] Forwarding the update TASK_FAILED (UUID: 265863c0-80d5-48a4-ac87-6f0de02ddbcb) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 to master@172.30.2.124:37431 [09:46:48]W: [Step 11/11] I0229 09:46:48.668453 1182 slave.cpp:3294] Status update manager successfully handled status update TASK_FAILED (UUID: 265863c0-80d5-48a4-ac87-6f0de02ddbcb) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.668499 
1182 slave.cpp:3310] Sending acknowledgement for status update TASK_FAILED (UUID: 265863c0-80d5-48a4-ac87-6f0de02ddbcb) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 to executor(1)@172.30.2.124:39586 [09:46:48]W: [Step 11/11] I0229 09:46:48.668642 1181 master.cpp:4808] Status update TASK_FAILED (UUID: 265863c0-80d5-48a4-ac87-6f0de02ddbcb) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 from slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 at slave(422)@172.30.2.124:37431 (ip-172-30-2-124.mesosphere.io) [09:46:48]W: [Step 11/11] I0229 09:46:48.668689 1181 master.cpp:4856] Forwarding status update TASK_FAILED (UUID: 265863c0-80d5-48a4-ac87-6f0de02ddbcb) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.668826 1181 master.cpp:6464] Updating the state of task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 (latest state: TASK_FAILED, status update state: TASK_FAILED) [09:46:48]W: [Step 11/11] I0229 09:46:48.668920 12373 exec.cpp:350] Executor received status update acknowledgement 265863c0-80d5-48a4-ac87-6f0de02ddbcb for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.669082 1183 sched.cpp:981] Scheduler::statusUpdate took 143562ns [09:46:48]W: [Step 11/11] I0229 09:46:48.669242 1186 hierarchical.cpp:890] Recovered cpus(*):1; mem(*):128 (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 from framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48] : [Step 11/11] ../../src/tests/containerizer/provisioner_docker_tests.cpp:379: Failure [09:46:48]W: [Step 11/11] I0229 09:46:48.669381 1186 master.cpp:3966] Processing ACKNOWLEDGE call 265863c0-80d5-48a4-ac87-6f0de02ddbcb for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 (default) at scheduler-52603476-875a-49a8-85d4-c98d102cdfab@172.30.2.124:37431 on slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 [09:46:48] : [Step 11/11] Value of: statusFinished->state() [09:46:48]W: [Step 11/11] I0229 09:46:48.669421 1166 sched.cpp:1903] Asked to stop the driver [09:46:48] : [Step 11/11] Actual: TASK_FAILED [09:46:48] : [Step 11/11] Expected: TASK_FINISHED [09:46:48]W: [Step 11/11] I0229 09:46:48.669423 1186 master.cpp:6530] Removing task cd81ece8-93b2-4e8a-a4b0-b566038bf281 with resources cpus(*):1; mem(*):128 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 on slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 at slave(422)@172.30.2.124:37431 (ip-172-30-2-124.mesosphere.io) [09:46:48]W: [Step 11/11] I0229 09:46:48.669519 1181 sched.cpp:1143] Stopping framework '3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000' [09:46:48]W: [Step 11/11] I0229 09:46:48.669746 1186 master.cpp:5940] Processing TEARDOWN call for framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 (default) at scheduler-52603476-875a-49a8-85d4-c98d102cdfab@172.30.2.124:37431 [09:46:48]W: [Step 11/11] I0229 09:46:48.669778 1186 master.cpp:5952] Removing framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 (default) at scheduler-52603476-875a-49a8-85d4-c98d102cdfab@172.30.2.124:37431 [09:46:48]W: [Step 11/11] I0229 09:46:48.669850 1184 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 265863c0-80d5-48a4-ac87-6f0de02ddbcb) 
for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.669946 1187 hierarchical.cpp:375] Deactivated framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.670032 1184 status_update_manager.cpp:528] Cleaning up status update stream for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.670030 1180 slave.cpp:2079] Asked to shut down framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 by master@172.30.2.124:37431 [09:46:48]W: [Step 11/11] I0229 09:46:48.670080 1180 slave.cpp:2104] Shutting down framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.670140 1180 slave.cpp:4198] Shutting down executor 'cd81ece8-93b2-4e8a-a4b0-b566038bf281' of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 at executor(1)@172.30.2.124:39586 [09:46:48]W: [Step 11/11] I0229 09:46:48.670253 1186 master.cpp:1026] Master terminating [09:46:48]W: [Step 11/11] I0229 09:46:48.670295 1187 hierarchical.cpp:326] Removed framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.670384 1180 slave.cpp:2412] Status update manager successfully handled status update acknowledgement (UUID: 265863c0-80d5-48a4-ac87-6f0de02ddbcb) for task cd81ece8-93b2-4e8a-a4b0-b566038bf281 of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.670434 1180 slave.cpp:5718] Completing task cd81ece8-93b2-4e8a-a4b0-b566038bf281 [09:46:48]W: [Step 11/11] I0229 09:46:48.670588 1187 hierarchical.cpp:505] Removed slave 3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0 [09:46:48]W: [Step 11/11] I0229 09:46:48.670605 12375 exec.cpp:390] Executor asked to shutdown [09:46:48]W: [Step 11/11] I0229 09:46:48.670717 12375 exec.cpp:405] Executor::shutdown took 12737ns [09:46:48]W: [Step 11/11] I0229 09:46:48.670728 12369 exec.cpp:87] Scheduling shutdown of the executor in 5secs [09:46:48]W: [Step 11/11] I0229 09:46:48.670922 1183 slave.cpp:3528] master@172.30.2.124:37431 exited [09:46:48]W: [Step 11/11] W0229 09:46:48.670940 1183 slave.cpp:3531] Master disconnected! 
Waiting for a new master to be elected [09:46:48]W: [Step 11/11] I0229 09:46:48.675063 1186 containerizer.cpp:1378] Destroying container '7f271a3f-bada-4dfb-a37f-f6c6d7aefd89' [09:46:48]W: [Step 11/11] I0229 09:46:48.677278 1182 cgroups.cpp:2427] Freezing cgroup /sys/fs/cgroup/freezer/mesos/7f271a3f-bada-4dfb-a37f-f6c6d7aefd89 [09:46:48]W: [Step 11/11] I0229 09:46:48.679386 1184 cgroups.cpp:1409] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/7f271a3f-bada-4dfb-a37f-f6c6d7aefd89 after 2.066176ms [09:46:48]W: [Step 11/11] I0229 09:46:48.681586 1186 cgroups.cpp:2445] Thawing cgroup /sys/fs/cgroup/freezer/mesos/7f271a3f-bada-4dfb-a37f-f6c6d7aefd89 [09:46:48]W: [Step 11/11] I0229 09:46:48.683552 1186 cgroups.cpp:1438] Successfullly thawed cgroup /sys/fs/cgroup/freezer/mesos/7f271a3f-bada-4dfb-a37f-f6c6d7aefd89 after 1.926144ms [09:46:48]W: [Step 11/11] I0229 09:46:48.696513 1186 slave.cpp:3528] executor(1)@172.30.2.124:39586 exited [09:46:48]W: [Step 11/11] I0229 09:46:48.708107 1181 containerizer.cpp:1594] Executor for container '7f271a3f-bada-4dfb-a37f-f6c6d7aefd89' has exited [09:46:48]W: [Step 11/11] I0229 09:46:48.710535 1186 linux.cpp:765] Ignoring unmounting sandbox/work directory for container 7f271a3f-bada-4dfb-a37f-f6c6d7aefd89 [09:46:48]W: [Step 11/11] I0229 09:46:48.710969 1187 provisioner.cpp:330] Destroying container rootfs at '/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/provisioner/containers/7f271a3f-bada-4dfb-a37f-f6c6d7aefd89/backends/copy/rootfses/6f8be9d5-12fe-4b66-9ff3-fda1efbb2519' for container 7f271a3f-bada-4dfb-a37f-f6c6d7aefd89 [09:46:48]W: [Step 11/11] I0229 09:46:48.809336 1183 slave.cpp:3886] Executor 'cd81ece8-93b2-4e8a-a4b0-b566038bf281' of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 terminated with signal Killed [09:46:48]W: [Step 11/11] I0229 09:46:48.809378 1183 slave.cpp:3990] Cleaning up executor 'cd81ece8-93b2-4e8a-a4b0-b566038bf281' of framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 at executor(1)@172.30.2.124:39586 [09:46:48]W: [Step 11/11] I0229 09:46:48.809614 1187 gc.cpp:54] Scheduling '/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/slaves/3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0/frameworks/3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000/executors/cd81ece8-93b2-4e8a-a4b0-b566038bf281/runs/7f271a3f-bada-4dfb-a37f-f6c6d7aefd89' for gc 6.9999906309837days in the future [09:46:48]W: [Step 11/11] I0229 09:46:48.809703 1183 slave.cpp:4078] Cleaning up framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.809739 1187 gc.cpp:54] Scheduling '/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/slaves/3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0/frameworks/3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000/executors/cd81ece8-93b2-4e8a-a4b0-b566038bf281' for gc 6.99999062896889days in the future [09:46:48]W: [Step 11/11] I0229 09:46:48.809801 1182 status_update_manager.cpp:282] Closing status update streams for framework 3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000 [09:46:48]W: [Step 11/11] I0229 09:46:48.809854 1187 gc.cpp:54] Scheduling '/tmp/ProvisionerDockerRegistryPullerTest_ROOT_INTERNET_CURL_ShellCommand_5BWCfv/slaves/3fbb2fb0-4f18-498b-a440-9acbf6923a13-S0/frameworks/3fbb2fb0-4f18-498b-a440-9acbf6923a13-0000' for gc 6.99999062718519days in the future [09:46:48]W: [Step 11/11] I0229 09:46:48.810493 1187 slave.cpp:668] Slave terminating [09:46:48]W: [Step 11/11] Using temporary directory 
'/tmp/ContainerizerTest_ROOT_CGROUPS_BalloonFramework_e9Aoqv' [09:46:48] : [Step 11/11] [ FAILED ] ProvisionerDockerRegistryPullerTest.ROOT_INTERNET_CURL_ShellCommand (2193 ms) ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4813","02/29/2016 18:33:30",2,"Implement base tests for unified container using local puller. ""Using command line executor to test shell commands with local docker images.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4818","02/29/2016 19:52:58",3,"Add end to end testing for Appc images. ""Add tests that covers integration test of the Appc provisioner feature with mesos containerizer. ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4820","02/29/2016 20:02:54",1,"Need to set `EXPOSED` ports from docker images into `ContainerConfig` ""Most docker images have an `EXPOSE` command associated with them. This tells the container run-time the TCP ports that the micro-service """"wishes"""" to expose to the outside world. With the `Unified containerizer` project since `MesosContainerizer` is going to natively support docker images it is imperative that the Mesos container run time have a mechanism to expose ports listed in a Docker image. The first step to achieve this is to extract this information from the `Docker` image and set in the `ContainerConfig` . The `ContainerConfig` can then be used to pass this information to any isolator (for e.g. `network/cni` isolator) that will install port forwarding rules to expose the desired ports.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4821","02/29/2016 20:06:19",1,"Introduce a port field in `ImageManifest` in order to set exposed ports for a container. ""Networking isolators such as `network/cni` need to learn about ports that a container wishes to be exposed to the outside world. This can be achieved by adding a field to the `ImageManifest` protobuf and allowing the `ImageProvisioner` to set these fields to inform the isolator of the ports that the container wishes to be exposed. ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4822","02/29/2016 20:07:37",2,"Add support for local image fetching in Appc provisioner. ""Currently Appc image provisioner supports http(s) fetching. It would be valuable to add support for local file path(URI) based fetching.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4823","02/29/2016 20:12:20",2,"Implement port forwarding in `network/cni` isolator ""Most docker and appc images wish to expose ports that micro-services are listening on, to the outside world. When containers are running on bridged (or ptp) networking this can be achieved by installing port forwarding rules on the agent (using iptables). This can be done in the `network/cni` isolator. The reason we would like this functionality to be implemented in the `network/cni` isolator, and not a CNI plugin, is that the specifications currently do not support specifying port forwarding rules. Further, to install these rules the isolator needs two pieces of information, the exposed ports and the IP address associated with the container. 
Bother are available to the isolator.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4824","03/01/2016 00:28:16",2,"""filesystem/linux"" isolator does not unmount orphaned persistent volumes ""A persistent volume can be orphaned when: # A framework registers with checkpointing enabled. # The framework starts a task + a persistent volume. # The agent exits. The task continues to run. # Something wipes the agent's {{meta}} directory. This removes the checkpointed framework info from the agent. # The agent comes back and recovers. The framework for the task is not found, so the task is considered orphaned now. The agent currently does not unmount the persistent volume, saying (with {{GLOG_v=1}}) Test implemented here: https://reviews.apache.org/r/44122/"""," I0229 23:55:42.078940 5635 linux.cpp:711] Ignoring cleanup request for unknown container: a35189d3-85d5-4d02-b568-67f675b6dc97 ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4825","03/01/2016 02:03:29",1,"Master's slave reregister logic does not update version field ""The master's logic for reregistering a slave does not update the version field if the slave re-registers with a new version.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4830","03/01/2016 19:43:47",1,"Bind docker runtime isolator with docker image provider. ""If image provider is specified as `docker` but docker/runtime is not set, it would be not meaningful, because of no executables. A check should be added to make sure docker runtime isolator is on if using docker as image provider.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4832","03/01/2016 21:22:16",2,"DockerContainerizerTest.ROOT_DOCKER_RecoverOrphanedPersistentVolumes exits when the /tmp directory is bind-mounted ""If the {{/tmp}} directory (where Mesos tests create temporary directories) is a bind mount, the test suite will exit here: There appear to be two problems: 1) The docker containerizer should not exit on failure to clean up orphans. The MesosContainerizer does not do this (see [MESOS-2367]). 
2) Unmounting the orphan persistent volume fails for some reason."""," [ RUN ] DockerContainerizerTest.ROOT_DOCKER_RecoverOrphanedPersistentVolumes I0226 03:17:26.722806 1097 leveldb.cpp:174] Opened db in 12.587676ms I0226 03:17:26.723496 1097 leveldb.cpp:181] Compacted db in 636999ns I0226 03:17:26.723536 1097 leveldb.cpp:196] Created db iterator in 18271ns I0226 03:17:26.723547 1097 leveldb.cpp:202] Seeked to beginning of db in 1555ns I0226 03:17:26.723554 1097 leveldb.cpp:271] Iterated through 0 keys in the db in 363ns I0226 03:17:26.723593 1097 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0226 03:17:26.724128 1117 recover.cpp:447] Starting replica recovery I0226 03:17:26.724367 1117 recover.cpp:473] Replica is in EMPTY status I0226 03:17:26.725237 1117 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (13810)@172.30.2.151:51934 I0226 03:17:26.725744 1114 recover.cpp:193] Received a recover response from a replica in EMPTY status I0226 03:17:26.726356 1111 master.cpp:376] Master 5cc57c0e-f1ad-4107-893f-420ed1a1db1a (ip-172-30-2-151.mesosphere.io) started on 172.30.2.151:51934 I0226 03:17:26.726369 1118 recover.cpp:564] Updating replica status to STARTING I0226 03:17:26.726378 1111 master.cpp:378] Flags at startup: --acls="""""""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate=""""true"""" --authenticate_http=""""true"""" --authenticate_slaves=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/djHTVQ/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_slave_ping_timeouts=""""5"""" --quiet=""""false"""" --recovery_slave_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --slave_ping_timeout=""""15secs"""" --slave_reregister_timeout=""""10mins"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/djHTVQ/master"""" --zk_session_timeout=""""10secs"""" I0226 03:17:26.726605 1111 master.cpp:423] Master only allowing authenticated frameworks to register I0226 03:17:26.726616 1111 master.cpp:428] Master only allowing authenticated slaves to register I0226 03:17:26.726632 1111 credentials.hpp:35] Loading credentials for authentication from '/tmp/djHTVQ/credentials' I0226 03:17:26.726860 1111 master.cpp:468] Using default 'crammd5' authenticator I0226 03:17:26.726977 1111 master.cpp:537] Using default 'basic' HTTP authenticator I0226 03:17:26.727092 1111 master.cpp:571] Authorization enabled I0226 03:17:26.727243 1118 hierarchical.cpp:144] Initialized hierarchical allocator process I0226 03:17:26.727285 1116 whitelist_watcher.cpp:77] No whitelist given I0226 03:17:26.728852 1114 master.cpp:1712] The newly elected leader is master@172.30.2.151:51934 with id 5cc57c0e-f1ad-4107-893f-420ed1a1db1a I0226 03:17:26.728876 1114 master.cpp:1725] Elected as the leading master! 
I0226 03:17:26.728891 1114 master.cpp:1470] Recovering from registrar I0226 03:17:26.728977 1117 registrar.cpp:307] Recovering registrar I0226 03:17:26.731503 1112 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 4.977811ms I0226 03:17:26.731539 1112 replica.cpp:320] Persisted replica status to STARTING I0226 03:17:26.731711 1111 recover.cpp:473] Replica is in STARTING status I0226 03:17:26.732501 1114 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (13812)@172.30.2.151:51934 I0226 03:17:26.732862 1111 recover.cpp:193] Received a recover response from a replica in STARTING status I0226 03:17:26.733264 1117 recover.cpp:564] Updating replica status to VOTING I0226 03:17:26.733836 1118 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 388246ns I0226 03:17:26.733855 1118 replica.cpp:320] Persisted replica status to VOTING I0226 03:17:26.733979 1113 recover.cpp:578] Successfully joined the Paxos group I0226 03:17:26.734149 1113 recover.cpp:462] Recover process terminated I0226 03:17:26.734478 1111 log.cpp:659] Attempting to start the writer I0226 03:17:26.735523 1114 replica.cpp:493] Replica received implicit promise request from (13813)@172.30.2.151:51934 with proposal 1 I0226 03:17:26.736130 1114 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 576451ns I0226 03:17:26.736150 1114 replica.cpp:342] Persisted promised to 1 I0226 03:17:26.736709 1115 coordinator.cpp:238] Coordinator attempting to fill missing positions I0226 03:17:26.737771 1114 replica.cpp:388] Replica received explicit promise request from (13814)@172.30.2.151:51934 for position 0 with proposal 2 I0226 03:17:26.738386 1114 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 583184ns I0226 03:17:26.738404 1114 replica.cpp:712] Persisted action at 0 I0226 03:17:26.739312 1118 replica.cpp:537] Replica received write request for position 0 from (13815)@172.30.2.151:51934 I0226 03:17:26.739367 1118 leveldb.cpp:436] Reading position from leveldb took 26157ns I0226 03:17:26.740638 1118 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 1.238477ms I0226 03:17:26.740669 1118 replica.cpp:712] Persisted action at 0 I0226 03:17:26.741158 1118 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 I0226 03:17:26.742878 1118 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.697254ms I0226 03:17:26.742902 1118 replica.cpp:712] Persisted action at 0 I0226 03:17:26.742916 1118 replica.cpp:697] Replica learned NOP action at position 0 I0226 03:17:26.743393 1117 log.cpp:675] Writer started with ending position 0 I0226 03:17:26.744370 1112 leveldb.cpp:436] Reading position from leveldb took 34329ns I0226 03:17:26.745240 1117 registrar.cpp:340] Successfully fetched the registry (0B) in 16.21888ms I0226 03:17:26.745350 1117 registrar.cpp:439] Applied 1 operations in 30460ns; attempting to update the 'registry' I0226 03:17:26.746016 1111 log.cpp:683] Attempting to append 210 bytes to the log I0226 03:17:26.746119 1116 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 I0226 03:17:26.746798 1114 replica.cpp:537] Replica received write request for position 1 from (13816)@172.30.2.151:51934 I0226 03:17:26.747251 1114 leveldb.cpp:341] Persisting action (229 bytes) to leveldb took 411333ns I0226 03:17:26.747269 1114 replica.cpp:712] Persisted action at 1 I0226 03:17:26.747808 1113 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 I0226 03:17:26.749511 1113 
leveldb.cpp:341] Persisting action (231 bytes) to leveldb took 1.673488ms I0226 03:17:26.749534 1113 replica.cpp:712] Persisted action at 1 I0226 03:17:26.749550 1113 replica.cpp:697] Replica learned APPEND action at position 1 I0226 03:17:26.750422 1111 registrar.cpp:484] Successfully updated the 'registry' in 5.021952ms I0226 03:17:26.750560 1111 registrar.cpp:370] Successfully recovered registrar I0226 03:17:26.750635 1112 log.cpp:702] Attempting to truncate the log to 1 I0226 03:17:26.750751 1113 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 I0226 03:17:26.751096 1116 master.cpp:1522] Recovered 0 slaves from the Registry (171B) ; allowing 10mins for slaves to re-register I0226 03:17:26.751126 1111 hierarchical.cpp:171] Skipping recovery of hierarchical allocator: nothing to recover I0226 03:17:26.751561 1118 replica.cpp:537] Replica received write request for position 2 from (13817)@172.30.2.151:51934 I0226 03:17:26.751999 1118 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 406823ns I0226 03:17:26.752018 1118 replica.cpp:712] Persisted action at 2 I0226 03:17:26.752521 1113 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 I0226 03:17:26.754161 1113 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 1.614888ms I0226 03:17:26.754210 1113 leveldb.cpp:399] Deleting ~1 keys from leveldb took 26384ns I0226 03:17:26.754225 1113 replica.cpp:712] Persisted action at 2 I0226 03:17:26.754240 1113 replica.cpp:697] Replica learned TRUNCATE action at position 2 I0226 03:17:26.765103 1115 slave.cpp:193] Slave started on 399)@172.30.2.151:51934 I0226 03:17:26.765130 1115 slave.cpp:194] Flags at startup: --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mnt/teamcity/work/4240ba9ddd0997c3/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" 
--resources=""""cpu:2;mem:2048;disk(role1):2048"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP"""" I0226 03:17:26.765403 1115 credentials.hpp:83] Loading credential for authentication from '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/credential' I0226 03:17:26.765573 1115 slave.cpp:324] Slave using credential for: test-principal I0226 03:17:26.765733 1115 resources.cpp:576] Parsing resources as JSON failed: cpu:2;mem:2048;disk(role1):2048 Trying semicolon-delimited string format instead I0226 03:17:26.766185 1115 slave.cpp:464] Slave resources: cpu(*):2; mem(*):2048; disk(role1):2048; cpus(*):8; ports(*):[31000-32000] I0226 03:17:26.766242 1115 slave.cpp:472] Slave attributes: [ ] I0226 03:17:26.766250 1115 slave.cpp:477] Slave hostname: ip-172-30-2-151.mesosphere.io I0226 03:17:26.767325 1097 sched.cpp:222] Version: 0.28.0 I0226 03:17:26.767390 1111 state.cpp:58] Recovering state from '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/meta' I0226 03:17:26.767603 1115 status_update_manager.cpp:200] Recovering status update manager I0226 03:17:26.767865 1113 docker.cpp:726] Recovering Docker containers I0226 03:17:26.767971 1111 sched.cpp:326] New master detected at master@172.30.2.151:51934 I0226 03:17:26.768045 1111 sched.cpp:382] Authenticating with master master@172.30.2.151:51934 I0226 03:17:26.768059 1111 sched.cpp:389] Using default CRAM-MD5 authenticatee I0226 03:17:26.768070 1118 slave.cpp:4565] Finished recovery I0226 03:17:26.768273 1112 authenticatee.cpp:121] Creating new client SASL connection I0226 03:17:26.768435 1118 slave.cpp:4737] Querying resource estimator for oversubscribable resources I0226 03:17:26.768565 1111 master.cpp:5526] Authenticating scheduler-c59020d6-385e-48a3-8a10-9e5c3f1dbd92@172.30.2.151:51934 I0226 03:17:26.768661 1118 slave.cpp:796] New master detected at master@172.30.2.151:51934 I0226 03:17:26.768659 1115 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(839)@172.30.2.151:51934 I0226 03:17:26.768679 1113 status_update_manager.cpp:174] Pausing sending status updates I0226 03:17:26.768728 1118 slave.cpp:859] Authenticating with master master@172.30.2.151:51934 I0226 03:17:26.768743 1118 slave.cpp:864] Using default CRAM-MD5 authenticatee I0226 03:17:26.768865 1118 slave.cpp:832] Detecting new master I0226 03:17:26.768868 1112 authenticator.cpp:98] Creating new server SASL connection I0226 03:17:26.768908 1114 authenticatee.cpp:121] Creating new client SASL connection I0226 03:17:26.769003 1118 slave.cpp:4751] Received oversubscribable resources from the resource estimator I0226 03:17:26.769103 1115 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0226 03:17:26.769131 1115 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0226 03:17:26.769209 1116 master.cpp:5526] Authenticating slave(399)@172.30.2.151:51934 I0226 03:17:26.769253 1114 authenticator.cpp:203] Received SASL authentication start I0226 03:17:26.769295 1115 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(840)@172.30.2.151:51934 I0226 03:17:26.769307 1114 authenticator.cpp:325] Authentication requires more 
steps I0226 03:17:26.769403 1117 authenticatee.cpp:258] Received SASL authentication step I0226 03:17:26.769495 1114 authenticator.cpp:98] Creating new server SASL connection I0226 03:17:26.769531 1115 authenticator.cpp:231] Received SASL authentication step I0226 03:17:26.769554 1115 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-151.mesosphere.io' server FQDN: 'ip-172-30-2-151.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0226 03:17:26.769562 1115 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0226 03:17:26.769608 1115 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0226 03:17:26.769629 1115 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-151.mesosphere.io' server FQDN: 'ip-172-30-2-151.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0226 03:17:26.769637 1115 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0226 03:17:26.769642 1115 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0226 03:17:26.769654 1115 authenticator.cpp:317] Authentication success I0226 03:17:26.769728 1117 authenticatee.cpp:298] Authentication success I0226 03:17:26.769769 1112 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0226 03:17:26.769767 1118 master.cpp:5556] Successfully authenticated principal 'test-principal' at scheduler-c59020d6-385e-48a3-8a10-9e5c3f1dbd92@172.30.2.151:51934 I0226 03:17:26.769803 1112 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0226 03:17:26.769798 1114 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(839)@172.30.2.151:51934 I0226 03:17:26.769881 1112 authenticator.cpp:203] Received SASL authentication start I0226 03:17:26.769932 1112 authenticator.cpp:325] Authentication requires more steps I0226 03:17:26.769981 1117 sched.cpp:471] Successfully authenticated with master master@172.30.2.151:51934 I0226 03:17:26.770004 1117 sched.cpp:776] Sending SUBSCRIBE call to master@172.30.2.151:51934 I0226 03:17:26.770064 1118 authenticatee.cpp:258] Received SASL authentication step I0226 03:17:26.770102 1117 sched.cpp:809] Will retry registration in 1.937819802secs if necessary I0226 03:17:26.770165 1115 authenticator.cpp:231] Received SASL authentication step I0226 03:17:26.770193 1115 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-151.mesosphere.io' server FQDN: 'ip-172-30-2-151.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0226 03:17:26.770207 1115 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0226 03:17:26.770213 1116 master.cpp:2280] Received SUBSCRIBE call for framework 'default' at scheduler-c59020d6-385e-48a3-8a10-9e5c3f1dbd92@172.30.2.151:51934 I0226 03:17:26.770241 1115 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0226 03:17:26.770274 1115 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-151.mesosphere.io' server FQDN: 'ip-172-30-2-151.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0226 03:17:26.770277 1116 master.cpp:1751] Authorizing framework principal 'test-principal' to 
receive offers for role 'role1' I0226 03:17:26.770298 1115 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0226 03:17:26.770331 1115 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0226 03:17:26.770349 1115 authenticator.cpp:317] Authentication success I0226 03:17:26.770428 1118 authenticatee.cpp:298] Authentication success I0226 03:17:26.770442 1116 master.cpp:5556] Successfully authenticated principal 'test-principal' at slave(399)@172.30.2.151:51934 I0226 03:17:26.770547 1116 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(840)@172.30.2.151:51934 I0226 03:17:26.770846 1116 master.cpp:2351] Subscribing framework default with checkpointing enabled and capabilities [ ] I0226 03:17:26.770866 1118 slave.cpp:927] Successfully authenticated with master master@172.30.2.151:51934 I0226 03:17:26.770966 1118 slave.cpp:1321] Will retry registration in 1.453415ms if necessary I0226 03:17:26.771225 1115 hierarchical.cpp:265] Added framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 I0226 03:17:26.771275 1118 sched.cpp:703] Framework registered with 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 I0226 03:17:26.771299 1115 hierarchical.cpp:1434] No resources available to allocate! I0226 03:17:26.771328 1115 hierarchical.cpp:1529] No inverse offers to send out! I0226 03:17:26.771344 1118 sched.cpp:717] Scheduler::registered took 50146ns I0226 03:17:26.771356 1116 master.cpp:4240] Registering slave at slave(399)@172.30.2.151:51934 (ip-172-30-2-151.mesosphere.io) with id 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 I0226 03:17:26.771348 1115 hierarchical.cpp:1127] Performed allocation for 0 slaves in 101438ns I0226 03:17:26.771860 1114 registrar.cpp:439] Applied 1 operations in 59672ns; attempting to update the 'registry' I0226 03:17:26.772645 1117 log.cpp:683] Attempting to append 423 bytes to the log I0226 03:17:26.772758 1112 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 I0226 03:17:26.773435 1117 replica.cpp:537] Replica received write request for position 3 from (13824)@172.30.2.151:51934 I0226 03:17:26.773586 1111 slave.cpp:1321] Will retry registration in 2.74261ms if necessary I0226 03:17:26.773682 1115 master.cpp:4228] Ignoring register slave message from slave(399)@172.30.2.151:51934 (ip-172-30-2-151.mesosphere.io) as admission is already in progress I0226 03:17:26.773937 1117 leveldb.cpp:341] Persisting action (442 bytes) to leveldb took 469969ns I0226 03:17:26.773957 1117 replica.cpp:712] Persisted action at 3 I0226 03:17:26.774605 1114 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 I0226 03:17:26.775961 1114 leveldb.cpp:341] Persisting action (444 bytes) to leveldb took 1.329435ms I0226 03:17:26.775986 1114 replica.cpp:712] Persisted action at 3 I0226 03:17:26.776008 1114 replica.cpp:697] Replica learned APPEND action at position 3 I0226 03:17:26.777228 1115 slave.cpp:1321] Will retry registration in 41.5608ms if necessary I0226 03:17:26.777300 1112 registrar.cpp:484] Successfully updated the 'registry' in 5.378048ms I0226 03:17:26.777361 1114 master.cpp:4228] Ignoring register slave message from slave(399)@172.30.2.151:51934 (ip-172-30-2-151.mesosphere.io) as admission is already in progress I0226 03:17:26.777505 1113 log.cpp:702] Attempting to truncate the log to 3 I0226 03:17:26.777616 1111 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 I0226 03:17:26.778062 
1114 slave.cpp:3482] Received ping from slave-observer(369)@172.30.2.151:51934 I0226 03:17:26.778139 1118 master.cpp:4308] Registered slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 at slave(399)@172.30.2.151:51934 (ip-172-30-2-151.mesosphere.io) with cpu(*):2; mem(*):2048; disk(role1):2048; cpus(*):8; ports(*):[31000-32000] I0226 03:17:26.778213 1113 replica.cpp:537] Replica received write request for position 4 from (13825)@172.30.2.151:51934 I0226 03:17:26.778291 1114 slave.cpp:971] Registered with master master@172.30.2.151:51934; given slave ID 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 I0226 03:17:26.778316 1114 fetcher.cpp:81] Clearing fetcher cache I0226 03:17:26.778367 1116 hierarchical.cpp:473] Added slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 (ip-172-30-2-151.mesosphere.io) with cpu(*):2; mem(*):2048; disk(role1):2048; cpus(*):8; ports(*):[31000-32000] (allocated: ) I0226 03:17:26.778447 1117 status_update_manager.cpp:181] Resuming sending status updates I0226 03:17:26.778617 1113 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 375414ns I0226 03:17:26.778635 1113 replica.cpp:712] Persisted action at 4 I0226 03:17:26.778650 1114 slave.cpp:994] Checkpointing SlaveInfo to '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/meta/slaves/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0/slave.info' I0226 03:17:26.778900 1114 slave.cpp:1030] Forwarding total oversubscribed resources I0226 03:17:26.779109 1114 master.cpp:4649] Received update of slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 at slave(399)@172.30.2.151:51934 (ip-172-30-2-151.mesosphere.io) with total oversubscribed resources I0226 03:17:26.779139 1112 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 I0226 03:17:26.779331 1116 hierarchical.cpp:1529] No inverse offers to send out! I0226 03:17:26.779369 1116 hierarchical.cpp:1147] Performed allocation for slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 in 969593ns I0226 03:17:26.779645 1113 master.cpp:5355] Sending 1 offers to framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 (default) at scheduler-c59020d6-385e-48a3-8a10-9e5c3f1dbd92@172.30.2.151:51934 I0226 03:17:26.779700 1116 hierarchical.cpp:531] Slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 (ip-172-30-2-151.mesosphere.io) updated with oversubscribed resources (total: cpu(*):2; mem(*):2048; disk(role1):2048; cpus(*):8; ports(*):[31000-32000], allocated: disk(role1):2048; cpu(*):2; mem(*):2048; cpus(*):8; ports(*):[31000-32000]) I0226 03:17:26.779819 1116 hierarchical.cpp:1434] No resources available to allocate! I0226 03:17:26.779847 1116 hierarchical.cpp:1529] No inverse offers to send out! 
I0226 03:17:26.779865 1116 hierarchical.cpp:1147] Performed allocation for slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 in 133437ns I0226 03:17:26.780025 1118 sched.cpp:873] Scheduler::resourceOffers took 102165ns I0226 03:17:26.780372 1097 resources.cpp:576] Parsing resources as JSON failed: cpus:1;mem:64; Trying semicolon-delimited string format instead I0226 03:17:26.780882 1112 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 1.715066ms I0226 03:17:26.780938 1112 leveldb.cpp:399] Deleting ~2 keys from leveldb took 32370ns I0226 03:17:26.780953 1112 replica.cpp:712] Persisted action at 4 I0226 03:17:26.780971 1112 replica.cpp:697] Replica learned TRUNCATE action at position 4 I0226 03:17:26.781693 1117 master.cpp:3138] Processing ACCEPT call for offers: [ 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-O0 ] on slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 at slave(399)@172.30.2.151:51934 (ip-172-30-2-151.mesosphere.io) for framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 (default) at scheduler-c59020d6-385e-48a3-8a10-9e5c3f1dbd92@172.30.2.151:51934 I0226 03:17:26.781731 1117 master.cpp:2926] Authorizing principal 'test-principal' to create volumes I0226 03:17:26.781801 1117 master.cpp:2825] Authorizing framework principal 'test-principal' to launch task 1 as user 'root' I0226 03:17:26.782827 1114 master.cpp:3467] Applying CREATE operation for volumes disk(role1)[id1:path1]:64 from framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 (default) at scheduler-c59020d6-385e-48a3-8a10-9e5c3f1dbd92@172.30.2.151:51934 to slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 at slave(399)@172.30.2.151:51934 (ip-172-30-2-151.mesosphere.io) I0226 03:17:26.783136 1114 master.cpp:6589] Sending checkpointed resources disk(role1)[id1:path1]:64 to slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 at slave(399)@172.30.2.151:51934 (ip-172-30-2-151.mesosphere.io) I0226 03:17:26.783641 1111 slave.cpp:2341] Updated checkpointed resources from to disk(role1)[id1:path1]:64 I0226 03:17:26.783911 1114 master.hpp:176] Adding task 1 with resources cpus(*):1; mem(*):64; disk(role1)[id1:path1]:64 on slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 (ip-172-30-2-151.mesosphere.io) I0226 03:17:26.784056 1114 master.cpp:3623] Launching task 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 (default) at scheduler-c59020d6-385e-48a3-8a10-9e5c3f1dbd92@172.30.2.151:51934 with resources cpus(*):1; mem(*):64; disk(role1)[id1:path1]:64 on slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 at slave(399)@172.30.2.151:51934 (ip-172-30-2-151.mesosphere.io) I0226 03:17:26.784397 1115 slave.cpp:1361] Got assigned task 1 for framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 I0226 03:17:26.784557 1115 slave.cpp:5287] Checkpointing FrameworkInfo to '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/meta/slaves/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0/frameworks/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000/framework.info' I0226 03:17:26.784739 1116 hierarchical.cpp:653] Updated allocation of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 on slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 from disk(role1):2048; cpu(*):2; mem(*):2048; cpus(*):8; ports(*):[31000-32000] to disk(role1):1984; cpu(*):2; mem(*):2048; cpus(*):8; ports(*):[31000-32000]; disk(role1)[id1:path1]:64 I0226 03:17:26.784848 1115 slave.cpp:5298] Checkpointing framework pid 'scheduler-c59020d6-385e-48a3-8a10-9e5c3f1dbd92@172.30.2.151:51934' to 
'/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/meta/slaves/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0/frameworks/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000/framework.pid' I0226 03:17:26.785078 1115 resources.cpp:576] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0226 03:17:26.785322 1116 hierarchical.cpp:892] Recovered disk(role1):1984; cpu(*):2; mem(*):1984; cpus(*):7; ports(*):[31000-32000] (total: cpu(*):2; mem(*):2048; disk(role1):1984; cpus(*):8; ports(*):[31000-32000]; disk(role1)[id1:path1]:64, allocated: disk(role1)[id1:path1]:64; cpus(*):1; mem(*):64) on slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 from framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 I0226 03:17:26.785658 1115 slave.cpp:1480] Launching task 1 for framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 I0226 03:17:26.785719 1115 resources.cpp:576] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0226 03:17:26.786197 1115 paths.cpp:474] Trying to chown '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/slaves/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0/frameworks/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000/executors/1/runs/bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c' to user 'root' I0226 03:17:26.791122 1115 slave.cpp:5739] Checkpointing ExecutorInfo to '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/meta/slaves/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0/frameworks/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000/executors/1/executor.info' I0226 03:17:26.791543 1115 slave.cpp:5367] Launching executor 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/slaves/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0/frameworks/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000/executors/1/runs/bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c' I0226 03:17:26.792325 1115 slave.cpp:5762] Checkpointing TaskInfo to '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/meta/slaves/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0/frameworks/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000/executors/1/runs/bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c/tasks/1/task.info' I0226 03:17:26.794337 1115 slave.cpp:1698] Queuing task '1' for executor '1' of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 I0226 03:17:26.794478 1115 slave.cpp:749] Successfully attached file '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/slaves/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0/frameworks/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000/executors/1/runs/bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c' I0226 03:17:26.797106 1116 docker.cpp:1023] Starting container 'bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c' for task '1' (and executor '1') of framework '5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000' I0226 03:17:26.797462 1116 docker.cpp:1053] Running docker -H unix:///var/run/docker.sock inspect alpine:latest I0226 03:17:26.910549 1111 docker.cpp:394] Docker pull alpine completed I0226 03:17:26.910800 1111 docker.cpp:483] Changing the ownership of the persistent volume at '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/volumes/roles/role1/id1' with uid 0 and gid 0 I0226 03:17:26.915712 1111 docker.cpp:504] Mounting '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/volumes/roles/role1/id1' to 
'/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/slaves/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0/frameworks/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000/executors/1/runs/bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c/path1' for persistent volume disk(role1)[id1:path1]:64 of container bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c I0226 03:17:26.919000 1117 docker.cpp:576] Checkpointing pid 9568 to '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/meta/slaves/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0/frameworks/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000/executors/1/runs/bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c/pids/forked.pid' I0226 03:17:26.974776 1114 slave.cpp:2643] Got registration for executor '1' of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 from executor(1)@172.30.2.151:46052 I0226 03:17:26.975217 1114 slave.cpp:2729] Checkpointing executor pid 'executor(1)@172.30.2.151:46052' to '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/meta/slaves/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0/frameworks/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000/executors/1/runs/bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c/pids/libprocess.pid' I0226 03:17:26.976177 1113 docker.cpp:1303] Ignoring updating container 'bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c' with resources passed to update is identical to existing resources I0226 03:17:26.976492 1115 slave.cpp:1863] Sending queued task '1' to executor '1' of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 at executor(1)@172.30.2.151:46052 I0226 03:17:27.691769 1111 slave.cpp:3002] Handling status update TASK_RUNNING (UUID: 9f75a4e5-9ff4-4ca9-8623-8b2574796229) for task 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 from executor(1)@172.30.2.151:46052 I0226 03:17:27.692291 1116 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 9f75a4e5-9ff4-4ca9-8623-8b2574796229) for task 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 I0226 03:17:27.692327 1116 status_update_manager.cpp:497] Creating StatusUpdate stream for task 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 I0226 03:17:27.692773 1116 status_update_manager.cpp:824] Checkpointing UPDATE for status update TASK_RUNNING (UUID: 9f75a4e5-9ff4-4ca9-8623-8b2574796229) for task 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 I0226 03:17:27.700090 1116 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: 9f75a4e5-9ff4-4ca9-8623-8b2574796229) for task 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 to the slave I0226 03:17:27.700389 1113 slave.cpp:3400] Forwarding the update TASK_RUNNING (UUID: 9f75a4e5-9ff4-4ca9-8623-8b2574796229) for task 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 to master@172.30.2.151:51934 I0226 03:17:27.700606 1113 slave.cpp:3294] Status update manager successfully handled status update TASK_RUNNING (UUID: 9f75a4e5-9ff4-4ca9-8623-8b2574796229) for task 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 I0226 03:17:27.700644 1113 slave.cpp:3310] Sending acknowledgement for status update TASK_RUNNING (UUID: 9f75a4e5-9ff4-4ca9-8623-8b2574796229) for task 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 to executor(1)@172.30.2.151:46052 I0226 03:17:27.700742 1117 master.cpp:4794] Status update TASK_RUNNING (UUID: 9f75a4e5-9ff4-4ca9-8623-8b2574796229) for task 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 from slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 at slave(399)@172.30.2.151:51934 (ip-172-30-2-151.mesosphere.io) 
I0226 03:17:27.700775 1117 master.cpp:4842] Forwarding status update TASK_RUNNING (UUID: 9f75a4e5-9ff4-4ca9-8623-8b2574796229) for task 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 I0226 03:17:27.700923 1117 master.cpp:6450] Updating the state of task 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0226 03:17:27.701145 1118 sched.cpp:981] Scheduler::statusUpdate took 107222ns I0226 03:17:27.701550 1112 master.cpp:3952] Processing ACKNOWLEDGE call 9f75a4e5-9ff4-4ca9-8623-8b2574796229 for task 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 (default) at scheduler-c59020d6-385e-48a3-8a10-9e5c3f1dbd92@172.30.2.151:51934 on slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 I0226 03:17:27.701828 1114 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 9f75a4e5-9ff4-4ca9-8623-8b2574796229) for task 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 I0226 03:17:27.701962 1114 status_update_manager.cpp:824] Checkpointing ACK for status update TASK_RUNNING (UUID: 9f75a4e5-9ff4-4ca9-8623-8b2574796229) for task 1 of framework 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000 I0226 03:17:27.701987 1112 slave.cpp:668] Slave terminating I0226 03:17:27.702256 1117 master.cpp:1174] Slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 at slave(399)@172.30.2.151:51934 (ip-172-30-2-151.mesosphere.io) disconnected I0226 03:17:27.702275 1117 master.cpp:2635] Disconnecting slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 at slave(399)@172.30.2.151:51934 (ip-172-30-2-151.mesosphere.io) I0226 03:17:27.702335 1117 master.cpp:2654] Deactivating slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 at slave(399)@172.30.2.151:51934 (ip-172-30-2-151.mesosphere.io) I0226 03:17:27.702492 1111 hierarchical.cpp:560] Slave 5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0 deactivated I0226 03:17:27.707713 1115 slave.cpp:193] Slave started on 400)@172.30.2.151:51934 I0226 03:17:27.707739 1115 slave.cpp:194] Flags at startup: --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mnt/teamcity/work/4240ba9ddd0997c3/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" 
--oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpu:2;mem:2048;disk(role1):2048"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP"""" I0226 03:17:27.708133 1115 credentials.hpp:83] Loading credential for authentication from '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/credential' I0226 03:17:27.708282 1115 slave.cpp:324] Slave using credential for: test-principal I0226 03:17:27.708407 1115 resources.cpp:576] Parsing resources as JSON failed: cpu:2;mem:2048;disk(role1):2048 Trying semicolon-delimited string format instead I0226 03:17:27.708874 1115 slave.cpp:464] Slave resources: cpu(*):2; mem(*):2048; disk(role1):2048; cpus(*):8; ports(*):[31000-32000] I0226 03:17:27.708931 1115 slave.cpp:472] Slave attributes: [ ] I0226 03:17:27.708941 1115 slave.cpp:477] Slave hostname: ip-172-30-2-151.mesosphere.io I0226 03:17:27.710033 1113 state.cpp:58] Recovering state from '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/meta' I0226 03:17:27.711252 1114 fetcher.cpp:81] Clearing fetcher cache I0226 03:17:27.711447 1116 status_update_manager.cpp:200] Recovering status update manager I0226 03:17:27.711727 1111 docker.cpp:726] Recovering Docker containers I0226 03:17:27.711839 1111 docker.cpp:885] Running docker -H unix:///var/run/docker.sock ps -a I0226 03:17:27.728170 1117 hierarchical.cpp:1434] No resources available to allocate! I0226 03:17:27.728235 1117 hierarchical.cpp:1529] No inverse offers to send out! I0226 03:17:27.728268 1117 hierarchical.cpp:1127] Performed allocation for 1 slaves in 296715ns I0226 03:17:27.817551 1113 docker.cpp:766] Running docker -H unix:///var/run/docker.sock inspect mesos-5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0.bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c I0226 03:17:27.923014 1112 docker.cpp:932] Checking if Docker container named '/mesos-5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0.bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c' was started by Mesos I0226 03:17:27.923071 1112 docker.cpp:942] Checking if Mesos container with ID 'bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c' has been orphaned I0226 03:17:27.923122 1112 docker.cpp:678] Running docker -H unix:///var/run/docker.sock stop -t 0 0a10ad8641f8e85227324a979817933322dc901706cb4430eab0bcaf979835d1 I0226 03:17:28.023885 1116 docker.cpp:727] Running docker -H unix:///var/run/docker.sock rm -v 0a10ad8641f8e85227324a979817933322dc901706cb4430eab0bcaf979835d1 I0226 03:17:28.127876 1114 docker.cpp:912] Unmounting volume for container 'bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c' ../../3rdparty/libprocess/include/process/gmock.hpp:214: ERROR: this mock object (used in test DockerContainerizerTest.ROOT_DOCKER_RecoverOrphanedPersistentVolumes) should be deleted but never is. Its address is @0x5781dd8. 
I0226 03:17:28.127957 1114 docker.cpp:912] Unmounting volume for container 'bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c' ../../src/tests/mesos.cpp:673: ERROR: this mock object (used in test DockerContainerizerTest.ROOT_DOCKER_RecoverOrphanedPersistentVolumes) should be deleted but never is. Its address is @0x5a03260. ../../src/tests/mesos.hpp:1357: ERROR: this mock object (used in test DockerContainerizerTest.ROOT_DOCKER_RecoverOrphanedPersistentVolumes) should be deleted but never is. Its address is @0x5b477c0. Failed to perform recovery: Unable to unmount volumes for Docker container 'bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c': Failed to unmount volume '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/slaves/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0/frameworks/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000/executors/1/runs/bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c/path1': Failed to unmount '/tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/slaves/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-S0/frameworks/5cc57c0e-f1ad-4107-893f-420ed1a1db1a-0000/executors/1/runs/bcc90102-163d-4ff6-a3fc-a1b2e3fc3b7c/path1': Invalid argument ../../src/tests/containerizer/docker_containerizer_tests.cpp:1650: ERROR: this mock object (used in test DockerContainerizerTest.ROOT_DOCKER_RecoverOrphanedPersistentVolumes) should be deleted but never is. Its address is @0x7ffe75a8d310. To remedy this do as follows: ERROR: 4 leaked mock objects found at program exit. Step 1: rm -f /tmp/DockerContainerizerTest_ROOT_DOCKER_RecoverOrphanedPersistentVolumes_aJOesP/meta/slaves/latest This ensures slave doesn't recover old live executors. Step 2: Restart the slave. Process exited with code 1 ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4833","03/01/2016 21:34:04",5,"Poor allocator performance with labeled resources and/or persistent volumes ""Modifying the {{HierarchicalAllocator_BENCHMARK_Test.ResourceLabels}} benchmark from https://reviews.apache.org/r/43686/ to use distinct labels between different slaves, performance regresses from ~2 seconds to ~3 minutes. The culprit seems to be the way in which the allocator merges together resources; reserved resource labels (or persistent volume IDs) inhibit merging, which causes performance to be much worse.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4834","03/02/2016 00:11:53",2,"Add 'file' fetcher plugin. ""Add support for """"file"""" based URI fetcher. This could be useful for container image provisioning from local file system.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4844","03/03/2016 06:19:51",2,"Add authentication to master endpoints ""Before we can add authorization around operator endpoints, we need to add authentication support, so that unauthenticated requests are denied when --authenticate_http is enabled, and so that the principal is passed into `route()`.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4848","03/03/2016 09:16:19",2,"Agent Authn Research Spike ""Research the master authentication flags to see what changes will be necessary for agent http authentication. 
Write up a 1-2 page summary/design doc.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4849","03/03/2016 09:21:20",2,"Add agent flags for HTTP authentication ""Flags should be added to the agent to: 1. Enable HTTP authentication ({{--authenticate_http}}) 2. Specify credentials ({{--http_credentials}}) 3. Specify HTTP authenticators ({{--authenticators}})""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4850","03/03/2016 09:30:24",3,"Add authentication to agent endpoints /state and /flags ""The {{/state}} and {{/flags}} endpoints are installed in {{src/slave/slave.cpp}}, and thus are straightforward to make authenticated. Other agent endpoints require a bit more consideration, and are tracked in MESOS-4902. For more information on agent endpoints, see http://mesos.apache.org/documentation/latest/endpoints/ or search for `route(` in the source code: """," $ grep -rn """"route("""" src/ |grep -v master |grep -v tests |grep -v json src/version/version.cpp:75: route(""""/"""", VERSION_HELP(), &VersionProcess::version); src/files/files.cpp:150: route(""""/browse"""", src/files/files.cpp:153: route(""""/read"""", src/files/files.cpp:156: route(""""/download"""", src/files/files.cpp:159: route(""""/debug"""", src/slave/slave.cpp:580: route(""""/api/v1/executor"""", src/slave/slave.cpp:595: route(""""/state"""", src/slave/slave.cpp:601: route(""""/flags"""", src/slave/slave.cpp:607: route(""""/health"""", src/slave/monitor.cpp:100: route(""""/statistics"""", $ grep -rn """"route("""" 3rdparty/ |grep -v tests |grep -v README |grep -v examples |grep -v help |grep -v """"process..pp"""" 3rdparty/libprocess/include/process/profiler.hpp:34: route(""""/start"""", START_HELP(), &Profiler::start); 3rdparty/libprocess/include/process/profiler.hpp:35: route(""""/stop"""", STOP_HELP(), &Profiler::stop); 3rdparty/libprocess/include/process/system.hpp:70: route(""""/stats.json"""", statsHelp(), &System::stats); 3rdparty/libprocess/include/process/logging.hpp:44: route(""""/toggle"""", TOGGLE_HELP(), &This::toggle); ",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4854","03/03/2016 19:13:53",1,"Update CHANGELOG with net_cls isolator ""Need to update the CHANGELOG for 0.28 release.""","",0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4860","03/04/2016 00:48:40",2,"Add a script to install the Nvidia GDK on a host. ""This script can be used to install the Nvidia GDK for Cuda 7.5 on a mesos development machine. The purpose of the Nvidia GDK is to provide all the necessary header files (nvml.h) and library files (libnvidia-ml.so) necessary to build mesos with Nvidia GPU support. If the machine on which Mesos is being compiled doesn't have any GPUs, then libnvidia-ml.so consists only of stubs, allowing Mesos to build and run, but not actually do anything useful under the hood. This enables us to build a GPU-enabled mesos on a development machine without GPUs and then deploy it to a production machine with GPUs and be reasonably sure it will work.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4861","03/04/2016 00:51:47",2,"Add configure flags to build with Nvidia GPU support. 
""The configure flags can be used to enable Nvidia GPU support, as well as specify the installation directories of the nvml header and library files if not already installed in standard include/library paths on the system. They will also be used to conditionally build support for Nvidia GPUs into Mesos.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4863","03/04/2016 00:55:29",2,"Add Nvidia GPU isolator tests. ""We need to be able to run unit tests that verify GPU isolation, as well as run full blown tests that actually exercise the GPUs. These tests should only build when the proper configure flags are set for enabling nvidia GPU support.""","",0,0,0,1,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4864","03/04/2016 00:58:26",3,"Add flag to specify available Nvidia GPUs on an agent's command line. ""In the initial GPU support we will not do auto-discovery of GPUs on an agent. As such, an operator will need to specify a flag on the command line, listing all of the GPUs available on the system.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4865","03/04/2016 01:04:29",3,"Add GPUs as an explicit resource. ""We will add """"gpus"""" as an explicitly recognized resource in Mesos, akin to cpus, memory, ports, and disk. In the containerizer, we will verify that the number of GPU resources passed in via the --resources flag matches the list of GPUs passed in via the --nvidia_gpus flag. In the future we will add autodiscovery so this matching is unnecessary. However, we will always have to pass """"gpus"""" as a resource to make any GPU available on the system (unlike for cpus and memory, where the default is probed).""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4877","03/06/2016 16:44:06",3,"Mesos containerizer can't handle top level docker image like ""alpine"" (must use ""library/alpine"") ""This can be demonstrated with the {{mesos-execute}} command: # Docker containerizer with image {{alpine}}: success # Mesos containerizer with image {{alpine}}: failure # Mesos containerizer with image {{library/alpine}}: success In the slave logs: curl command executed: Also got the same result with {{ubuntu}} docker image."""," sudo ./build/src/mesos-execute --docker_image=alpine --containerizer=docker --name=just-a-test --command=""""sleep 1000"""" --master=localhost:5050 sudo ./build/src/mesos-execute --docker_image=alpine --containerizer=mesos --name=just-a-test --command=""""sleep 1000"""" --master=localhost:5050 sudo ./build/src/mesos-execute --docker_image=library/alpine --containerizer=mesos --name=just-a-test --command=""""sleep 1000"""" --master=localhost:5050 ea-4460-83 9c-838da86af34c-0007' I0306 16:32:41.418269 3403 metadata_manager.cpp:159] Looking for image 'alpine:latest' I0306 16:32:41.418699 3403 registry_puller.cpp:194] Pulling image 'alpine:latest' from 'docker-manifest://registry-1.docker.io:443alpine?latest#https' to '/tmp/mesos-test /store/docker/staging/ka7MlQ' E0306 16:32:43.098131 3400 slave.cpp:3773] Container '4bf9132d-9a57-4baa-a78c-e7164e93ace6' for executor 'just-a-test' of framework 4f055c6f-1bea-4460-839c-838da86af34c-0 007 failed to start: Collect failed: Unexpected HTTP response '401 Unauthorized $ sudo sysdig -A -p """"*%evt.time %proc.cmdline"""" evt.type=execve and proc.name=curl 16:42:53.198998042 curl -s -S -L -D - 
https://registry-1.docker.io:443/v2/alpine/manifests/latest 16:42:53.784958541 curl -s -S -L -D - https://auth.docker.io/token?service=registry.docker.io&scope=repository:alpine:pull 16:42:54.294192024 curl -s -S -L -D - -H Authorization: Bearer eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCIsIng1YyI6WyJNSUlDTHpDQ0FkU2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakJHTVVRd1FnWURWUVFERXp0Uk5Gb3pPa2RYTjBrNldGUlFSRHBJVFRSUk9rOVVWRmc2TmtGRlF6cFNUVE5ET2tGU01rTTZUMFkzTnpwQ1ZrVkJPa2xHUlVrNlExazFTekFlRncweE5UQTJNalV4T1RVMU5EWmFGdzB4TmpBMk1qUXhPVFUxTkRaYU1FWXhSREJDQmdOVkJBTVRPMGhHU1UwNldGZFZWam8yUVZkSU9sWlpUVEk2TTFnMVREcFNWREkxT2s5VFNrbzZTMVExUmpwWVRsSklPbFJMTmtnNlMxUkxOanBCUVV0VU1Ga3dFd1lIS29aSXpqMENBUVlJS29aSXpqMERBUWNEUWdBRXl2UzIvdEI3T3JlMkVxcGRDeFdtS1NqV1N2VmJ2TWUrWGVFTUNVMDByQjI0akNiUVhreFdmOSs0MUxQMlZNQ29BK0RMRkIwVjBGZGdwajlOWU5rL2pxT0JzakNCcnpBT0JnTlZIUThCQWY4RUJBTUNBSUF3RHdZRFZSMGxCQWd3QmdZRVZSMGxBREJFQmdOVkhRNEVQUVE3U0VaSlRUcFlWMVZXT2paQlYwZzZWbGxOTWpveldEVk1PbEpVTWpVNlQxTktTanBMVkRWR09saE9Va2c2VkVzMlNEcExWRXMyT2tGQlMxUXdSZ1lEVlIwakJEOHdQWUE3VVRSYU16cEhWemRKT2xoVVVFUTZTRTAwVVRwUFZGUllPalpCUlVNNlVrMHpRenBCVWpKRE9rOUdOemM2UWxaRlFUcEpSa1ZKT2tOWk5Vc3dDZ1lJS29aSXpqMEVBd0lEU1FBd1JnSWhBTXZiT2h4cHhrTktqSDRhMFBNS0lFdXRmTjZtRDFvMWs4ZEJOVGxuWVFudkFpRUF0YVJGSGJSR2o4ZlVSSzZ4UVJHRURvQm1ZZ3dZelR3Z3BMaGJBZzNOUmFvPSJdfQ.eyJhY2Nlc3MiOltdLCJhdWQiOiJyZWdpc3RyeS5kb2NrZXIuaW8iLCJleHAiOjE0NTcyODI4NzQsImlhdCI6MTQ1NzI4MjU3NCwiaXNzIjoiYXV0aC5kb2NrZXIuaW8iLCJqdGkiOiJaOGtyNXZXNEJMWkNIRS1IcVJIaCIsIm5iZiI6MTQ1NzI4MjU3NCwic3ViIjoiIn0.C2wtJq_P-m0buPARhmQjDfh6ztIAhcvgN3tfWIZEClSgXlVQ_sAQXAALNZKwAQL2Chj7NpHX--0GW-aeL_28Aw https://registry-1.docker.io:443/v2/alpine/manifests/latest ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4886","03/07/2016 20:42:02",3,"Support mesos containerizer force_pull_image option. ""Currently for unified containerizer, images that are already cached by metadata manager cannot be updated. User has to delete corresponding images in store if an update is need. We should support `force_pull_image` option for unified containerizer, to provide override option if existed.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4888","03/07/2016 21:16:52",2,"Default cmd is executed as an incorrect command. ""When mesos containerizer launch a container using a docker image, which only container default Cmd. The executable command is is a incorrect sequence. For example: If an image default entrypoint is null, cmd is """"sh"""", user defines shell=false, value is none, and arguments as [-c, echo 'hello world']. The executable command is `[sh, -c, echo 'hello world', sh]`, which is incorrect. It should be `[sh, sh, -c, echo 'hello world']` instead. This problem is only exposed for the case: sh=0, value=0, argv=1, entrypoint=0, cmd=1. ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4889","03/07/2016 21:28:12",5,"Implement runtime isolator tests. ""There different cases in docker runtime isolator. Some special cases should be tested with unique test case, to verify the docker runtime isolator logic is correct.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4891","03/07/2016 22:48:00",8,"Add a '/containers' endpoint to the agent to list all the active containers. 
""This endpoint will be similar to /monitor/statistics.json endpoint, but it'll also contain the 'container_status' about the container (see ContainerStatus in mesos.proto). We'll eventually deprecate the /monitor/statistics.json endpoint.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4902","03/09/2016 18:48:49",5,"Add authentication to libprocess endpoints ""In addition to the endpoints addressed by MESOS-4850 and MESOS-5152, the following endpoints would also benefit from HTTP authentication: * {{/profiler/*}} * {{/logging/toggle}} * {{/metrics/snapshot}} Adding HTTP authentication to these endpoints is a bit more complicated because they are defined at the libprocess level. While working on MESOS-4850, it became apparent that since our tests use the same instance of libprocess for both master and agent, different default authentication realms must be used for master/agent so that HTTP authentication can be independently enabled/disabled for each. We should establish a mechanism for making an endpoint authenticated that allows us to: 1) Install an endpoint like {{/files}}, whose code is shared by the master and agent, with different authentication realms for the master and agent 2) Avoid hard-coding a default authentication realm into libprocess, to permit the use of different authentication realms for the master and agent and to keep application-level concerns from leaking into libprocess Another option would be to use a single default authentication realm and always enable or disable HTTP authentication for *both* the master and agent in tests. However, this wouldn't allow us to test scenarios where HTTP authentication is enabled on one but disabled on the other.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4903","03/09/2016 20:46:19",3,"Allow multiple loads of module manifests ""The ModuleManager::load() is designed to be called exactly once during a process lifetime. This works well for Master/Agent environments. However, it can fail in Scheduler environments. For example, a single Scheduler binary might implement multiple scheduler drivers causing multiple calls to ModuleManager::load() leading to a failure.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4910","03/10/2016 14:55:20",1,"Deprecate the --docker_stop_timeout agent flag. ""Instead, a combination of {{executor_shutdown_grace_period}} agent flag and optionally task kill policies should be used.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4912","03/10/2016 15:37:08",3,"LinuxFilesystemIsolatorTest.ROOT_MultipleContainers fails. 
""Observed on our CI: """," [09:34:15] : [Step 11/11] [ RUN ] LinuxFilesystemIsolatorTest.ROOT_MultipleContainers [09:34:19]W: [Step 11/11] I0309 09:34:19.906719 2357 linux.cpp:81] Making '/tmp/MLVLnv' a shared mount [09:34:19]W: [Step 11/11] I0309 09:34:19.923548 2357 linux_launcher.cpp:101] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher [09:34:19]W: [Step 11/11] I0309 09:34:19.924705 2376 containerizer.cpp:666] Starting container 'da610f7f-a709-4de8-94d3-74f4a520619b' for executor 'test_executor1' of framework '' [09:34:19]W: [Step 11/11] I0309 09:34:19.925355 2371 provisioner.cpp:285] Provisioning image rootfs '/tmp/MLVLnv/provisioner/containers/da610f7f-a709-4de8-94d3-74f4a520619b/backends/copy/rootfses/0d7e047a-50f1-490b-bb58-00e9c49628d0' for container da610f7f-a709-4de8-94d3-74f4a520619b [09:34:19]W: [Step 11/11] I0309 09:34:19.925881 2377 copy.cpp:127] Copying layer path '/tmp/MLVLnv/test_image1' to rootfs '/tmp/MLVLnv/provisioner/containers/da610f7f-a709-4de8-94d3-74f4a520619b/backends/copy/rootfses/0d7e047a-50f1-490b-bb58-00e9c49628d0' [09:34:30]W: [Step 11/11] I0309 09:34:30.835127 2376 linux.cpp:355] Bind mounting work directory from '/tmp/MLVLnv/slaves/test_slave/frameworks/executors/test_executor1/runs/da610f7f-a709-4de8-94d3-74f4a520619b' to '/tmp/MLVLnv/provisioner/containers/da610f7f-a709-4de8-94d3-74f4a520619b/backends/copy/rootfses/0d7e047a-50f1-490b-bb58-00e9c49628d0/mnt/mesos/sandbox' for container da610f7f-a709-4de8-94d3-74f4a520619b [09:34:30]W: [Step 11/11] I0309 09:34:30.835392 2376 linux.cpp:683] Changing the ownership of the persistent volume at '/tmp/MLVLnv/volumes/roles/test_role/persistent_volume_id' with uid 0 and gid 0 [09:34:30]W: [Step 11/11] I0309 09:34:30.840425 2376 linux.cpp:723] Mounting '/tmp/MLVLnv/volumes/roles/test_role/persistent_volume_id' to '/tmp/MLVLnv/provisioner/containers/da610f7f-a709-4de8-94d3-74f4a520619b/backends/copy/rootfses/0d7e047a-50f1-490b-bb58-00e9c49628d0/mnt/mesos/sandbox/volume' for persistent volume disk(test_role)[persistent_volume_id:volume]:32 of container da610f7f-a709-4de8-94d3-74f4a520619b [09:34:30]W: [Step 11/11] I0309 09:34:30.843878 2374 linux_launcher.cpp:304] Cloning child process with flags = CLONE_NEWNS [09:34:30]W: [Step 11/11] I0309 09:34:30.848302 2371 containerizer.cpp:666] Starting container 'fe4729c5-1e63-4cc6-a2e3-fe5006ffe087' for executor 'test_executor2' of framework '' [09:34:30]W: [Step 11/11] I0309 09:34:30.848758 2371 containerizer.cpp:1392] Destroying container 'da610f7f-a709-4de8-94d3-74f4a520619b' [09:34:30]W: [Step 11/11] I0309 09:34:30.848865 2373 provisioner.cpp:285] Provisioning image rootfs '/tmp/MLVLnv/provisioner/containers/fe4729c5-1e63-4cc6-a2e3-fe5006ffe087/backends/copy/rootfses/518b2464-43dd-47b0-9648-e78aedde6917' for container fe4729c5-1e63-4cc6-a2e3-fe5006ffe087 [09:34:30]W: [Step 11/11] I0309 09:34:30.849449 2375 copy.cpp:127] Copying layer path '/tmp/MLVLnv/test_image2' to rootfs '/tmp/MLVLnv/provisioner/containers/fe4729c5-1e63-4cc6-a2e3-fe5006ffe087/backends/copy/rootfses/518b2464-43dd-47b0-9648-e78aedde6917' [09:34:30]W: [Step 11/11] I0309 09:34:30.854038 2374 cgroups.cpp:2427] Freezing cgroup /sys/fs/cgroup/freezer/mesos/da610f7f-a709-4de8-94d3-74f4a520619b [09:34:30]W: [Step 11/11] I0309 09:34:30.856693 2372 cgroups.cpp:1409] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/da610f7f-a709-4de8-94d3-74f4a520619b after 2.608128ms [09:34:30]W: [Step 11/11] I0309 09:34:30.859237 2377 cgroups.cpp:2445] Thawing cgroup 
/sys/fs/cgroup/freezer/mesos/da610f7f-a709-4de8-94d3-74f4a520619b [09:34:30]W: [Step 11/11] I0309 09:34:30.861454 2377 cgroups.cpp:1438] Successfullly thawed cgroup /sys/fs/cgroup/freezer/mesos/da610f7f-a709-4de8-94d3-74f4a520619b after 2176us [09:34:30]W: [Step 11/11] I0309 09:34:30.934608 2378 containerizer.cpp:1608] Executor for container 'da610f7f-a709-4de8-94d3-74f4a520619b' has exited [09:34:30]W: [Step 11/11] I0309 09:34:30.937692 2372 linux.cpp:798] Unmounting volume '/tmp/MLVLnv/provisioner/containers/da610f7f-a709-4de8-94d3-74f4a520619b/backends/copy/rootfses/0d7e047a-50f1-490b-bb58-00e9c49628d0/mnt/mesos/sandbox/volume' for container da610f7f-a709-4de8-94d3-74f4a520619b [09:34:30]W: [Step 11/11] I0309 09:34:30.937742 2372 linux.cpp:817] Unmounting sandbox/work directory '/tmp/MLVLnv/provisioner/containers/da610f7f-a709-4de8-94d3-74f4a520619b/backends/copy/rootfses/0d7e047a-50f1-490b-bb58-00e9c49628d0/mnt/mesos/sandbox' for container da610f7f-a709-4de8-94d3-74f4a520619b [09:34:30]W: [Step 11/11] I0309 09:34:30.938129 2375 provisioner.cpp:330] Destroying container rootfs at '/tmp/MLVLnv/provisioner/containers/da610f7f-a709-4de8-94d3-74f4a520619b/backends/copy/rootfses/0d7e047a-50f1-490b-bb58-00e9c49628d0' for container da610f7f-a709-4de8-94d3-74f4a520619b [09:34:45] : [Step 11/11] ../../src/tests/containerizer/filesystem_isolator_tests.cpp:1318: Failure [09:34:45] : [Step 11/11] Failed to wait 15secs for wait1 [09:34:48] : [Step 11/11] [ FAILED ] LinuxFilesystemIsolatorTest.ROOT_MultipleContainers (32341 ms) ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4922","03/13/2016 00:58:12",5,"Setup proper /etc/hostname, /etc/hosts and /etc/resolv.conf for containers in network/cni isolator. ""The network/cni isolator needs to properly setup /etc/hostname and /etc/hosts for the container with a hostname (e.g., randomly generated) and the assigned IP returned by CNI plugin. We should consider the following cases: 1) container is using host filesystem 2) container is using a different filesystem 3) custom executor and command executor""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4932","03/14/2016 11:01:11",5,"Propose Design for Authorization based filtering for endpoints. ""The design doc can be found here: https://docs.google.com/document/d/1M27S7OTSfJ8afZCklOz00g_wcVrL32i9Lyl6g22GWeY""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4937","03/14/2016 17:17:47",5,"Investigate container security options for Mesos containerizer ""We should investigate the following to improve the container security for Mesos containerizer and come up with a list of features that we want to support in MVP. 1) Capabilities 2) User namespace 3) Seccomp 4) SELinux 5) AppArmor We should investigate what other container systems are doing regarding security: 1) [k8s| https://github.com/kubernetes/kubernetes/blob/master/pkg/api/v1/types.go#L2905] 2) [docker|https://docs.docker.com/engine/security/security/] 3) [oci|https://github.com/opencontainers/specs/blob/master/config.md]""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4941","03/14/2016 21:01:22",8,"Support update existing quota. ""We want to support updating an existing quota without the cycle of delete and recreate. 
This avoids the possible starvation risk of losing the quota between delete and recreate, and also makes the interface friendlier. Design doc: https://docs.google.com/document/d/1c8fJY9_N0W04FtUQ_b_kZM6S0eePU7eYVyfUP14dSys""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4942","03/15/2016 00:15:24",2,"Docker runtime isolator tests may cause disk issue. ""Currently the slave working directory is used as the docker store dir and archive dir, which is problematic. Because the slave work dir is exactly `environment->mkdtemp()`, it will not get cleaned up until the end of the whole test. But the runtime isolator local puller tests cp the host's rootfs, whose size is relatively big. Cleanup has to be done in each test's teardown. ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4943","03/15/2016 01:23:34",13,"Reduce the size of LinuxRootfs in tests. ""Right now, LinuxRootfs copies files from the host filesystem to construct a chroot-able rootfs. We copy a lot of unnecessary files, making it very large. We can potentially strip a lot of files.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4944","03/15/2016 01:25:39",5,"Improve overlay backend so that it's writable ""Currently, the overlay backend will provision a read-only FS. We can use an empty directory from the container sandbox to act as the upper layer so that it's writable.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4951","03/15/2016 20:53:17",2,"Enable actors to pass an authentication realm to libprocess ""To prepare for MESOS-4902, the Mesos master and agent need a way to pass the desired authentication realm to libprocess. Since some endpoints (like {{/profiler/*}}) get installed in libprocess, the master/agent should be able to specify during initialization what authentication realm the libprocess-level endpoints will be authenticated under.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4956","03/16/2016 09:58:33",5,"Add authentication to /files endpoints ""To protect access (authz) to master/agent logs as well as executor sandboxes, we need authentication on the /files endpoints. Adding HTTP authentication to these endpoints is a bit complicated since they are defined in code that is shared by the master and agent. While working on MESOS-4850, it became apparent that since our tests use the same instance of libprocess for both master and agent, different default authentication realms must be used for master/agent so that HTTP authentication can be independently enabled/disabled for each. We should establish a mechanism for making an endpoint authenticated that allows us to: 1) Install an endpoint like {{/files}}, whose code is shared by the master and agent, with different authentication realms for the master and agent 2) Avoid hard-coding a default authentication realm into libprocess, to permit the use of different authentication realms for the master and agent and to keep application-level concerns from leaking into libprocess Another option would be to use a single default authentication realm and always enable or disable HTTP authentication for *both* the master and agent in tests.
However, this wouldn't allow us to test scenarios where HTTP authentication is enabled on one but disabled on the other.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4965","03/16/2016 22:56:42",8,"Support resizing of an existing persistent volume ""We need a mechanism to update the size of a persistent volume. The increase case is generally more interesting to us (as long as there is still available disk resource on the same disk).""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-4978","03/18/2016 17:36:44",3,"Update mesos-execute with Appc changes. ""The mesos-execute cli application currently does not have support for Appc images. Adding support would make integration tests easier.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4984","03/18/2016 22:45:23",2,"MasterTest.SlavesEndpointTwoSlaves is flaky ""Observed on Arch Linux with GCC 6, running in a virtualbox VM: [ RUN ] MasterTest.SlavesEndpointTwoSlaves /mesos-2/src/tests/master_tests.cpp:1710: Failure Value of: array.get().values.size() Actual: 1 Expected: 2u Which is: 2 [ FAILED ] MasterTest.SlavesEndpointTwoSlaves (86 ms) Seems to fail non-deterministically, perhaps more often when there is concurrent CPU load on the machine.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4985","03/18/2016 23:21:02",3,"Destroy a container while it's provisioning can lead to leaked provisioned directories. ""Here is the possible sequence of events: 1) containerizer->launch 2) provisioner->provision is called. It is fetching the image 3) executor registration timed out 4) containerizer->destroy is called 5) container->state is still in PREPARING 6) provisioner->destroy is called So we can be calling provisioner->destroy while provisioner->provision hasn't finished yet. provisioner->destroy might just skip since there's no information about the container yet, and later, provisioner will prepare the root filesystem. This root filesystem will not be destroyed as destroy already finishes.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-4992","03/21/2016 16:50:48",3,"sandbox uri does not work outside mesos http server ""The SandBox uri of a framework does not work if I just copy-paste it into the browser. For example the following sandbox uri: http://172.17.0.1:5050/#/slaves/50f87c73-79ef-4f2a-95f0-b2b4062b2de6-S0/frameworks/50f87c73-79ef-4f2a-95f0-b2b4062b2de6-0009/executors/driver-20160321155016-0001/browse should redirect to: http://172.17.0.1:5050/#/slaves/50f87c73-79ef-4f2a-95f0-b2b4062b2de6-S0/browse?path=%2Ftmp%2Fmesos%2Fslaves%2F50f87c73-79ef-4f2a-95f0-b2b4062b2de6-S0%2Fframeworks%2F50f87c73-79ef-4f2a-95f0-b2b4062b2de6-0009%2Fexecutors%2Fdriver-20160321155016-0001%2Fruns%2F60533483-31fb-4353-987d-f3393911cc80 yet it fails with the message: """"Failed to find slaves. Navigate to the slave's sandbox via the Mesos UI."""" and redirects to: http://172.17.0.1:5050/#/ It is an issue for me because I'm working on expanding the mesos spark ui with the sandbox uri. The other option is to get the slave info and parse the json file there to get the executor paths, which is not so straightforward or elegant. Moreover, I don't see the runs/container_id in the Mesos Proto API. I guess this is hidden info; it is the piece of info needed to re-write the uri without redirection.
""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5000","03/22/2016 12:48:11",3,"MasterTest.MasterLost is flaky ""The test {{MasterTest.MasterLost}} and {{ExceptionTest.DisallowSchedulerActionsOnAbort}} fail at least half the time under OS X (clang, not optimized, {{30efac7}}), e.g., Sometimes also {{FaultToleranceTest.SchedulerFailover}} fails with the same stack trace. I could trace this to the recent refactoring of the test helpers (MESOS-4633, MESOS-4634), It appears the lifetimes of some objects are still not ordered correctly. """," [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from MasterTest [ RUN ] MasterTest.MasterLost *** Aborted at 1458650698 (unix time) try """"date -d @1458650698"""" if you are using GNU date *** PC: @ 0x109685fcc mesos::internal::state::State::store() *** SIGSEGV (@0x0) received by PID 18620 (TID 0x111259000) stack trace: *** @ 0x7fff850e1f1a _sigtramp @ 0x108c74eaf boost::uuids::detail::sha1::process_byte_impl() @ 0x1095fd723 mesos::internal::state::protobuf::State::store<>() @ 0x1095fbd3e mesos::internal::master::RegistrarProcess::update() @ 0x1095fcf6c mesos::internal::master::RegistrarProcess::_apply() @ 0x1096797a0 _ZZN7process8dispatchIbN5mesos8internal6master16RegistrarProcessENS_5OwnedINS3_9OperationEEES7_EENS_6FutureIT_EERKNS_3PIDIT0_EEMSC_FSA_T1_ET2_ENKUlPNS_11ProcessBaseEE_clESL_ @ 0x1096795f0 _ZNSt3__128__invoke_void_return_wrapperIvE6__callIJRZN7process8dispatchIbN5mesos8internal6master16RegistrarProcessENS3_5OwnedINS7_9OperationEEESB_EENS3_6FutureIT_EERKNS3_3PIDIT0_EEMSG_FSE_T1_ET2_EUlPNS3_11ProcessBaseEE_SP_EEEvDpOT_ @ 0x1096792d9 _ZNSt3__110__function6__funcIZN7process8dispatchIbN5mesos8internal6master16RegistrarProcessENS2_5OwnedINS6_9OperationEEESA_EENS2_6FutureIT_EERKNS2_3PIDIT0_EEMSF_FSD_T1_ET2_EUlPNS2_11ProcessBaseEE_NS_9allocatorISP_EEFvSO_EEclEOSO_ @ 0x10b2e9e4c std::__1::function<>::operator()() @ 0x10b2e9d9c process::ProcessBase::visit() @ 0x10b31d26e process::DispatchEvent::visit() @ 0x108ad7d81 process::ProcessBase::serve() @ 0x10b2e3cb4 process::ProcessManager::resume() @ 0x10b36c479 process::ProcessManager::init_threads()::$_1::operator()() @ 0x10b36c0a2 _ZNSt3__114__thread_proxyINS_5tupleIJNS_6__bindIZN7process14ProcessManager12init_threadsEvE3$_1JNS_17reference_wrapperIKNS_6atomicIbEEEEEEEEEEEEPvSD_ @ 0x7fff90eca05a _pthread_body @ 0x7fff90ec9fd7 _pthread_start @ 0x7fff90ec73ed thread_start There are only 'skip'ped commits left to test. The first bad commit could be any of: 75ca1e6c9fde655c41fdf835aa20c47570d21f10 56e9406763e8514a7557ab3862d2f352a61425d5 b377557c2bfc35c894e87becb47122955540f133 7bf6e4f70131175edd4d6d77ea0dc7692b3e72ae c7df1d7bcb1604c95800871cc0473c946e5b5d16 951539317525f3afe9490ed098617e5d4563a80a We cannot bisect more! ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5001","03/22/2016 13:41:07",1,"Prefix allocator metrics with ""mesos/"" to better support custom allocator metrics. 
""There currently exists only a single allocator metric named In order to support different allocator implementations (the """"mesos"""" allocator being the default one included in the project currently), it would be better to rename the metric so that allocator metrics are prefixed with the allocator implementation name: This consistent with the approach taken for containerizer metrics, where the mesos containerizer exposes its metrics under a """"mesos/"""" prefix."""," 'allocator/event_queue_dispatches' allocator/mesos/event_queue_dispatches ",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5004","03/22/2016 18:29:19",1,"Clarify docs on '/reserve' and '/create-volumes' without authentication ""For both reservations and persistent volume creation, the behavior of the HTTP endpoints differs slightly from that of the framework operations. Due to the implementation of HTTP authentication, it is not possible for a framework/operator to provide a principal when HTTP authentication is disabled. This means that when HTTP authentication is disabled, the endpoint handlers will _always_ receive {{None()}} as the principal associated with the request, and thus if authorization is enabled, the request will only succeed if the NONE principal is authorized to do stuff. The docs should be updated to explain this behavior explicitly.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5010","03/22/2016 22:43:46",2,"Installation of mesos python package is incomplete ""The installation of mesos python package is incomplete, i.e., the files {{cli.py}}, {{futures.py}}, and {{http.py}} are not installed. This appears to be first broken with {{d1d70b9}} (MESOS-3969, [Upgraded bundled pip to 7.1.2.|https://reviews.apache.org/r/40630]). Bisecting in {{pip}}-land shows that our install becomes broken for {{pip-6.0.1}} and later (we are using {{pip-7.1.2}}). """," % ../configure --enable-python % make install DESTDIR=$PWD/D % PYTHONPATH=$PWD/D/usr/local/lib/python2.7/site-packages:$PYTHONPATH python -c 'from mesos import http' Traceback (most recent call last): File """""""", line 1, in ImportError: cannot import name http ",0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5023","03/24/2016 14:33:43",2,"MesosContainerizerProvisionerTest.DestroyWhileProvisioning is flaky. ""Observed on the Apache Jenkins. 
"""," [ RUN ] MesosContainerizerProvisionerTest.ProvisionFailed I0324 13:38:56.284261 2948 containerizer.cpp:666] Starting container 'test_container' for executor 'executor' of framework '' I0324 13:38:56.285825 2939 containerizer.cpp:1421] Destroying container 'test_container' I0324 13:38:56.285854 2939 containerizer.cpp:1424] Waiting for the provisioner to complete for container 'test_container' [ OK ] MesosContainerizerProvisionerTest.ProvisionFailed (7 ms) [ RUN ] MesosContainerizerProvisionerTest.DestroyWhileProvisioning I0324 13:38:56.291187 2944 containerizer.cpp:666] Starting container 'c2316963-c6cb-4c7f-a3b9-17ca5931e5b2' for executor 'executor' of framework '' I0324 13:38:56.292157 2944 containerizer.cpp:1421] Destroying container 'c2316963-c6cb-4c7f-a3b9-17ca5931e5b2' I0324 13:38:56.292179 2944 containerizer.cpp:1424] Waiting for the provisioner to complete for container 'c2316963-c6cb-4c7f-a3b9-17ca5931e5b2' F0324 13:38:56.292899 2944 containerizer.cpp:752] Check failed: containers_.contains(containerId) *** Check failure stack trace: *** @ 0x2ac9973d0ae4 google::LogMessage::Fail() @ 0x2ac9973d0a30 google::LogMessage::SendToLog() @ 0x2ac9973d0432 google::LogMessage::Flush() @ 0x2ac9973d3346 google::LogMessageFatal::~LogMessageFatal() @ 0x2ac996af897c mesos::internal::slave::MesosContainerizerProcess::_launch() @ 0x2ac996b1f18a _ZZN7process8dispatchIbN5mesos8internal5slave25MesosContainerizerProcessERKNS1_11ContainerIDERK6OptionINS1_8TaskInfoEERKNS1_12ExecutorInfoERKSsRKS8_ISsERKNS1_7SlaveIDERKNS_3PIDINS3_5SlaveEEEbRKS8_INS3_13ProvisionInfoEES5_SA_SD_SsSI_SL_SQ_bSU_EENS_6FutureIT_EERKNSO_IT0_EEMS10_FSZ_T1_T2_T3_T4_T5_T6_T7_T8_T9_ET10_T11_T12_T13_T14_T15_T16_T17_T18_ENKUlPNS_11ProcessBaseEE_clES1P_ @ 0x2ac996b479d9 _ZNSt17_Function_handlerIFvPN7process11ProcessBaseEEZNS0_8dispatchIbN5mesos8internal5slave25MesosContainerizerProcessERKNS5_11ContainerIDERK6OptionINS5_8TaskInfoEERKNS5_12ExecutorInfoERKSsRKSC_ISsERKNS5_7SlaveIDERKNS0_3PIDINS7_5SlaveEEEbRKSC_INS7_13ProvisionInfoEES9_SE_SH_SsSM_SP_SU_bSY_EENS0_6FutureIT_EERKNSS_IT0_EEMS14_FS13_T1_T2_T3_T4_T5_T6_T7_T8_T9_ET10_T11_T12_T13_T14_T15_T16_T17_T18_EUlS2_E_E9_M_invokeERKSt9_Any_dataS2_ @ 0x2ac997334fef std::function<>::operator()() @ 0x2ac99731b1c7 process::ProcessBase::visit() @ 0x2ac997321154 process::DispatchEvent::visit() @ 0x9a699c process::ProcessBase::serve() @ 0x2ac9973173c0 process::ProcessManager::resume() @ 0x2ac99731445a _ZZN7process14ProcessManager12init_threadsEvENKUlRKSt11atomic_boolE_clES3_ @ 0x2ac997320916 _ZNSt5_BindIFZN7process14ProcessManager12init_threadsEvEUlRKSt11atomic_boolE_St17reference_wrapperIS3_EEE6__callIvIEILm0EEEET_OSt5tupleIIDpT0_EESt12_Index_tupleIIXspT1_EEE @ 0x2ac9973208c6 _ZNSt5_BindIFZN7process14ProcessManager12init_threadsEvEUlRKSt11atomic_boolE_St17reference_wrapperIS3_EEEclIIEvEET0_DpOT_ @ 0x2ac997320858 _ZNSt12_Bind_simpleIFSt5_BindIFZN7process14ProcessManager12init_threadsEvEUlRKSt11atomic_boolE_St17reference_wrapperIS4_EEEvEE9_M_invokeIIEEEvSt12_Index_tupleIIXspT_EEE @ 0x2ac9973207af _ZNSt12_Bind_simpleIFSt5_BindIFZN7process14ProcessManager12init_threadsEvEUlRKSt11atomic_boolE_St17reference_wrapperIS4_EEEvEEclEv @ 0x2ac997320748 _ZNSt6thread5_ImplISt12_Bind_simpleIFSt5_BindIFZN7process14ProcessManager12init_threadsEvEUlRKSt11atomic_boolE_St17reference_wrapperIS6_EEEvEEE6_M_runEv @ 0x2ac9989aea60 (unknown) @ 0x2ac999125182 start_thread @ 0x2ac99943547d (unknown) make[4]: Leaving directory `/mesos/mesos-0.29.0/_build/src' make[4]: *** [check-local] Aborted make[3]: *** [check-am] Error 
2 make[3]: Leaving directory `/mesos/mesos-0.29.0/_build/src' make[2]: *** [check] Error 2 make[2]: Leaving directory `/mesos/mesos-0.29.0/_build/src' make[1]: *** [check-recursive] Error 1 make[1]: Leaving directory `/mesos/mesos-0.29.0/_build' make: *** [distcheck] Error 1 Build step 'Execute shell' marked build as failure ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5027","03/24/2016 18:58:55",2,"Enable authenticated login in the webui ""The webui hits a number of endpoints to get the data that it displays: {{/state}}, {{/metrics/snapshot}}, {{/files/browse}}, {{/files/read}}, and maybe others? Once authentication is enabled on these endpoints, we need to add a login prompt to the webui so that users can provide credentials.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5028","03/24/2016 20:19:51",3,"Copy provisioner cannot replace directory with symlink ""I'm trying to play with the new image provisioner on our custom docker images, but one of layer failed to get copied, possibly due to a dangling symlink. Error log with Glog_v=1: {quote} I0324 05:42:48.926678 15067 copy.cpp:127] Copying layer path '/tmp/mesos/store/docker/layers/5df0888641196b88dcc1b97d04c74839f02a73b8a194a79e134426d6a8fcb0f1/rootfs' to rootfs '/var/lib/mesos/provisioner/containers/5f05be6c-c970-4539-aa64-fd0eef2ec7ae/backends/copy/rootfses/507173f3-e316-48a3-a96e-5fdea9ffe9f6' E0324 05:42:49.028506 15062 slave.cpp:3773] Container '5f05be6c-c970-4539-aa64-fd0eef2ec7ae' for executor 'test' of framework 75932a89-1514-4011-bafe-beb6a208bb2d-0004 failed to start: Collect failed: Collect failed: Failed to copy layer: cp: cannot overwrite directory ‘/var/lib/mesos/provisioner/containers/5f05be6c-c970-4539-aa64-fd0eef2ec7ae/backends/copy/rootfses/507173f3-e316-48a3-a96e-5fdea9ffe9f6/etc/apt’ with non-directory {quote} Content of _/tmp/mesos/store/docker/layers/5df0888641196b88dcc1b97d04c74839f02a73b8a194a79e134426d6a8fcb0f1/rootfs/etc/apt_ points to a non-existing absolute path (cannot provide exact path but it's a result of us trying to mount apt keys into docker container at build time). I believe what happened is that we executed a script at build time, which contains equivalent of: {quote} rm -rf /etc/apt/* && ln -sf /build-mount-point/ /etc/apt {quote} ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5041","03/26/2016 13:44:00",8,"Add cgroups unified isolator ""Implement the cgroups unified isolator for Mesos containerizer.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5044","03/26/2016 20:08:54",1,"Temporary directories created by environment->mkdtemp cleanup can be problematic. ""Currently in mesos test, we have the temporary directories created by `environment->mkdtemp()` cleaned up until the end of the test suite, which can be problematic. For instance, if we have many tests in a test suite, each of those tests is performing large size disk read/write in its temp dir, which may lead to out of disk issue on some resource limited machines. We should have these temp dir created by `environment->mkdtemp` cleaned up during each test teardown. 
Currently we only clean up the sandbox for each test.""","",0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5058","03/29/2016 09:32:17",2,"Expose per-role dominant share ""A client's dominant share is crucial measure for how likely it is to receive offers in the future. We should expose it in a dedicated allocator metric. As currently the {{HierarchicalAllocatorProcess}} does work with generic {{Sorters}} which have no notion of DRF share we need to decide whether and where we would need to limit generality in order to expose the innards of the currently used {{DRFSorter}}. ""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5065","03/30/2016 16:40:28",3,"Support docker private registry default docker config. ""For docker private registry with authentication, docker containerizer should support using a default .docker/config.json file (or the old .dockercfg file) locally, which is pre-handled by operators. The default docker config file should be exposed by a new agent flag `--docker_config`. ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5073","03/31/2016 15:30:40",1,"Mesos allocator leaks role sorter and quota role sorters. ""The Mesos allocator {{internal::HierarchicalAllocatorProcess}} owns two raw pointer members {{roleSorter}} and {{quotaRoleSorter}}, but fails to properly manage their lifetime; they are e.g., not cleaned up in the allocator process destructor. Since currently we do not recreate an existing allocator in production code they seem to be unaffected by these leaks; they do affect tests though where we create allocators multiple times.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5078","04/01/2016 17:47:30",5,"Document TaskStatus reasons ""We should document the possible {{reason}} values that can be found in the {{TaskStatus}} message.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5109","04/04/2016 21:08:25",2,"Capture the error code in `ErrnoError` and `WindowsError`. ""The {{ErrnoError}} and {{WindowsError}} classes simply construct the error string via a mechanism such as {{strerror}}. They should also capture the error code, as it is an essential piece of information for such an error type.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5110","04/04/2016 21:09:22",3,"Introduce an additional template parameter to `Try` for typed error. ""Add an additional template parameter {{E}} to the {{Try}} class template. """," template class Try { /* ... */ }; ",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5111","04/04/2016 21:12:51",2,"Update `network::connect` to use the typed error state of `Try`. ""{{network::connect}} function returns a {{Try}} currently and the caller is required to inspect the state of {{errno}} out-of-band. {{network::connect}} should really return something like a {{Try}}.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5112","04/04/2016 21:19:04",2,"Introduce `WindowsSocketError`. ""{{WindowsError}} invokes {{::GetLastError}} to retrieve the error code. Windows has a {{::WSAGetLastError}} function which at the interface level, is intended for failed socket operations. 
We should introduce a {{WindowsSocketError}} which invokes {{::WSAGetLastError}} and use them accordingly.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5113","04/04/2016 21:19:19",1,"`network/cni` isolator crashes when launched without the --network_cni_plugins_dir flag ""If we start the agent with the --isolation='network/cni' but do not specify the --network_cni_plugins_dir flag, the agent crashes with the following stack dump: 0x00007ffff2324cc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56 56 ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory. (gdb) bt #0 0x00007ffff2324cc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56 #1 0x00007ffff23280d8 in __GI_abort () at abort.c:89 #2 0x00007ffff231db86 in __assert_fail_base (fmt=0x7ffff246e830 """"%s%s%s:%u: %s%sAssertion `%s' failed.\n%n"""", assertion=assertion@entry=0x451f5c """"isSome()"""", file=file@entry=0x451f65 """"../../3rdparty/libprocess/3rdparty/stout/include/stout/option.hpp"""", line=line@entry=111, function=function@entry=0x45294a """"const T &Option >::get() const & [T = std::basic_string]"""") at assert.c:92 #3 0x00007ffff231dc32 in __GI___assert_fail (assertion=0x451f5c """"isSome()"""", file=0x451f65 """"../../3rdparty/libprocess/3rdparty/stout/include/stout/option.hpp"""", line=111, function=0x45294a """"const T &Option >::get() const & [T = std::basic_string]"""") at assert.c:101 #4 0x0000000000432c0d in Option::get() const & (this=0x6c1ea8) at ../../3rdparty/libprocess/3rdparty/stout/include/stout/option.hpp:111 Python Exception list index out of range: #5 0x00007ffff63ef7cc in mesos::internal::slave::NetworkCniIsolatorProcess::recover (this=0x6c1e70, states=empty std::list, orphans=...) at ../../src/slave/containerizer/mesos/isolators/network/cni/cni.cpp:331 #6 0x00007ffff60cddd8 in operator() (this=0x7fffc0001e00, process=0x6c1ef8) at ../../3rdparty/libprocess/include/process/dispatch.hpp:239 #7 0x00007ffff60cd972 in std::_Function_handler process::dispatch > const&, hashset, std::equal_to > const&, std::list >, hashset, std::equal_to > >(process::PID const&, process::Future (mesos::internal::slave::MesosIsolatorProcess::*)(std::list > const&, hashset, std::equal_to > const&), std::list >, hashset, std::equal_to >)::{lambda(process::ProcessBase*)#1}>::_M_invoke(std::_Any_data const&, process::ProcessBase*) (__functor=..., __args=0x6c1ef8) at /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/functional:2071 #8 0x00007ffff6a6bf38 in std::function::operator()(process::ProcessBase*) const (this=0x7fffc0001d70, __args=0x6c1ef8) at /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/functional:2471 #9 0x00007ffff6a561b4 in process::ProcessBase::visit (this=0x6c1ef8, event=...) at ../../../3rdparty/libprocess/src/process.cpp:3130 #10 0x00007ffff6aac5fe in process::DispatchEvent::visit (this=0x7fffc0001570, visitor=0x6c1ef8) at ../../../3rdparty/libprocess/include/process/event.hpp:161 #11 0x00007ffff55e9c91 in process::ProcessBase::serve (this=0x6c1ef8, event=...) at ../../3rdparty/libprocess/include/process/process.hpp:82 #12 0x00007ffff6a53ed4 in process::ProcessManager::resume (this=0x67cca0, process=0x6c1ef8) at ../../../3rdparty/libprocess/src/process.cpp:2570 #13 0x00007ffff6a5bff5 in operator() (this=0x697d70, joining=...) 
at ../../../3rdparty/libprocess/src/process.cpp:2218 #14 0x00007ffff6a5bf33 in std::_Bind)>::__call(std::tuple<>&&, std::_Index_tuple<0ul>) (this=0x697d70, __args=) at /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/functional:1295 #15 0x00007ffff6a5bee6 in std::_Bind)>::operator()<, void>() (this=0x697d70) at /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/functional:1353 #16 0x00007ffff6a5be95 in std::_Bind_simple)> ()>::_M_invoke<>(std::_Index_tuple<>) (this=0x697d70) at /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/functional:1731 #17 0x00007ffff6a5be65 in std::_Bind_simple)> ()>::operator()() (this=0x697d70) at /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/functional:1720 #18 0x00007ffff6a5be3c in std::thread::_Impl)> ()> >::_M_run() (this=0x697d58) at /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/thread:115 #19 0x00007ffff2b98a60 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6 #20 0x00007ffff26bb182 in start_thread (arg=0x7fffeb92d700) at pthread_create.c:312 #21 0x00007ffff23e847d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111 (gdb) frame 4 #4 0x0000000000432c0d in Option::get() const & (this=0x6c1ea8) at ../../3rdparty/libprocess/3rdparty/stout/include/stout/option.hpp:111""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5114","04/04/2016 22:01:33",2,"Flags::parse does not handle empty string correctly. ""A missing default for quorum size has generated the following master config This was causing each elected leader to attempt replica recovery. E.g. {{group.cpp:700] Trying to get '/mesos/log_replicas/0000000012' in ZooKeeper}} And eventually: {{master.cpp:1458] Recovery failed: Failed to recover registrar: Failed to perform fetch within 1mins}} Full log on one of the masters https://gist.github.com/clehene/09a9ddfe49b92a5deb4c1b421f63479e All masters and zk nodes were reachable over the network. Also once the quorum was configured the master recovery protocol finished gracefully. """," MESOS_WORK_DIR=""""/var/lib/mesos/master"""" MESOS_ZK=""""zk://zk1:2181,zk2:2181,zk3:2181/mesos"""" MESOS_QUORUM= MESOS_PORT=5050 MESOS_CLUSTER=""""mesos"""" MESOS_LOG_DIR=""""/var/log/mesos"""" MESOS_LOGBUFSECS=1 MESOS_LOGGING_LEVEL=""""INFO"""" ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5115","04/04/2016 23:17:22",2,"Grant access to /dev/nvidiactl and /dev/nvidia-uvm in the Nvidia GPU isolator. "" Calls to 'nvidia-smi' fail inside a container even if access to a GPU has been granted. Moreover, access to /dev/nvidiactl is actually required for a container to do anything useful with a GPU even if it has access to it. We should grant/revoke access to /dev/nvidiactl and /dev/nvidia-uvm as GPUs are added and removed from a container in the Nvidia GPU isolator.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5124","04/05/2016 16:36:24",3,"TASK_KILLING is not supported by mesos-execute. ""Recently {{TASK_KILLING}} state (MESOS-4547) have been introduced to Mesos. We should add support for this feature to {{mesos-execute}}.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5127","04/06/2016 05:10:24",1,"Reset `LIBPROCESS_IP` in `network\cni` isolator. 
""Currently the `LIBPROCESS_IP` environment variable was being set to the Agent IP if the environment variable has not be defined by the `Framework`. For containers having their own IP address (as with containers on CNI networks) this becomes a problem since the command executor tries to bind to the `LIBPROCESS_IP` that does not exist in its network namespace, and fails. Thus, for containers launched on CNI networks the `LIBPROCESS_IP` should not be set, or rather is set to """"0.0.0.0"""", allowing the container to bind to the IP address provided by the CNI network.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5128","04/06/2016 06:19:18",3,"PersistentVolumeTest.AccessPersistentVolume is flaky ""Observed on ASF CI: """," [ RUN ] DiskResource/PersistentVolumeTest.AccessPersistentVolume/0 I0405 17:29:19.134435 31837 cluster.cpp:139] Creating default 'local' authorizer I0405 17:29:19.251143 31837 leveldb.cpp:174] Opened db in 116.386403ms I0405 17:29:19.310050 31837 leveldb.cpp:181] Compacted db in 58.80688ms I0405 17:29:19.310180 31837 leveldb.cpp:196] Created db iterator in 37145ns I0405 17:29:19.310199 31837 leveldb.cpp:202] Seeked to beginning of db in 4212ns I0405 17:29:19.310210 31837 leveldb.cpp:271] Iterated through 0 keys in the db in 410ns I0405 17:29:19.310279 31837 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0405 17:29:19.311069 31861 recover.cpp:447] Starting replica recovery I0405 17:29:19.311362 31861 recover.cpp:473] Replica is in EMPTY status I0405 17:29:19.312641 31861 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (14359)@172.17.0.4:43972 I0405 17:29:19.313045 31860 recover.cpp:193] Received a recover response from a replica in EMPTY status I0405 17:29:19.313608 31860 recover.cpp:564] Updating replica status to STARTING I0405 17:29:19.316416 31867 master.cpp:376] Master 9565ff6f-f1b6-4259-8430-690e635c391f (4090d10eba90) started on 172.17.0.4:43972 I0405 17:29:19.316470 31867 master.cpp:378] Flags at startup: --acls="""""""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate=""""true"""" --authenticate_http=""""true"""" --authenticate_slaves=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/0A9ELu/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_slave_ping_timeouts=""""5"""" --quiet=""""false"""" --recovery_slave_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --slave_ping_timeout=""""15secs"""" --slave_reregister_timeout=""""10mins"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/mesos/mesos-0.29.0/_inst/share/mesos/webui"""" --work_dir=""""/tmp/0A9ELu/master"""" --zk_session_timeout=""""10secs"""" I0405 17:29:19.316938 31867 master.cpp:427] Master only allowing authenticated frameworks to register I0405 17:29:19.316951 31867 master.cpp:432] Master only allowing authenticated agents to register I0405 17:29:19.316961 31867 credentials.hpp:37] Loading credentials for 
authentication from '/tmp/0A9ELu/credentials' I0405 17:29:19.317402 31867 master.cpp:474] Using default 'crammd5' authenticator I0405 17:29:19.317643 31867 master.cpp:545] Using default 'basic' HTTP authenticator I0405 17:29:19.317854 31867 master.cpp:583] Authorization enabled I0405 17:29:19.318081 31864 whitelist_watcher.cpp:77] No whitelist given I0405 17:29:19.318079 31861 hierarchical.cpp:144] Initialized hierarchical allocator process I0405 17:29:19.320838 31864 master.cpp:1826] The newly elected leader is master@172.17.0.4:43972 with id 9565ff6f-f1b6-4259-8430-690e635c391f I0405 17:29:19.320888 31864 master.cpp:1839] Elected as the leading master! I0405 17:29:19.320909 31864 master.cpp:1526] Recovering from registrar I0405 17:29:19.321218 31871 registrar.cpp:331] Recovering registrar I0405 17:29:19.347045 31860 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 33.164133ms I0405 17:29:19.347126 31860 replica.cpp:320] Persisted replica status to STARTING I0405 17:29:19.347611 31869 recover.cpp:473] Replica is in STARTING status I0405 17:29:19.349215 31871 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (14361)@172.17.0.4:43972 I0405 17:29:19.349653 31870 recover.cpp:193] Received a recover response from a replica in STARTING status I0405 17:29:19.350236 31866 recover.cpp:564] Updating replica status to VOTING I0405 17:29:19.388882 31864 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 38.38299ms I0405 17:29:19.388993 31864 replica.cpp:320] Persisted replica status to VOTING I0405 17:29:19.389369 31856 recover.cpp:578] Successfully joined the Paxos group I0405 17:29:19.389735 31856 recover.cpp:462] Recover process terminated I0405 17:29:19.390476 31868 log.cpp:659] Attempting to start the writer I0405 17:29:19.392125 31862 replica.cpp:493] Replica received implicit promise request from (14362)@172.17.0.4:43972 with proposal 1 I0405 17:29:19.430706 31862 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 38.505062ms I0405 17:29:19.430816 31862 replica.cpp:342] Persisted promised to 1 I0405 17:29:19.431918 31856 coordinator.cpp:238] Coordinator attempting to fill missing positions I0405 17:29:19.433725 31861 replica.cpp:388] Replica received explicit promise request from (14363)@172.17.0.4:43972 for position 0 with proposal 2 I0405 17:29:19.472491 31861 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 38.659492ms I0405 17:29:19.472595 31861 replica.cpp:712] Persisted action at 0 I0405 17:29:19.474556 31864 replica.cpp:537] Replica received write request for position 0 from (14364)@172.17.0.4:43972 I0405 17:29:19.474652 31864 leveldb.cpp:436] Reading position from leveldb took 49423ns I0405 17:29:19.528175 31864 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 53.443616ms I0405 17:29:19.528300 31864 replica.cpp:712] Persisted action at 0 I0405 17:29:19.529389 31865 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 I0405 17:29:19.571137 31865 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 41.676495ms I0405 17:29:19.571254 31865 replica.cpp:712] Persisted action at 0 I0405 17:29:19.571302 31865 replica.cpp:697] Replica learned NOP action at position 0 I0405 17:29:19.572322 31856 log.cpp:675] Writer started with ending position 0 I0405 17:29:19.574060 31861 leveldb.cpp:436] Reading position from leveldb took 83200ns I0405 17:29:19.575417 31864 registrar.cpp:364] Successfully fetched the registry (0B) in 0ns I0405 17:29:19.575565 31864 
registrar.cpp:463] Applied 1 operations in 46419ns; attempting to update the 'registry' I0405 17:29:19.576517 31857 log.cpp:683] Attempting to append 170 bytes to the log I0405 17:29:19.576849 31857 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 I0405 17:29:19.578390 31857 replica.cpp:537] Replica received write request for position 1 from (14365)@172.17.0.4:43972 I0405 17:29:19.780277 31857 leveldb.cpp:341] Persisting action (189 bytes) to leveldb took 201.808617ms I0405 17:29:19.780366 31857 replica.cpp:712] Persisted action at 1 I0405 17:29:19.782024 31857 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 I0405 17:29:19.823770 31857 leveldb.cpp:341] Persisting action (191 bytes) to leveldb took 41.667662ms I0405 17:29:19.823851 31857 replica.cpp:712] Persisted action at 1 I0405 17:29:19.823889 31857 replica.cpp:697] Replica learned APPEND action at position 1 I0405 17:29:19.825701 31867 registrar.cpp:508] Successfully updated the 'registry' in 0ns I0405 17:29:19.825929 31867 registrar.cpp:394] Successfully recovered registrar I0405 17:29:19.826015 31857 log.cpp:702] Attempting to truncate the log to 1 I0405 17:29:19.826262 31867 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 I0405 17:29:19.827647 31867 replica.cpp:537] Replica received write request for position 2 from (14366)@172.17.0.4:43972 I0405 17:29:19.828018 31857 master.cpp:1634] Recovered 0 agents from the Registry (131B) ; allowing 10mins for agents to re-register I0405 17:29:19.828065 31861 hierarchical.cpp:171] Skipping recovery of hierarchical allocator: nothing to recover I0405 17:29:19.865555 31867 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 37.822178ms I0405 17:29:19.865661 31867 replica.cpp:712] Persisted action at 2 I0405 17:29:19.866921 31867 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 I0405 17:29:19.907341 31867 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 40.356649ms I0405 17:29:19.907531 31867 leveldb.cpp:399] Deleting ~1 keys from leveldb took 91109ns I0405 17:29:19.907560 31867 replica.cpp:712] Persisted action at 2 I0405 17:29:19.907599 31867 replica.cpp:697] Replica learned TRUNCATE action at position 2 I0405 17:29:19.923305 31837 resources.cpp:572] Parsing resources as JSON failed: cpus:2;mem:2048 Trying semicolon-delimited string format instead I0405 17:29:19.926491 31837 containerizer.cpp:155] Using isolation: posix/cpu,posix/mem,filesystem/posix W0405 17:29:19.927836 31837 backend.cpp:66] Failed to create 'bind' backend: BindBackend requires root privileges I0405 17:29:19.932029 31862 slave.cpp:200] Agent started on 441)@172.17.0.4:43972 I0405 17:29:19.932086 31862 slave.cpp:201] Flags at startup: --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""true"""" --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/DiskResource_PersistentVolumeTest_AccessPersistentVolume_0_fJS7AC/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" 
--docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/DiskResource_PersistentVolumeTest_AccessPersistentVolume_0_fJS7AC/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_credentials=""""/tmp/DiskResource_PersistentVolumeTest_AccessPersistentVolume_0_fJS7AC/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mesos/mesos-0.29.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""[{""""name"""":""""cpus"""",""""role"""":""""*"""",""""scalar"""":{""""value"""":2.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""role"""":""""*"""",""""scalar"""":{""""value"""":2048.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""role"""":""""role1"""",""""scalar"""":{""""value"""":4096.0},""""type"""":""""SCALAR""""}]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/DiskResource_PersistentVolumeTest_AccessPersistentVolume_0_fJS7AC"""" I0405 17:29:19.932665 31862 credentials.hpp:86] Loading credential for authentication from '/tmp/DiskResource_PersistentVolumeTest_AccessPersistentVolume_0_fJS7AC/credential' I0405 17:29:19.932934 31862 slave.cpp:338] Agent using credential for: test-principal I0405 17:29:19.932968 31862 credentials.hpp:37] Loading credentials for authentication from '/tmp/DiskResource_PersistentVolumeTest_AccessPersistentVolume_0_fJS7AC/http_credentials' I0405 17:29:19.933284 31862 slave.cpp:390] Using default 'basic' HTTP authenticator I0405 17:29:19.934916 31837 sched.cpp:222] Version: 0.29.0 I0405 17:29:19.935566 31862 slave.cpp:589] Agent resources: cpus(*):2; mem(*):2048; disk(role1):4096; ports(*):[31000-32000] I0405 17:29:19.935664 31862 slave.cpp:597] Agent attributes: [ ] I0405 17:29:19.935679 31862 slave.cpp:602] Agent hostname: 4090d10eba90 I0405 17:29:19.938390 31864 state.cpp:57] Recovering state from '/tmp/DiskResource_PersistentVolumeTest_AccessPersistentVolume_0_fJS7AC/meta' I0405 17:29:19.940608 31869 sched.cpp:326] New master detected at master@172.17.0.4:43972 I0405 17:29:19.940749 31869 sched.cpp:382] Authenticating with master master@172.17.0.4:43972 I0405 17:29:19.940773 31869 sched.cpp:389] Using default CRAM-MD5 authenticatee I0405 17:29:19.942371 31869 authenticatee.cpp:121] Creating new client SASL connection I0405 17:29:19.942873 31859 master.cpp:5679] Authenticating scheduler-bdf68f7f-d938-47ed-a132-bb3f218628bf@172.17.0.4:43972 I0405 17:29:19.943156 31859 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(896)@172.17.0.4:43972 I0405 
17:29:19.943507 31863 authenticator.cpp:98] Creating new server SASL connection I0405 17:29:19.943740 31859 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0405 17:29:19.943783 31859 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0405 17:29:19.943892 31859 authenticator.cpp:203] Received SASL authentication start I0405 17:29:19.943977 31859 authenticator.cpp:325] Authentication requires more steps I0405 17:29:19.944066 31859 authenticatee.cpp:258] Received SASL authentication step I0405 17:29:19.944164 31859 authenticator.cpp:231] Received SASL authentication step I0405 17:29:19.944193 31859 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '4090d10eba90' server FQDN: '4090d10eba90' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0405 17:29:19.944206 31859 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0405 17:29:19.944268 31859 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0405 17:29:19.944300 31859 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '4090d10eba90' server FQDN: '4090d10eba90' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0405 17:29:19.944313 31859 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0405 17:29:19.944321 31859 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0405 17:29:19.944339 31859 authenticator.cpp:317] Authentication success I0405 17:29:19.944541 31859 authenticatee.cpp:298] Authentication success I0405 17:29:19.944655 31859 master.cpp:5709] Successfully authenticated principal 'test-principal' at scheduler-bdf68f7f-d938-47ed-a132-bb3f218628bf@172.17.0.4:43972 I0405 17:29:19.944737 31859 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(896)@172.17.0.4:43972 I0405 17:29:19.945111 31859 sched.cpp:472] Successfully authenticated with master master@172.17.0.4:43972 I0405 17:29:19.945132 31859 sched.cpp:777] Sending SUBSCRIBE call to master@172.17.0.4:43972 I0405 17:29:19.945591 31859 sched.cpp:810] Will retry registration in 372.80738ms if necessary I0405 17:29:19.945744 31865 master.cpp:2346] Received SUBSCRIBE call for framework 'default' at scheduler-bdf68f7f-d938-47ed-a132-bb3f218628bf@172.17.0.4:43972 I0405 17:29:19.945838 31865 master.cpp:1865] Authorizing framework principal 'test-principal' to receive offers for role 'role1' I0405 17:29:19.946194 31865 master.cpp:2417] Subscribing framework default with checkpointing disabled and capabilities [ ] I0405 17:29:19.946866 31866 hierarchical.cpp:266] Added framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:19.946974 31866 hierarchical.cpp:1490] No resources available to allocate! I0405 17:29:19.947010 31866 hierarchical.cpp:1585] No inverse offers to send out! 
I0405 17:29:19.947054 31865 sched.cpp:704] Framework registered with 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:19.947074 31866 hierarchical.cpp:1141] Performed allocation for 0 agents in 178242ns I0405 17:29:19.947124 31865 sched.cpp:718] Scheduler::registered took 38907ns I0405 17:29:19.948712 31866 status_update_manager.cpp:200] Recovering status update manager I0405 17:29:19.948901 31866 containerizer.cpp:416] Recovering containerizer I0405 17:29:19.951021 31866 provisioner.cpp:245] Provisioner recovery complete I0405 17:29:19.951802 31866 slave.cpp:4773] Finished recovery I0405 17:29:19.952518 31866 slave.cpp:4945] Querying resource estimator for oversubscribable resources I0405 17:29:19.953248 31866 slave.cpp:928] New master detected at master@172.17.0.4:43972 I0405 17:29:19.953305 31865 status_update_manager.cpp:174] Pausing sending status updates I0405 17:29:19.953626 31866 slave.cpp:991] Authenticating with master master@172.17.0.4:43972 I0405 17:29:19.953716 31866 slave.cpp:996] Using default CRAM-MD5 authenticatee I0405 17:29:19.954074 31866 slave.cpp:964] Detecting new master I0405 17:29:19.954167 31861 authenticatee.cpp:121] Creating new client SASL connection I0405 17:29:19.954372 31866 slave.cpp:4959] Received oversubscribable resources from the resource estimator I0405 17:29:19.954756 31866 master.cpp:5679] Authenticating slave(441)@172.17.0.4:43972 I0405 17:29:19.954944 31861 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(897)@172.17.0.4:43972 I0405 17:29:19.955368 31863 authenticator.cpp:98] Creating new server SASL connection I0405 17:29:19.955687 31861 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0405 17:29:19.955801 31861 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0405 17:29:19.956075 31861 authenticator.cpp:203] Received SASL authentication start I0405 17:29:19.956279 31861 authenticator.cpp:325] Authentication requires more steps I0405 17:29:19.956455 31861 authenticatee.cpp:258] Received SASL authentication step I0405 17:29:19.956676 31861 authenticator.cpp:231] Received SASL authentication step I0405 17:29:19.956815 31861 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '4090d10eba90' server FQDN: '4090d10eba90' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0405 17:29:19.956907 31861 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0405 17:29:19.957044 31861 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0405 17:29:19.957166 31861 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '4090d10eba90' server FQDN: '4090d10eba90' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0405 17:29:19.957264 31861 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0405 17:29:19.957353 31861 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0405 17:29:19.957449 31861 authenticator.cpp:317] Authentication success I0405 17:29:19.957664 31857 authenticatee.cpp:298] Authentication success I0405 17:29:19.957813 31857 master.cpp:5709] Successfully authenticated principal 'test-principal' at slave(441)@172.17.0.4:43972 I0405 17:29:19.958008 31861 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(897)@172.17.0.4:43972 I0405 17:29:19.958732 31857 
slave.cpp:1061] Successfully authenticated with master master@172.17.0.4:43972 I0405 17:29:19.958930 31857 slave.cpp:1457] Will retry registration in 18.568334ms if necessary I0405 17:29:19.959262 31857 master.cpp:4390] Registering agent at slave(441)@172.17.0.4:43972 (4090d10eba90) with id 9565ff6f-f1b6-4259-8430-690e635c391f-S0 I0405 17:29:19.959934 31857 registrar.cpp:463] Applied 1 operations in 99197ns; attempting to update the 'registry' I0405 17:29:19.961587 31857 log.cpp:683] Attempting to append 343 bytes to the log I0405 17:29:19.961879 31857 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 I0405 17:29:19.963135 31857 replica.cpp:537] Replica received write request for position 3 from (14381)@172.17.0.4:43972 I0405 17:29:19.999408 31857 leveldb.cpp:341] Persisting action (362 bytes) to leveldb took 36.200109ms I0405 17:29:19.999512 31857 replica.cpp:712] Persisted action at 3 I0405 17:29:20.001049 31869 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 I0405 17:29:20.038849 31869 leveldb.cpp:341] Persisting action (364 bytes) to leveldb took 37.709507ms I0405 17:29:20.038930 31869 replica.cpp:712] Persisted action at 3 I0405 17:29:20.038965 31869 replica.cpp:697] Replica learned APPEND action at position 3 I0405 17:29:20.041484 31869 registrar.cpp:508] Successfully updated the 'registry' in 0ns I0405 17:29:20.041785 31869 log.cpp:702] Attempting to truncate the log to 3 I0405 17:29:20.042364 31859 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 I0405 17:29:20.043767 31859 replica.cpp:537] Replica received write request for position 4 from (14382)@172.17.0.4:43972 I0405 17:29:20.044585 31869 master.cpp:4458] Registered agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 at slave(441)@172.17.0.4:43972 (4090d10eba90) with cpus(*):2; mem(*):2048; disk(role1):4096; ports(*):[31000-32000] I0405 17:29:20.044910 31864 slave.cpp:1105] Registered with master master@172.17.0.4:43972; given agent ID 9565ff6f-f1b6-4259-8430-690e635c391f-S0 I0405 17:29:20.045075 31864 fetcher.cpp:81] Clearing fetcher cache I0405 17:29:20.045140 31870 hierarchical.cpp:476] Added agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 (4090d10eba90) with cpus(*):2; mem(*):2048; disk(role1):4096; ports(*):[31000-32000] (allocated: ) I0405 17:29:20.045581 31864 slave.cpp:1128] Checkpointing SlaveInfo to '/tmp/DiskResource_PersistentVolumeTest_AccessPersistentVolume_0_fJS7AC/meta/slaves/9565ff6f-f1b6-4259-8430-690e635c391f-S0/slave.info' I0405 17:29:20.045974 31864 slave.cpp:1165] Forwarding total oversubscribed resources I0405 17:29:20.046077 31864 slave.cpp:3664] Received ping from slave-observer(399)@172.17.0.4:43972 I0405 17:29:20.046193 31864 status_update_manager.cpp:181] Resuming sending status updates I0405 17:29:20.046289 31870 hierarchical.cpp:1585] No inverse offers to send out! 
I0405 17:29:20.046370 31870 hierarchical.cpp:1164] Performed allocation for agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 in 1.153377ms I0405 17:29:20.046499 31864 master.cpp:4802] Received update of agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 at slave(441)@172.17.0.4:43972 (4090d10eba90) with total oversubscribed resources I0405 17:29:20.047142 31868 hierarchical.cpp:534] Agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 (4090d10eba90) updated with oversubscribed resources (total: cpus(*):2; mem(*):2048; disk(role1):4096; ports(*):[31000-32000], allocated: disk(role1):4096; cpus(*):2; mem(*):2048; ports(*):[31000-32000]) I0405 17:29:20.047960 31868 hierarchical.cpp:1490] No resources available to allocate! I0405 17:29:20.048009 31868 hierarchical.cpp:1585] No inverse offers to send out! I0405 17:29:20.048065 31868 hierarchical.cpp:1164] Performed allocation for agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 in 866803ns I0405 17:29:20.048591 31864 master.cpp:5508] Sending 1 offers to framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 (default) at scheduler-bdf68f7f-d938-47ed-a132-bb3f218628bf@172.17.0.4:43972 I0405 17:29:20.049188 31860 sched.cpp:874] Scheduler::resourceOffers took 114867ns I0405 17:29:20.080921 31859 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 37.025538ms I0405 17:29:20.081001 31859 replica.cpp:712] Persisted action at 4 I0405 17:29:20.082425 31859 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 I0405 17:29:20.106056 31859 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 23.583037ms I0405 17:29:20.106205 31859 leveldb.cpp:399] Deleting ~2 keys from leveldb took 76995ns I0405 17:29:20.106240 31859 replica.cpp:712] Persisted action at 4 I0405 17:29:20.106278 31859 replica.cpp:697] Replica learned TRUNCATE action at position 4 I0405 17:29:20.119488 31837 resources.cpp:572] Parsing resources as JSON failed: cpus:1;mem:128 Trying semicolon-delimited string format instead I0405 17:29:20.121356 31859 master.cpp:3288] Processing ACCEPT call for offers: [ 9565ff6f-f1b6-4259-8430-690e635c391f-O0 ] on agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 at slave(441)@172.17.0.4:43972 (4090d10eba90) for framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 (default) at scheduler-bdf68f7f-d938-47ed-a132-bb3f218628bf@172.17.0.4:43972 I0405 17:29:20.121485 31859 master.cpp:3046] Authorizing principal 'test-principal' to create volumes I0405 17:29:20.121692 31859 master.cpp:2891] Authorizing framework principal 'test-principal' to launch task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 as user 'mesos' I0405 17:29:20.123877 31871 master.cpp:3617] Applying CREATE operation for volumes disk(role1)[id1:path1]:2048 from framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 (default) at scheduler-bdf68f7f-d938-47ed-a132-bb3f218628bf@172.17.0.4:43972 to agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 at slave(441)@172.17.0.4:43972 (4090d10eba90) I0405 17:29:20.125424 31871 master.cpp:6747] Sending checkpointed resources disk(role1)[id1:path1]:2048 to agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 at slave(441)@172.17.0.4:43972 (4090d10eba90) I0405 17:29:20.126397 31856 hierarchical.cpp:656] Updated allocation of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 on agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 from disk(role1):4096; cpus(*):2; mem(*):2048; ports(*):[31000-32000] to disk(role1):2048; cpus(*):2; mem(*):2048; ports(*):[31000-32000]; disk(role1)[id1:path1]:2048 I0405 17:29:20.126667 31871 master.hpp:177] Adding task 
91050005-0b1d-4a37-9ea1-f8ae1ff3b542 with resources cpus(*):1; mem(*):128; disk(role1)[id1:path1]:2048 on agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 (4090d10eba90) I0405 17:29:20.126875 31871 master.cpp:3773] Launching task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 (default) at scheduler-bdf68f7f-d938-47ed-a132-bb3f218628bf@172.17.0.4:43972 with resources cpus(*):1; mem(*):128; disk(role1)[id1:path1]:2048 on agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 at slave(441)@172.17.0.4:43972 (4090d10eba90) I0405 17:29:20.127390 31856 slave.cpp:2523] Updated checkpointed resources from to disk(role1)[id1:path1]:2048 I0405 17:29:20.127615 31856 slave.cpp:1497] Got assigned task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 for framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.127876 31856 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0405 17:29:20.127841 31871 hierarchical.cpp:893] Recovered disk(role1):2048; cpus(*):1; mem(*):1920; ports(*):[31000-32000] (total: cpus(*):2; mem(*):2048; disk(role1):2048; ports(*):[31000-32000]; disk(role1)[id1:path1]:2048, allocated: disk(role1)[id1:path1]:2048; cpus(*):1; mem(*):128) on agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 from framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.127913 31871 hierarchical.cpp:930] Framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 filtered agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 for 5secs I0405 17:29:20.128667 31856 slave.cpp:1616] Launching task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 for framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.128937 31856 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0405 17:29:20.129776 31856 paths.cpp:528] Trying to chown '/tmp/DiskResource_PersistentVolumeTest_AccessPersistentVolume_0_fJS7AC/slaves/9565ff6f-f1b6-4259-8430-690e635c391f-S0/frameworks/9565ff6f-f1b6-4259-8430-690e635c391f-0000/executors/91050005-0b1d-4a37-9ea1-f8ae1ff3b542/runs/bc8b48e5-dd32-4283-a1a6-e1988c82ae09' to user 'mesos' I0405 17:29:20.145324 31856 slave.cpp:5575] Launching executor 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/tmp/DiskResource_PersistentVolumeTest_AccessPersistentVolume_0_fJS7AC/slaves/9565ff6f-f1b6-4259-8430-690e635c391f-S0/frameworks/9565ff6f-f1b6-4259-8430-690e635c391f-0000/executors/91050005-0b1d-4a37-9ea1-f8ae1ff3b542/runs/bc8b48e5-dd32-4283-a1a6-e1988c82ae09' I0405 17:29:20.146057 31858 containerizer.cpp:675] Starting container 'bc8b48e5-dd32-4283-a1a6-e1988c82ae09' for executor '91050005-0b1d-4a37-9ea1-f8ae1ff3b542' of framework '9565ff6f-f1b6-4259-8430-690e635c391f-0000' I0405 17:29:20.146078 31856 slave.cpp:1834] Queuing task '91050005-0b1d-4a37-9ea1-f8ae1ff3b542' for executor '91050005-0b1d-4a37-9ea1-f8ae1ff3b542' of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.146203 31856 slave.cpp:881] Successfully attached file '/tmp/DiskResource_PersistentVolumeTest_AccessPersistentVolume_0_fJS7AC/slaves/9565ff6f-f1b6-4259-8430-690e635c391f-S0/frameworks/9565ff6f-f1b6-4259-8430-690e635c391f-0000/executors/91050005-0b1d-4a37-9ea1-f8ae1ff3b542/runs/bc8b48e5-dd32-4283-a1a6-e1988c82ae09' I0405 17:29:20.147619 31859 posix.cpp:206] Changing the ownership of the persistent volume at 
'/tmp/DiskResource_PersistentVolumeTest_AccessPersistentVolume_0_fJS7AC/volumes/roles/role1/id1' with uid 1000 and gid 1000 I0405 17:29:20.162421 31859 posix.cpp:250] Adding symlink from '/tmp/DiskResource_PersistentVolumeTest_AccessPersistentVolume_0_fJS7AC/volumes/roles/role1/id1' to '/tmp/DiskResource_PersistentVolumeTest_AccessPersistentVolume_0_fJS7AC/slaves/9565ff6f-f1b6-4259-8430-690e635c391f-S0/frameworks/9565ff6f-f1b6-4259-8430-690e635c391f-0000/executors/91050005-0b1d-4a37-9ea1-f8ae1ff3b542/runs/bc8b48e5-dd32-4283-a1a6-e1988c82ae09/path1' for persistent volume disk(role1)[id1:path1]:2048 of container bc8b48e5-dd32-4283-a1a6-e1988c82ae09 I0405 17:29:20.172133 31861 launcher.cpp:123] Forked child with pid '7927' for container 'bc8b48e5-dd32-4283-a1a6-e1988c82ae09' WARNING: Logging before InitGoogleLogging() is written to STDERR I0405 17:29:20.376197 7941 process.cpp:986] libprocess is initialized on 172.17.0.4:50952 with 16 worker threads I0405 17:29:20.378132 7941 logging.cpp:195] Logging to STDERR I0405 17:29:20.380861 7941 exec.cpp:150] Version: 0.29.0 I0405 17:29:20.396257 7966 exec.cpp:200] Executor started at: executor(1)@172.17.0.4:50952 with pid 7941 I0405 17:29:20.399426 31860 slave.cpp:2825] Got registration for executor '91050005-0b1d-4a37-9ea1-f8ae1ff3b542' of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 from executor(1)@172.17.0.4:50952 I0405 17:29:20.402995 7966 exec.cpp:225] Executor registered on agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 I0405 17:29:20.403014 31860 slave.cpp:1999] Sending queued task '91050005-0b1d-4a37-9ea1-f8ae1ff3b542' to executor '91050005-0b1d-4a37-9ea1-f8ae1ff3b542' of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 at executor(1)@172.17.0.4:50952 I0405 17:29:20.405624 7966 exec.cpp:237] Executor::registered took 393272ns I0405 17:29:20.406108 7966 exec.cpp:312] Executor asked to run task '91050005-0b1d-4a37-9ea1-f8ae1ff3b542' Registered executor on 4090d10eba90 I0405 17:29:20.406708 7966 exec.cpp:321] Executor::launchTask took 568039ns Starting task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 Forked command at 7972 sh -c 'echo abc > path1/file' I0405 17:29:20.411375 7966 exec.cpp:535] Executor sending status update TASK_RUNNING (UUID: cf4f8fe9-44f2-43ce-8868-b3a09b7298cf) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.413156 31857 slave.cpp:3184] Handling status update TASK_RUNNING (UUID: cf4f8fe9-44f2-43ce-8868-b3a09b7298cf) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 from executor(1)@172.17.0.4:50952 I0405 17:29:20.415714 31857 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: cf4f8fe9-44f2-43ce-8868-b3a09b7298cf) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.415788 31857 status_update_manager.cpp:497] Creating StatusUpdate stream for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.416345 31857 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: cf4f8fe9-44f2-43ce-8868-b3a09b7298cf) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 to the agent I0405 17:29:20.416720 31870 slave.cpp:3582] Forwarding the update TASK_RUNNING (UUID: cf4f8fe9-44f2-43ce-8868-b3a09b7298cf) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 to 
master@172.17.0.4:43972 I0405 17:29:20.416954 31870 slave.cpp:3476] Status update manager successfully handled status update TASK_RUNNING (UUID: cf4f8fe9-44f2-43ce-8868-b3a09b7298cf) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.416997 31870 slave.cpp:3492] Sending acknowledgement for status update TASK_RUNNING (UUID: cf4f8fe9-44f2-43ce-8868-b3a09b7298cf) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 to executor(1)@172.17.0.4:50952 I0405 17:29:20.417505 31870 master.cpp:4947] Status update TASK_RUNNING (UUID: cf4f8fe9-44f2-43ce-8868-b3a09b7298cf) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 from agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 at slave(441)@172.17.0.4:43972 (4090d10eba90) I0405 17:29:20.417549 31870 master.cpp:4995] Forwarding status update TASK_RUNNING (UUID: cf4f8fe9-44f2-43ce-8868-b3a09b7298cf) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.417724 31870 master.cpp:6608] Updating the state of task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0405 17:29:20.417943 7960 exec.cpp:358] Executor received status update acknowledgement cf4f8fe9-44f2-43ce-8868-b3a09b7298cf for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.418002 31870 sched.cpp:982] Scheduler::statusUpdate took 105225ns I0405 17:29:20.418623 31870 master.cpp:4102] Processing ACKNOWLEDGE call cf4f8fe9-44f2-43ce-8868-b3a09b7298cf for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 (default) at scheduler-bdf68f7f-d938-47ed-a132-bb3f218628bf@172.17.0.4:43972 on agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 I0405 17:29:20.419181 31860 status_update_manager.cpp:392] Received status update acknowledgement (UUID: cf4f8fe9-44f2-43ce-8868-b3a09b7298cf) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.419816 31860 slave.cpp:2594] Status update manager successfully handled status update acknowledgement (UUID: cf4f8fe9-44f2-43ce-8868-b3a09b7298cf) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.513465 7969 exec.cpp:535] Executor sending status update TASK_FINISHED (UUID: 128eb7af-a662-4cbb-9401-125dca38f719) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 Command exited with status 0 (pid: 7972) I0405 17:29:20.515449 31870 slave.cpp:3184] Handling status update TASK_FINISHED (UUID: 128eb7af-a662-4cbb-9401-125dca38f719) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 from executor(1)@172.17.0.4:50952 I0405 17:29:20.516875 31860 slave.cpp:5885] Terminating task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 I0405 17:29:20.517496 31867 posix.cpp:156] Removing symlink '/tmp/DiskResource_PersistentVolumeTest_AccessPersistentVolume_0_fJS7AC/slaves/9565ff6f-f1b6-4259-8430-690e635c391f-S0/frameworks/9565ff6f-f1b6-4259-8430-690e635c391f-0000/executors/91050005-0b1d-4a37-9ea1-f8ae1ff3b542/runs/bc8b48e5-dd32-4283-a1a6-e1988c82ae09/path1' for persistent volume disk(role1)[id1:path1]:2048 of container bc8b48e5-dd32-4283-a1a6-e1988c82ae09 I0405 
17:29:20.519361 31864 status_update_manager.cpp:320] Received status update TASK_FINISHED (UUID: 128eb7af-a662-4cbb-9401-125dca38f719) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.519850 31864 status_update_manager.cpp:374] Forwarding update TASK_FINISHED (UUID: 128eb7af-a662-4cbb-9401-125dca38f719) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 to the agent I0405 17:29:20.520678 31870 slave.cpp:3582] Forwarding the update TASK_FINISHED (UUID: 128eb7af-a662-4cbb-9401-125dca38f719) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 to master@172.17.0.4:43972 I0405 17:29:20.520901 31870 slave.cpp:3476] Status update manager successfully handled status update TASK_FINISHED (UUID: 128eb7af-a662-4cbb-9401-125dca38f719) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.520949 31870 slave.cpp:3492] Sending acknowledgement for status update TASK_FINISHED (UUID: 128eb7af-a662-4cbb-9401-125dca38f719) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 to executor(1)@172.17.0.4:50952 I0405 17:29:20.521550 31864 master.cpp:4947] Status update TASK_FINISHED (UUID: 128eb7af-a662-4cbb-9401-125dca38f719) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 from agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 at slave(441)@172.17.0.4:43972 (4090d10eba90) I0405 17:29:20.521610 31864 master.cpp:4995] Forwarding status update TASK_FINISHED (UUID: 128eb7af-a662-4cbb-9401-125dca38f719) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.522099 31871 sched.cpp:982] Scheduler::statusUpdate took 102502ns I0405 17:29:20.522367 31864 master.cpp:6608] Updating the state of task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 (latest state: TASK_FINISHED, status update state: TASK_FINISHED) I0405 17:29:20.524288 31871 hierarchical.cpp:1676] Filtered offer with disk(role1):2048; cpus(*):1; mem(*):1920; ports(*):[31000-32000] on agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 for framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.524379 31871 hierarchical.cpp:1490] No resources available to allocate! I0405 17:29:20.524451 31871 hierarchical.cpp:1585] No inverse offers to send out! 
I0405 17:29:20.524551 31871 hierarchical.cpp:1141] Performed allocation for 1 agents in 961746ns I0405 17:29:20.525182 31858 hierarchical.cpp:893] Recovered cpus(*):1; mem(*):128; disk(role1)[id1:path1]:2048 (total: cpus(*):2; mem(*):2048; disk(role1):2048; ports(*):[31000-32000]; disk(role1)[id1:path1]:2048, allocated: ) on agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 from framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.525197 31864 master.cpp:4102] Processing ACKNOWLEDGE call 128eb7af-a662-4cbb-9401-125dca38f719 for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 (default) at scheduler-bdf68f7f-d938-47ed-a132-bb3f218628bf@172.17.0.4:43972 on agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 I0405 17:29:20.525380 31864 master.cpp:6674] Removing task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 with resources cpus(*):1; mem(*):128; disk(role1)[id1:path1]:2048 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 on agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 at slave(441)@172.17.0.4:43972 (4090d10eba90) I0405 17:29:20.526067 31864 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 128eb7af-a662-4cbb-9401-125dca38f719) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.526425 31864 status_update_manager.cpp:528] Cleaning up status update stream for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.526917 31864 slave.cpp:2594] Status update manager successfully handled status update acknowledgement (UUID: 128eb7af-a662-4cbb-9401-125dca38f719) for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:20.527048 31864 slave.cpp:5926] Completing task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 I0405 17:29:20.527732 7964 exec.cpp:358] Executor received status update acknowledgement 128eb7af-a662-4cbb-9401-125dca38f719 for task 91050005-0b1d-4a37-9ea1-f8ae1ff3b542 of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:21.527920 31859 slave.cpp:3710] executor(1)@172.17.0.4:50952 exited ../../src/tests/persistent_volume_tests.cpp:825: Failure Failed to wait 15secs for offers I0405 17:29:35.542609 31856 master.cpp:1269] Framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 (default) at scheduler-bdf68f7f-d938-47ed-a132-bb3f218628bf@172.17.0.4:43972 disconnected I0405 17:29:35.542811 31856 master.cpp:2642] Disconnecting framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 (default) at scheduler-bdf68f7f-d938-47ed-a132-bb3f218628bf@172.17.0.4:43972 I0405 17:29:35.542994 31856 master.cpp:2666] Deactivating framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 (default) at scheduler-bdf68f7f-d938-47ed-a132-bb3f218628bf@172.17.0.4:43972 I0405 17:29:35.543349 31860 hierarchical.cpp:378] Deactivated framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:35.543501 31856 master.cpp:1293] Giving framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 (default) at scheduler-bdf68f7f-d938-47ed-a132-bb3f218628bf@172.17.0.4:43972 0ns to failover I0405 17:29:35.543903 31868 master.cpp:5360] Framework failover timeout, removing framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 (default) at scheduler-bdf68f7f-d938-47ed-a132-bb3f218628bf@172.17.0.4:43972 I0405 17:29:35.543936 31868 master.cpp:6093] Removing framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 (default) at scheduler-bdf68f7f-d938-47ed-a132-bb3f218628bf@172.17.0.4:43972 I0405 
17:29:35.544337 31861 slave.cpp:2215] Asked to shut down framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 by master@172.17.0.4:43972 I0405 17:29:35.544381 31861 slave.cpp:2240] Shutting down framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 I0405 17:29:35.544456 31861 slave.cpp:4398] Shutting down executor '91050005-0b1d-4a37-9ea1-f8ae1ff3b542' of framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 at executor(1)@172.17.0.4:50952 I0405 17:29:35.544960 31872 poll_socket.cpp:110] Socket error while connecting I0405 17:29:35.545013 31872 process.cpp:1650] Failed to send 'mesos.internal.ShutdownExecutorMessage' to '172.17.0.4:50952', connect: Socket error while connecting E0405 17:29:35.545106 31872 process.cpp:1958] Failed to shutdown socket with fd 27: Transport endpoint is not connected I0405 17:29:35.545474 31864 hierarchical.cpp:329] Removed framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 ../../src/tests/persistent_volume_tests.cpp:819: Failure Actual function call count doesn't match EXPECT_CALL(sched, resourceOffers(&driver, _))... Expected: to be called at least once Actual: never called - unsatisfied and active I0405 17:29:35.558538 31858 containerizer.cpp:1432] Destroying container 'bc8b48e5-dd32-4283-a1a6-e1988c82ae09' ../../src/tests/cluster.cpp:453: Failure Failed to wait 15secs for wait I0405 17:29:50.565403 31870 slave.cpp:800] Agent terminating I0405 17:29:50.565512 31870 slave.cpp:2215] Asked to shut down framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 by @0.0.0.0:0 W0405 17:29:50.565544 31870 slave.cpp:2236] Ignoring shutdown framework 9565ff6f-f1b6-4259-8430-690e635c391f-0000 because it is terminating I0405 17:29:50.574620 31866 master.cpp:1230] Agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 at slave(441)@172.17.0.4:43972 (4090d10eba90) disconnected I0405 17:29:50.574766 31866 master.cpp:2701] Disconnecting agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 at slave(441)@172.17.0.4:43972 (4090d10eba90) I0405 17:29:50.575003 31866 master.cpp:2720] Deactivating agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 at slave(441)@172.17.0.4:43972 (4090d10eba90) I0405 17:29:50.575294 31865 hierarchical.cpp:563] Agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 deactivated I0405 17:29:50.605787 31837 master.cpp:1083] Master terminating I0405 17:29:50.606533 31866 hierarchical.cpp:508] Removed agent 9565ff6f-f1b6-4259-8430-690e635c391f-S0 [ FAILED ] DiskResource/PersistentVolumeTest.AccessPersistentVolume/0, where GetParam() = 0 (31491 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5130","04/06/2016 16:48:10",1,"Enable `network/cni` isolator in `MesosContainerizer` as the default `network` isolator. ""Currently there are no default `network` isolators for `MesosContainerizer`. With the development of the `network/cni` isolator we have an interface to run Mesos on a multitude of IP networks. Given that it's based on an open standard (the CNI spec), which is gathering a lot of traction from vendors (calico, weave, coreOS) and already works on some default networks (bridge, ipvlan, macvlan), it makes sense to make it the default network isolator.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5135","04/07/2016 02:16:51",1,"Update existing documentation to include references to GPUs as a first-class resource.
""Specifically, the documentation in the following files should be udated: """," docs/attributes-resources.md docs/monitoring.md ",0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5142","04/07/2016 09:39:17",2,"Add agent flags for HTTP authorization. ""Flags should be added to the agent to: 1. Enable authorization ({{--authorizers}}) 2. Provide ACLs ({{--acls}})""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5144","04/07/2016 22:22:58",2,"Cleanup memory leaks in libprocess finalize() ""libprocess's {{finalize}} function currently leaks memory for a few different reasons. Cleaning up the {{SocketManager}} will be somewhat involved (MESOS-3910), but the remaining memory leaks should be fairly easy to address.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5146","04/08/2016 00:54:42",1,"MasterAllocatorTest/1.RebalancedForUpdatedWeights is flaky. ""Observed on the ASF CI: """," [ RUN ] MasterAllocatorTest/1.RebalancedForUpdatedWeights I0407 22:34:10.330394 29278 cluster.cpp:149] Creating default 'local' authorizer I0407 22:34:10.466182 29278 leveldb.cpp:174] Opened db in 135.608207ms I0407 22:34:10.516398 29278 leveldb.cpp:181] Compacted db in 50.159558ms I0407 22:34:10.516464 29278 leveldb.cpp:196] Created db iterator in 34959ns I0407 22:34:10.516484 29278 leveldb.cpp:202] Seeked to beginning of db in 10195ns I0407 22:34:10.516496 29278 leveldb.cpp:271] Iterated through 0 keys in the db in 7324ns I0407 22:34:10.516547 29278 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0407 22:34:10.517277 29298 recover.cpp:447] Starting replica recovery I0407 22:34:10.517693 29300 recover.cpp:473] Replica is in EMPTY status I0407 22:34:10.520251 29310 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (4775)@172.17.0.3:35855 I0407 22:34:10.520611 29311 recover.cpp:193] Received a recover response from a replica in EMPTY status I0407 22:34:10.521164 29299 recover.cpp:564] Updating replica status to STARTING I0407 22:34:10.523435 29298 master.cpp:382] Master f59f9057-a5c7-43e1-b129-96862e640a12 (129e11060069) started on 172.17.0.3:35855 I0407 22:34:10.523473 29298 master.cpp:384] Flags at startup: --acls="""""""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate=""""true"""" --authenticate_http=""""true"""" --authenticate_slaves=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/3rZY8C/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_slave_ping_timeouts=""""5"""" --quiet=""""false"""" --recovery_slave_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --slave_ping_timeout=""""15secs"""" --slave_reregister_timeout=""""10mins"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/mesos/mesos-0.29.0/_inst/share/mesos/webui"""" --work_dir=""""/tmp/3rZY8C/master"""" --zk_session_timeout=""""10secs"""" I0407 
22:34:10.523885 29298 master.cpp:433] Master only allowing authenticated frameworks to register I0407 22:34:10.523901 29298 master.cpp:438] Master only allowing authenticated agents to register I0407 22:34:10.523913 29298 credentials.hpp:37] Loading credentials for authentication from '/tmp/3rZY8C/credentials' I0407 22:34:10.524298 29298 master.cpp:480] Using default 'crammd5' authenticator I0407 22:34:10.524441 29298 master.cpp:551] Using default 'basic' HTTP authenticator I0407 22:34:10.524564 29298 master.cpp:589] Authorization enabled I0407 22:34:10.525269 29305 hierarchical.cpp:145] Initialized hierarchical allocator process I0407 22:34:10.525333 29305 whitelist_watcher.cpp:77] No whitelist given I0407 22:34:10.527331 29298 master.cpp:1832] The newly elected leader is master@172.17.0.3:35855 with id f59f9057-a5c7-43e1-b129-96862e640a12 I0407 22:34:10.527441 29298 master.cpp:1845] Elected as the leading master! I0407 22:34:10.527545 29298 master.cpp:1532] Recovering from registrar I0407 22:34:10.527889 29298 registrar.cpp:331] Recovering registrar I0407 22:34:10.549734 29299 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 28.25177ms I0407 22:34:10.549782 29299 replica.cpp:320] Persisted replica status to STARTING I0407 22:34:10.550010 29299 recover.cpp:473] Replica is in STARTING status I0407 22:34:10.551352 29299 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (4777)@172.17.0.3:35855 I0407 22:34:10.551676 29299 recover.cpp:193] Received a recover response from a replica in STARTING status I0407 22:34:10.552315 29308 recover.cpp:564] Updating replica status to VOTING I0407 22:34:10.574865 29308 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 22.413614ms I0407 22:34:10.574928 29308 replica.cpp:320] Persisted replica status to VOTING I0407 22:34:10.575103 29308 recover.cpp:578] Successfully joined the Paxos group I0407 22:34:10.575346 29308 recover.cpp:462] Recover process terminated I0407 22:34:10.575913 29308 log.cpp:659] Attempting to start the writer I0407 22:34:10.577512 29308 replica.cpp:493] Replica received implicit promise request from (4778)@172.17.0.3:35855 with proposal 1 I0407 22:34:10.599984 29308 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 22.453613ms I0407 22:34:10.600026 29308 replica.cpp:342] Persisted promised to 1 I0407 22:34:10.601773 29304 coordinator.cpp:238] Coordinator attempting to fill missing positions I0407 22:34:10.603757 29307 replica.cpp:388] Replica received explicit promise request from (4779)@172.17.0.3:35855 for position 0 with proposal 2 I0407 22:34:10.634392 29307 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 30.269987ms I0407 22:34:10.634829 29307 replica.cpp:712] Persisted action at 0 I0407 22:34:10.637017 29297 replica.cpp:537] Replica received write request for position 0 from (4780)@172.17.0.3:35855 I0407 22:34:10.637099 29297 leveldb.cpp:436] Reading position from leveldb took 52948ns I0407 22:34:10.676170 29297 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 38.917487ms I0407 22:34:10.676352 29297 replica.cpp:712] Persisted action at 0 I0407 22:34:10.677564 29306 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 I0407 22:34:10.717959 29306 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 40.306229ms I0407 22:34:10.718202 29306 replica.cpp:712] Persisted action at 0 I0407 22:34:10.718399 29306 replica.cpp:697] Replica learned NOP action at position 0 I0407 22:34:10.719883 29306 
log.cpp:675] Writer started with ending position 0 I0407 22:34:10.721688 29305 leveldb.cpp:436] Reading position from leveldb took 75934ns I0407 22:34:10.723640 29306 registrar.cpp:364] Successfully fetched the registry (0B) in 195648us I0407 22:34:10.723999 29306 registrar.cpp:463] Applied 1 operations in 108099ns; attempting to update the 'registry' I0407 22:34:10.725077 29311 log.cpp:683] Attempting to append 170 bytes to the log I0407 22:34:10.725328 29308 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 I0407 22:34:10.726552 29299 replica.cpp:537] Replica received write request for position 1 from (4781)@172.17.0.3:35855 I0407 22:34:10.759747 29299 leveldb.cpp:341] Persisting action (189 bytes) to leveldb took 33.089719ms I0407 22:34:10.759976 29299 replica.cpp:712] Persisted action at 1 I0407 22:34:10.761739 29299 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 I0407 22:34:10.801522 29299 leveldb.cpp:341] Persisting action (191 bytes) to leveldb took 39.694064ms I0407 22:34:10.801602 29299 replica.cpp:712] Persisted action at 1 I0407 22:34:10.801638 29299 replica.cpp:697] Replica learned APPEND action at position 1 I0407 22:34:10.803371 29311 registrar.cpp:508] Successfully updated the 'registry' in 79.163904ms I0407 22:34:10.803829 29311 registrar.cpp:394] Successfully recovered registrar I0407 22:34:10.804585 29311 master.cpp:1640] Recovered 0 agents from the Registry (131B) ; allowing 10mins for agents to re-register I0407 22:34:10.805269 29308 log.cpp:702] Attempting to truncate the log to 1 I0407 22:34:10.805721 29310 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 I0407 22:34:10.805276 29296 hierarchical.cpp:172] Skipping recovery of hierarchical allocator: nothing to recover I0407 22:34:10.806529 29307 replica.cpp:537] Replica received write request for position 2 from (4782)@172.17.0.3:35855 I0407 22:34:10.843320 29307 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 36.77593ms I0407 22:34:10.843531 29307 replica.cpp:712] Persisted action at 2 I0407 22:34:10.845369 29311 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 I0407 22:34:10.885098 29311 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 39.641102ms I0407 22:34:10.885401 29311 leveldb.cpp:399] Deleting ~1 keys from leveldb took 88701ns I0407 22:34:10.885745 29311 replica.cpp:712] Persisted action at 2 I0407 22:34:10.885862 29311 replica.cpp:697] Replica learned TRUNCATE action at position 2 I0407 22:34:10.900660 29278 containerizer.cpp:155] Using isolation: posix/cpu,posix/mem,filesystem/posix W0407 22:34:10.901793 29278 backend.cpp:66] Failed to create 'bind' backend: BindBackend requires root privileges I0407 22:34:10.905488 29302 slave.cpp:201] Agent started on 111)@172.17.0.3:35855 I0407 22:34:10.905553 29302 slave.cpp:202] Flags at startup: --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""true"""" --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_9aCAYa/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" 
--docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_9aCAYa/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_credentials=""""/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_9aCAYa/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mesos/mesos-0.29.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;mem:1024;disk:4096;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_9aCAYa"""" I0407 22:34:10.906365 29302 credentials.hpp:86] Loading credential for authentication from '/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_9aCAYa/credential' I0407 22:34:10.906787 29302 slave.cpp:339] Agent using credential for: test-principal I0407 22:34:10.907202 29302 credentials.hpp:37] Loading credentials for authentication from '/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_9aCAYa/http_credentials' I0407 22:34:10.907713 29302 slave.cpp:391] Using default 'basic' HTTP authenticator I0407 22:34:10.908499 29302 resources.cpp:572] Parsing resources as JSON failed: cpus:2;mem:1024;disk:4096;ports:[31000-32000] Trying semicolon-delimited string format instead I0407 22:34:10.910189 29302 slave.cpp:590] Agent resources: cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000] I0407 22:34:10.910362 29302 slave.cpp:598] Agent attributes: [ ] I0407 22:34:10.910465 29302 slave.cpp:603] Agent hostname: 129e11060069 I0407 22:34:10.913280 29303 state.cpp:57] Recovering state from '/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_9aCAYa/meta' I0407 22:34:10.914621 29303 status_update_manager.cpp:200] Recovering status update manager I0407 22:34:10.915226 29303 containerizer.cpp:416] Recovering containerizer I0407 22:34:10.917246 29301 provisioner.cpp:245] Provisioner recovery complete I0407 22:34:10.917733 29301 slave.cpp:4784] Finished recovery I0407 22:34:10.918226 29301 slave.cpp:4956] Querying resource estimator for oversubscribable resources I0407 22:34:10.918529 29301 slave.cpp:4970] Received oversubscribable resources from the resource estimator I0407 22:34:10.918908 29304 slave.cpp:939] New master detected at master@172.17.0.3:35855 I0407 22:34:10.918988 29304 slave.cpp:1002] Authenticating with master master@172.17.0.3:35855 I0407 22:34:10.919098 29301 
status_update_manager.cpp:174] Pausing sending status updates I0407 22:34:10.919309 29304 slave.cpp:1007] Using default CRAM-MD5 authenticatee I0407 22:34:10.919535 29304 slave.cpp:975] Detecting new master I0407 22:34:10.919747 29308 authenticatee.cpp:121] Creating new client SASL connection I0407 22:34:10.920413 29308 master.cpp:5695] Authenticating slave(111)@172.17.0.3:35855 I0407 22:34:10.920650 29308 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(278)@172.17.0.3:35855 I0407 22:34:10.921020 29308 authenticator.cpp:98] Creating new server SASL connection I0407 22:34:10.921308 29308 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0407 22:34:10.921424 29308 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0407 22:34:10.921596 29308 authenticator.cpp:203] Received SASL authentication start I0407 22:34:10.921752 29308 authenticator.cpp:325] Authentication requires more steps I0407 22:34:10.921957 29307 authenticatee.cpp:258] Received SASL authentication step I0407 22:34:10.922178 29308 authenticator.cpp:231] Received SASL authentication step I0407 22:34:10.922214 29308 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '129e11060069' server FQDN: '129e11060069' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0407 22:34:10.922229 29308 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0407 22:34:10.922281 29308 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0407 22:34:10.922309 29308 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '129e11060069' server FQDN: '129e11060069' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0407 22:34:10.922322 29308 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0407 22:34:10.922332 29308 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0407 22:34:10.922353 29308 authenticator.cpp:317] Authentication success I0407 22:34:10.922436 29307 authenticatee.cpp:298] Authentication success I0407 22:34:10.922587 29308 master.cpp:5725] Successfully authenticated principal 'test-principal' at slave(111)@172.17.0.3:35855 I0407 22:34:10.922668 29299 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(278)@172.17.0.3:35855 I0407 22:34:10.923256 29307 slave.cpp:1072] Successfully authenticated with master master@172.17.0.3:35855 I0407 22:34:10.923429 29307 slave.cpp:1468] Will retry registration in 3.220345ms if necessary I0407 22:34:10.923707 29302 master.cpp:4406] Registering agent at slave(111)@172.17.0.3:35855 (129e11060069) with id f59f9057-a5c7-43e1-b129-96862e640a12-S0 I0407 22:34:10.924239 29309 registrar.cpp:463] Applied 1 operations in 105794ns; attempting to update the 'registry' I0407 22:34:10.925787 29309 log.cpp:683] Attempting to append 339 bytes to the log I0407 22:34:10.926028 29309 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 I0407 22:34:10.927139 29309 replica.cpp:537] Replica received write request for position 3 from (4797)@172.17.0.3:35855 I0407 22:34:10.929083 29305 slave.cpp:1468] Will retry registration in 39.293556ms if necessary I0407 22:34:10.929363 29305 master.cpp:4394] Ignoring register agent message from slave(111)@172.17.0.3:35855 (129e11060069) as admission is already in progress I0407 
22:34:10.968843 29309 leveldb.cpp:341] Persisting action (358 bytes) to leveldb took 41.68025ms I0407 22:34:10.969005 29309 replica.cpp:712] Persisted action at 3 I0407 22:34:10.969741 29309 slave.cpp:1468] Will retry registration in 54.852242ms if necessary I0407 22:34:10.970118 29309 master.cpp:4394] Ignoring register agent message from slave(111)@172.17.0.3:35855 (129e11060069) as admission is already in progress I0407 22:34:10.970852 29306 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 I0407 22:34:11.010634 29306 leveldb.cpp:341] Persisting action (360 bytes) to leveldb took 39.680272ms I0407 22:34:11.010840 29306 replica.cpp:712] Persisted action at 3 I0407 22:34:11.011014 29306 replica.cpp:697] Replica learned APPEND action at position 3 I0407 22:34:11.014020 29306 registrar.cpp:508] Successfully updated the 'registry' in 89.684224ms I0407 22:34:11.014181 29296 log.cpp:702] Attempting to truncate the log to 3 I0407 22:34:11.014606 29296 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 I0407 22:34:11.015836 29298 replica.cpp:537] Replica received write request for position 4 from (4798)@172.17.0.3:35855 I0407 22:34:11.016973 29296 master.cpp:4474] Registered agent f59f9057-a5c7-43e1-b129-96862e640a12-S0 at slave(111)@172.17.0.3:35855 (129e11060069) with cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000] I0407 22:34:11.017518 29304 hierarchical.cpp:476] Added agent f59f9057-a5c7-43e1-b129-96862e640a12-S0 (129e11060069) with cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000] (allocated: ) I0407 22:34:11.017763 29311 slave.cpp:1116] Registered with master master@172.17.0.3:35855; given agent ID f59f9057-a5c7-43e1-b129-96862e640a12-S0 I0407 22:34:11.018362 29311 fetcher.cpp:81] Clearing fetcher cache I0407 22:34:11.018870 29311 slave.cpp:1139] Checkpointing SlaveInfo to '/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_9aCAYa/meta/slaves/f59f9057-a5c7-43e1-b129-96862e640a12-S0/slave.info' I0407 22:34:11.018890 29307 status_update_manager.cpp:181] Resuming sending status updates I0407 22:34:11.019182 29304 hierarchical.cpp:1491] No resources available to allocate! I0407 22:34:11.019304 29304 hierarchical.cpp:1165] Performed allocation for agent f59f9057-a5c7-43e1-b129-96862e640a12-S0 in 1.077349ms I0407 22:34:11.019493 29311 slave.cpp:1176] Forwarding total oversubscribed resources I0407 22:34:11.019726 29311 slave.cpp:3675] Received ping from slave-observer(112)@172.17.0.3:35855 I0407 22:34:11.019878 29299 master.cpp:4818] Received update of agent f59f9057-a5c7-43e1-b129-96862e640a12-S0 at slave(111)@172.17.0.3:35855 (129e11060069) with total oversubscribed resources I0407 22:34:11.020845 29305 hierarchical.cpp:534] Agent f59f9057-a5c7-43e1-b129-96862e640a12-S0 (129e11060069) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000], allocated: ) I0407 22:34:11.021005 29305 hierarchical.cpp:1491] No resources available to allocate! 
I0407 22:34:11.021065 29305 hierarchical.cpp:1165] Performed allocation for agent f59f9057-a5c7-43e1-b129-96862e640a12-S0 in 173907ns I0407 22:34:11.022289 29278 containerizer.cpp:155] Using isolation: posix/cpu,posix/mem,filesystem/posix W0407 22:34:11.023422 29278 backend.cpp:66] Failed to create 'bind' backend: BindBackend requires root privileges I0407 22:34:11.026309 29309 slave.cpp:201] Agent started on 112)@172.17.0.3:35855 I0407 22:34:11.026410 29309 slave.cpp:202] Flags at startup: --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""true"""" --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_h8KW9O/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_h8KW9O/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_credentials=""""/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_h8KW9O/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mesos/mesos-0.29.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;mem:1024;disk:4096;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_h8KW9O"""" I0407 22:34:11.027070 29309 credentials.hpp:86] Loading credential for authentication from '/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_h8KW9O/credential' I0407 22:34:11.027308 29309 slave.cpp:339] Agent using credential for: test-principal I0407 22:34:11.027354 29309 credentials.hpp:37] Loading credentials for authentication from '/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_h8KW9O/http_credentials' I0407 22:34:11.027698 29309 slave.cpp:391] Using default 'basic' HTTP authenticator I0407 22:34:11.028147 29309 resources.cpp:572] Parsing resources as JSON failed: cpus:2;mem:1024;disk:4096;ports:[31000-32000] Trying semicolon-delimited string format instead I0407 22:34:11.028854 
29309 slave.cpp:590] Agent resources: cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000] I0407 22:34:11.028998 29309 slave.cpp:598] Agent attributes: [ ] I0407 22:34:11.029064 29309 slave.cpp:603] Agent hostname: 129e11060069 I0407 22:34:11.031188 29309 state.cpp:57] Recovering state from '/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_h8KW9O/meta' I0407 22:34:11.031844 29300 status_update_manager.cpp:200] Recovering status update manager I0407 22:34:11.032091 29300 containerizer.cpp:416] Recovering containerizer I0407 22:34:11.033805 29300 provisioner.cpp:245] Provisioner recovery complete I0407 22:34:11.034364 29300 slave.cpp:4784] Finished recovery I0407 22:34:11.061807 29300 slave.cpp:4956] Querying resource estimator for oversubscribable resources I0407 22:34:11.062371 29300 slave.cpp:939] New master detected at master@172.17.0.3:35855 I0407 22:34:11.062450 29300 slave.cpp:1002] Authenticating with master master@172.17.0.3:35855 I0407 22:34:11.062469 29300 slave.cpp:1007] Using default CRAM-MD5 authenticatee I0407 22:34:11.062630 29300 slave.cpp:975] Detecting new master I0407 22:34:11.062737 29300 slave.cpp:4970] Received oversubscribable resources from the resource estimator I0407 22:34:11.062820 29300 status_update_manager.cpp:174] Pausing sending status updates I0407 22:34:11.062952 29300 authenticatee.cpp:121] Creating new client SASL connection I0407 22:34:11.063413 29300 master.cpp:5695] Authenticating slave(112)@172.17.0.3:35855 I0407 22:34:11.063591 29300 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(279)@172.17.0.3:35855 I0407 22:34:11.063907 29300 authenticator.cpp:98] Creating new server SASL connection I0407 22:34:11.064159 29300 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0407 22:34:11.064201 29300 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0407 22:34:11.064296 29300 authenticator.cpp:203] Received SASL authentication start I0407 22:34:11.064363 29300 authenticator.cpp:325] Authentication requires more steps I0407 22:34:11.064443 29300 authenticatee.cpp:258] Received SASL authentication step I0407 22:34:11.064537 29300 authenticator.cpp:231] Received SASL authentication step I0407 22:34:11.064569 29300 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '129e11060069' server FQDN: '129e11060069' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0407 22:34:11.064584 29300 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0407 22:34:11.064640 29300 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0407 22:34:11.064668 29300 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '129e11060069' server FQDN: '129e11060069' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0407 22:34:11.064680 29300 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0407 22:34:11.064689 29300 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0407 22:34:11.064708 29300 authenticator.cpp:317] Authentication success I0407 22:34:11.064856 29300 authenticatee.cpp:298] Authentication success I0407 22:34:11.064941 29300 master.cpp:5725] Successfully authenticated principal 'test-principal' at slave(112)@172.17.0.3:35855 I0407 22:34:11.065019 29300 authenticator.cpp:431] Authentication session cleanup for 
crammd5_authenticatee(279)@172.17.0.3:35855 I0407 22:34:11.065431 29305 slave.cpp:1072] Successfully authenticated with master master@172.17.0.3:35855 I0407 22:34:11.065580 29305 slave.cpp:1468] Will retry registration in 14.268351ms if necessary I0407 22:34:11.065948 29305 master.cpp:4406] Registering agent at slave(112)@172.17.0.3:35855 (129e11060069) with id f59f9057-a5c7-43e1-b129-96862e640a12-S1 I0407 22:34:11.066653 29296 registrar.cpp:463] Applied 1 operations in 190813ns; attempting to update the 'registry' I0407 22:34:11.075197 29298 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 59.338116ms I0407 22:34:11.075359 29298 replica.cpp:712] Persisted action at 4 I0407 22:34:11.076177 29301 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 I0407 22:34:11.080481 29309 slave.cpp:1468] Will retry registration in 23.018984ms if necessary I0407 22:34:11.080770 29309 master.cpp:4394] Ignoring register agent message from slave(112)@172.17.0.3:35855 (129e11060069) as admission is already in progress I0407 22:34:11.100519 29301 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 24.288152ms I0407 22:34:11.100792 29301 leveldb.cpp:399] Deleting ~2 keys from leveldb took 98264ns I0407 22:34:11.100883 29301 replica.cpp:712] Persisted action at 4 I0407 22:34:11.101002 29301 replica.cpp:697] Replica learned TRUNCATE action at position 4 I0407 22:34:11.102180 29309 log.cpp:683] Attempting to append 505 bytes to the log I0407 22:34:11.102334 29301 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 5 I0407 22:34:11.103551 29309 replica.cpp:537] Replica received write request for position 5 from (4813)@172.17.0.3:35855 I0407 22:34:11.105705 29305 slave.cpp:1468] Will retry registration in 49.972787ms if necessary I0407 22:34:11.106020 29305 master.cpp:4394] Ignoring register agent message from slave(112)@172.17.0.3:35855 (129e11060069) as admission is already in progress I0407 22:34:11.126212 29309 leveldb.cpp:341] Persisting action (524 bytes) to leveldb took 22.638848ms I0407 22:34:11.126296 29309 replica.cpp:712] Persisted action at 5 I0407 22:34:11.127374 29305 replica.cpp:691] Replica received learned notice for position 5 from @0.0.0.0:0 I0407 22:34:11.150754 29305 leveldb.cpp:341] Persisting action (526 bytes) to leveldb took 23.376079ms I0407 22:34:11.150952 29305 replica.cpp:712] Persisted action at 5 I0407 22:34:11.150992 29305 replica.cpp:697] Replica learned APPEND action at position 5 I0407 22:34:11.154031 29305 registrar.cpp:508] Successfully updated the 'registry' in 87.26784ms I0407 22:34:11.154491 29305 log.cpp:702] Attempting to truncate the log to 5 I0407 22:34:11.154824 29305 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 6 I0407 22:34:11.155413 29308 slave.cpp:3675] Received ping from slave-observer(113)@172.17.0.3:35855 I0407 22:34:11.155467 29303 master.cpp:4474] Registered agent f59f9057-a5c7-43e1-b129-96862e640a12-S1 at slave(112)@172.17.0.3:35855 (129e11060069) with cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000] I0407 22:34:11.155580 29308 slave.cpp:1116] Registered with master master@172.17.0.3:35855; given agent ID f59f9057-a5c7-43e1-b129-96862e640a12-S1 I0407 22:34:11.155606 29308 fetcher.cpp:81] Clearing fetcher cache I0407 22:34:11.155856 29304 status_update_manager.cpp:181] Resuming sending status updates I0407 22:34:11.156281 29308 slave.cpp:1139] Checkpointing SlaveInfo to 
'/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_h8KW9O/meta/slaves/f59f9057-a5c7-43e1-b129-96862e640a12-S1/slave.info' I0407 22:34:11.156661 29304 replica.cpp:537] Replica received write request for position 6 from (4814)@172.17.0.3:35855 I0407 22:34:11.156949 29305 hierarchical.cpp:476] Added agent f59f9057-a5c7-43e1-b129-96862e640a12-S1 (129e11060069) with cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000] (allocated: ) I0407 22:34:11.157217 29305 hierarchical.cpp:1491] No resources available to allocate! I0407 22:34:11.157346 29305 hierarchical.cpp:1165] Performed allocation for agent f59f9057-a5c7-43e1-b129-96862e640a12-S1 in 304432ns I0407 22:34:11.157224 29308 slave.cpp:1176] Forwarding total oversubscribed resources I0407 22:34:11.157788 29303 master.cpp:4818] Received update of agent f59f9057-a5c7-43e1-b129-96862e640a12-S1 at slave(112)@172.17.0.3:35855 (129e11060069) with total oversubscribed resources I0407 22:34:11.158424 29303 hierarchical.cpp:534] Agent f59f9057-a5c7-43e1-b129-96862e640a12-S1 (129e11060069) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000], allocated: ) I0407 22:34:11.158633 29303 hierarchical.cpp:1491] No resources available to allocate! I0407 22:34:11.158699 29303 hierarchical.cpp:1165] Performed allocation for agent f59f9057-a5c7-43e1-b129-96862e640a12-S1 in 178482ns I0407 22:34:11.162139 29278 containerizer.cpp:155] Using isolation: posix/cpu,posix/mem,filesystem/posix W0407 22:34:11.192978 29278 backend.cpp:66] Failed to create 'bind' backend: BindBackend requires root privileges I0407 22:34:11.197527 29307 slave.cpp:201] Agent started on 113)@172.17.0.3:35855 I0407 22:34:11.197581 29307 slave.cpp:202] Flags at startup: --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""true"""" --authenticatee=""""crammd5"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_EG5sru/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_EG5sru/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_credentials=""""/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_EG5sru/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mesos/mesos-0.29.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" 
--qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;mem:1024;disk:4096;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_EG5sru"""" I0407 22:34:11.198328 29307 credentials.hpp:86] Loading credential for authentication from '/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_EG5sru/credential' I0407 22:34:11.198562 29307 slave.cpp:339] Agent using credential for: test-principal I0407 22:34:11.198598 29307 credentials.hpp:37] Loading credentials for authentication from '/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_EG5sru/http_credentials' I0407 22:34:11.198884 29307 slave.cpp:391] Using default 'basic' HTTP authenticator I0407 22:34:11.199286 29307 resources.cpp:572] Parsing resources as JSON failed: cpus:2;mem:1024;disk:4096;ports:[31000-32000] Trying semicolon-delimited string format instead I0407 22:34:11.199820 29307 slave.cpp:590] Agent resources: cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000] I0407 22:34:11.199905 29307 slave.cpp:598] Agent attributes: [ ] I0407 22:34:11.199920 29307 slave.cpp:603] Agent hostname: 129e11060069 I0407 22:34:11.201535 29297 state.cpp:57] Recovering state from '/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_EG5sru/meta' I0407 22:34:11.201773 29309 status_update_manager.cpp:200] Recovering status update manager I0407 22:34:11.202081 29307 containerizer.cpp:416] Recovering containerizer I0407 22:34:11.202180 29304 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 45.487899ms I0407 22:34:11.202221 29304 replica.cpp:712] Persisted action at 6 I0407 22:34:11.203219 29302 replica.cpp:691] Replica received learned notice for position 6 from @0.0.0.0:0 I0407 22:34:11.205412 29301 provisioner.cpp:245] Provisioner recovery complete I0407 22:34:11.205984 29301 slave.cpp:4784] Finished recovery I0407 22:34:11.206735 29301 slave.cpp:4956] Querying resource estimator for oversubscribable resources I0407 22:34:11.207351 29301 slave.cpp:4970] Received oversubscribable resources from the resource estimator I0407 22:34:11.207679 29301 slave.cpp:939] New master detected at master@172.17.0.3:35855 I0407 22:34:11.207804 29309 status_update_manager.cpp:174] Pausing sending status updates I0407 22:34:11.208039 29301 slave.cpp:1002] Authenticating with master master@172.17.0.3:35855 I0407 22:34:11.208072 29301 slave.cpp:1007] Using default CRAM-MD5 authenticatee I0407 22:34:11.208431 29301 slave.cpp:975] Detecting new master I0407 22:34:11.208650 29309 authenticatee.cpp:121] Creating new client SASL connection I0407 22:34:11.208976 29309 master.cpp:5695] Authenticating slave(113)@172.17.0.3:35855 I0407 22:34:11.209081 29307 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(280)@172.17.0.3:35855 I0407 22:34:11.209432 29304 authenticator.cpp:98] Creating new server SASL connection I0407 22:34:11.209971 29304 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0407 22:34:11.210103 29304 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0407 22:34:11.210382 29304 authenticator.cpp:203] Received SASL authentication start I0407 
22:34:11.210515 29304 authenticator.cpp:325] Authentication requires more steps I0407 22:34:11.210726 29304 authenticatee.cpp:258] Received SASL authentication step I0407 22:34:11.210940 29305 authenticator.cpp:231] Received SASL authentication step I0407 22:34:11.210980 29305 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '129e11060069' server FQDN: '129e11060069' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0407 22:34:11.210997 29305 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0407 22:34:11.211060 29305 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0407 22:34:11.211100 29305 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '129e11060069' server FQDN: '129e11060069' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0407 22:34:11.211175 29305 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0407 22:34:11.211244 29305 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0407 22:34:11.211272 29305 authenticator.cpp:317] Authentication success I0407 22:34:11.211462 29305 authenticatee.cpp:298] Authentication success I0407 22:34:11.211575 29305 master.cpp:5725] Successfully authenticated principal 'test-principal' at slave(113)@172.17.0.3:35855 I0407 22:34:11.211673 29305 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(280)@172.17.0.3:35855 I0407 22:34:11.212026 29305 slave.cpp:1072] Successfully authenticated with master master@172.17.0.3:35855 I0407 22:34:11.212280 29305 slave.cpp:1468] Will retry registration in 6.415977ms if necessary I0407 22:34:11.212704 29304 master.cpp:4406] Registering agent at slave(113)@172.17.0.3:35855 (129e11060069) with id f59f9057-a5c7-43e1-b129-96862e640a12-S2 I0407 22:34:11.213373 29311 registrar.cpp:463] Applied 1 operations in 154555ns; attempting to update the 'registry' I0407 22:34:11.223568 29303 master.cpp:4394] Ignoring register agent message from slave(113)@172.17.0.3:35855 (129e11060069) as admission is already in progress I0407 22:34:11.224171 29300 slave.cpp:1468] Will retry registration in 22.418267ms if necessary I0407 22:34:11.243433 29302 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 40.20863ms I0407 22:34:11.243851 29302 leveldb.cpp:399] Deleting ~2 keys from leveldb took 204965ns I0407 22:34:11.243980 29302 replica.cpp:712] Persisted action at 6 I0407 22:34:11.244148 29302 replica.cpp:697] Replica learned TRUNCATE action at position 6 I0407 22:34:11.245827 29302 log.cpp:683] Attempting to append 671 bytes to the log I0407 22:34:11.246206 29310 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 7 I0407 22:34:11.247114 29296 replica.cpp:537] Replica received write request for position 7 from (4829)@172.17.0.3:35855 I0407 22:34:11.248457 29304 slave.cpp:1468] Will retry registration in 14.981599ms if necessary I0407 22:34:11.248837 29302 master.cpp:4394] Ignoring register agent message from slave(113)@172.17.0.3:35855 (129e11060069) as admission is already in progress I0407 22:34:11.265728 29301 slave.cpp:1468] Will retry registration in 117.285894ms if necessary I0407 22:34:11.266026 29301 master.cpp:4394] Ignoring register agent message from slave(113)@172.17.0.3:35855 (129e11060069) as admission is already in progress I0407 22:34:11.278012 29296 leveldb.cpp:341] 
Persisting action (690 bytes) to leveldb took 30.789344ms I0407 22:34:11.278064 29296 replica.cpp:712] Persisted action at 7 I0407 22:34:11.278990 29303 replica.cpp:691] Replica received learned notice for position 7 from @0.0.0.0:0 I0407 22:34:11.337220 29303 leveldb.cpp:341] Persisting action (692 bytes) to leveldb took 58.231676ms I0407 22:34:11.337312 29303 replica.cpp:712] Persisted action at 7 I0407 22:34:11.337347 29303 replica.cpp:697] Replica learned APPEND action at position 7 I0407 22:34:11.340283 29305 registrar.cpp:508] Successfully updated the 'registry' in 126.71616ms I0407 22:34:11.340703 29309 log.cpp:702] Attempting to truncate the log to 7 I0407 22:34:11.341044 29309 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 8 I0407 22:34:11.341847 29309 slave.cpp:3675] Received ping from slave-observer(114)@172.17.0.3:35855 I0407 22:34:11.342489 29309 slave.cpp:1116] Registered with master master@172.17.0.3:35855; given agent ID f59f9057-a5c7-43e1-b129-96862e640a12-S2 I0407 22:34:11.342532 29309 fetcher.cpp:81] Clearing fetcher cache I0407 22:34:11.341804 29303 master.cpp:4474] Registered agent f59f9057-a5c7-43e1-b129-96862e640a12-S2 at slave(113)@172.17.0.3:35855 (129e11060069) with cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000] I0407 22:34:11.342871 29297 status_update_manager.cpp:181] Resuming sending status updates I0407 22:34:11.342267 29300 hierarchical.cpp:476] Added agent f59f9057-a5c7-43e1-b129-96862e640a12-S2 (129e11060069) with cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000] (allocated: ) I0407 22:34:11.342963 29299 replica.cpp:537] Replica received write request for position 8 from (4830)@172.17.0.3:35855 I0407 22:34:11.343101 29300 hierarchical.cpp:1491] No resources available to allocate! I0407 22:34:11.343178 29300 hierarchical.cpp:1165] Performed allocation for agent f59f9057-a5c7-43e1-b129-96862e640a12-S2 in 242921ns I0407 22:34:11.342921 29309 slave.cpp:1139] Checkpointing SlaveInfo to '/tmp/MasterAllocatorTest_1_RebalancedForUpdatedWeights_EG5sru/meta/slaves/f59f9057-a5c7-43e1-b129-96862e640a12-S2/slave.info' I0407 22:34:11.343636 29309 slave.cpp:1176] Forwarding total oversubscribed resources I0407 22:34:11.343863 29309 master.cpp:4818] Received update of agent f59f9057-a5c7-43e1-b129-96862e640a12-S2 at slave(113)@172.17.0.3:35855 (129e11060069) with total oversubscribed resources I0407 22:34:11.344173 29278 sched.cpp:224] Version: 0.29.0 I0407 22:34:11.344425 29309 hierarchical.cpp:534] Agent f59f9057-a5c7-43e1-b129-96862e640a12-S2 (129e11060069) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000], allocated: ) I0407 22:34:11.344568 29309 hierarchical.cpp:1491] No resources available to allocate! 
I0407 22:34:11.344621 29309 hierarchical.cpp:1165] Performed allocation for agent f59f9057-a5c7-43e1-b129-96862e640a12-S2 in 155620ns I0407 22:34:11.345155 29303 sched.cpp:328] New master detected at master@172.17.0.3:35855 I0407 22:34:11.345387 29303 sched.cpp:384] Authenticating with master master@172.17.0.3:35855 I0407 22:34:11.345479 29303 sched.cpp:391] Using default CRAM-MD5 authenticatee I0407 22:34:11.346035 29303 authenticatee.cpp:121] Creating new client SASL connection I0407 22:34:11.346884 29303 master.cpp:5695] Authenticating scheduler-72944dc9-e3cc-4ebc-bca6-d72c77ad6721@172.17.0.3:35855 I0407 22:34:11.347530 29303 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(281)@172.17.0.3:35855 I0407 22:34:11.349140 29303 authenticator.cpp:98] Creating new server SASL connection I0407 22:34:11.349580 29303 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0407 22:34:11.349707 29303 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0407 22:34:11.349957 29309 authenticator.cpp:203] Received SASL authentication start I0407 22:34:11.350040 29309 authenticator.cpp:325] Authentication requires more steps I0407 22:34:11.350168 29309 authenticatee.cpp:258] Received SASL authentication step I0407 22:34:11.350275 29309 authenticator.cpp:231] Received SASL authentication step I0407 22:34:11.350309 29309 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '129e11060069' server FQDN: '129e11060069' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0407 22:34:11.350323 29309 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0407 22:34:11.350375 29309 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0407 22:34:11.350407 29309 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '129e11060069' server FQDN: '129e11060069' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0407 22:34:11.350420 29309 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0407 22:34:11.350430 29309 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0407 22:34:11.350450 29309 authenticator.cpp:317] Authentication success I0407 22:34:11.350550 29303 authenticatee.cpp:298] Authentication success I0407 22:34:11.350647 29309 master.cpp:5725] Successfully authenticated principal 'test-principal' at scheduler-72944dc9-e3cc-4ebc-bca6-d72c77ad6721@172.17.0.3:35855 I0407 22:34:11.350803 29303 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(281)@172.17.0.3:35855 I0407 22:34:11.350986 29309 sched.cpp:474] Successfully authenticated with master master@172.17.0.3:35855 I0407 22:34:11.351011 29309 sched.cpp:779] Sending SUBSCRIBE call to master@172.17.0.3:35855 I0407 22:34:11.351109 29309 sched.cpp:812] Will retry registration in 82.651114ms if necessary I0407 22:34:11.351313 29296 master.cpp:2362] Received SUBSCRIBE call for framework 'default' at scheduler-72944dc9-e3cc-4ebc-bca6-d72c77ad6721@172.17.0.3:35855 I0407 22:34:11.351343 29296 master.cpp:1871] Authorizing framework principal 'test-principal' to receive offers for role 'role1' I0407 22:34:11.351662 29310 master.cpp:2433] Subscribing framework default with checkpointing disabled and capabilities [ ] I0407 22:34:11.352442 29311 hierarchical.cpp:267] Added framework 
f59f9057-a5c7-43e1-b129-96862e640a12-0000 I0407 22:34:11.353435 29309 sched.cpp:706] Framework registered with f59f9057-a5c7-43e1-b129-96862e640a12-0000 I0407 22:34:11.353519 29309 sched.cpp:720] Scheduler::registered took 66350ns I0407 22:34:11.355201 29311 hierarchical.cpp:1586] No inverse offers to send out! I0407 22:34:11.355293 29311 hierarchical.cpp:1142] Performed allocation for 3 agents in 2.836617ms I0407 22:34:11.356238 29301 master.cpp:5524] Sending 3 offers to framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 (default) at scheduler-72944dc9-e3cc-4ebc-bca6-d72c77ad6721@172.17.0.3:35855 I0407 22:34:11.357260 29311 sched.cpp:876] Scheduler::resourceOffers took 327028ns I0407 22:34:11.357628 29278 resources.cpp:572] Parsing resources as JSON failed: cpus:2;mem:1024;disk:4096;ports:[31000-32000] Trying semicolon-delimited string format instead I0407 22:34:11.358330 29278 resources.cpp:572] Parsing resources as JSON failed: cpus:2;mem:1024;disk:4096;ports:[31000-32000] Trying semicolon-delimited string format instead I0407 22:34:11.358959 29278 resources.cpp:572] Parsing resources as JSON failed: cpus:2;mem:1024;disk:4096;ports:[31000-32000] Trying semicolon-delimited string format instead I0407 22:34:11.360607 29278 sched.cpp:224] Version: 0.29.0 I0407 22:34:11.361264 29307 sched.cpp:328] New master detected at master@172.17.0.3:35855 I0407 22:34:11.361342 29307 sched.cpp:384] Authenticating with master master@172.17.0.3:35855 I0407 22:34:11.361366 29307 sched.cpp:391] Using default CRAM-MD5 authenticatee I0407 22:34:11.361670 29307 authenticatee.cpp:121] Creating new client SASL connection I0407 22:34:11.361959 29307 master.cpp:5695] Authenticating scheduler-37080386-2aa8-4592-bf09-8288bd04727a@172.17.0.3:35855 I0407 22:34:11.362195 29307 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(282)@172.17.0.3:35855 I0407 22:34:11.362535 29311 authenticator.cpp:98] Creating new server SASL connection I0407 22:34:11.362890 29307 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5 I0407 22:34:11.362926 29307 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5' I0407 22:34:11.363021 29307 authenticator.cpp:203] Received SASL authentication start I0407 22:34:11.363082 29307 authenticator.cpp:325] Authentication requires more steps I0407 22:34:11.363199 29311 authenticatee.cpp:258] Received SASL authentication step I0407 22:34:11.363313 29311 authenticator.cpp:231] Received SASL authentication step I0407 22:34:11.363406 29311 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '129e11060069' server FQDN: '129e11060069' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0407 22:34:11.363512 29311 auxprop.cpp:179] Looking up auxiliary property '*userPassword' I0407 22:34:11.363605 29311 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0407 22:34:11.363651 29311 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: '129e11060069' server FQDN: '129e11060069' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0407 22:34:11.363673 29311 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0407 22:34:11.363685 29311 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0407 22:34:11.363706 29311 authenticator.cpp:317] Authentication success I0407 22:34:11.363785 
29307 authenticatee.cpp:298] Authentication success I0407 22:34:11.363858 29297 master.cpp:5725] Successfully authenticated principal 'test-principal' at scheduler-37080386-2aa8-4592-bf09-8288bd04727a@172.17.0.3:35855 I0407 22:34:11.363903 29311 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(282)@172.17.0.3:35855 I0407 22:34:11.365274 29297 sched.cpp:474] Successfully authenticated with master master@172.17.0.3:35855 I0407 22:34:11.365301 29297 sched.cpp:779] Sending SUBSCRIBE call to master@172.17.0.3:35855 I0407 22:34:11.365396 29297 sched.cpp:812] Will retry registration in 1.739883809secs if necessary I0407 22:34:11.365500 29311 master.cpp:2362] Received SUBSCRIBE call for framework 'default' at scheduler-37080386-2aa8-4592-bf09-8288bd04727a@172.17.0.3:35855 I0407 22:34:11.365528 29311 master.cpp:1871] Authorizing framework principal 'test-principal' to receive offers for role 'role2' I0407 22:34:11.365952 29297 master.cpp:2433] Subscribing framework default with checkpointing disabled and capabilities [ ] I0407 22:34:11.366518 29297 sched.cpp:706] Framework registered with f59f9057-a5c7-43e1-b129-96862e640a12-0001 I0407 22:34:11.366564 29311 hierarchical.cpp:267] Added framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 I0407 22:34:11.366590 29297 sched.cpp:720] Scheduler::registered took 57363ns I0407 22:34:11.366768 29311 hierarchical.cpp:1491] No resources available to allocate! I0407 22:34:11.366837 29311 hierarchical.cpp:1586] No inverse offers to send out! I0407 22:34:11.366914 29311 hierarchical.cpp:1142] Performed allocation for 3 agents in 340908ns I0407 22:34:11.369886 29309 process.cpp:3165] Handling HTTP event for process 'master' with path: '/master/weights' I0407 22:34:11.370643 29309 http.cpp:313] HTTP PUT for /master/weights from 172.17.0.3:59397 I0407 22:34:11.370762 29309 weights_handler.cpp:58] Updating weights from request: '[{""""role"""":""""role2"""",""""weight"""":2.0}]' I0407 22:34:11.370908 29309 weights_handler.cpp:198] Authorizing principal 'test-principal' to update weights for roles '[ role2 ]' I0407 22:34:11.372067 29306 registrar.cpp:463] Applied 1 operations in 136060ns; attempting to update the 'registry' I0407 22:34:11.388222 29299 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 45.245469ms I0407 22:34:11.388381 29299 replica.cpp:712] Persisted action at 8 I0407 22:34:11.389389 29305 replica.cpp:691] Replica received learned notice for position 8 from @0.0.0.0:0 I0407 22:34:11.435415 29305 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 45.918275ms I0407 22:34:11.435688 29305 leveldb.cpp:399] Deleting ~2 keys from leveldb took 98518ns I0407 22:34:11.435835 29305 replica.cpp:712] Persisted action at 8 I0407 22:34:11.435956 29305 replica.cpp:697] Replica learned TRUNCATE action at position 8 I0407 22:34:11.437063 29310 log.cpp:683] Attempting to append 691 bytes to the log I0407 22:34:11.437297 29300 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 9 I0407 22:34:11.437979 29300 replica.cpp:537] Replica received write request for position 9 from (4834)@172.17.0.3:35855 I0407 22:34:11.479363 29300 leveldb.cpp:341] Persisting action (710 bytes) to leveldb took 41.36295ms I0407 22:34:11.479432 29300 replica.cpp:712] Persisted action at 9 I0407 22:34:11.480434 29296 replica.cpp:691] Replica received learned notice for position 9 from @0.0.0.0:0 I0407 22:34:11.521299 29296 leveldb.cpp:341] Persisting action (712 bytes) to leveldb took 40.855981ms I0407 22:34:11.521378 
29296 replica.cpp:712] Persisted action at 9 I0407 22:34:11.521412 29296 replica.cpp:697] Replica learned APPEND action at position 9 I0407 22:34:11.524554 29304 registrar.cpp:508] Successfully updated the 'registry' in 152.402176ms I0407 22:34:11.524790 29298 log.cpp:702] Attempting to truncate the log to 9 I0407 22:34:11.524960 29304 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 10 I0407 22:34:11.525243 29298 hierarchical.cpp:1491] No resources available to allocate! I0407 22:34:11.525387 29298 hierarchical.cpp:1586] No inverse offers to send out! I0407 22:34:11.525538 29298 hierarchical.cpp:1142] Performed allocation for 3 agents in 540681ns I0407 22:34:11.525856 29296 replica.cpp:537] Replica received write request for position 10 from (4835)@172.17.0.3:35855 I0407 22:34:11.526267 29308 sched.cpp:902] Rescinded offer f59f9057-a5c7-43e1-b129-96862e640a12-O1 I0407 22:34:11.526398 29308 sched.cpp:913] Scheduler::offerRescinded took 54437ns I0407 22:34:11.526425 29298 hierarchical.cpp:894] Recovered cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000], allocated: ) on agent f59f9057-a5c7-43e1-b129-96862e640a12-S2 from framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 I0407 22:34:11.527235 29299 sched.cpp:902] Rescinded offer f59f9057-a5c7-43e1-b129-96862e640a12-O2 I0407 22:34:11.527299 29299 sched.cpp:913] Scheduler::offerRescinded took 29764ns I0407 22:34:11.527825 29300 sched.cpp:902] Rescinded offer f59f9057-a5c7-43e1-b129-96862e640a12-O0 I0407 22:34:11.527920 29298 hierarchical.cpp:1586] No inverse offers to send out! I0407 22:34:11.527990 29298 hierarchical.cpp:1142] Performed allocation for 3 agents in 1.481251ms I0407 22:34:11.528009 29300 sched.cpp:913] Scheduler::offerRescinded took 333035ns I0407 22:34:11.528591 29298 hierarchical.cpp:894] Recovered cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000], allocated: ) on agent f59f9057-a5c7-43e1-b129-96862e640a12-S1 from framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 I0407 22:34:11.529536 29311 master.cpp:5524] Sending 1 offers to framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 (default) at scheduler-37080386-2aa8-4592-bf09-8288bd04727a@172.17.0.3:35855 I0407 22:34:11.529846 29298 hierarchical.cpp:894] Recovered cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000], allocated: ) on agent f59f9057-a5c7-43e1-b129-96862e640a12-S0 from framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 I0407 22:34:11.530747 29304 sched.cpp:876] Scheduler::resourceOffers took 128400ns I0407 22:34:11.560456 29296 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 34.585376ms I0407 22:34:11.560539 29296 replica.cpp:712] Persisted action at 10 I0407 22:34:11.564628 29303 replica.cpp:691] Replica received learned notice for position 10 from @0.0.0.0:0 I0407 22:34:11.601330 29303 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 36.57815ms I0407 22:34:11.601774 29303 leveldb.cpp:399] Deleting ~2 keys from leveldb took 221499ns I0407 22:34:11.601899 29303 replica.cpp:712] Persisted action at 10 I0407 22:34:11.602052 29303 replica.cpp:697] Replica learned TRUNCATE action at position 10 I0407 22:34:12.531602 29308 hierarchical.cpp:1586] No inverse offers to send out! 
I0407 22:34:12.532578 29308 hierarchical.cpp:1142] Performed allocation for 3 agents in 3.892929ms I0407 22:34:12.532403 29306 master.cpp:5524] Sending 1 offers to framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 (default) at scheduler-37080386-2aa8-4592-bf09-8288bd04727a@172.17.0.3:35855 ../../src/tests/master_allocator_tests.cpp:1587: Failure Mock function called more times than expected - returning directly. Function call: resourceOffers(0x7fffe87e3370, @0x2adef432e6f0 { 144-byte object }) Expected: to be called once Actual: called twice - over-saturated and active I0407 22:34:12.533665 29301 sched.cpp:876] Scheduler::resourceOffers took 250853ns I0407 22:34:12.533915 29306 master.cpp:5524] Sending 1 offers to framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 (default) at scheduler-72944dc9-e3cc-4ebc-bca6-d72c77ad6721@172.17.0.3:35855 I0407 22:34:12.534454 29306 sched.cpp:876] Scheduler::resourceOffers took 157733ns ../../src/tests/master_allocator_tests.cpp:1629: Failure Value of: framework2offers.get().size() Actual: 1 Expected: 2u Which is: 2 I0407 22:34:12.534997 29278 resources.cpp:572] Parsing resources as JSON failed: cpus:2;mem:1024;disk:4096;ports:[31000-32000] Trying semicolon-delimited string format instead I0407 22:34:12.537264 29301 master.cpp:1275] Framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 (default) at scheduler-37080386-2aa8-4592-bf09-8288bd04727a@172.17.0.3:35855 disconnected I0407 22:34:12.537297 29301 master.cpp:2658] Disconnecting framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 (default) at scheduler-37080386-2aa8-4592-bf09-8288bd04727a@172.17.0.3:35855 I0407 22:34:12.537330 29301 master.cpp:2682] Deactivating framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 (default) at scheduler-37080386-2aa8-4592-bf09-8288bd04727a@172.17.0.3:35855 W0407 22:34:12.537849 29301 master.hpp:1822] Master attempted to send message to disconnected framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 (default) at scheduler-37080386-2aa8-4592-bf09-8288bd04727a@172.17.0.3:35855 W0407 22:34:12.538306 29301 master.hpp:1822] Master attempted to send message to disconnected framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 (default) at scheduler-37080386-2aa8-4592-bf09-8288bd04727a@172.17.0.3:35855 I0407 22:34:12.538394 29301 master.cpp:1299] Giving framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 (default) at scheduler-37080386-2aa8-4592-bf09-8288bd04727a@172.17.0.3:35855 0ns to failover I0407 22:34:12.539371 29302 hierarchical.cpp:378] Deactivated framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 I0407 22:34:12.540053 29302 hierarchical.cpp:894] Recovered cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000], allocated: ) on agent f59f9057-a5c7-43e1-b129-96862e640a12-S0 from framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 I0407 22:34:12.540732 29302 hierarchical.cpp:894] Recovered cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000], allocated: ) on agent f59f9057-a5c7-43e1-b129-96862e640a12-S2 from framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 I0407 22:34:12.540974 29301 master.cpp:1275] Framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 (default) at scheduler-72944dc9-e3cc-4ebc-bca6-d72c77ad6721@172.17.0.3:35855 disconnected I0407 22:34:12.541178 29301 master.cpp:2658] Disconnecting framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 (default) at 
scheduler-72944dc9-e3cc-4ebc-bca6-d72c77ad6721@172.17.0.3:35855 I0407 22:34:12.541292 29301 master.cpp:2682] Deactivating framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 (default) at scheduler-72944dc9-e3cc-4ebc-bca6-d72c77ad6721@172.17.0.3:35855 I0407 22:34:12.541553 29300 hierarchical.cpp:378] Deactivated framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 I0407 22:34:12.542654 29300 hierarchical.cpp:894] Recovered cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):4096; ports(*):[31000-32000], allocated: ) on agent f59f9057-a5c7-43e1-b129-96862e640a12-S1 from framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 W0407 22:34:12.543051 29301 master.hpp:1822] Master attempted to send message to disconnected framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 (default) at scheduler-72944dc9-e3cc-4ebc-bca6-d72c77ad6721@172.17.0.3:35855 I0407 22:34:12.543525 29301 master.cpp:1299] Giving framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 (default) at scheduler-72944dc9-e3cc-4ebc-bca6-d72c77ad6721@172.17.0.3:35855 0ns to failover I0407 22:34:12.543861 29301 master.cpp:5376] Framework failover timeout, removing framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 (default) at scheduler-37080386-2aa8-4592-bf09-8288bd04727a@172.17.0.3:35855 I0407 22:34:12.543959 29301 master.cpp:6109] Removing framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 (default) at scheduler-37080386-2aa8-4592-bf09-8288bd04727a@172.17.0.3:35855 I0407 22:34:12.544445 29301 slave.cpp:2226] Asked to shut down framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 by master@172.17.0.3:35855 W0407 22:34:12.545446 29301 slave.cpp:2241] Cannot shut down unknown framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 I0407 22:34:12.544556 29300 slave.cpp:2226] Asked to shut down framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 by master@172.17.0.3:35855 W0407 22:34:12.545661 29300 slave.cpp:2241] Cannot shut down unknown framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 I0407 22:34:12.545774 29300 slave.cpp:811] Agent terminating I0407 22:34:12.544791 29305 hierarchical.cpp:329] Removed framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 I0407 22:34:12.545241 29296 master.cpp:5376] Framework failover timeout, removing framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 (default) at scheduler-72944dc9-e3cc-4ebc-bca6-d72c77ad6721@172.17.0.3:35855 I0407 22:34:12.544518 29302 slave.cpp:2226] Asked to shut down framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 by master@172.17.0.3:35855 I0407 22:34:12.546140 29296 master.cpp:6109] Removing framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 (default) at scheduler-72944dc9-e3cc-4ebc-bca6-d72c77ad6721@172.17.0.3:35855 W0407 22:34:12.546159 29302 slave.cpp:2241] Cannot shut down unknown framework f59f9057-a5c7-43e1-b129-96862e640a12-0001 I0407 22:34:12.546496 29296 master.cpp:1236] Agent f59f9057-a5c7-43e1-b129-96862e640a12-S0 at slave(111)@172.17.0.3:35855 (129e11060069) disconnected I0407 22:34:12.546527 29296 master.cpp:2717] Disconnecting agent f59f9057-a5c7-43e1-b129-96862e640a12-S0 at slave(111)@172.17.0.3:35855 (129e11060069) I0407 22:34:12.546581 29296 master.cpp:2736] Deactivating agent f59f9057-a5c7-43e1-b129-96862e640a12-S0 at slave(111)@172.17.0.3:35855 (129e11060069) I0407 22:34:12.546752 29296 slave.cpp:2226] Asked to shut down framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 by master@172.17.0.3:35855 W0407 22:34:12.546782 29296 slave.cpp:2241] Cannot shut down unknown framework 
f59f9057-a5c7-43e1-b129-96862e640a12-0000 I0407 22:34:12.546844 29296 slave.cpp:2226] Asked to shut down framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 by master@172.17.0.3:35855 W0407 22:34:12.546869 29296 slave.cpp:2241] Cannot shut down unknown framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 I0407 22:34:12.547111 29296 hierarchical.cpp:329] Removed framework f59f9057-a5c7-43e1-b129-96862e640a12-0000 I0407 22:34:12.547302 29296 hierarchical.cpp:563] Agent f59f9057-a5c7-43e1-b129-96862e640a12-S0 deactivated I0407 22:34:12.553478 29278 slave.cpp:811] Agent terminating I0407 22:34:12.553766 29306 master.cpp:1236] Agent f59f9057-a5c7-43e1-b129-96862e640a12-S1 at slave(112)@172.17.0.3:35855 (129e11060069) disconnected I0407 22:34:12.555483 29306 master.cpp:2717] Disconnecting agent f59f9057-a5c7-43e1-b129-96862e640a12-S1 at slave(112)@172.17.0.3:35855 (129e11060069) I0407 22:34:12.555858 29306 master.cpp:2736] Deactivating agent f59f9057-a5c7-43e1-b129-96862e640a12-S1 at slave(112)@172.17.0.3:35855 (129e11060069) I0407 22:34:12.556190 29307 hierarchical.cpp:563] Agent f59f9057-a5c7-43e1-b129-96862e640a12-S1 deactivated I0407 22:34:12.559095 29299 slave.cpp:811] Agent terminating I0407 22:34:12.559301 29300 master.cpp:1236] Agent f59f9057-a5c7-43e1-b129-96862e640a12-S2 at slave(113)@172.17.0.3:35855 (129e11060069) disconnected I0407 22:34:12.559327 29300 master.cpp:2717] Disconnecting agent f59f9057-a5c7-43e1-b129-96862e640a12-S2 at slave(113)@172.17.0.3:35855 (129e11060069) I0407 22:34:12.559370 29300 master.cpp:2736] Deactivating agent f59f9057-a5c7-43e1-b129-96862e640a12-S2 at slave(113)@172.17.0.3:35855 (129e11060069) I0407 22:34:12.559516 29309 hierarchical.cpp:563] Agent f59f9057-a5c7-43e1-b129-96862e640a12-S2 deactivated I0407 22:34:12.561872 29278 master.cpp:1089] Master terminating I0407 22:34:12.562566 29304 hierarchical.cpp:508] Removed agent f59f9057-a5c7-43e1-b129-96862e640a12-S2 I0407 22:34:12.562890 29304 hierarchical.cpp:508] Removed agent f59f9057-a5c7-43e1-b129-96862e640a12-S1 I0407 22:34:12.565459 29304 hierarchical.cpp:508] Removed agent f59f9057-a5c7-43e1-b129-96862e640a12-S0 [ FAILED ] MasterAllocatorTest/1.RebalancedForUpdatedWeights, where TypeParam = mesos::internal::tests::Module (2240 ms) ",0,0,1,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5152","04/08/2016 10:48:27",2,"Add authentication to agent's /monitor/statistics endpoint ""Operators may want to enforce that only authenticated users (and subsequently only specific authorized users) be able to view per-executor resource usage statistics. Since this endpoint is handled by the ResourceMonitorProcess, I would expect the work necessary to be similar to what was done for /files or /registry endpoint authn.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5153","04/08/2016 11:35:00",8,"Sandboxes contents should be protected from unauthorized users ""MESOS-4956 introduced authentication support for the sandboxes. However, authentication can only go as far as to tell whether an user is known to mesos or not. 
An extra additional step is necessary to verify whether the known user is allowed to executed the requested operation on the sandbox (browse, read, download, debug).""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5157","04/08/2016 21:30:46",1,"Update webui for GPU metrics ""After adding the GPU metrics and updating the resources JSON to include GPU information, the webui should be updated accordingly.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5160","04/10/2016 04:09:28",1,"Make `network/cni` enabled as the default network isolator for `MesosContainerizer`. ""Currently there are no default `network` isolators for `MesosContainerizer`. With the development of the `network/cni` isolator we have an interface to run Mesos on multitude of IP networks. Given that its based on an open standard (the CNI spec) which is gathering a lot of traction from vendors (calico, weave, coreOS) and already works on some default networks (bridge, ipvlan, macvlan) it makes sense to make it as the default network isolator. ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5164","04/11/2016 08:46:32",5,"Add authorization to agent's /monitor/statistics endpoint. ""Operators may want to enforce that only specific authorized users be able to view per-executor resource usage statistics. For 0.29 MVP, we can make this coarse-grained, and assume that only the operator or a operator-privileged monitoring service will be accessing the endpoint. For a future release, we can consider fine-grained authz that filters statistics like we plan to do for /tasks.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5167","04/11/2016 14:01:46",5,"Add tests for `network/cni` isolator ""We need to add tests to verify the functionality of `network/cni` isolator.""","",0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5169","04/11/2016 17:05:29",3,"Introduce new Authorizer Actions for Authorized based filtering of endpoints. ""For authorization based endpoint filtering we need to introduce the authorizer actions outlined via MESOS-4932.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5171","04/11/2016 17:16:57",3,"Expose state/state.hpp to public headers ""We want the Modules to be able to use replicated log along with the APIs to communicate with Zookeeper. This change would require us to expose at least the following headers state/storage.hpp, and any additional files that state.hpp depends on (e.g., zookeeper/authentication.hpp).""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5172","04/11/2016 17:23:58",3,"Registry puller cannot fetch blobs correctly from http Redirect 3xx urls. ""When the registry puller is pulling a private repository from some private registry (e.g., quay.io), errors may occur when fetching blobs, at which point fetching the manifest of the repo is finished correctly. The error message is `Unexpected HTTP response '400 Bad Request' when trying to download the blob`. 
This may arise from the logic of fetching blobs, or incorrect format of uri when requesting blobs.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5181","04/12/2016 01:16:58",1,"Master should reject calls from the scheduler driver if the scheduler is not connected. ""When a scheduler registers, the master will create a link from master to scheduler. If this link breaks, the master will consider the scheduler {{inactive}} and mark it as {{disconnected}}. This causes a couple problems: 1) Master does not send offers to {{inactive}} schedulers. But these schedulers might consider themselves """"registered"""" in a one-way network partition scenario. 2) Any calls from the {{inactive}} scheduler is still accepted, which leaves the scheduler in a starved, but semi-functional state. See the related issue for more context: MESOS-5180 There should be an additional guard for registered, but {{inactive}} schedulers here: https://github.com/apache/mesos/blob/94f4f4ebb7d491ec6da1473b619600332981dd8e/src/master/master.cpp#L1977 The HTTP API already does this: https://github.com/apache/mesos/blob/94f4f4ebb7d491ec6da1473b619600332981dd8e/src/master/http.cpp#L459 Since the scheduler driver cannot return a 403, it may be necessary to return a {{Event::ERROR}} and force the scheduler to abort.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5187","04/12/2016 10:10:58",3,"The filesystem/linux isolator does not set the permissions of the host_path. ""The {{filesystem/linux}} isolator is not a drop in replacement for the {{filesystem/shared}} isolator. This should be considered before the latter is deprecated. We are currently using the {{filesystem/shared}} isolator together with the following slave option. This provides us with a private {{/tmp}} and {{/var/tmp}} folder for each task. When browsing the Mesos sandbox, one can see the following permissions: However, when running with the new {{filesystem/linux}} isolator, the permissions are different: This prevents user code (running as a non-root user) from writing to those folders, i.e. every write attempt fails with permission denied. *Context*: * We are using Apache Aurora. Aurora is running its custom executor as root but then switches to a non-privileged user before running the actual user code. * The follow code seems to have enabled our usecase in the existing {{filesystem/shared}} isolator: https://github.com/apache/mesos/blob/4d2b1b793e07a9c90b984ca330a3d7bc9e1404cc/src/slave/containerizer/mesos/isolators/filesystem/shared.cpp#L175-L198 """," --default_container_info='{ """"type"""": """"MESOS"""", """"volumes"""": [ {""""host_path"""": """"system/tmp"""", """"container_path"""": """"/tmp"""", """"mode"""": """"RW""""}, {""""host_path"""": """"system/vartmp"""", """"container_path"""": """"/var/tmp"""", """"mode"""": """"RW""""} ] }' mode nlink uid gid size mtime drwxrwxrwx 3 root root 4 KB Apr 11 18:16 tmp drwxrwxrwx 2 root root 4 KB Apr 11 18:15 vartmp mode nlink uid gid size mtime drwxr-xr-x 2 root root 4 KB Apr 12 10:34 tmp drwxr-xr-x 2 root root 4 KB Apr 12 10:34 vartmp ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5214","04/13/2016 22:48:01",2,"Populate FrameworkInfo.principal for authenticated frameworks ""If a framework authenticates and then does not provide a {{principal}} in its {{FrameworkInfo}}, we currently allow this and leave {{FrameworkInfo.principal}} unset. 
Instead, we should populate {{FrameworkInfo.principal}} for them automatically in that case to ensure that the two principals are equal.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5215","04/13/2016 23:05:01",1,"Update the documentation for '/reserve' and '/create-volumes' ""There are a couple issues related to the {{principal}} field in {{DiskInfo}} and {{ReservationInfo}} (see linked JIRAs) that should be better documented. We need to help users understand the purpose of these fields and how they interact with the principal provided in the HTTP authentication header. See linked tickets for background.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5216","04/14/2016 00:00:46",5,"Document docker volume driver isolator. ""Should include the followings: 1. What features (driver options) are supported in docker volume driver isolator. 2. How to use docker volume driver isolator. *related agent flags introduction and usage. *isolator dependency clarification (e.g., filesystem/linux). *related driver daemon preprocess. *volumes pre-specified by users and volume cleanup.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5228","04/18/2016 23:05:03",3,"Add tests for Capability API. ""Add basic tests for the capability API.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5232","04/19/2016 07:25:08",1,"Add capability information to ContainerInfo protobuf message. ""To enable support for capability as first class framework entity, we need to add capabilities related information to the ContainerInfo protobuf.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5237","04/20/2016 00:31:48",2,"The windows version of `os::access` has differing behavior than the POSIX version. ""The POSIX version of {{os::access}} looks like this: Compare this to the Windows version of {{os::access}} which looks like this following: As we can see, the case where {{errno}} is set to {{EACCES}} is handled differently between the 2 functions. We can actually consolidate the 2 functions by simply using the POSIX version. The challenge is that on POSIX, we should use {{::access}} and {{::_access}} on Windows. Note however, that this problem is already solved, as we have an implementation of {{::access}} for Windows in {{3rdparty/libprocess/3rdparty/stout/include/stout/windows.hpp}} which simply defers to {{::_access}}. Thus, I propose to simply consolidate the 2 implementations."""," inline Try access(const std::string& path, int how) { if (::access(path.c_str(), how) < 0) { if (errno == EACCES) { return false; } else { return ErrnoError(); } } return true; } inline Try access(const std::string& fileName, int how) { if (::_access(fileName.c_str(), how) != 0) { return ErrnoError(""""access: Could not access path '"""" + fileName + """"'""""); } return true; } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5239","04/21/2016 03:28:35",3,"Persistent volume DockerContainerizer support assumes proper mount propagation setup on the host. ""We recently added persistent volume support in DockerContainerizer (MESOS-3413). To understand the problem, we first need to understand how persistent volumes are supported in DockerContainerizer. 
To support persistent volumes in DockerContainerizer, we bind mount persistent volumes under a container's sandbox ('container_path' has to be relative for persistent volumes). When the Docker container is launched, since we always add a volume (-v) for the sandbox, the persistent volumes will be bind mounted into the container as well (since Docker does a 'rbind'). The assumption that the above works is that the Docker daemon should see those persistent volume mounts that Mesos mounts on the host mount table. It's not a problem if Docker daemon itself is using the host mount namespace. However, on systemd enabled systems, Docker daemon is running in a separate mount namespace and all mounts in that mount namespace will be marked as slave mounts due to this [patch|https://github.com/docker/docker/commit/eb76cb2301fc883941bc4ca2d9ebc3a486ab8e0a]. So what that means is that: in order for it to work, the parent mount of agent's work_dir should be a shared mount when docker daemon starts. This is typically true on CentOS7, CoreOS as all mounts are shared mounts by default. However, this causes an issue with the 'filesystem/linux' isolator. To understand why, first I need to show you a typical problem when dealing with shared mounts. Let me explain that using the following commands on a CentOS7 machine: As you can see above, there're two entries (/run/netns/test) in the mount table (unexpected). This will confuse some systems sometimes. The reason is because when we create a self bind mount (/run/netns -> /run/netns), the mount will be put into the same shared mount peer group (shared:22) as its parent (/run). Then, when you create another mount underneath that (/run/netns/test), that mount operation will be propagated to all mounts in the same peer group (shared:22), resulting an unexpected additional mount being created. The reason we need to do a self bind mount in Mesos is that sometimes, we need to make sure some mounts are shared so that it does not get copied when a new mount namespace is created. However, on some systems, mounts are private by default (e.g., Ubuntu 14.04). In those cases, since we cannot change the system mounts, we have to do a self bind mount so that we can set mount propagation to shared. For instance, in filesytem/linux isolator, we do a self bind mount on agent's work_dir. To avoid the self bind mount pitfall mentioned above, in filesystem/linux isolator, after we created the mount, we do a make-slave + make-shared so that the mount is its own shared mount peer group. In that way, any mounts underneath it will not be propagated back. However, that operation will break the assumption that the persistent volume DockerContainerizer support makes. 
As a result, we're seeing problem with persistent volumes in DockerContainerizer when filesystem/linux isolator is turned on."""," [root@core-dev run]# cat /proc/self/mountinfo 24 60 0:19 / /run rw,nosuid,nodev shared:22 - tmpfs tmpfs rw,seclabel,mode=755 [root@core-dev run]# mkdir /run/netns [root@core-dev run]# mount --bind /run/netns /run/netns [root@core-dev run]# cat /proc/self/mountinfo 24 60 0:19 / /run rw,nosuid,nodev shared:22 - tmpfs tmpfs rw,seclabel,mode=755 121 24 0:19 /netns /run/netns rw,nosuid,nodev shared:22 - tmpfs tmpfs rw,seclabel,mode=755 [root@core-dev run]# ip netns add test [root@core-dev run]# cat /proc/self/mountinfo 24 60 0:19 / /run rw,nosuid,nodev shared:22 - tmpfs tmpfs rw,seclabel,mode=755 121 24 0:19 /netns /run/netns rw,nosuid,nodev shared:22 - tmpfs tmpfs rw,seclabel,mode=755 162 121 0:3 / /run/netns/test rw,nosuid,nodev,noexec,relatime shared:5 - proc proc rw 163 24 0:3 / /run/netns/test rw,nosuid,nodev,noexec,relatime shared:5 - proc proc rw ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5253","04/22/2016 18:58:00",2,"Isolator cleanup should not be invoked if they are not prepared yet. ""If the mesos containerizer destroys a container in PROVISIONING state, isolator cleanup is still called, which is incorrect because there is no isolator prepared yet. In this case, there no need to clean up any isolator, call provisioner destroy directly.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5256","04/22/2016 22:53:42",3,"Add support for per-containerizer resource enumeration ""Currently the top level containerizer includes a static function for enumerating the resources available on a given agent. Ideally, this functionality should be the responsibility of individual containerizers (and specifically the responsibility of each isolator used to control access to those resources). Adding support for this will involve making the `Containerizer::resources()` function virtual instead of static and then implementing it on a per-containerizer basis. We should consider providing a default to make this easier in cases where there is only really one good way of enumerating a given set of resources.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5272","04/25/2016 07:25:59",3,"Support docker image labels. ""Docker image labels should be supported in unified containerizer, which can be used for applying custom metadata. Image labels are necessary for mesos features to support docker in unified containerizer (e.g., for mesos GPU device isolator).""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5273","04/25/2016 09:49:38",3,"Need support for Authorization information via HELP. ""We should add information about authentication to the help message and thereby endpoint documentation (similarly as MESOS-4934 has done for authentication).""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5275","04/25/2016 19:05:31",5,"Add capabilities support for unified containerizer. ""Add capabilities support for unified containerizer. Requirements: 1. Use the mesos capabilities API. 2. Frameworks be able to add capability requests for containers. 3. Agents be able to add maximum allowed capabilities for all containers launched. 
Design document: https://docs.google.com/document/d/1YiTift8TQla2vq3upQr7K-riQ_pQ-FKOCOsysQJROGc/edit#heading=h.rgfwelqrskmd ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5277","04/25/2016 22:28:10",5,"Need to add REMOVE semantics to the copy backend ""Some Dockerfiles run the `rm` command to remove files from the base image using the """"RUN"""" directive in the Dockerfile. An example can be found here: https://github.com/ngineered/nginx-php-fpm.git In the final rootfs the removed files should not be present. Presence of these files in the final image can make the container misbehave. For example, the nginx-php-fpm docker image that is referenced tries to remove the default nginx config and replaces it with its own config to point to a different HTML root. If the default nginx config is still present after the building the image, nginx will start pointing to a different HTML root than the one set in the Dockerfile. Currently the copy backend cannot handle removal of files from intermediate layers. This can cause issues with docker images built using a Dockerfile similar to the one listed here. Hence, we need to add REMOVE semantics to the copy backend. ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5286","04/26/2016 15:31:33",5,"Add authorization to libprocess HTTP endpoints ""Now that the libprocess-level HTTP endpoints have had authentication added to them in MESOS-4902, we can add authorization to them as well. As a first step, we can implement a """"coarse-grained"""" approach, in which a principal is granted or denied access to a given endpoint. We will likely need to register an authorizer with libprocess.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5303","04/28/2016 22:53:20",3,"Add capabilities support for mesos execute cli. ""Add support for `user` and `capabilities` to execute cli. This will help in testing the `capabilities` feature for unified containerizer.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5310","04/29/2016 20:28:59",5,"Enable `network/cni` isolator to allow modifications and deletion of CNI config ""Currently the `network/cni` isolator can only load the CNI configs at startup. This makes the CNI networks immutable. From an operational standpoint this can make deployments painful for operators. To make CNI more flexible the `network/cni` isolator should be able to load configs at run time. The proposal is to add an endpoint to the `network/cni` isolator, to which when the operator sends a PUT request the `network/cni` isolator will reload CNI configs. 
""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5313","05/01/2016 06:56:05",1,"Failed to set quota and update weight according to document "" The right command should be adding {{@}} before the quota json file {{jsonMessageBody}}."""," root@mesos002:~/test# curl -d jsonMessageBody -X POST http://192.168.56.12:5050/quota Failed to parse set quota request JSON 'jsonMessageBody': syntax error at line 1 near: jsonMessageBodyroot@mesos002:~/test# cat jsonMessageBody { """"role"""": """"role1"""", """"guarantee"""": [{ """"name"""": """"cpus"""", """"type"""": """"SCALAR"""", """"scalar"""": { """"value"""": 1 } }, { """"name"""": """"mem"""", """"type"""": """"SCALAR"""", """"scalar"""": { """"value"""": 128 } }] } root@mesos002:~/test# curl -d weight.json -X PUT http://192.168.56.12:5050/weights Failed to parse update weights request JSON ('weight.json'): syntax error at line 1 near: weight.js root@mesos002:~/test# cat weight.json [ { """"role"""": """"role1"""", """"weight"""": 2.0 }, { """"role"""": """"role2"""", """"weight"""": 3.5 } ] ",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5316","05/02/2016 20:30:32",2,"Authenticate the agent's '/containers' endpoint. ""The {{/containers}} endpoint was recently added to the agent. Authentication should be enabled on this endpoint.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5317","05/02/2016 20:32:16",2,"Authorize the agent's '/containers' endpoint. ""After the agent's {{/containers}} endpoint is authenticated, we should enabled authorization as well.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5333","05/05/2016 23:03:53",3,"GET /master/maintenance/schedule/ produces 404. ""Attempts to make a GET request to /master/maintenance/schedule/ result in a 404. However, if I make a GET request to /master/maintenance/schedule (without the trailing /), it works. My current (untested) theory is that this might be related to the fact that there is also a /master/maintenance/schedule/status endpoint (an endpoint built on top of a functioning endpoint), as requests to /help and /help/ (with and without the trailing slash) produce the same functioning result.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5335","05/06/2016 10:52:56",3,"Add authorization to GET /weights. ""We already authorize which http users can update weights for particular roles, but even knowing of the existence of these roles (let alone their weights) may be sensitive information. We should add authz around GET operations on /weights. Easy option: GET_ENDPOINT_WITH_PATH /weights - Pro: No new verb - Con: All or nothing Complex option: GET_WEIGHTS_WITH_ROLE - Pro: Filters contents based on roles the user is authorized to see - Con: More authorize calls (one per role in each /weights request)""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5336","05/06/2016 11:01:05",3,"Add authorization to GET /quota. ""We already authorize which http users can set/remove quota for particular roles, but even knowing of the existence of these roles (let alone their quotas) may be sensitive information. 
We should add authz around GET operations on /quota.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5345","05/09/2016 13:31:56",5,"Design doc for TASK_LOST_PENDING ""The TASK_LOST task status describes two different situations: (a) the task was not launched because of an error (e.g., insufficient available resources), or (b) the master lost contact with a running task (e.g., due to a network partition); the master will kill the task when it can (e.g., when the network partition heals), but in the meantime the task may still be running. This has two problems: 1. Using the same task status for two fairly different situations is confusing. 2. In the partitioned-but-still-running case, frameworks have no easy way to determine when a task has truly terminated. To address these problems, we propose introducing a new task status, TASK_LOST_PENDING. If a framework opts into this behavior using a new capability, TASK_LOST would mean """"the task is definitely not running"""", whereas TASK_LOST_PENDING would mean """"the task may or may not be running (we've lost contact with the agent), but the master will try to shut it down when possible.""""""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5347","05/09/2016 18:21:15",2,"Enhance the log message when launching mesos containerizer. ""Log the launch flag which includes the executor command, pre-launch commands and other information when launching the mesos containerizer. ""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5348","05/09/2016 18:25:05",2,"Enhance the log message when launching docker containerizer. ""Log the launch flag which includes the executor command and other information when launching the docker containerizer.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5350","05/09/2016 23:10:39",5,"Add asynchronous hook for validating docker containerizer tasks ""It is possible to plug in custom validation logic for the MesosContainerizer via an {{Isolator}} module, but the same is not true of the DockerContainerizer. Basic logic can be plugged into the DockerContainerizer via {{Hooks}}, but this has some notable differences compared to isolators: * Hooks are synchronous. * Modifications to tasks via Hooks have lower priority compared to the task itself. i.e. If both the {{TaskInfo}} and {{slaveExecutorEnvironmentDecorator}} define the same environment variable, the {{TaskInfo}} wins. * Hooks have no effect if they fail (short of segfaulting) i.e. The {{slavePreLaunchDockerHook}} has a return type of {{Try}}: https://github.com/apache/mesos/blob/628ccd23501078b04fb21eee85060a6226a80ef8/include/mesos/hook.hpp#L90 But the effect of returning an {{Error}} is a log message: https://github.com/apache/mesos/blob/628ccd23501078b04fb21eee85060a6226a80ef8/src/hook/manager.cpp#L227-L230 We should add a hook to the DockerContainerizer to narrow this gap. This new hook would: * Be called at roughly the same place as {{slavePreLaunchDockerHook}} https://github.com/apache/mesos/blob/628ccd23501078b04fb21eee85060a6226a80ef8/src/slave/containerizer/docker.cpp#L1022 * Return a {{Future}} and require splitting up {{DockerContainerizer::launch}}. 
* Prevent a task from launching if it returns a {{Failure}}.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5362","05/11/2016 00:11:43",2,"Add authentication to example frameworks ""Some example frameworks do not have the ability to authenticate with the master. Adding authentication to the example frameworks that don't already have it implemented would allow us to use these frameworks for testing in authenticated/authorized scenarios.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5372","05/12/2016 22:50:41",1,"Add random() to os:: namespace ""The function """"random()"""" is not available in Windows. After this improvement the calls to """"os::random()"""" will result in calls to """"::random()"""" on POSIX and """"::rand()"""" on Windows. ""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5378","05/13/2016 03:45:13",3,"Terminating a framework during master failover leads to orphaned tasks ""Repro steps: 1) Setup: 2) Kill all three from (1), in the order they were started. 3) Restart the master and agent. Do not restart the framework. Result) * The agent will reconnect to an orphaned task. * The Web UI will report no memory usage * {{curl localhost:5050/metrics/snapshot}} will say: {{""""master/mem_used"""": 128,}} Cause) When a framework registers with the master, it provides a {{failover_timeout}}, in case the framework disconnects. If the framework disconnects and does not reconnect within this {{failover_timeout}}, the master will kill all tasks belonging to the framework. However, the master does not persist this {{failover_timeout}} across master failover. The master will """"forget"""" about a framework if: 1) The master dies before {{failover_timeout}} passes. 2) The framework dies while the master is dead. When the master comes back up, the agent will re-register. The agent will report the orphaned task(s). Because the master failed over, it does not know these tasks are orphans (i.e. it thinks the frameworks might re-register). Proposed solution) The master should save the {{FrameworkID}} and {{failover_timeout}} in the registry. Upon recovery, the master should resume the {{failover_timeout}} timers."""," bin/mesos-master.sh --work_dir=/tmp/master bin/mesos-slave.sh --work_dir=/tmp/slave --master=localhost:5050 src/mesos-execute --checkpoint --command=""""sleep 1000"""" --master=localhost:5050 --name=""""test"""" ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5380","05/13/2016 20:37:04",3,"Killing a queued task can cause the corresponding command executor to never terminate. ""We observed this in our testing environment. Sequence of events: 1) A command task is queued since the executor has not registered yet. 2) The framework issues a killTask. 3) Since executor is in REGISTERING state, agent calls `statusUpdate(TASK_KILLED, UPID())` 4) `statusUpdate` now will call `containerizer->status()` before calling `executor->terminateTask(status.task_id(), status);` which will remove the queued task. (Introduced in this patch: https://reviews.apache.org/r/43258). 5) Since the above is async, it's possible that the task is still in queued task when we trying to see if we need to kill unregistered executor in `killTask`: {code} // TODO(jieyu): Here, we kill the executor if it no longer has // any task to run and has not yet registered. 
This is a // workaround for those single task executors that do not have a // proper self terminating logic when they haven't received the // task within a timeout. if (executor->queuedTasks.empty()) { CHECK(executor->launchedTasks.empty()) << """" Unregistered executor '"""" << executor->id << """"' has launched tasks""""; LOG(WARNING) << """"Killing the unregistered executor """" << *executor << """" because it has no tasks""""; executor->state = Executor::TERMINATING; containerizer->destroy(executor->containerId); } {code} 6) Consequently, the executor will never be terminated by Mesos. Attaching the relevant agent log: """," // TODO(jieyu): Here, we kill the executor if it no longer has // any task to run and has not yet registered. This is a // workaround for those single task executors that do not have a // proper self terminating logic when they haven't received the // task within a timeout. if (executor->queuedTasks.empty()) { CHECK(executor->launchedTasks.empty()) << """" Unregistered executor '"""" << executor->id << """"' has launched tasks""""; LOG(WARNING) << """"Killing the unregistered executor """" << *executor << """" because it has no tasks""""; executor->state = Executor::TERMINATING; containerizer->destroy(executor->containerId); } May 13 15:36:13 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:13.640527 1342 slave.cpp:1361] Got assigned task mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6 for framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 May 13 15:36:13 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:13.641034 1342 slave.cpp:1480] Launching task mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6 for framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 May 13 15:36:13 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:13.641440 1342 paths.cpp:528] Trying to chown '/var/lib/mesos/slave/slaves/a3ad8418-cb77-4705-b353-4b514ceca52c-S0/frameworks/a3ad8418-cb77-4705-b353-4b514ceca52c-0000/executors/mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6/runs/24762d43-2134-475e-b724-caa72110497a' to user 'root' May 13 15:36:13 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:13.644664 1342 slave.cpp:5389] Launching executor mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6 of framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/var/lib/mesos/slave/slaves/a3ad8418-cb77-4705-b353-4b514ceca52c-S0/frameworks/a3ad8418-cb77-4705-b353-4b514ceca52c-0000/executors/mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6/runs/24762d43-2134-475e-b724-caa72110497a' May 13 15:36:13 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:13.645195 1342 slave.cpp:1698] Queuing task 'mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6' for executor 'mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6' of framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 May 13 15:36:13 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:13.645491 1338 containerizer.cpp:671] Starting container '24762d43-2134-475e-b724-caa72110497a' for executor 'mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6' of framework 'a3ad8418-cb77-4705-b353-4b514ceca52c-0000' May 13 15:36:13 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:13.647897 1345 cpushare.cpp:389] Updated 'cpu.shares' to 1126 (cpus 1.1) for container 24762d43-2134-475e-b724-caa72110497a May 13 15:36:13 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:13.648619 1345 
cpushare.cpp:411] Updated 'cpu.cfs_period_us' to 100ms and 'cpu.cfs_quota_us' to 110ms (cpus 1.1) for container 24762d43-2134-475e-b724-caa72110497a May 13 15:36:13 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:13.650180 1341 mem.cpp:602] Started listening for OOM events for container 24762d43-2134-475e-b724-caa72110497a May 13 15:36:13 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:13.650718 1341 mem.cpp:722] Started listening on low memory pressure events for container 24762d43-2134-475e-b724-caa72110497a May 13 15:36:13 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:13.651147 1341 mem.cpp:722] Started listening on medium memory pressure events for container 24762d43-2134-475e-b724-caa72110497a May 13 15:36:13 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:13.651599 1341 mem.cpp:722] Started listening on critical memory pressure events for container 24762d43-2134-475e-b724-caa72110497a May 13 15:36:13 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:13.652015 1341 mem.cpp:353] Updated 'memory.soft_limit_in_bytes' to 160MB for container 24762d43-2134-475e-b724-caa72110497a May 13 15:36:13 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:13.652719 1341 mem.cpp:388] Updated 'memory.limit_in_bytes' to 160MB for container 24762d43-2134-475e-b724-caa72110497a May 13 15:36:25 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:25.508930 1342 slave.cpp:1891] Asked to kill task mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6 of framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 May 13 15:36:25 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:25.509063 1342 slave.cpp:3048] Handling status update TASK_KILLED (UUID: f9d15955-6c9a-4a73-98c3-97c0128510ba) for task mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6 of framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 from @0.0.0.0:0 May 13 15:36:25 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:25.509702 1340 disk.cpp:169] Updating the disk resources for container 24762d43-2134-475e-b724-caa72110497a to cpus(*):0.1; mem(*):32 May 13 15:36:25 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:25.510298 1343 mem.cpp:353] Updated 'memory.soft_limit_in_bytes' to 32MB for container 24762d43-2134-475e-b724-caa72110497a May 13 15:36:25 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:25.510349 1341 cpushare.cpp:389] Updated 'cpu.shares' to 102 (cpus 0.1) for container 24762d43-2134-475e-b724-caa72110497a May 13 15:36:25 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:25.511102 1343 mem.cpp:388] Updated 'memory.limit_in_bytes' to 32MB for container 24762d43-2134-475e-b724-caa72110497a May 13 15:36:25 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:25.511495 1341 cpushare.cpp:411] Updated 'cpu.cfs_period_us' to 100ms and 'cpu.cfs_quota_us' to 10ms (cpus 0.1) for container 24762d43-2134-475e-b724-caa72110497a May 13 15:36:25 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:25.511715 1341 status_update_manager.cpp:320] Received status update TASK_KILLED (UUID: f9d15955-6c9a-4a73-98c3-97c0128510ba) for task mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6 of framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 May 13 15:36:25 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:25.512032 1341 status_update_manager.cpp:824] Checkpointing UPDATE 
for status update TASK_KILLED (UUID: f9d15955-6c9a-4a73-98c3-97c0128510ba) for task mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6 of framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 May 13 15:36:25 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:25.513849 1343 slave.cpp:3446] Forwarding the update TASK_KILLED (UUID: f9d15955-6c9a-4a73-98c3-97c0128510ba) for task mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6 of framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 to master@10.0.5.79:5050 May 13 15:36:25 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:25.528929 1344 status_update_manager.cpp:392] Received status update acknowledgement (UUID: f9d15955-6c9a-4a73-98c3-97c0128510ba) for task mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6 of framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 May 13 15:36:25 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:25.529002 1344 status_update_manager.cpp:824] Checkpointing ACK for status update TASK_KILLED (UUID: f9d15955-6c9a-4a73-98c3-97c0128510ba) for task mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6 of framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 May 13 15:36:28 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:28.199105 1345 isolator.cpp:469] Mounting docker volume mount point '//var/lib/rexray/volumes/jdef-test-125/data' to '/var/lib/mesos/slave/slaves/a3ad8418-cb77-4705-b353-4b514ceca52c-S0/frameworks/a3ad8418-cb77-4705-b353-4b514ceca52c-0000/executors/mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6/runs/24762d43-2134-475e-b724-caa72110497a/data' for container 24762d43-2134-475e-b724-caa72110497a May 13 15:36:28 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:28.207062 1338 containerizer.cpp:1184] Checkpointing executor's forked pid 5810 to '/var/lib/mesos/slave/meta/slaves/a3ad8418-cb77-4705-b353-4b514ceca52c-S0/frameworks/a3ad8418-cb77-4705-b353-4b514ceca52c-0000/executors/mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6/runs/24762d43-2134-475e-b724-caa72110497a/pids/forked.pid' May 13 15:36:28 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:28.832330 1338 slave.cpp:2689] Got registration for executor 'mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6' of framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 from executor(1)@10.0.2.74:46154 May 13 15:36:28 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:28.833149 1345 disk.cpp:169] Updating the disk resources for container 24762d43-2134-475e-b724-caa72110497a to cpus(*):0.1; mem(*):32 May 13 15:36:28 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:28.833804 1342 mem.cpp:353] Updated 'memory.soft_limit_in_bytes' to 32MB for container 24762d43-2134-475e-b724-caa72110497a May 13 15:36:28 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:28.833871 1340 cpushare.cpp:389] Updated 'cpu.shares' to 102 (cpus 0.1) for container 24762d43-2134-475e-b724-caa72110497a May 13 15:36:28 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[1304]: I0513 15:36:28.835160 1340 cpushare.cpp:411] Updated 'cpu.cfs_period_us' to 100ms and 'cpu.cfs_quota_us' to 10ms (cpus 0.1) for container 24762d43-2134-475e-b724-caa72110497a May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: 5804 'mesos-logrotate-logger --help=false 
--log_filename=/var/lib/mesos/slave/slaves/a3ad8418-cb77-4705-b353-4b514ceca52c-S0/frameworks/a3ad8418-cb77-4705-b353-4b514ceca52c-0000/executors/mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6/runs/24762d43-2134-475e-b724-caa72110497a/stdout --logrotate_options=rotate 9 --logrotate_path=logrotate --max_size=2MB ' May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: 5809 'mesos-logrotate-logger --help=false --log_filename=/var/lib/mesos/slave/slaves/a3ad8418-cb77-4705-b353-4b514ceca52c-S0/frameworks/a3ad8418-cb77-4705-b353-4b514ceca52c-0000/executors/mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6/runs/24762d43-2134-475e-b724-caa72110497a/stderr --logrotate_options=rotate 9 --logrotate_path=logrotate --max_size=2MB ' May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: 5804 'mesos-logrotate-logger --help=false --log_filename=/var/lib/mesos/slave/slaves/a3ad8418-cb77-4705-b353-4b514ceca52c-S0/frameworks/a3ad8418-cb77-4705-b353-4b514ceca52c-0000/executors/mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6/runs/24762d43-2134-475e-b724-caa72110497a/stdout --logrotate_options=rotate 9 --logrotate_path=logrotate --max_size=2MB ' May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: 5809 'mesos-logrotate-logger --help=false --log_filename=/var/lib/mesos/slave/slaves/a3ad8418-cb77-4705-b353-4b514ceca52c-S0/frameworks/a3ad8418-cb77-4705-b353-4b514ceca52c-0000/executors/mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6/runs/24762d43-2134-475e-b724-caa72110497a/stderr --logrotate_options=rotate 9 --logrotate_path=logrotate --max_size=2MB ' May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: I0513 16:58:30.374567 30993 slave.cpp:5498] Recovering executor 'mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6' of framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: I0513 16:58:30.420411 30990 status_update_manager.cpp:208] Recovering executor 'mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6' of framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: I0513 16:58:30.513164 30994 containerizer.cpp:467] Recovering container '24762d43-2134-475e-b724-caa72110497a' for executor 'mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6' of framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: I0513 16:58:30.533478 30988 mem.cpp:602] Started listening for OOM events for container 24762d43-2134-475e-b724-caa72110497a May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: I0513 16:58:30.534553 30988 mem.cpp:722] Started listening on low memory pressure events for container 24762d43-2134-475e-b724-caa72110497a May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: I0513 16:58:30.535269 30988 mem.cpp:722] Started listening on medium memory pressure events for container 24762d43-2134-475e-b724-caa72110497a May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: I0513 16:58:30.536198 30988 mem.cpp:722] Started listening on critical memory pressure events for container 24762d43-2134-475e-b724-caa72110497a May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: I0513 16:58:30.579385 30988 docker.cpp:859] Skipping recovery of executor 'mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6' of framework 'a3ad8418-cb77-4705-b353-4b514ceca52c-0000' because it was not launched 
from docker containerizer May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: I0513 16:58:30.587158 30989 slave.cpp:4527] Sending reconnect request to executor 'mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6' of framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 at executor(1)@10.0.2.74:46154 May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: I0513 16:58:30.588287 30990 slave.cpp:2838] Re-registering executor 'mesosvol.6ccd993c-1920-11e6-a722-9648cb19afd6' of framework a3ad8418-cb77-4705-b353-4b514ceca52c-0000 May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: I0513 16:58:30.589736 30988 disk.cpp:169] Updating the disk resources for container 24762d43-2134-475e-b724-caa72110497a to cpus(*):0.1; mem(*):32 May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: I0513 16:58:30.590117 30990 cpushare.cpp:389] Updated 'cpu.shares' to 102 (cpus 0.1) for container 24762d43-2134-475e-b724-caa72110497a May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: I0513 16:58:30.591284 30990 cpushare.cpp:411] Updated 'cpu.cfs_period_us' to 100ms and 'cpu.cfs_quota_us' to 10ms (cpus 0.1) for container 24762d43-2134-475e-b724-caa72110497a May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: I0513 16:58:30.595403 30992 mem.cpp:353] Updated 'memory.soft_limit_in_bytes' to 32MB for container 24762d43-2134-475e-b724-caa72110497a May 13 16:58:30 ip-10-0-2-74.us-west-2.compute.internal mesos-slave[30985]: I0513 16:58:30.596102 30992 mem.cpp:388] Updated 'memory.limit_in_bytes' to 32MB for container 24762d43-2134-475e-b724-caa72110497a ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5388","05/16/2016 19:28:16",5,"MesosContainerizerLaunch flags execute arbitrary commands via shell. ""For example, the docker volume isolator's containerPath is appended (without sanitation) to a command that's executed in this manner. As such, it's possible to inject arbitrary shell commands to be executed by mesos. https://github.com/apache/mesos/blob/17260204c833c643adf3d8f36ad8a1a606ece809/src/slave/containerizer/mesos/launch.cpp#L206 Perhaps instead of strings these commands could/should be sent as string arrays that could be passed as argv arguments w/o shell interpretation?""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5389","05/16/2016 20:11:24",3,"docker containerizer should prefix relative volume.container_path values with the path to the sandbox ""docker containerizer currently requires absolute paths for values of volume.container_path. this is inconsistent with the mesos containerizer which requires relative container_path. it makes for a confusing API. both at the Mesos level as well as at the Marathon level. ideally the docker containerizer would allow a framework to specify a relative path for volume.container_path and in such cases automatically convert it to an absolute path by prepending the sandbox directory to it. /cc [~jieyu]""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5390","05/16/2016 21:58:57",1,"v1 Executor Protos not included in maven jar ""According to MESOS-4793 the Executor v1 HTTP API was released in Mesos 0.28.0 however the corresponding protos are not included in the maven jar for version 0.28.0 or 0.28.1. 
Script to verify """," wget https://repo.maven.apache.org/maven2/org/apache/mesos/mesos/0.28.1/mesos-0.28.1.jar && unzip -lf mesos-0.28.1.jar | grep """"v1\/executor"""" | wc -l ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5397","05/17/2016 19:40:49",1,"Slave/Agent Rename Phase 1: Update terms in the website ""The following files need to be updated: site/source/index.html.md ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5404","05/17/2016 22:05:04",3,"Allow `Task` to be authorized. ""As we need to be able to authorize `Tasks` (e.g., for deciding whether to include them in the /state endpoint when applying authorization-based filtering) we need to expose it to the authorizer. Secondly, we also need to include some additional information (`user` and `Env variables`) in order to provide the authorizer with meaningful information.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5425","05/20/2016 18:17:29",3,"Consider using IntervalSet for Port range resource math ""Follow-up JIRA for comments raised in MESOS-3051 (see comments there). We should consider utilizing [{{IntervalSet}}|https://github.com/apache/mesos/blob/a0b798d2fac39445ce0545cfaf05a682cd393abe/3rdparty/stout/include/stout/interval.hpp] in [Port range resource math|https://github.com/apache/mesos/blob/a0b798d2fac39445ce0545cfaf05a682cd393abe/src/common/values.cpp#L143].""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5437","05/23/2016 04:45:33",1,"AppC appc_simple_discovery_uri_prefix is lost in configuration.md ""AppC appc_simple_discovery_uri_prefix is lost in configuration.md""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5445","05/24/2016 03:46:33",2,"Allow libprocess/stout to build without first doing `make` in 3rdparty. ""After the 3rdparty reorg, libprocess/stout are unable to build their dependencies, and so one has to do `make` in 3rdparty/ before building libprocess/stout.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5450","05/25/2016 06:26:23",2,"Make the SASL dependency optional. ""Right now there is a hard dependency on SASL, which probably won't work well on Windows (at least) in the near future for our use cases. In the future, it would be nice to have a pluggable authentication layer.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5452","05/25/2016 15:57:48",1,"Agent modules should be initialized before all components except firewall. ""On Mesos Agents, Anonymous modules should not have any dependencies, by design, on any other Mesos components. This implies that Anonymous modules should be initialized before all other Mesos components other than `Firewall`. 
The dependency on `Firewall` is primarily to enforce any policies to secure endpoints that might be owned by the Anonymous module.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5459","05/26/2016 11:53:42",5,"Update RUN_TASK_WITH_USER to use additional metadata ""Currently, the `authorization::Action` `RUN_TASK_WITH_USER` will pass the user as its `Object.value` string, but some authorizers may want to make authorization decisions based on additional task attributes, like role, resources, labels, container type, etc. We should create a new Action `RUN_TASK` that passes FrameworkInfo and TaskInfo in its Object, and the LocalAuthorizer's RunTaskWithUser ACL can be implemented using the user found in TaskInfo/FrameworkInfo. We may need to leave the old _WITH_USER action around, but it's arguable whether we should call the authorizer once for RUN_TASK and once for RUN_TASK_WITH_USER, or only use the new action and deprecate the old one?""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5532","06/01/2016 21:14:53",1,"Maven build is too verbose for batch builds ""During a non-interactive (without terminal) Mesos build, maven generates several thousands of log lines when downloading artifacts. This often makes several web-based log viewers unresponsive. Further, these several thousand line long progress indicator logs don't provide any meaningful information either. From a user's point of view, just knowing that the artifact download succeeded/failed is often enough. We should be using '--batch-mode' flag to disable these additionals log lines.""","",0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5576","06/09/2016 03:46:20",5,"Masters may drop the first message they send between masters after a network partition ""We observed the following situation in a cluster of five masters: || Time || Master 1 || Master 2 || Master 3 || Master 4 || Master 5 || | 0 | Follower | Follower | Follower | Follower | Leader | | 1 | Follower | Follower | Follower | Follower || Partitioned from cluster by downing this VM's network || | 2 || Elected Leader by ZK | Voting | Voting | Voting | Suicides due to lost leadership | | 3 | Performs consensus | Replies to leader | Replies to leader | Replies to leader | Still down | | 4 | Performs writing | Acks to leader | Acks to leader | Acks to leader | Still down | | 5 | Leader | Follower | Follower | Follower | Still down | | 6 | Leader | Follower | Follower | Follower | Comes back up | | 7 | Leader | Follower | Follower | Follower | Follower | | 8 || Partitioned in the same way as Master 5 | Follower | Follower | Follower | Follower | | 9 | Suicides due to lost leadership || Elected Leader by ZK | Follower | Follower | Follower | | 10 | Still down | Performs consensus | Replies to leader | Replies to leader || Doesn't get the message! || | 11 | Still down | Performs writing | Acks to leader | Acks to leader || Acks to leader || | 12 | Still down | Leader | Follower | Follower | Follower | Master 2 sends a series of messages to the recently-restarted Master 5. The first message is dropped, but subsequent messages are not dropped. This appears to be due to a stale link between the masters. 
Before leader election, the replicated log actors create a network watcher, which adds links to masters that join the ZK group: https://github.com/apache/mesos/blob/7a23d0da817be4e8f68d96f524cecf802431033c/src/log/network.hpp#L157-L159 This link does not appear to break (Master 2 -> 5) when Master 5 goes down, perhaps due to how the network partition was induced (in the hypervisor layer, rather than in the VM itself). When Master 2 tries to send a {{PromiseRequest}} to Master 5, we do not observe the [expected log message|https://github.com/apache/mesos/blob/7a23d0da817be4e8f68d96f524cecf802431033c/src/log/replica.cpp#L493-L494]. Instead, we see a log line in Master 2: The broken link is removed by the libprocess {{socket_manager}} and the following {{WriteRequest}} from Master 2 to Master 5 succeeds via a new socket."""," process.cpp:2040] Failed to shutdown socket with fd 27: Transport endpoint is not connected ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5577","06/09/2016 05:16:50",1,"Modules using replicated log state API require zookeeper headers ""The state API uses zookeeper client headers and hence the bundled zookeeper headers need to be installed during Mesos installation. ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5597","06/10/2016 16:45:55",5,"Document Mesos ""health check"" feature. ""We don't talk about this feature at all.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5618","06/16/2016 06:37:18",3,"Added a metric indicating if replicated log for the registrar has recovered or not. ""This gives the operator insight into the state of the replicated log for the registrar. The operator needs to know when it is safe to move on to another master in the upgrade orchestration pipeline. ""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5647","06/18/2016 15:33:49",5,"Expose network statistics for containers on CNI network in the `network/cni` isolator. ""We need to implement the `usage` method in the `network/cni` isolator to expose metrics relating to a container's network traffic. On receiving a request for getting `usage` for a given container, the `network/cni` isolator could use NETLINK system calls to query the kernel for interface and routing statistics for a given container's network namespace.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5650","06/19/2016 10:42:04",5,"UNRESERVE operation causes master to crash. 
""{{RESERVE}} operation may cause a master failure: Possible reasons: * Recent improvements in allocator (b4d746f) * Bug in bookkeeping during the previous {{UNRESERVE}} * Network partition that happened after {{RESERVE}} and before {{UNRESERVE}}"""," I0619 05:02:02.298602 11194 http.cpp:312] HTTP GET for /master/slaves from 172.17.0.4:49617 with User-Agent='python-requests/2.9.1' I0619 05:02:02.305542 11193 http.cpp:312] HTTP POST for /master/destroy-volumes from 172.17.0.4:49618 with User-Agent='python-requests/2.9.1' I0619 05:02:02.306731 11191 master.cpp:6560] Sending checkpointed resources mem(kafkatest-role, kafkatest-principal, {resource_id: 7408cc53-183c-48c2-a07f-7087806219f3}):256; cpus(kafkatest-role, kafkatest-principal, {resource_id: d7888099-db8f-4018-9109-f70fb1174f53}):1.5; mem(kafkatest-role, kafkatest-principal, {resource_id: b5dd90fc-2c12-4199-9fc4-cf9f918e332b}):2304; ports(kafkatest-role, kafkatest-principal, {resource_id: a0ee4e01-803f-4b71-950d-483caeb01a57}):[9305-9305, 11596-11596]; cpus(kafkatest-role, kafkatest-principal, {resource_id: 8cd72abb-7089-4220-bb90-46b70c9953ab}):0.5; disk(kafkatest-role, kafkatest-principal, {resource_id: ed06ec6e-2d15-4d0e-bbc4-95a942e58596})[]:11204 to slave a80ff9dd-e046-43ab-b763-28365b136f6b-S0 at slave(1)@10.0.0.5:5051 (10.0.0.5) I0619 05:02:02.311069 11189 http.cpp:312] HTTP POST for /master/destroy-volumes from 172.17.0.4:49619 with User-Agent='python-requests/2.9.1' I0619 05:02:02.312191 11187 master.cpp:6560] Sending checkpointed resources cpus(kafkatest-role, kafkatest-principal, {resource_id: f1ff4806-0c24-4d60-ad2b-b06462ee4081}):1.5; mem(kafkatest-role, kafkatest-principal, {resource_id: cb8dc92d-64f0-4007-8520-1f63625b98c0}):2304; ports(kafkatest-role, kafkatest-principal, {resource_id: 225b4172-be77-453a-a94f-8845edc3f09a}):[9692-9692, 11824-11824]; cpus(kafkatest-role, kafkatest-principal, {resource_id: 942e102a-ca63-480d-9853-9a39e2695ec9}):0.5; mem(kafkatest-role, kafkatest-principal, {resource_id: cad57f8c-27f5-484c-a3fb-e80da74f0813}):256; disk(kafkatest-role, kafkatest-principal, {resource_id: e6563e09-e284-4aaf-8d53-72056695de41})[]:11204 to slave 489aa72f-ae07-4383-a56f-6fe9346ace37-S7 at slave(1)@10.0.0.7:5051 (10.0.0.7) I0619 05:02:02.316118 11189 http.cpp:312] HTTP GET for /master/slaves from 172.17.0.4:49620 with User-Agent='python-requests/2.9.1' I0619 05:02:02.321527 11189 http.cpp:312] HTTP POST for /master/unreserve from 172.17.0.4:49621 with User-Agent='python-requests/2.9.1' I0619 05:02:02.323523 11193 master.cpp:6560] Sending checkpointed resources to slave a80ff9dd-e046-43ab-b763-28365b136f6b-S0 at slave(1)@10.0.0.5:5051 (10.0.0.5) I0619 05:02:02.327658 11191 http.cpp:312] HTTP POST for /master/unreserve from 172.17.0.4:49622 with User-Agent='python-requests/2.9.1' F0619 05:02:02.329208 11190 sorter.cpp:284] Check failed: total_.scalarQuantities.contains(oldSlaveQuantity) ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5657","06/20/2016 07:53:03",3,"Executors should not inherit environment variables from the agent. ""Currently executors are inheriting environment variables form the slave in mesos containerizer. This is problematic, because of two reasons: 1. When we use docker images (such as `mongo`) in unified containerizer, duplicated environment variables inherited from the slave lead to initialization failures, because LANG and/or LC_* environment variables are not set correctly. 2. 
When we are looking at the environment variables from the executor tasks, there are pages of environment variables listed, which is redundant and dangerous. Depending on the reasons above, we propose that no longer allow executors to inherit environment variables from the slave. Instead, users should specify all environment variables they need by setting the slave flag `--executor_environment_variables` as a JSON format.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5659","06/20/2016 15:16:13",5,"Design doc for TASK_UNREACHABLE ""See MESOS-4049.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5660","06/20/2016 16:32:52",2,"ContainerizerTest.ROOT_CGROUPS_BalloonFramework fails because executor environment isn't inherited ""A recent change forbits the executor to inherit environment variables from the agent's environment. As a regression this break {{ContainerizerTest.ROOT_CGROUPS_BalloonFramework}}.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5666","06/20/2016 23:34:01",2,"Deprecate camel case proto field in isolator ContainerConfig. ""Currently there are extra ExecutorInfo and TaskInfo in isolator ContaienrConfig, because a deprecation cycle is needed to deprecate camel cased proto field names. This JIRA is used for tracking this issue, which should address the TODO in isolator.proto.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5667","06/21/2016 02:03:57",2,"CniIsolatorTest.ROOT_INTERNET_CURL_LaunchCommandTask fails on CentOS 7. """""," [22:41:54] : [Step 10/10] [ RUN ] CniIsolatorTest.ROOT_INTERNET_CURL_LaunchCommandTask [22:41:54]W: [Step 10/10] I0619 22:41:54.348641 30896 cluster.cpp:155] Creating default 'local' authorizer [22:41:54]W: [Step 10/10] I0619 22:41:54.353384 30896 leveldb.cpp:174] Opened db in 4.634552ms [22:41:54]W: [Step 10/10] I0619 22:41:54.354763 30896 leveldb.cpp:181] Compacted db in 1.360201ms [22:41:54]W: [Step 10/10] I0619 22:41:54.354784 30896 leveldb.cpp:196] Created db iterator in 3421ns [22:41:54]W: [Step 10/10] I0619 22:41:54.354790 30896 leveldb.cpp:202] Seeked to beginning of db in 633ns [22:41:54]W: [Step 10/10] I0619 22:41:54.354797 30896 leveldb.cpp:271] Iterated through 0 keys in the db in 401ns [22:41:54]W: [Step 10/10] I0619 22:41:54.354811 30896 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned [22:41:54]W: [Step 10/10] I0619 22:41:54.354990 30913 recover.cpp:451] Starting replica recovery [22:41:54]W: [Step 10/10] I0619 22:41:54.355123 30915 recover.cpp:477] Replica is in EMPTY status [22:41:54]W: [Step 10/10] I0619 22:41:54.355391 30915 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (18695)@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.355479 30912 recover.cpp:197] Received a recover response from a replica in EMPTY status [22:41:54]W: [Step 10/10] I0619 22:41:54.355581 30914 recover.cpp:568] Updating replica status to STARTING [22:41:54]W: [Step 10/10] I0619 22:41:54.356091 30910 master.cpp:382] Master 27c796db-6f98-4d61-96c0-f583f22787ff (ip-172-30-2-105.mesosphere.io) started on 172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.356104 30910 master.cpp:384] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" 
--allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/KhgYrQ/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/KhgYrQ/master"""" --zk_session_timeout=""""10secs"""" [22:41:54]W: [Step 10/10] I0619 22:41:54.356237 30910 master.cpp:434] Master only allowing authenticated frameworks to register [22:41:54]W: [Step 10/10] I0619 22:41:54.356245 30910 master.cpp:448] Master only allowing authenticated agents to register [22:41:54]W: [Step 10/10] I0619 22:41:54.356247 30910 master.cpp:461] Master only allowing authenticated HTTP frameworks to register [22:41:54]W: [Step 10/10] I0619 22:41:54.356251 30910 credentials.hpp:37] Loading credentials for authentication from '/tmp/KhgYrQ/credentials' [22:41:54]W: [Step 10/10] I0619 22:41:54.356351 30910 master.cpp:506] Using default 'crammd5' authenticator [22:41:54]W: [Step 10/10] I0619 22:41:54.356389 30910 master.cpp:578] Using default 'basic' HTTP authenticator [22:41:54]W: [Step 10/10] I0619 22:41:54.356439 30910 master.cpp:658] Using default 'basic' HTTP framework authenticator [22:41:54]W: [Step 10/10] I0619 22:41:54.356467 30910 master.cpp:705] Authorization enabled [22:41:54]W: [Step 10/10] I0619 22:41:54.356531 30913 whitelist_watcher.cpp:77] No whitelist given [22:41:54]W: [Step 10/10] I0619 22:41:54.356549 30912 hierarchical.cpp:142] Initialized hierarchical allocator process [22:41:54]W: [Step 10/10] I0619 22:41:54.356868 30916 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.232816ms [22:41:54]W: [Step 10/10] I0619 22:41:54.356884 30916 replica.cpp:320] Persisted replica status to STARTING [22:41:54]W: [Step 10/10] I0619 22:41:54.356945 30916 recover.cpp:477] Replica is in STARTING status [22:41:54]W: [Step 10/10] I0619 22:41:54.357100 30917 master.cpp:1969] The newly elected leader is master@172.30.2.105:40724 with id 27c796db-6f98-4d61-96c0-f583f22787ff [22:41:54]W: [Step 10/10] I0619 22:41:54.357115 30917 master.cpp:1982] Elected as the leading master! 
[22:41:54]W: [Step 10/10] I0619 22:41:54.357122 30917 master.cpp:1669] Recovering from registrar [22:41:54]W: [Step 10/10] I0619 22:41:54.357213 30910 registrar.cpp:332] Recovering registrar [22:41:54]W: [Step 10/10] I0619 22:41:54.357429 30913 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (18698)@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.357549 30914 recover.cpp:197] Received a recover response from a replica in STARTING status [22:41:54]W: [Step 10/10] I0619 22:41:54.357728 30913 recover.cpp:568] Updating replica status to VOTING [22:41:54]W: [Step 10/10] I0619 22:41:54.358937 30913 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.14792ms [22:41:54]W: [Step 10/10] I0619 22:41:54.358952 30913 replica.cpp:320] Persisted replica status to VOTING [22:41:54]W: [Step 10/10] I0619 22:41:54.358986 30913 recover.cpp:582] Successfully joined the Paxos group [22:41:54]W: [Step 10/10] I0619 22:41:54.359041 30913 recover.cpp:466] Recover process terminated [22:41:54]W: [Step 10/10] I0619 22:41:54.359180 30916 log.cpp:553] Attempting to start the writer [22:41:54]W: [Step 10/10] I0619 22:41:54.359578 30917 replica.cpp:493] Replica received implicit promise request from (18699)@172.30.2.105:40724 with proposal 1 [22:41:54]W: [Step 10/10] I0619 22:41:54.360752 30917 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.157449ms [22:41:54]W: [Step 10/10] I0619 22:41:54.360767 30917 replica.cpp:342] Persisted promised to 1 [22:41:54]W: [Step 10/10] I0619 22:41:54.360982 30914 coordinator.cpp:238] Coordinator attempting to fill missing positions [22:41:54]W: [Step 10/10] I0619 22:41:54.361426 30910 replica.cpp:388] Replica received explicit promise request from (18700)@172.30.2.105:40724 for position 0 with proposal 2 [22:41:54]W: [Step 10/10] I0619 22:41:54.362571 30910 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 1.124969ms [22:41:54]W: [Step 10/10] I0619 22:41:54.362587 30910 replica.cpp:712] Persisted action at 0 [22:41:54]W: [Step 10/10] I0619 22:41:54.362999 30911 replica.cpp:537] Replica received write request for position 0 from (18701)@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.363030 30911 leveldb.cpp:436] Reading position from leveldb took 14967ns [22:41:54]W: [Step 10/10] I0619 22:41:54.364264 30911 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 1.214497ms [22:41:54]W: [Step 10/10] I0619 22:41:54.364279 30911 replica.cpp:712] Persisted action at 0 [22:41:54]W: [Step 10/10] I0619 22:41:54.364470 30910 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 [22:41:54]W: [Step 10/10] I0619 22:41:54.365622 30910 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.131398ms [22:41:54]W: [Step 10/10] I0619 22:41:54.365636 30910 replica.cpp:712] Persisted action at 0 [22:41:54]W: [Step 10/10] I0619 22:41:54.365643 30910 replica.cpp:697] Replica learned NOP action at position 0 [22:41:54]W: [Step 10/10] I0619 22:41:54.365769 30915 log.cpp:569] Writer started with ending position 0 [22:41:54]W: [Step 10/10] I0619 22:41:54.366080 30913 leveldb.cpp:436] Reading position from leveldb took 8794ns [22:41:54]W: [Step 10/10] I0619 22:41:54.366284 30915 registrar.cpp:365] Successfully fetched the registry (0B) in 9.053952ms [22:41:54]W: [Step 10/10] I0619 22:41:54.366315 30915 registrar.cpp:464] Applied 1 operations in 3436ns; attempting to update the 'registry' [22:41:54]W: [Step 10/10] I0619 22:41:54.366487 30911 log.cpp:577] Attempting to 
append 209 bytes to the log [22:41:54]W: [Step 10/10] I0619 22:41:54.366539 30917 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 [22:41:54]W: [Step 10/10] I0619 22:41:54.366839 30917 replica.cpp:537] Replica received write request for position 1 from (18702)@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.367966 30917 leveldb.cpp:341] Persisting action (228 bytes) to leveldb took 1.106053ms [22:41:54]W: [Step 10/10] I0619 22:41:54.367982 30917 replica.cpp:712] Persisted action at 1 [22:41:54]W: [Step 10/10] I0619 22:41:54.368201 30915 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 [22:41:54]W: [Step 10/10] I0619 22:41:54.371786 30915 leveldb.cpp:341] Persisting action (230 bytes) to leveldb took 3.566076ms [22:41:54]W: [Step 10/10] I0619 22:41:54.371803 30915 replica.cpp:712] Persisted action at 1 [22:41:54]W: [Step 10/10] I0619 22:41:54.371809 30915 replica.cpp:697] Replica learned APPEND action at position 1 [22:41:54]W: [Step 10/10] I0619 22:41:54.372032 30910 registrar.cpp:509] Successfully updated the 'registry' in 5.693952ms [22:41:54]W: [Step 10/10] I0619 22:41:54.372097 30910 registrar.cpp:395] Successfully recovered registrar [22:41:54]W: [Step 10/10] I0619 22:41:54.372107 30911 log.cpp:596] Attempting to truncate the log to 1 [22:41:54]W: [Step 10/10] I0619 22:41:54.372151 30910 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 [22:41:54]W: [Step 10/10] I0619 22:41:54.372218 30911 master.cpp:1777] Recovered 0 agents from the Registry (170B) ; allowing 10mins for agents to re-register [22:41:54]W: [Step 10/10] I0619 22:41:54.372242 30915 hierarchical.cpp:169] Skipping recovery of hierarchical allocator: nothing to recover [22:41:54]W: [Step 10/10] I0619 22:41:54.372467 30914 replica.cpp:537] Replica received write request for position 2 from (18703)@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.373693 30914 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.207676ms [22:41:54]W: [Step 10/10] I0619 22:41:54.373708 30914 replica.cpp:712] Persisted action at 2 [22:41:54]W: [Step 10/10] I0619 22:41:54.373920 30913 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 [22:41:54]W: [Step 10/10] I0619 22:41:54.375115 30913 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 1.17978ms [22:41:54]W: [Step 10/10] I0619 22:41:54.375145 30913 leveldb.cpp:399] Deleting ~1 keys from leveldb took 14216ns [22:41:54]W: [Step 10/10] I0619 22:41:54.375154 30913 replica.cpp:712] Persisted action at 2 [22:41:54]W: [Step 10/10] I0619 22:41:54.375159 30913 replica.cpp:697] Replica learned TRUNCATE action at position 2 [22:41:54]W: [Step 10/10] I0619 22:41:54.383839 30896 containerizer.cpp:201] Using isolation: docker/runtime,filesystem/linux,network/cni [22:41:54]W: [Step 10/10] I0619 22:41:54.388789 30896 linux_launcher.cpp:101] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher [22:41:54]W: [Step 10/10] E0619 22:41:54.393234 30896 shell.hpp:106] Command 'hadoop version 2>&1' failed; this is the output: [22:41:54]W: [Step 10/10] sh: hadoop: command not found [22:41:54]W: [Step 10/10] I0619 22:41:54.393265 30896 fetcher.cpp:62] Skipping URI fetcher plugin 'hadoop' as it could not be created: Failed to create HDFS client: Failed to execute 'hadoop version 2>&1'; the command was either not found or exited with a non-zero exit status: 127 [22:41:54]W: [Step 10/10] I0619 22:41:54.393316 30896 
registry_puller.cpp:111] Creating registry puller with docker registry 'https://registry-1.docker.io' [22:41:54]W: [Step 10/10] I0619 22:41:54.395668 30896 cluster.cpp:432] Creating default 'local' authorizer [22:41:54]W: [Step 10/10] I0619 22:41:54.396100 30914 slave.cpp:203] Agent started on 469)@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.396116 30914 slave.cpp:204] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""true"""" --authenticatee=""""crammd5"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/KhgYrQ/store"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --http_credentials=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/http_credentials"""" --image_providers=""""docker"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""docker/runtime,filesystem/linux,network/cni"""" --launcher_dir=""""/mnt/teamcity/work/4240ba9ddd0997c3/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --network_cni_config_dir=""""/tmp/KhgYrQ/configs"""" --network_cni_plugins_dir=""""/tmp/KhgYrQ/plugins"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI"""" [22:41:54]W: [Step 10/10] I0619 22:41:54.396380 30914 credentials.hpp:86] Loading credential for authentication from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/credential' [22:41:54]W: [Step 10/10] I0619 22:41:54.396495 30914 slave.cpp:341] Agent using credential for: test-principal [22:41:54]W: [Step 
10/10] I0619 22:41:54.396509 30914 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/http_credentials' [22:41:54]W: [Step 10/10] I0619 22:41:54.396586 30914 slave.cpp:393] Using default 'basic' HTTP authenticator [22:41:54]W: [Step 10/10] I0619 22:41:54.396698 30914 resources.cpp:572] Parsing resources as JSON failed: cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000] [22:41:54]W: [Step 10/10] Trying semicolon-delimited string format instead [22:41:54]W: [Step 10/10] I0619 22:41:54.396780 30896 sched.cpp:224] Version: 1.0.0 [22:41:54]W: [Step 10/10] I0619 22:41:54.396991 30914 slave.cpp:592] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [22:41:54]W: [Step 10/10] I0619 22:41:54.397020 30914 slave.cpp:600] Agent attributes: [ ] [22:41:54]W: [Step 10/10] I0619 22:41:54.397029 30914 slave.cpp:605] Agent hostname: ip-172-30-2-105.mesosphere.io [22:41:54]W: [Step 10/10] I0619 22:41:54.397040 30916 sched.cpp:328] New master detected at master@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.397068 30916 sched.cpp:394] Authenticating with master master@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.397078 30916 sched.cpp:401] Using default CRAM-MD5 authenticatee [22:41:54]W: [Step 10/10] I0619 22:41:54.397188 30916 authenticatee.cpp:121] Creating new client SASL connection [22:41:54]W: [Step 10/10] I0619 22:41:54.397467 30914 state.cpp:57] Recovering state from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/meta' [22:41:54]W: [Step 10/10] I0619 22:41:54.397476 30912 master.cpp:5943] Authenticating scheduler-af10d6a3-1ebc-4377-b44d-8c0dfbffcb8e@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.397544 30913 authenticator.cpp:414] Starting authentication session for crammd5_authenticatee(953)@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.397614 30915 status_update_manager.cpp:200] Recovering status update manager [22:41:54]W: [Step 10/10] I0619 22:41:54.397668 30912 authenticator.cpp:98] Creating new server SASL connection [22:41:54]W: [Step 10/10] I0619 22:41:54.397709 30915 containerizer.cpp:514] Recovering containerizer [22:41:54]W: [Step 10/10] I0619 22:41:54.397869 30912 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 [22:41:54]W: [Step 10/10] I0619 22:41:54.397886 30912 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' [22:41:54]W: [Step 10/10] I0619 22:41:54.397927 30912 authenticator.cpp:204] Received SASL authentication start [22:41:54]W: [Step 10/10] I0619 22:41:54.397964 30912 authenticator.cpp:326] Authentication requires more steps [22:41:54]W: [Step 10/10] I0619 22:41:54.398000 30912 authenticatee.cpp:259] Received SASL authentication step [22:41:54]W: [Step 10/10] I0619 22:41:54.398052 30912 authenticator.cpp:232] Received SASL authentication step [22:41:54]W: [Step 10/10] I0619 22:41:54.398066 30912 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-105.mesosphere.io' server FQDN: 'ip-172-30-2-105.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false [22:41:54]W: [Step 10/10] I0619 22:41:54.398073 30912 auxprop.cpp:179] Looking up auxiliary property '*userPassword' [22:41:54]W: [Step 10/10] I0619 22:41:54.398087 30912 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' [22:41:54]W: [Step 10/10] 
I0619 22:41:54.398098 30912 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-105.mesosphere.io' server FQDN: 'ip-172-30-2-105.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true [22:41:54]W: [Step 10/10] I0619 22:41:54.398103 30912 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true [22:41:54]W: [Step 10/10] I0619 22:41:54.398108 30912 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true [22:41:54]W: [Step 10/10] I0619 22:41:54.398116 30912 authenticator.cpp:318] Authentication success [22:41:54]W: [Step 10/10] I0619 22:41:54.398162 30914 authenticatee.cpp:299] Authentication success [22:41:54]W: [Step 10/10] I0619 22:41:54.398181 30913 authenticator.cpp:432] Authentication session cleanup for crammd5_authenticatee(953)@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.398200 30912 master.cpp:5973] Successfully authenticated principal 'test-principal' at scheduler-af10d6a3-1ebc-4377-b44d-8c0dfbffcb8e@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.398270 30914 sched.cpp:484] Successfully authenticated with master master@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.398280 30914 sched.cpp:800] Sending SUBSCRIBE call to master@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.398342 30914 sched.cpp:833] Will retry registration in 869.123866ms if necessary [22:41:54]W: [Step 10/10] I0619 22:41:54.398381 30916 master.cpp:2539] Received SUBSCRIBE call for framework 'default' at scheduler-af10d6a3-1ebc-4377-b44d-8c0dfbffcb8e@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.398398 30916 master.cpp:2008] Authorizing framework principal 'test-principal' to receive offers for role '*' [22:41:54]W: [Step 10/10] I0619 22:41:54.398483 30916 master.cpp:2615] Subscribing framework default with checkpointing disabled and capabilities [ ] [22:41:54]W: [Step 10/10] I0619 22:41:54.398679 30916 sched.cpp:723] Framework registered with 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:54]W: [Step 10/10] I0619 22:41:54.398701 30916 sched.cpp:737] Scheduler::registered took 10291ns [22:41:54]W: [Step 10/10] I0619 22:41:54.398784 30910 hierarchical.cpp:264] Added framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:54]W: [Step 10/10] I0619 22:41:54.398802 30910 hierarchical.cpp:1488] No allocations performed [22:41:54]W: [Step 10/10] I0619 22:41:54.398808 30910 hierarchical.cpp:1583] No inverse offers to send out! [22:41:54]W: [Step 10/10] I0619 22:41:54.398818 30910 hierarchical.cpp:1139] Performed allocation for 0 agents in 22451ns [22:41:54]W: [Step 10/10] I0619 22:41:54.399222 30916 metadata_manager.cpp:205] No images to load from disk. 
Docker provisioner image storage path '/tmp/KhgYrQ/store/storedImages' does not exist [22:41:54]W: [Step 10/10] I0619 22:41:54.399318 30910 provisioner.cpp:253] Provisioner recovery complete [22:41:54]W: [Step 10/10] I0619 22:41:54.399453 30913 slave.cpp:4845] Finished recovery [22:41:54]W: [Step 10/10] I0619 22:41:54.399690 30913 slave.cpp:5017] Querying resource estimator for oversubscribable resources [22:41:54]W: [Step 10/10] I0619 22:41:54.399796 30911 slave.cpp:967] New master detected at master@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.399811 30911 slave.cpp:1029] Authenticating with master master@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.399801 30914 status_update_manager.cpp:174] Pausing sending status updates [22:41:54]W: [Step 10/10] I0619 22:41:54.399821 30911 slave.cpp:1040] Using default CRAM-MD5 authenticatee [22:41:54]W: [Step 10/10] I0619 22:41:54.399855 30911 slave.cpp:1002] Detecting new master [22:41:54]W: [Step 10/10] I0619 22:41:54.399879 30915 authenticatee.cpp:121] Creating new client SASL connection [22:41:54]W: [Step 10/10] I0619 22:41:54.399910 30911 slave.cpp:5031] Received oversubscribable resources from the resource estimator [22:41:54]W: [Step 10/10] I0619 22:41:54.400044 30915 master.cpp:5943] Authenticating slave(469)@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.400099 30910 authenticator.cpp:414] Starting authentication session for crammd5_authenticatee(954)@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.400151 30910 authenticator.cpp:98] Creating new server SASL connection [22:41:54]W: [Step 10/10] I0619 22:41:54.400316 30910 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 [22:41:54]W: [Step 10/10] I0619 22:41:54.400329 30910 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' [22:41:54]W: [Step 10/10] I0619 22:41:54.400367 30910 authenticator.cpp:204] Received SASL authentication start [22:41:54]W: [Step 10/10] I0619 22:41:54.400398 30910 authenticator.cpp:326] Authentication requires more steps [22:41:54]W: [Step 10/10] I0619 22:41:54.400431 30910 authenticatee.cpp:259] Received SASL authentication step [22:41:54]W: [Step 10/10] I0619 22:41:54.400516 30917 authenticator.cpp:232] Received SASL authentication step [22:41:54]W: [Step 10/10] I0619 22:41:54.400530 30917 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-105.mesosphere.io' server FQDN: 'ip-172-30-2-105.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false [22:41:54]W: [Step 10/10] I0619 22:41:54.400537 30917 auxprop.cpp:179] Looking up auxiliary property '*userPassword' [22:41:54]W: [Step 10/10] I0619 22:41:54.400544 30917 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' [22:41:54]W: [Step 10/10] I0619 22:41:54.400550 30917 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-105.mesosphere.io' server FQDN: 'ip-172-30-2-105.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true [22:41:54]W: [Step 10/10] I0619 22:41:54.400554 30917 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true [22:41:54]W: [Step 10/10] I0619 22:41:54.400558 30917 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true [22:41:54]W: [Step 10/10] I0619 22:41:54.400566 30917 authenticator.cpp:318] 
Authentication success [22:41:54]W: [Step 10/10] I0619 22:41:54.400609 30914 authenticatee.cpp:299] Authentication success [22:41:54]W: [Step 10/10] I0619 22:41:54.400640 30912 authenticator.cpp:432] Authentication session cleanup for crammd5_authenticatee(954)@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.400682 30917 master.cpp:5973] Successfully authenticated principal 'test-principal' at slave(469)@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.400738 30911 slave.cpp:1108] Successfully authenticated with master master@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.400790 30911 slave.cpp:1511] Will retry registration in 13.364855ms if necessary [22:41:54]W: [Step 10/10] I0619 22:41:54.400848 30913 master.cpp:4653] Registering agent at slave(469)@172.30.2.105:40724 (ip-172-30-2-105.mesosphere.io) with id 27c796db-6f98-4d61-96c0-f583f22787ff-S0 [22:41:54]W: [Step 10/10] I0619 22:41:54.400950 30914 registrar.cpp:464] Applied 1 operations in 16921ns; attempting to update the 'registry' [22:41:54]W: [Step 10/10] I0619 22:41:54.401154 30915 log.cpp:577] Attempting to append 395 bytes to the log [22:41:54]W: [Step 10/10] I0619 22:41:54.401213 30914 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 [22:41:54]W: [Step 10/10] I0619 22:41:54.401515 30914 replica.cpp:537] Replica received write request for position 3 from (18725)@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.402851 30914 leveldb.cpp:341] Persisting action (414 bytes) to leveldb took 1.317458ms [22:41:54]W: [Step 10/10] I0619 22:41:54.402866 30914 replica.cpp:712] Persisted action at 3 [22:41:54]W: [Step 10/10] I0619 22:41:54.403101 30917 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 [22:41:54]W: [Step 10/10] I0619 22:41:54.404217 30917 leveldb.cpp:341] Persisting action (416 bytes) to leveldb took 1.100393ms [22:41:54]W: [Step 10/10] I0619 22:41:54.404233 30917 replica.cpp:712] Persisted action at 3 [22:41:54]W: [Step 10/10] I0619 22:41:54.404239 30917 replica.cpp:697] Replica learned APPEND action at position 3 [22:41:54]W: [Step 10/10] I0619 22:41:54.404495 30915 registrar.cpp:509] Successfully updated the 'registry' in 3.521792ms [22:41:54]W: [Step 10/10] I0619 22:41:54.404561 30913 log.cpp:596] Attempting to truncate the log to 3 [22:41:54]W: [Step 10/10] I0619 22:41:54.404621 30915 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 [22:41:54]W: [Step 10/10] I0619 22:41:54.404690 30910 master.cpp:4721] Registered agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 at slave(469)@172.30.2.105:40724 (ip-172-30-2-105.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [22:41:54]W: [Step 10/10] I0619 22:41:54.404726 30915 slave.cpp:3747] Received ping from slave-observer(429)@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.404747 30916 hierarchical.cpp:473] Added agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 (ip-172-30-2-105.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) [22:41:54]W: [Step 10/10] I0619 22:41:54.404825 30915 slave.cpp:1152] Registered with master master@172.30.2.105:40724; given agent ID 27c796db-6f98-4d61-96c0-f583f22787ff-S0 [22:41:54]W: [Step 10/10] I0619 22:41:54.404840 30915 fetcher.cpp:86] Clearing fetcher cache [22:41:54]W: [Step 10/10] I0619 22:41:54.404880 30910 replica.cpp:537] Replica received write request for position 4 from (18726)@172.30.2.105:40724 [22:41:54]W: [Step 
10/10] I0619 22:41:54.404911 30916 hierarchical.cpp:1583] No inverse offers to send out! [22:41:54]W: [Step 10/10] I0619 22:41:54.404932 30913 status_update_manager.cpp:181] Resuming sending status updates [22:41:54]W: [Step 10/10] I0619 22:41:54.404942 30916 hierarchical.cpp:1162] Performed allocation for agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 in 168147ns [22:41:54]W: [Step 10/10] I0619 22:41:54.405025 30911 master.cpp:5772] Sending 1 offers to framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 (default) at scheduler-af10d6a3-1ebc-4377-b44d-8c0dfbffcb8e@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.405082 30915 slave.cpp:1175] Checkpointing SlaveInfo to '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/meta/slaves/27c796db-6f98-4d61-96c0-f583f22787ff-S0/slave.info' [22:41:54]W: [Step 10/10] I0619 22:41:54.405177 30911 sched.cpp:897] Scheduler::resourceOffers took 55063ns [22:41:54]W: [Step 10/10] I0619 22:41:54.405239 30915 slave.cpp:1212] Forwarding total oversubscribed resources [22:41:54]W: [Step 10/10] I0619 22:41:54.405299 30911 master.cpp:5066] Received update of agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 at slave(469)@172.30.2.105:40724 (ip-172-30-2-105.mesosphere.io) with total oversubscribed resources [22:41:54]W: [Step 10/10] I0619 22:41:54.405318 30896 resources.cpp:572] Parsing resources as JSON failed: cpus:1;mem:128 [22:41:54]W: [Step 10/10] Trying semicolon-delimited string format instead [22:41:54]W: [Step 10/10] I0619 22:41:54.405387 30911 hierarchical.cpp:531] Agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 (ip-172-30-2-105.mesosphere.io) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) [22:41:54]W: [Step 10/10] I0619 22:41:54.405421 30911 hierarchical.cpp:1488] No allocations performed [22:41:54]W: [Step 10/10] I0619 22:41:54.405431 30911 hierarchical.cpp:1583] No inverse offers to send out! 
[22:41:54]W: [Step 10/10] I0619 22:41:54.405447 30911 hierarchical.cpp:1162] Performed allocation for agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 in 40224ns [22:41:54]W: [Step 10/10] I0619 22:41:54.405643 30914 master.cpp:3457] Processing ACCEPT call for offers: [ 27c796db-6f98-4d61-96c0-f583f22787ff-O0 ] on agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 at slave(469)@172.30.2.105:40724 (ip-172-30-2-105.mesosphere.io) for framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 (default) at scheduler-af10d6a3-1ebc-4377-b44d-8c0dfbffcb8e@172.30.2.105:40724 [22:41:54]W: [Step 10/10] I0619 22:41:54.405668 30914 master.cpp:3095] Authorizing framework principal 'test-principal' to launch task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 [22:41:54]W: [Step 10/10] I0619 22:41:54.406030 30912 master.hpp:177] Adding task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 with resources cpus(*):1; mem(*):128 on agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 (ip-172-30-2-105.mesosphere.io) [22:41:54]W: [Step 10/10] I0619 22:41:54.406056 30912 master.cpp:3946] Launching task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 (default) at scheduler-af10d6a3-1ebc-4377-b44d-8c0dfbffcb8e@172.30.2.105:40724 with resources cpus(*):1; mem(*):128 on agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 at slave(469)@172.30.2.105:40724 (ip-172-30-2-105.mesosphere.io) [22:41:54]W: [Step 10/10] I0619 22:41:54.406158 30916 slave.cpp:1551] Got assigned task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 for framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:54]W: [Step 10/10] I0619 22:41:54.406193 30912 hierarchical.cpp:891] Recovered cpus(*):1; mem(*):896; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):1; mem(*):128) on agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 from framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:54]W: [Step 10/10] I0619 22:41:54.406214 30912 hierarchical.cpp:928] Framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 filtered agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 for 5secs [22:41:54]W: [Step 10/10] I0619 22:41:54.406250 30916 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 [22:41:54]W: [Step 10/10] Trying semicolon-delimited string format instead [22:41:54]W: [Step 10/10] I0619 22:41:54.406347 30910 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.44747ms [22:41:54]W: [Step 10/10] I0619 22:41:54.406359 30910 replica.cpp:712] Persisted action at 4 [22:41:54]W: [Step 10/10] I0619 22:41:54.406381 30916 slave.cpp:1670] Launching task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 for framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:54]W: [Step 10/10] I0619 22:41:54.406420 30916 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 [22:41:54]W: [Step 10/10] Trying semicolon-delimited string format instead [22:41:54]W: [Step 10/10] I0619 22:41:54.406555 30914 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 [22:41:54]W: [Step 10/10] I0619 22:41:54.406793 30916 paths.cpp:528] Trying to chown '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/slaves/27c796db-6f98-4d61-96c0-f583f22787ff-S0/frameworks/27c796db-6f98-4d61-96c0-f583f22787ff-0000/executors/d7416b1b-cd1c-422a-bbaa-bb28913eeaf2/runs/548370b5-05f2-4e33-8f6f-015aa3fd1af4' to user 'root' [22:41:54]W: [Step 10/10] I0619 22:41:54.408360 30914 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 1.635458ms [22:41:54]W: 
[Step 10/10] I0619 22:41:54.408453 30914 leveldb.cpp:399] Deleting ~2 keys from leveldb took 53370ns [22:41:54]W: [Step 10/10] I0619 22:41:54.408469 30914 replica.cpp:712] Persisted action at 4 [22:41:54]W: [Step 10/10] I0619 22:41:54.408480 30914 replica.cpp:697] Replica learned TRUNCATE action at position 4 [22:41:54]W: [Step 10/10] I0619 22:41:54.411355 30916 slave.cpp:5734] Launching executor d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/slaves/27c796db-6f98-4d61-96c0-f583f22787ff-S0/frameworks/27c796db-6f98-4d61-96c0-f583f22787ff-0000/executors/d7416b1b-cd1c-422a-bbaa-bb28913eeaf2/runs/548370b5-05f2-4e33-8f6f-015aa3fd1af4' [22:41:54]W: [Step 10/10] I0619 22:41:54.411485 30916 slave.cpp:1896] Queuing task 'd7416b1b-cd1c-422a-bbaa-bb28913eeaf2' for executor 'd7416b1b-cd1c-422a-bbaa-bb28913eeaf2' of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:54]W: [Step 10/10] I0619 22:41:54.411516 30915 containerizer.cpp:773] Starting container '548370b5-05f2-4e33-8f6f-015aa3fd1af4' for executor 'd7416b1b-cd1c-422a-bbaa-bb28913eeaf2' of framework '27c796db-6f98-4d61-96c0-f583f22787ff-0000' [22:41:54]W: [Step 10/10] I0619 22:41:54.411521 30916 slave.cpp:920] Successfully attached file '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/slaves/27c796db-6f98-4d61-96c0-f583f22787ff-S0/frameworks/27c796db-6f98-4d61-96c0-f583f22787ff-0000/executors/d7416b1b-cd1c-422a-bbaa-bb28913eeaf2/runs/548370b5-05f2-4e33-8f6f-015aa3fd1af4' [22:41:54]W: [Step 10/10] I0619 22:41:54.411733 30914 metadata_manager.cpp:167] Looking for image 'alpine' [22:41:54]W: [Step 10/10] I0619 22:41:54.412009 30911 registry_puller.cpp:235] Pulling image 'library/alpine' from 'docker-manifest://registry-1.docker.io:443library/alpine?latest#https' to '/tmp/KhgYrQ/store/staging/0cVlJm' [22:41:54]W: [Step 10/10] I0619 22:41:54.870712 30915 registry_puller.cpp:258] The manifest for image 'library/alpine' is '{ [22:41:54]W: [Step 10/10] """"schemaVersion"""": 1, [22:41:54]W: [Step 10/10] """"name"""": """"library/alpine"""", [22:41:54]W: [Step 10/10] """"tag"""": """"latest"""", [22:41:54]W: [Step 10/10] """"architecture"""": """"amd64"""", [22:41:54]W: [Step 10/10] """"fsLayers"""": [ [22:41:54]W: [Step 10/10] { [22:41:54]W: [Step 10/10] """"blobSum"""": """"sha256:fae91920dcd4542f97c9350b3157139a5d901362c2abec284de5ebd1b45b4957"""" [22:41:54]W: [Step 10/10] } [22:41:54]W: [Step 10/10] ], [22:41:54]W: [Step 10/10] """"history"""": [ [22:41:54]W: [Step 10/10] { [22:41:54]W: [Step 10/10] """"v1Compatibility"""": 
""""{\""""architecture\"""":\""""amd64\"""",\""""config\"""":{\""""Hostname\"""":\""""571cde9b03ce\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":null,\""""Cmd\"""":null,\""""Image\"""":\""""\"""",\""""Volumes\"""":null,\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""OnBuild\"""":null,\""""Labels\"""":null},\""""container\"""":\""""571cde9b03ce6f46b78b8e9c5089d03034863a4ab9f05d3e4997d0e5e80a2a6e\"""",\""""container_config\"""":{\""""Hostname\"""":\""""571cde9b03ce\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":null,\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) ADD file:701fd33a2f463fd4bd459779276897ef01dcf998dd47f6c8eae34fa5e0886046 in /\""""],\""""Image\"""":\""""\"""",\""""Volumes\"""":null,\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""OnBuild\"""":null,\""""Labels\"""":null},\""""created\"""":\""""2016-06-02T21:43:31.291506236Z\"""",\""""docker_version\"""":\""""1.9.1\"""",\""""id\"""":\""""e43bd3919b4ed702040fe5d0b19c9a0778ae7d61f169cf98112a842746168e6b\"""",\""""os\"""":\""""linux\""""}"""" [22:41:54]W: [Step 10/10] } [22:41:54]W: [Step 10/10] ], [22:41:54]W: [Step 10/10] """"signatures"""": [ [22:41:54]W: [Step 10/10] { [22:41:54]W: [Step 10/10] """"header"""": { [22:41:54]W: [Step 10/10] """"jwk"""": { [22:41:54]W: [Step 10/10] """"crv"""": """"P-256"""", [22:41:54]W: [Step 10/10] """"kid"""": """"IZ4C:AKG6:LLBK:4Y62:6YWU:OI2G:K2EN:ZOJH:GHRY:5PKA:PFEE:WZWD"""", [22:41:54]W: [Step 10/10] """"kty"""": """"EC"""", [22:41:54]W: [Step 10/10] """"x"""": """"hU3h5pMhA0tgT3mF41BH5EbsLy9Tv3O-bla53S8-25g"""", [22:41:54]W: [Step 10/10] """"y"""": """"Y9sM4tXh_3KKKeEhikWEGgTUlQLYJxPWCXcs_bVP4Pc"""" [22:41:54]W: [Step 10/10] }, [22:41:54]W: [Step 10/10] """"alg"""": """"ES256"""" [22:41:54]W: [Step 10/10] }, [22:41:54]W: [Step 10/10] """"signature"""": """"8SZVGFKd_Ovz9FtfNMoLRWkwayOY9zaTq4bgPnKPuFPK-48nhDTMlkMz52Nqm2SHCk2xtYYkhzLtE6wUctrjqA"""", [22:41:54]W: [Step 10/10] """"protected"""": """"eyJmb3JtYXRMZW5ndGgiOjEzNTgsImZvcm1hdFRhaWwiOiJDbjAiLCJ0aW1lIjoiMjAxNi0wNi0xOVQyMjo0MTo1NFoifQ"""" [22:41:54]W: [Step 10/10] } [22:41:54]W: [Step 10/10] ] [22:41:54]W: [Step 10/10] }' [22:41:54]W: [Step 10/10] I0619 22:41:54.870767 30915 registry_puller.cpp:368] Fetching blob 'sha256:fae91920dcd4542f97c9350b3157139a5d901362c2abec284de5ebd1b45b4957' for layer 'e43bd3919b4ed702040fe5d0b19c9a0778ae7d61f169cf98112a842746168e6b' of image 'library/alpine' [22:41:55]W: [Step 10/10] I0619 22:41:55.357898 30910 hierarchical.cpp:1674] Filtered offer with cpus(*):1; mem(*):896; disk(*):1024; ports(*):[31000-32000] on agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 for framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:55]W: [Step 10/10] I0619 22:41:55.357965 30910 hierarchical.cpp:1488] No allocations performed [22:41:55]W: [Step 10/10] I0619 22:41:55.357980 30910 hierarchical.cpp:1583] No inverse offers to send out! 
[22:41:55]W: [Step 10/10] I0619 22:41:55.358002 30910 hierarchical.cpp:1139] Performed allocation for 1 agents in 238814ns [22:41:55]W: [Step 10/10] I0619 22:41:55.474309 30911 registry_puller.cpp:305] Extracting layer tar ball '/tmp/KhgYrQ/store/staging/0cVlJm/sha256:fae91920dcd4542f97c9350b3157139a5d901362c2abec284de5ebd1b45b4957 to rootfs '/tmp/KhgYrQ/store/staging/0cVlJm/e43bd3919b4ed702040fe5d0b19c9a0778ae7d61f169cf98112a842746168e6b/rootfs' [22:41:55]W: [Step 10/10] I0619 22:41:55.575764 30910 metadata_manager.cpp:155] Successfully cached image 'alpine' [22:41:55]W: [Step 10/10] I0619 22:41:55.576198 30911 provisioner.cpp:294] Provisioning image rootfs '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/provisioner/containers/548370b5-05f2-4e33-8f6f-015aa3fd1af4/backends/copy/rootfses/4f5eb0d5-118b-4129-972d-0a7e6a374f6f' for container 548370b5-05f2-4e33-8f6f-015aa3fd1af4 [22:41:55]W: [Step 10/10] I0619 22:41:55.576556 30910 copy.cpp:128] Copying layer path '/tmp/KhgYrQ/store/layers/e43bd3919b4ed702040fe5d0b19c9a0778ae7d61f169cf98112a842746168e6b/rootfs' to rootfs '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/provisioner/containers/548370b5-05f2-4e33-8f6f-015aa3fd1af4/backends/copy/rootfses/4f5eb0d5-118b-4129-972d-0a7e6a374f6f' [22:41:55]W: [Step 10/10] I0619 22:41:55.676825 30916 containerizer.cpp:1267] Launching 'mesos-containerizer' with flags '--command=""""{""""arguments"""":[""""mesos-executor"""",""""--sandbox_directory=\/mnt\/mesos\/sandbox"""",""""--user=root"""",""""--rootfs=\/mnt\/teamcity\/temp\/buildTmp\/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI\/provisioner\/containers\/548370b5-05f2-4e33-8f6f-015aa3fd1af4\/backends\/copy\/rootfses\/4f5eb0d5-118b-4129-972d-0a7e6a374f6f""""],""""shell"""":false,""""user"""":""""root"""",""""value"""":""""\/mnt\/teamcity\/work\/4240ba9ddd0997c3\/build\/src\/mesos-executor""""}"""" --commands=""""{""""commands"""":[{""""shell"""":true,""""value"""":""""#!\/bin\/sh\nset -x -e\n\/mnt\/teamcity\/work\/4240ba9ddd0997c3\/build\/src\/mesos-containerizer mount --help=\""""false\"""" --operation=\""""make-rslave\"""" --path=\""""\/\""""\nmount -n --rbind '\/mnt\/teamcity\/temp\/buildTmp\/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI\/slaves\/27c796db-6f98-4d61-96c0-f583f22787ff-S0\/frameworks\/27c796db-6f98-4d61-96c0-f583f22787ff-0000\/executors\/d7416b1b-cd1c-422a-bbaa-bb28913eeaf2\/runs\/548370b5-05f2-4e33-8f6f-015aa3fd1af4' '\/mnt\/teamcity\/temp\/buildTmp\/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI\/provisioner\/containers\/548370b5-05f2-4e33-8f6f-015aa3fd1af4\/backends\/copy\/rootfses\/4f5eb0d5-118b-4129-972d-0a7e6a374f6f\/mnt\/mesos\/sandbox'\n""""}]}"""" --help=""""false"""" --pipe_read=""""17"""" --pipe_write=""""20"""" --sandbox=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/slaves/27c796db-6f98-4d61-96c0-f583f22787ff-S0/frameworks/27c796db-6f98-4d61-96c0-f583f22787ff-0000/executors/d7416b1b-cd1c-422a-bbaa-bb28913eeaf2/runs/548370b5-05f2-4e33-8f6f-015aa3fd1af4"""" --user=""""root""""' [22:41:55]W: [Step 10/10] I0619 22:41:55.676923 30916 linux_launcher.cpp:281] Cloning child process with flags = CLONE_NEWUTS | CLONE_NEWNS [22:41:55]W: [Step 10/10] I0619 22:41:55.681491 30913 cni.cpp:683] Bind mounted '/proc/13484/ns/net' to '/run/mesos/isolators/network/cni/548370b5-05f2-4e33-8f6f-015aa3fd1af4/ns' for container 548370b5-05f2-4e33-8f6f-015aa3fd1af4 [22:41:55]W: [Step 
10/10] I0619 22:41:55.681712 30913 cni.cpp:977] Invoking CNI plugin 'mockPlugin' with network configuration '{""""args"""":{""""org.apache.mesos"""":{""""network_info"""":{""""name"""":""""__MESOS_TEST__""""}}},""""name"""":""""__MESOS_TEST__"""",""""type"""":""""mockPlugin""""}' [22:41:55]W: [Step 10/10] I0619 22:41:55.776078 30916 cni.cpp:1066] Got assigned IPv4 address '172.17.0.1/16' from CNI network '__MESOS_TEST__' for container 548370b5-05f2-4e33-8f6f-015aa3fd1af4 [22:41:55]W: [Step 10/10] I0619 22:41:55.776463 30913 cni.cpp:808] DNS nameservers for container 548370b5-05f2-4e33-8f6f-015aa3fd1af4 are: [22:41:55]W: [Step 10/10] nameserver 172.30.0.2 [22:41:55]W: [Step 10/10] + /mnt/teamcity/work/4240ba9ddd0997c3/build/src/mesos-containerizer mount --help=false --operation=make-rslave --path=/ [22:41:55]W: [Step 10/10] + mount -n --rbind /mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/slaves/27c796db-6f98-4d61-96c0-f583f22787ff-S0/frameworks/27c796db-6f98-4d61-96c0-f583f22787ff-0000/executors/d7416b1b-cd1c-422a-bbaa-bb28913eeaf2/runs/548370b5-05f2-4e33-8f6f-015aa3fd1af4 /mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/provisioner/containers/548370b5-05f2-4e33-8f6f-015aa3fd1af4/backends/copy/rootfses/4f5eb0d5-118b-4129-972d-0a7e6a374f6f/mnt/mesos/sandbox [22:41:55]W: [Step 10/10] WARNING: Logging before InitGoogleLogging() is written to STDERR [22:41:55]W: [Step 10/10] I0619 22:41:55.944355 13484 process.cpp:1060] libprocess is initialized on 172.17.0.1:60396 with 8 worker threads [22:41:55]W: [Step 10/10] I0619 22:41:55.946605 13484 logging.cpp:199] Logging to STDERR [22:41:55]W: [Step 10/10] I0619 22:41:55.947335 13484 exec.cpp:161] Version: 1.0.0 [22:41:55]W: [Step 10/10] I0619 22:41:55.947404 13541 exec.cpp:211] Executor started at: executor(1)@172.17.0.1:60396 with pid 13484 [22:41:55]W: [Step 10/10] I0619 22:41:55.947883 30917 slave.cpp:2884] Got registration for executor 'd7416b1b-cd1c-422a-bbaa-bb28913eeaf2' of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 from executor(1)@172.17.0.1:60396 [22:41:55]W: [Step 10/10] I0619 22:41:55.948427 13543 exec.cpp:236] Executor registered on agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 [22:41:55]W: [Step 10/10] I0619 22:41:55.948524 30914 slave.cpp:2061] Sending queued task 'd7416b1b-cd1c-422a-bbaa-bb28913eeaf2' to executor 'd7416b1b-cd1c-422a-bbaa-bb28913eeaf2' of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 at executor(1)@172.17.0.1:60396 [22:41:55]W: [Step 10/10] I0619 22:41:55.949061 13543 exec.cpp:248] Executor::registered took 75489ns [22:41:55]W: [Step 10/10] I0619 22:41:55.949213 13543 exec.cpp:323] Executor asked to run task 'd7416b1b-cd1c-422a-bbaa-bb28913eeaf2' [22:41:55]W: [Step 10/10] I0619 22:41:55.949246 13543 exec.cpp:332] Executor::launchTask took 21245ns [22:41:55] : [Step 10/10] Received SUBSCRIBED event [22:41:55] : [Step 10/10] Subscribed executor on ip-172-30-2-105.mesosphere.io [22:41:55] : [Step 10/10] Received LAUNCH event [22:41:55] : [Step 10/10] Starting task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 [22:41:55] : [Step 10/10] Forked command at 13550 [22:41:55] : [Step 10/10] sh -c 'ifconfig' [22:41:55]W: [Step 10/10] I0619 22:41:55.953589 13547 exec.cpp:546] Executor sending status update TASK_RUNNING (UUID: 5caccf6c-9e1e-44cc-93d4-6851987802cd) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:55]W: [Step 10/10] Failed to exec: No such file or directory [22:41:55]W: 
[Step 10/10] I0619 22:41:55.953891 30917 slave.cpp:3267] Handling status update TASK_RUNNING (UUID: 5caccf6c-9e1e-44cc-93d4-6851987802cd) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 from executor(1)@172.17.0.1:60396 [22:41:55]W: [Step 10/10] I0619 22:41:55.954368 30910 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 5caccf6c-9e1e-44cc-93d4-6851987802cd) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:55]W: [Step 10/10] I0619 22:41:55.954385 30910 status_update_manager.cpp:497] Creating StatusUpdate stream for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:55]W: [Step 10/10] I0619 22:41:55.954545 30910 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: 5caccf6c-9e1e-44cc-93d4-6851987802cd) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 to the agent [22:41:55]W: [Step 10/10] I0619 22:41:55.954637 30911 slave.cpp:3665] Forwarding the update TASK_RUNNING (UUID: 5caccf6c-9e1e-44cc-93d4-6851987802cd) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 to master@172.30.2.105:40724 [22:41:55]W: [Step 10/10] I0619 22:41:55.954711 30911 slave.cpp:3559] Status update manager successfully handled status update TASK_RUNNING (UUID: 5caccf6c-9e1e-44cc-93d4-6851987802cd) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:55]W: [Step 10/10] I0619 22:41:55.954732 30911 slave.cpp:3575] Sending acknowledgement for status update TASK_RUNNING (UUID: 5caccf6c-9e1e-44cc-93d4-6851987802cd) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 to executor(1)@172.17.0.1:60396 [22:41:55]W: [Step 10/10] I0619 22:41:55.954761 30914 master.cpp:5211] Status update TASK_RUNNING (UUID: 5caccf6c-9e1e-44cc-93d4-6851987802cd) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 from agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 at slave(469)@172.30.2.105:40724 (ip-172-30-2-105.mesosphere.io) [22:41:55]W: [Step 10/10] I0619 22:41:55.954788 30914 master.cpp:5259] Forwarding status update TASK_RUNNING (UUID: 5caccf6c-9e1e-44cc-93d4-6851987802cd) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:55]W: [Step 10/10] I0619 22:41:55.954843 30914 master.cpp:6871] Updating the state of task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) [22:41:55]W: [Step 10/10] I0619 22:41:55.954934 13548 exec.cpp:369] Executor received status update acknowledgement 5caccf6c-9e1e-44cc-93d4-6851987802cd for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:55]W: [Step 10/10] I0619 22:41:55.954967 30910 sched.cpp:1005] Scheduler::statusUpdate took 57021ns [22:41:55]W: [Step 10/10] I0619 22:41:55.955070 30914 master.cpp:4365] Processing ACKNOWLEDGE call 5caccf6c-9e1e-44cc-93d4-6851987802cd for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 (default) at scheduler-af10d6a3-1ebc-4377-b44d-8c0dfbffcb8e@172.30.2.105:40724 on agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 [22:41:55]W: [Step 10/10] I0619 22:41:55.955150 30911 
status_update_manager.cpp:392] Received status update acknowledgement (UUID: 5caccf6c-9e1e-44cc-93d4-6851987802cd) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:55]W: [Step 10/10] I0619 22:41:55.955219 30911 slave.cpp:2653] Status update manager successfully handled status update acknowledgement (UUID: 5caccf6c-9e1e-44cc-93d4-6851987802cd) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:56] : [Step 10/10] Command terminated with signal Aborted (pid: 13550) [22:41:56]W: [Step 10/10] I0619 22:41:56.054153 13541 exec.cpp:546] Executor sending status update TASK_FAILED (UUID: 3d3632b6-f69b-4ca1-8bac-0b4e8e471d34) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:56]W: [Step 10/10] I0619 22:41:56.054498 30913 slave.cpp:3267] Handling status update TASK_FAILED (UUID: 3d3632b6-f69b-4ca1-8bac-0b4e8e471d34) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 from executor(1)@172.17.0.1:60396 [22:41:56]W: [Step 10/10] I0619 22:41:56.054955 30917 slave.cpp:6074] Terminating task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 [22:41:56]W: [Step 10/10] I0619 22:41:56.055366 30912 status_update_manager.cpp:320] Received status update TASK_FAILED (UUID: 3d3632b6-f69b-4ca1-8bac-0b4e8e471d34) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:56]W: [Step 10/10] I0619 22:41:56.055409 30912 status_update_manager.cpp:374] Forwarding update TASK_FAILED (UUID: 3d3632b6-f69b-4ca1-8bac-0b4e8e471d34) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 to the agent [22:41:56]W: [Step 10/10] I0619 22:41:56.055485 30916 slave.cpp:3665] Forwarding the update TASK_FAILED (UUID: 3d3632b6-f69b-4ca1-8bac-0b4e8e471d34) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 to master@172.30.2.105:40724 [22:41:56]W: [Step 10/10] I0619 22:41:56.055558 30916 slave.cpp:3559] Status update manager successfully handled status update TASK_FAILED (UUID: 3d3632b6-f69b-4ca1-8bac-0b4e8e471d34) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:56] : [Step 10/10] ../../src/tests/containerizer/cni_isolator_tests.cpp:216: Failure [22:41:56] : [Step 10/10] Value of: statusFinished->state() [22:41:56]W: [Step 10/10] I0619 22:41:56.055572 30916 slave.cpp:3575] Sending acknowledgement for status update TASK_FAILED (UUID: 3d3632b6-f69b-4ca1-8bac-0b4e8e471d34) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 to executor(1)@172.17.0.1:60396 [22:41:56] : [Step 10/10] Actual: TASK_FAILED [22:41:56] : [Step 10/10] Expected: TASK_FINISHED [22:41:56]W: [Step 10/10] I0619 22:41:56.055613 30914 master.cpp:5211] Status update TASK_FAILED (UUID: 3d3632b6-f69b-4ca1-8bac-0b4e8e471d34) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 from agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 at slave(469)@172.30.2.105:40724 (ip-172-30-2-105.mesosphere.io) [22:41:56]W: [Step 10/10] I0619 22:41:56.055640 30914 master.cpp:5259] Forwarding status update TASK_FAILED (UUID: 3d3632b6-f69b-4ca1-8bac-0b4e8e471d34) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:56]W: [Step 10/10] I0619 
22:41:56.055696 30914 master.cpp:6871] Updating the state of task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 (latest state: TASK_FAILED, status update state: TASK_FAILED) [22:41:56]W: [Step 10/10] I0619 22:41:56.055773 30912 sched.cpp:1005] Scheduler::statusUpdate took 29145ns [22:41:56]W: [Step 10/10] I0619 22:41:56.055780 13546 exec.cpp:369] Executor received status update acknowledgement 3d3632b6-f69b-4ca1-8bac-0b4e8e471d34 for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:56]W: [Step 10/10] I0619 22:41:56.055816 30916 hierarchical.cpp:891] Recovered cpus(*):1; mem(*):128 (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 from framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:56]W: [Step 10/10] I0619 22:41:56.055887 30911 master.cpp:4365] Processing ACKNOWLEDGE call 3d3632b6-f69b-4ca1-8bac-0b4e8e471d34 for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 (default) at scheduler-af10d6a3-1ebc-4377-b44d-8c0dfbffcb8e@172.30.2.105:40724 on agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 [22:41:56]W: [Step 10/10] I0619 22:41:56.055907 30911 master.cpp:6937] Removing task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 with resources cpus(*):1; mem(*):128 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 on agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 at slave(469)@172.30.2.105:40724 (ip-172-30-2-105.mesosphere.io) [22:41:56]W: [Step 10/10] I0619 22:41:56.055971 30896 sched.cpp:1964] Asked to stop the driver [22:41:56]W: [Step 10/10] I0619 22:41:56.056030 30913 sched.cpp:1167] Stopping framework '27c796db-6f98-4d61-96c0-f583f22787ff-0000' [22:41:56]W: [Step 10/10] I0619 22:41:56.056040 30916 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 3d3632b6-f69b-4ca1-8bac-0b4e8e471d34) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:56]W: [Step 10/10] I0619 22:41:56.056073 30916 status_update_manager.cpp:528] Cleaning up status update stream for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:56]W: [Step 10/10] I0619 22:41:56.056151 30915 master.cpp:6342] Processing TEARDOWN call for framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 (default) at scheduler-af10d6a3-1ebc-4377-b44d-8c0dfbffcb8e@172.30.2.105:40724 [22:41:56]W: [Step 10/10] I0619 22:41:56.056172 30915 master.cpp:6354] Removing framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 (default) at scheduler-af10d6a3-1ebc-4377-b44d-8c0dfbffcb8e@172.30.2.105:40724 [22:41:56]W: [Step 10/10] I0619 22:41:56.056197 30916 slave.cpp:2653] Status update manager successfully handled status update acknowledgement (UUID: 3d3632b6-f69b-4ca1-8bac-0b4e8e471d34) for task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:56]W: [Step 10/10] I0619 22:41:56.056216 30916 slave.cpp:6115] Completing task d7416b1b-cd1c-422a-bbaa-bb28913eeaf2 [22:41:56]W: [Step 10/10] I0619 22:41:56.056218 30913 hierarchical.cpp:375] Deactivated framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:56]W: [Step 10/10] I0619 22:41:56.056248 30916 slave.cpp:2274] Asked to shut down framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 by master@172.30.2.105:40724 [22:41:56]W: [Step 10/10] I0619 22:41:56.056265 30916 slave.cpp:2299] Shutting down framework 
27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:56]W: [Step 10/10] I0619 22:41:56.056277 30916 slave.cpp:4470] Shutting down executor 'd7416b1b-cd1c-422a-bbaa-bb28913eeaf2' of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 at executor(1)@172.17.0.1:60396 [22:41:56]W: [Step 10/10] I0619 22:41:56.056468 30914 hierarchical.cpp:326] Removed framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:56]W: [Step 10/10] I0619 22:41:56.056634 30911 containerizer.cpp:1576] Destroying container '548370b5-05f2-4e33-8f6f-015aa3fd1af4' [22:41:56]W: [Step 10/10] I0619 22:41:56.057258 13543 exec.cpp:410] Executor asked to shutdown [22:41:56]W: [Step 10/10] I0619 22:41:56.057303 13543 exec.cpp:425] Executor::shutdown took 6363ns [22:41:56]W: [Step 10/10] I0619 22:41:56.057324 13547 exec.cpp:92] Scheduling shutdown of the executor in 5secs [22:41:56]W: [Step 10/10] I0619 22:41:56.058279 30910 cgroups.cpp:2676] Freezing cgroup /sys/fs/cgroup/freezer/mesos/548370b5-05f2-4e33-8f6f-015aa3fd1af4 [22:41:56]W: [Step 10/10] I0619 22:41:56.059762 30912 cgroups.cpp:1409] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/548370b5-05f2-4e33-8f6f-015aa3fd1af4 after 1.460736ms [22:41:56]W: [Step 10/10] I0619 22:41:56.061364 30910 cgroups.cpp:2694] Thawing cgroup /sys/fs/cgroup/freezer/mesos/548370b5-05f2-4e33-8f6f-015aa3fd1af4 [22:41:56]W: [Step 10/10] I0619 22:41:56.062861 30915 cgroups.cpp:1438] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/548370b5-05f2-4e33-8f6f-015aa3fd1af4 after 1.478912ms [22:41:56]W: [Step 10/10] I0619 22:41:56.064016 30910 slave.cpp:3793] executor(1)@172.17.0.1:60396 exited [22:41:56]W: [Step 10/10] I0619 22:41:56.078352 30915 containerizer.cpp:1812] Executor for container '548370b5-05f2-4e33-8f6f-015aa3fd1af4' has exited [22:41:56]W: [Step 10/10] I0619 22:41:56.179833 30916 cni.cpp:1217] Unmounted the network namespace handle '/run/mesos/isolators/network/cni/548370b5-05f2-4e33-8f6f-015aa3fd1af4/ns' for container 548370b5-05f2-4e33-8f6f-015aa3fd1af4 [22:41:56]W: [Step 10/10] I0619 22:41:56.179924 30916 cni.cpp:1228] Removed the container directory '/run/mesos/isolators/network/cni/548370b5-05f2-4e33-8f6f-015aa3fd1af4' [22:41:56]W: [Step 10/10] I0619 22:41:56.180981 30913 provisioner.cpp:434] Destroying container rootfs at '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/provisioner/containers/548370b5-05f2-4e33-8f6f-015aa3fd1af4/backends/copy/rootfses/4f5eb0d5-118b-4129-972d-0a7e6a374f6f' for container 548370b5-05f2-4e33-8f6f-015aa3fd1af4 [22:41:56]W: [Step 10/10] I0619 22:41:56.280364 30912 slave.cpp:4152] Executor 'd7416b1b-cd1c-422a-bbaa-bb28913eeaf2' of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 terminated with signal Killed [22:41:56]W: [Step 10/10] I0619 22:41:56.280406 30912 slave.cpp:4256] Cleaning up executor 'd7416b1b-cd1c-422a-bbaa-bb28913eeaf2' of framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 at executor(1)@172.17.0.1:60396 [22:41:56]W: [Step 10/10] I0619 22:41:56.280545 30915 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/slaves/27c796db-6f98-4d61-96c0-f583f22787ff-S0/frameworks/27c796db-6f98-4d61-96c0-f583f22787ff-0000/executors/d7416b1b-cd1c-422a-bbaa-bb28913eeaf2/runs/548370b5-05f2-4e33-8f6f-015aa3fd1af4' for gc 6.99999675365926days in the future [22:41:56]W: [Step 10/10] I0619 22:41:56.280575 30912 slave.cpp:4344] Cleaning up framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:56]W: [Step 10/10] I0619 22:41:56.280647 30915 
gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/slaves/27c796db-6f98-4d61-96c0-f583f22787ff-S0/frameworks/27c796db-6f98-4d61-96c0-f583f22787ff-0000/executors/d7416b1b-cd1c-422a-bbaa-bb28913eeaf2' for gc 6.99999675293037days in the future [22:41:56]W: [Step 10/10] I0619 22:41:56.280654 30914 status_update_manager.cpp:282] Closing status update streams for framework 27c796db-6f98-4d61-96c0-f583f22787ff-0000 [22:41:56]W: [Step 10/10] I0619 22:41:56.280710 30915 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_GcX6XI/slaves/27c796db-6f98-4d61-96c0-f583f22787ff-S0/frameworks/27c796db-6f98-4d61-96c0-f583f22787ff-0000' for gc 6.99999675200296days in the future [22:41:56]W: [Step 10/10] I0619 22:41:56.280745 30915 slave.cpp:839] Agent terminating [22:41:56]W: [Step 10/10] I0619 22:41:56.280810 30912 master.cpp:1367] Agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 at slave(469)@172.30.2.105:40724 (ip-172-30-2-105.mesosphere.io) disconnected [22:41:56]W: [Step 10/10] I0619 22:41:56.280827 30912 master.cpp:2899] Disconnecting agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 at slave(469)@172.30.2.105:40724 (ip-172-30-2-105.mesosphere.io) [22:41:56]W: [Step 10/10] I0619 22:41:56.280844 30912 master.cpp:2918] Deactivating agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 at slave(469)@172.30.2.105:40724 (ip-172-30-2-105.mesosphere.io) [22:41:56]W: [Step 10/10] I0619 22:41:56.280912 30912 hierarchical.cpp:560] Agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 deactivated [22:41:56]W: [Step 10/10] I0619 22:41:56.283011 30896 master.cpp:1214] Master terminating [22:41:56]W: [Step 10/10] I0619 22:41:56.283140 30916 hierarchical.cpp:505] Removed agent 27c796db-6f98-4d61-96c0-f583f22787ff-S0 [22:41:56] : [Step 10/10] [ FAILED ] CniIsolatorTest.ROOT_INTERNET_CURL_LaunchCommandTask (1945 ms) ",0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5668","06/21/2016 02:23:56",3,"Add CGROUP namespace to linux ns helper. ""Since Linux kernel 4.6, the CGROUP namespace has been added. We need to support handling the cgroup namespace of a process. This also relates to two test failures on Ubuntu 16: """," [22:41:26] : [Step 10/10] [ RUN ] NsTest.ROOT_setns [22:41:26] : [Step 10/10] ../../src/tests/containerizer/ns_tests.cpp:75: Failure [22:41:26] : [Step 10/10] nstype: Unknown namespace 'cgroup' [22:41:26] : [Step 10/10] [ FAILED ] NsTest.ROOT_setns (1 ms) [22:41:26] : [Step 10/10] [ RUN ] NsTest.ROOT_getns [22:41:26] : [Step 10/10] ../../src/tests/containerizer/ns_tests.cpp:160: Failure [22:41:26] : [Step 10/10] nstype: Unknown namespace 'cgroup' [22:41:26] : [Step 10/10] [ FAILED ] NsTest.ROOT_getns (0 ms) ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5669","06/21/2016 02:36:30",3,"CNI isolator should not return failure if /etc/hostname does not exist on host. ""/etc/hostname may not necessarily exist on every system (e.g., CentOS 6). Currently the CNI isolator just returns a failure if it does not exist on the host, because the isolator needs to mount it into the container. This is fine for /etc/hosts and /etc/resolv.conf, but we should make an exception for /etc/hostname, because the hostname may still be accessible even if /etc/hostname doesn't exist.
This issue relates to 3 failure tests: """," [22:45:21] : [Step 10/10] [ RUN ] CniIsolatorTest.ROOT_INTERNET_CURL_LaunchCommandTask [22:45:21]W: [Step 10/10] I0619 22:45:21.647611 24647 cluster.cpp:155] Creating default 'local' authorizer [22:45:21]W: [Step 10/10] I0619 22:45:21.655230 24647 leveldb.cpp:174] Opened db in 7.510408ms [22:45:21]W: [Step 10/10] I0619 22:45:21.657680 24647 leveldb.cpp:181] Compacted db in 2.427309ms [22:45:21]W: [Step 10/10] I0619 22:45:21.657702 24647 leveldb.cpp:196] Created db iterator in 6209ns [22:45:21]W: [Step 10/10] I0619 22:45:21.657709 24647 leveldb.cpp:202] Seeked to beginning of db in 692ns [22:45:21]W: [Step 10/10] I0619 22:45:21.657713 24647 leveldb.cpp:271] Iterated through 0 keys in the db in 431ns [22:45:21]W: [Step 10/10] I0619 22:45:21.657727 24647 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned [22:45:21]W: [Step 10/10] I0619 22:45:21.657888 24662 recover.cpp:451] Starting replica recovery [22:45:21]W: [Step 10/10] I0619 22:45:21.658051 24668 recover.cpp:477] Replica is in EMPTY status [22:45:21]W: [Step 10/10] I0619 22:45:21.658495 24664 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (18401)@172.30.2.247:42024 [22:45:21]W: [Step 10/10] I0619 22:45:21.658583 24662 recover.cpp:197] Received a recover response from a replica in EMPTY status [22:45:21]W: [Step 10/10] I0619 22:45:21.658687 24664 recover.cpp:568] Updating replica status to STARTING [22:45:21]W: [Step 10/10] I0619 22:45:21.659111 24664 master.cpp:382] Master 9a4a353b-91c5-43b9-8c37-19245c37758c (ip-172-30-2-247.mesosphere.io) started on 172.30.2.247:42024 [22:45:21]W: [Step 10/10] I0619 22:45:21.659126 24664 master.cpp:384] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/l8346Z/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/l8346Z/master"""" --zk_session_timeout=""""10secs"""" [22:45:21]W: [Step 10/10] I0619 22:45:21.659267 24664 master.cpp:434] Master only allowing authenticated frameworks to register [22:45:21]W: [Step 10/10] I0619 22:45:21.659276 24664 master.cpp:448] Master only allowing authenticated agents to register [22:45:21]W: [Step 10/10] I0619 22:45:21.659278 24664 master.cpp:461] Master only allowing authenticated HTTP frameworks to register [22:45:21]W: [Step 10/10] I0619 22:45:21.659282 24664 credentials.hpp:37] Loading credentials for authentication from '/tmp/l8346Z/credentials' 
[22:45:21]W: [Step 10/10] I0619 22:45:21.659375 24664 master.cpp:506] Using default 'crammd5' authenticator [22:45:21]W: [Step 10/10] I0619 22:45:21.659415 24664 master.cpp:578] Using default 'basic' HTTP authenticator [22:45:21]W: [Step 10/10] I0619 22:45:21.659495 24664 master.cpp:658] Using default 'basic' HTTP framework authenticator [22:45:21]W: [Step 10/10] I0619 22:45:21.659569 24664 master.cpp:705] Authorization enabled [22:45:21]W: [Step 10/10] I0619 22:45:21.659684 24666 hierarchical.cpp:142] Initialized hierarchical allocator process [22:45:21]W: [Step 10/10] I0619 22:45:21.659696 24665 whitelist_watcher.cpp:77] No whitelist given [22:45:21]W: [Step 10/10] I0619 22:45:21.660269 24666 master.cpp:1969] The newly elected leader is master@172.30.2.247:42024 with id 9a4a353b-91c5-43b9-8c37-19245c37758c [22:45:21]W: [Step 10/10] I0619 22:45:21.660281 24666 master.cpp:1982] Elected as the leading master! [22:45:21]W: [Step 10/10] I0619 22:45:21.660290 24666 master.cpp:1669] Recovering from registrar [22:45:21]W: [Step 10/10] I0619 22:45:21.660342 24662 registrar.cpp:332] Recovering registrar [22:45:21]W: [Step 10/10] I0619 22:45:21.661232 24669 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 2.48585ms [22:45:21]W: [Step 10/10] I0619 22:45:21.661254 24669 replica.cpp:320] Persisted replica status to STARTING [22:45:21]W: [Step 10/10] I0619 22:45:21.661326 24669 recover.cpp:477] Replica is in STARTING status [22:45:21]W: [Step 10/10] I0619 22:45:21.661667 24668 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (18404)@172.30.2.247:42024 [22:45:21]W: [Step 10/10] I0619 22:45:21.661758 24665 recover.cpp:197] Received a recover response from a replica in STARTING status [22:45:21]W: [Step 10/10] I0619 22:45:21.661893 24664 recover.cpp:568] Updating replica status to VOTING [22:45:21]W: [Step 10/10] I0619 22:45:21.663851 24664 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.915617ms [22:45:21]W: [Step 10/10] I0619 22:45:21.663866 24664 replica.cpp:320] Persisted replica status to VOTING [22:45:21]W: [Step 10/10] I0619 22:45:21.663899 24664 recover.cpp:582] Successfully joined the Paxos group [22:45:21]W: [Step 10/10] I0619 22:45:21.663944 24664 recover.cpp:466] Recover process terminated [22:45:21]W: [Step 10/10] I0619 22:45:21.664088 24668 log.cpp:553] Attempting to start the writer [22:45:21]W: [Step 10/10] I0619 22:45:21.664556 24668 replica.cpp:493] Replica received implicit promise request from (18405)@172.30.2.247:42024 with proposal 1 [22:45:21]W: [Step 10/10] I0619 22:45:21.666551 24668 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.971938ms [22:45:21]W: [Step 10/10] I0619 22:45:21.666566 24668 replica.cpp:342] Persisted promised to 1 [22:45:21]W: [Step 10/10] I0619 22:45:21.666767 24667 coordinator.cpp:238] Coordinator attempting to fill missing positions [22:45:21]W: [Step 10/10] I0619 22:45:21.667230 24668 replica.cpp:388] Replica received explicit promise request from (18406)@172.30.2.247:42024 for position 0 with proposal 2 [22:45:21]W: [Step 10/10] I0619 22:45:21.669271 24668 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 2.02399ms [22:45:21]W: [Step 10/10] I0619 22:45:21.669287 24668 replica.cpp:712] Persisted action at 0 [22:45:21]W: [Step 10/10] I0619 22:45:21.669656 24669 replica.cpp:537] Replica received write request for position 0 from (18407)@172.30.2.247:42024 [22:45:21]W: [Step 10/10] I0619 22:45:21.669680 24669 leveldb.cpp:436] Reading position from leveldb took 
10808ns [22:45:21]W: [Step 10/10] I0619 22:45:21.671674 24669 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 1.977316ms [22:45:21]W: [Step 10/10] I0619 22:45:21.671689 24669 replica.cpp:712] Persisted action at 0 [22:45:21]W: [Step 10/10] I0619 22:45:21.671907 24665 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 [22:45:21]W: [Step 10/10] I0619 22:45:21.673920 24665 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.991274ms [22:45:21]W: [Step 10/10] I0619 22:45:21.673935 24665 replica.cpp:712] Persisted action at 0 [22:45:21]W: [Step 10/10] I0619 22:45:21.673941 24665 replica.cpp:697] Replica learned NOP action at position 0 [22:45:21]W: [Step 10/10] I0619 22:45:21.674190 24665 log.cpp:569] Writer started with ending position 0 [22:45:21]W: [Step 10/10] I0619 22:45:21.674489 24663 leveldb.cpp:436] Reading position from leveldb took 9059ns [22:45:21]W: [Step 10/10] I0619 22:45:21.674718 24663 registrar.cpp:365] Successfully fetched the registry (0B) in 14.355968ms [22:45:21]W: [Step 10/10] I0619 22:45:21.674747 24663 registrar.cpp:464] Applied 1 operations in 3070ns; attempting to update the 'registry' [22:45:21]W: [Step 10/10] I0619 22:45:21.674935 24665 log.cpp:577] Attempting to append 209 bytes to the log [22:45:21]W: [Step 10/10] I0619 22:45:21.674978 24665 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 [22:45:21]W: [Step 10/10] I0619 22:45:21.675242 24666 replica.cpp:537] Replica received write request for position 1 from (18408)@172.30.2.247:42024 [22:45:21]W: [Step 10/10] I0619 22:45:21.677088 24666 leveldb.cpp:341] Persisting action (228 bytes) to leveldb took 1.823904ms [22:45:21]W: [Step 10/10] I0619 22:45:21.677103 24666 replica.cpp:712] Persisted action at 1 [22:45:21]W: [Step 10/10] I0619 22:45:21.677299 24667 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 [22:45:21]W: [Step 10/10] I0619 22:45:21.679270 24667 leveldb.cpp:341] Persisting action (230 bytes) to leveldb took 1.952303ms [22:45:21]W: [Step 10/10] I0619 22:45:21.679286 24667 replica.cpp:712] Persisted action at 1 [22:45:21]W: [Step 10/10] I0619 22:45:21.679291 24667 replica.cpp:697] Replica learned APPEND action at position 1 [22:45:21]W: [Step 10/10] I0619 22:45:21.679481 24663 registrar.cpp:509] Successfully updated the 'registry' in 4.715264ms [22:45:21]W: [Step 10/10] I0619 22:45:21.679503 24666 log.cpp:596] Attempting to truncate the log to 1 [22:45:21]W: [Step 10/10] I0619 22:45:21.679560 24667 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 [22:45:21]W: [Step 10/10] I0619 22:45:21.679581 24663 registrar.cpp:395] Successfully recovered registrar [22:45:21]W: [Step 10/10] I0619 22:45:21.679745 24664 master.cpp:1777] Recovered 0 agents from the Registry (170B) ; allowing 10mins for agents to re-register [22:45:21]W: [Step 10/10] I0619 22:45:21.679774 24662 hierarchical.cpp:169] Skipping recovery of hierarchical allocator: nothing to recover [22:45:21]W: [Step 10/10] I0619 22:45:21.679986 24662 replica.cpp:537] Replica received write request for position 2 from (18409)@172.30.2.247:42024 [22:45:21]W: [Step 10/10] I0619 22:45:21.681895 24662 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.891877ms [22:45:21]W: [Step 10/10] I0619 22:45:21.681910 24662 replica.cpp:712] Persisted action at 2 [22:45:21]W: [Step 10/10] I0619 22:45:21.682160 24666 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 [22:45:21]W: [Step 
10/10] I0619 22:45:21.684331 24666 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 2.153217ms [22:45:21]W: [Step 10/10] I0619 22:45:21.684375 24666 leveldb.cpp:399] Deleting ~1 keys from leveldb took 26973ns [22:45:21]W: [Step 10/10] I0619 22:45:21.684383 24666 replica.cpp:712] Persisted action at 2 [22:45:21]W: [Step 10/10] I0619 22:45:21.684389 24666 replica.cpp:697] Replica learned TRUNCATE action at position 2 [22:45:21]W: [Step 10/10] I0619 22:45:21.691529 24647 containerizer.cpp:201] Using isolation: docker/runtime,filesystem/linux,network/cni [22:45:21]W: [Step 10/10] I0619 22:45:21.694491 24647 linux_launcher.cpp:101] Using /cgroup/freezer as the freezer hierarchy for the Linux launcher [22:45:21]W: [Step 10/10] E0619 22:45:21.699741 24647 shell.hpp:106] Command 'hadoop version 2>&1' failed; this is the output: [22:45:21]W: [Step 10/10] sh: hadoop: command not found [22:45:21]W: [Step 10/10] I0619 22:45:21.699769 24647 fetcher.cpp:62] Skipping URI fetcher plugin 'hadoop' as it could not be created: Failed to create HDFS client: Failed to execute 'hadoop version 2>&1'; the command was either not found or exited with a non-zero exit status: 127 [22:45:21]W: [Step 10/10] I0619 22:45:21.699823 24647 registry_puller.cpp:111] Creating registry puller with docker registry 'https://registry-1.docker.io' [22:45:21]W: [Step 10/10] I0619 22:45:21.700865 24647 linux.cpp:146] Bind mounting '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_CVAWpG' and making it a shared mount [22:45:21]W: [Step 10/10] I0619 22:45:21.707801 24647 cni.cpp:286] Bind mounting '/var/run/mesos/isolators/network/cni' and making it a shared mount [22:45:21]W: [Step 10/10] I0619 22:45:21.714337 24647 cluster.cpp:432] Creating default 'local' authorizer [22:45:21]W: [Step 10/10] I0619 22:45:21.714825 24668 slave.cpp:203] Agent started on 468)@172.30.2.247:42024 [22:45:21]W: [Step 10/10] I0619 22:45:21.714839 24668 slave.cpp:204] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""true"""" --authenticatee=""""crammd5"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_CVAWpG/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/l8346Z/store"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_CVAWpG/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" 
--http_credentials=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_CVAWpG/http_credentials"""" --image_providers=""""docker"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""docker/runtime,filesystem/linux,network/cni"""" --launcher_dir=""""/mnt/teamcity/work/4240ba9ddd0997c3/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --network_cni_config_dir=""""/tmp/l8346Z/configs"""" --network_cni_plugins_dir=""""/tmp/l8346Z/plugins"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_CVAWpG"""" [22:45:21]W: [Step 10/10] I0619 22:45:21.715116 24668 credentials.hpp:86] Loading credential for authentication from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_CVAWpG/credential' [22:45:21]W: [Step 10/10] I0619 22:45:21.715195 24668 slave.cpp:341] Agent using credential for: test-principal [22:45:21]W: [Step 10/10] I0619 22:45:21.715214 24668 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_INTERNET_CURL_LaunchCommandTask_CVAWpG/http_credentials' [22:45:21]W: [Step 10/10] I0619 22:45:21.715296 24668 slave.cpp:393] Using default 'basic' HTTP authenticator [22:45:21]W: [Step 10/10] I0619 22:45:21.715400 24668 resources.cpp:572] Parsing resources as JSON failed: cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000] [22:45:38] : [Step 10/10] [ RUN ] CniIsolatorTest.ROOT_VerifyCheckpointedInfo [22:45:38]W: [Step 10/10] I0619 22:45:38.459836 24647 cluster.cpp:155] Creating default 'local' authorizer [22:45:38]W: [Step 10/10] I0619 22:45:38.470319 24647 leveldb.cpp:174] Opened db in 10.34226ms [22:45:38]W: [Step 10/10] I0619 22:45:38.472771 24647 leveldb.cpp:181] Compacted db in 2.403554ms [22:45:38]W: [Step 10/10] I0619 22:45:38.472795 24647 leveldb.cpp:196] Created db iterator in 4446ns [22:45:38]W: [Step 10/10] I0619 22:45:38.472801 24647 leveldb.cpp:202] Seeked to beginning of db in 810ns [22:45:38]W: [Step 10/10] I0619 22:45:38.472806 24647 leveldb.cpp:271] Iterated through 0 keys in the db in 393ns [22:45:38]W: [Step 10/10] I0619 22:45:38.472822 24647 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned [22:45:38]W: [Step 10/10] I0619 22:45:38.473093 24665 recover.cpp:451] Starting replica recovery [22:45:38]W: [Step 10/10] I0619 22:45:38.473260 24663 recover.cpp:477] Replica is in EMPTY status [22:45:38]W: [Step 10/10] I0619 22:45:38.473647 24663 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (18464)@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.473752 24665 recover.cpp:197] Received a recover response from a replica in EMPTY status [22:45:38]W: [Step 10/10] I0619 22:45:38.473896 24667 recover.cpp:568] Updating replica status to STARTING [22:45:38]W: [Step 10/10] I0619 
22:45:38.474319 24663 master.cpp:382] Master 64f1f7ac-e810-4fb1-b549-6e29fc62622b (ip-172-30-2-247.mesosphere.io) started on 172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.474329 24663 master.cpp:384] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/qJWqSY/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/qJWqSY/master"""" --zk_session_timeout=""""10secs"""" [22:45:38]W: [Step 10/10] I0619 22:45:38.474452 24663 master.cpp:434] Master only allowing authenticated frameworks to register [22:45:38]W: [Step 10/10] I0619 22:45:38.474457 24663 master.cpp:448] Master only allowing authenticated agents to register [22:45:38]W: [Step 10/10] I0619 22:45:38.474459 24663 master.cpp:461] Master only allowing authenticated HTTP frameworks to register [22:45:38]W: [Step 10/10] I0619 22:45:38.474463 24663 credentials.hpp:37] Loading credentials for authentication from '/tmp/qJWqSY/credentials' [22:45:38]W: [Step 10/10] I0619 22:45:38.474551 24663 master.cpp:506] Using default 'crammd5' authenticator [22:45:38]W: [Step 10/10] I0619 22:45:38.474598 24663 master.cpp:578] Using default 'basic' HTTP authenticator [22:45:38]W: [Step 10/10] I0619 22:45:38.474643 24663 master.cpp:658] Using default 'basic' HTTP framework authenticator [22:45:38]W: [Step 10/10] I0619 22:45:38.474674 24663 master.cpp:705] Authorization enabled [22:45:38]W: [Step 10/10] I0619 22:45:38.474771 24668 whitelist_watcher.cpp:77] No whitelist given [22:45:38]W: [Step 10/10] I0619 22:45:38.474798 24664 hierarchical.cpp:142] Initialized hierarchical allocator process [22:45:38]W: [Step 10/10] I0619 22:45:38.475177 24663 master.cpp:1969] The newly elected leader is master@172.30.2.247:42024 with id 64f1f7ac-e810-4fb1-b549-6e29fc62622b [22:45:38]W: [Step 10/10] I0619 22:45:38.475188 24663 master.cpp:1982] Elected as the leading master! 
[22:45:38]W: [Step 10/10] I0619 22:45:38.475191 24663 master.cpp:1669] Recovering from registrar [22:45:38]W: [Step 10/10] I0619 22:45:38.475244 24662 registrar.cpp:332] Recovering registrar [22:45:38]W: [Step 10/10] I0619 22:45:38.476292 24669 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 2.312046ms [22:45:38]W: [Step 10/10] I0619 22:45:38.476308 24669 replica.cpp:320] Persisted replica status to STARTING [22:45:38]W: [Step 10/10] I0619 22:45:38.476368 24669 recover.cpp:477] Replica is in STARTING status [22:45:38]W: [Step 10/10] I0619 22:45:38.476687 24668 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (18467)@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.476824 24666 recover.cpp:197] Received a recover response from a replica in STARTING status [22:45:38]W: [Step 10/10] I0619 22:45:38.476953 24668 recover.cpp:568] Updating replica status to VOTING [22:45:38]W: [Step 10/10] I0619 22:45:38.478798 24668 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.793996ms [22:45:38]W: [Step 10/10] I0619 22:45:38.478813 24668 replica.cpp:320] Persisted replica status to VOTING [22:45:38]W: [Step 10/10] I0619 22:45:38.478844 24668 recover.cpp:582] Successfully joined the Paxos group [22:45:38]W: [Step 10/10] I0619 22:45:38.478889 24668 recover.cpp:466] Recover process terminated [22:45:38]W: [Step 10/10] I0619 22:45:38.479060 24665 log.cpp:553] Attempting to start the writer [22:45:38]W: [Step 10/10] I0619 22:45:38.479547 24667 replica.cpp:493] Replica received implicit promise request from (18468)@172.30.2.247:42024 with proposal 1 [22:45:38]W: [Step 10/10] I0619 22:45:38.481433 24667 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.8684ms [22:45:38]W: [Step 10/10] I0619 22:45:38.481449 24667 replica.cpp:342] Persisted promised to 1 [22:45:38]W: [Step 10/10] I0619 22:45:38.481667 24662 coordinator.cpp:238] Coordinator attempting to fill missing positions [22:45:38]W: [Step 10/10] I0619 22:45:38.482067 24668 replica.cpp:388] Replica received explicit promise request from (18469)@172.30.2.247:42024 for position 0 with proposal 2 [22:45:38]W: [Step 10/10] I0619 22:45:38.483842 24668 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 1.754044ms [22:45:38]W: [Step 10/10] I0619 22:45:38.483858 24668 replica.cpp:712] Persisted action at 0 [22:45:38]W: [Step 10/10] I0619 22:45:38.484235 24665 replica.cpp:537] Replica received write request for position 0 from (18470)@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.484261 24665 leveldb.cpp:436] Reading position from leveldb took 10298ns [22:45:38]W: [Step 10/10] I0619 22:45:38.486331 24665 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 2.057217ms [22:45:38]W: [Step 10/10] I0619 22:45:38.486346 24665 replica.cpp:712] Persisted action at 0 [22:45:38]W: [Step 10/10] I0619 22:45:38.486574 24669 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 [22:45:38]W: [Step 10/10] I0619 22:45:38.488533 24669 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.941228ms [22:45:38]W: [Step 10/10] I0619 22:45:38.488548 24669 replica.cpp:712] Persisted action at 0 [22:45:38]W: [Step 10/10] I0619 22:45:38.488553 24669 replica.cpp:697] Replica learned NOP action at position 0 [22:45:38]W: [Step 10/10] I0619 22:45:38.488690 24666 log.cpp:569] Writer started with ending position 0 [22:45:38]W: [Step 10/10] I0619 22:45:38.489006 24662 leveldb.cpp:436] Reading position from leveldb took 11082ns [22:45:38]W: 
[Step 10/10] I0619 22:45:38.489244 24667 registrar.cpp:365] Successfully fetched the registry (0B) in 13.976832ms [22:45:38]W: [Step 10/10] I0619 22:45:38.489276 24667 registrar.cpp:464] Applied 1 operations in 3438ns; attempting to update the 'registry' [22:45:38]W: [Step 10/10] I0619 22:45:38.489450 24662 log.cpp:577] Attempting to append 209 bytes to the log [22:45:38]W: [Step 10/10] I0619 22:45:38.489514 24665 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 [22:45:38]W: [Step 10/10] I0619 22:45:38.489785 24662 replica.cpp:537] Replica received write request for position 1 from (18471)@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.491642 24662 leveldb.cpp:341] Persisting action (228 bytes) to leveldb took 1.838371ms [22:45:38]W: [Step 10/10] I0619 22:45:38.491657 24662 replica.cpp:712] Persisted action at 1 [22:45:38]W: [Step 10/10] I0619 22:45:38.491885 24665 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 [22:45:38]W: [Step 10/10] I0619 22:45:38.493649 24665 leveldb.cpp:341] Persisting action (230 bytes) to leveldb took 1.743495ms [22:45:38]W: [Step 10/10] I0619 22:45:38.493665 24665 replica.cpp:712] Persisted action at 1 [22:45:38]W: [Step 10/10] I0619 22:45:38.493670 24665 replica.cpp:697] Replica learned APPEND action at position 1 [22:45:38]W: [Step 10/10] I0619 22:45:38.493930 24669 registrar.cpp:509] Successfully updated the 'registry' in 4.638976ms [22:45:38]W: [Step 10/10] I0619 22:45:38.493983 24667 log.cpp:596] Attempting to truncate the log to 1 [22:45:38]W: [Step 10/10] I0619 22:45:38.493994 24669 registrar.cpp:395] Successfully recovered registrar [22:45:38]W: [Step 10/10] I0619 22:45:38.494034 24668 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 [22:45:38]W: [Step 10/10] I0619 22:45:38.494197 24662 master.cpp:1777] Recovered 0 agents from the Registry (170B) ; allowing 10mins for agents to re-register [22:45:38]W: [Step 10/10] I0619 22:45:38.494210 24666 hierarchical.cpp:169] Skipping recovery of hierarchical allocator: nothing to recover [22:45:38]W: [Step 10/10] I0619 22:45:38.494396 24662 replica.cpp:537] Replica received write request for position 2 from (18472)@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.496301 24662 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.884992ms [22:45:38]W: [Step 10/10] I0619 22:45:38.496315 24662 replica.cpp:712] Persisted action at 2 [22:45:38]W: [Step 10/10] I0619 22:45:38.496574 24666 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 [22:45:38]W: [Step 10/10] I0619 22:45:38.498500 24666 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 1.906093ms [22:45:38]W: [Step 10/10] I0619 22:45:38.498529 24666 leveldb.cpp:399] Deleting ~1 keys from leveldb took 13787ns [22:45:38]W: [Step 10/10] I0619 22:45:38.498538 24666 replica.cpp:712] Persisted action at 2 [22:45:38]W: [Step 10/10] I0619 22:45:38.498543 24666 replica.cpp:697] Replica learned TRUNCATE action at position 2 [22:45:38]W: [Step 10/10] I0619 22:45:38.505269 24647 containerizer.cpp:201] Using isolation: network/cni,filesystem/posix [22:45:38]W: [Step 10/10] I0619 22:45:38.508313 24647 linux_launcher.cpp:101] Using /cgroup/freezer as the freezer hierarchy for the Linux launcher [22:45:38]W: [Step 10/10] I0619 22:45:38.509832 24647 cluster.cpp:432] Creating default 'local' authorizer [22:45:38]W: [Step 10/10] I0619 22:45:38.510205 24666 slave.cpp:203] Agent started on 469)@172.30.2.247:42024 
[22:45:38]W: [Step 10/10] I0619 22:45:38.510213 24666 slave.cpp:204] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""true"""" --authenticatee=""""crammd5"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_VerifyCheckpointedInfo_IIWOru/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_VerifyCheckpointedInfo_IIWOru/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --http_credentials=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_VerifyCheckpointedInfo_IIWOru/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""network/cni"""" --launcher_dir=""""/mnt/teamcity/work/4240ba9ddd0997c3/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --network_cni_config_dir=""""/tmp/qJWqSY/configs"""" --network_cni_plugins_dir=""""/tmp/qJWqSY/plugins"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_VerifyCheckpointedInfo_IIWOru"""" [22:45:38]W: [Step 10/10] I0619 22:45:38.510442 24666 credentials.hpp:86] Loading credential for authentication from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_VerifyCheckpointedInfo_IIWOru/credential' [22:45:38]W: [Step 10/10] I0619 22:45:38.510510 24666 slave.cpp:341] Agent using credential for: test-principal [22:45:38]W: [Step 10/10] I0619 22:45:38.510521 24666 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_VerifyCheckpointedInfo_IIWOru/http_credentials' [22:45:38]W: [Step 10/10] I0619 22:45:38.510604 24666 slave.cpp:393] Using default 'basic' HTTP authenticator [22:45:38]W: [Step 10/10] I0619 22:45:38.510696 24666 resources.cpp:572] Parsing resources as JSON failed: 
cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000] [22:45:38]W: [Step 10/10] Trying semicolon-delimited string format instead [22:45:38]W: [Step 10/10] I0619 22:45:38.510915 24647 sched.cpp:224] Version: 1.0.0 [22:45:38]W: [Step 10/10] I0619 22:45:38.510962 24666 slave.cpp:592] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [22:45:38]W: [Step 10/10] I0619 22:45:38.510984 24666 slave.cpp:600] Agent attributes: [ ] [22:45:38]W: [Step 10/10] I0619 22:45:38.510989 24666 slave.cpp:605] Agent hostname: ip-172-30-2-247.mesosphere.io [22:45:38]W: [Step 10/10] I0619 22:45:38.511077 24669 sched.cpp:328] New master detected at master@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.511162 24669 sched.cpp:394] Authenticating with master master@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.511173 24669 sched.cpp:401] Using default CRAM-MD5 authenticatee [22:45:38]W: [Step 10/10] I0619 22:45:38.511294 24662 authenticatee.cpp:121] Creating new client SASL connection [22:45:38]W: [Step 10/10] I0619 22:45:38.511371 24667 state.cpp:57] Recovering state from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_VerifyCheckpointedInfo_IIWOru/meta' [22:45:38]W: [Step 10/10] I0619 22:45:38.511494 24665 master.cpp:5943] Authenticating scheduler-17f04bf5-5b53-40c9-8a93-abe9a7966897@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.511523 24668 status_update_manager.cpp:200] Recovering status update manager [22:45:38]W: [Step 10/10] I0619 22:45:38.511566 24662 authenticator.cpp:414] Starting authentication session for crammd5_authenticatee(957)@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.511612 24664 containerizer.cpp:514] Recovering containerizer [22:45:38]W: [Step 10/10] I0619 22:45:38.511706 24667 authenticator.cpp:98] Creating new server SASL connection [22:45:38]W: [Step 10/10] I0619 22:45:38.511800 24667 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 [22:45:38]W: [Step 10/10] I0619 22:45:38.511816 24667 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' [22:45:38]W: [Step 10/10] I0619 22:45:38.511865 24667 authenticator.cpp:204] Received SASL authentication start [22:45:38]W: [Step 10/10] I0619 22:45:38.511934 24667 authenticator.cpp:326] Authentication requires more steps [22:45:38]W: [Step 10/10] I0619 22:45:38.511977 24667 authenticatee.cpp:259] Received SASL authentication step [22:45:38]W: [Step 10/10] I0619 22:45:38.512080 24668 authenticator.cpp:232] Received SASL authentication step [22:45:38]W: [Step 10/10] I0619 22:45:38.512102 24668 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-247' server FQDN: 'ip-172-30-2-247' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false [22:45:38]W: [Step 10/10] I0619 22:45:38.512112 24668 auxprop.cpp:179] Looking up auxiliary property '*userPassword' [22:45:38]W: [Step 10/10] I0619 22:45:38.512125 24668 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' [22:45:38]W: [Step 10/10] I0619 22:45:38.512136 24668 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-247' server FQDN: 'ip-172-30-2-247' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true [22:45:38]W: [Step 10/10] I0619 22:45:38.512145 24668 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true [22:45:38]W: [Step 10/10] I0619 22:45:38.512152 24668 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since 
SASL_AUXPROP_AUTHZID == true [22:45:38]W: [Step 10/10] I0619 22:45:38.512166 24668 authenticator.cpp:318] Authentication success [22:45:38]W: [Step 10/10] I0619 22:45:38.512228 24665 authenticatee.cpp:299] Authentication success [22:45:38]W: [Step 10/10] I0619 22:45:38.512233 24668 master.cpp:5973] Successfully authenticated principal 'test-principal' at scheduler-17f04bf5-5b53-40c9-8a93-abe9a7966897@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.512253 24667 authenticator.cpp:432] Authentication session cleanup for crammd5_authenticatee(957)@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.512434 24664 sched.cpp:484] Successfully authenticated with master master@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.512445 24664 sched.cpp:800] Sending SUBSCRIBE call to master@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.512490 24667 provisioner.cpp:253] Provisioner recovery complete [22:45:38]W: [Step 10/10] I0619 22:45:38.512609 24664 sched.cpp:833] Will retry registration in 550.501359ms if necessary [22:45:38]W: [Step 10/10] I0619 22:45:38.512648 24663 master.cpp:2539] Received SUBSCRIBE call for framework 'default' at scheduler-17f04bf5-5b53-40c9-8a93-abe9a7966897@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.512665 24663 master.cpp:2008] Authorizing framework principal 'test-principal' to receive offers for role '*' [22:45:38]W: [Step 10/10] I0619 22:45:38.512678 24667 slave.cpp:4845] Finished recovery [22:45:38]W: [Step 10/10] I0619 22:45:38.512763 24663 master.cpp:2615] Subscribing framework default with checkpointing disabled and capabilities [ ] [22:45:38]W: [Step 10/10] I0619 22:45:38.512876 24664 hierarchical.cpp:264] Added framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.512893 24664 hierarchical.cpp:1488] No allocations performed [22:45:38]W: [Step 10/10] I0619 22:45:38.512905 24664 hierarchical.cpp:1583] No inverse offers to send out! 
[22:45:38]W: [Step 10/10] I0619 22:45:38.512922 24664 hierarchical.cpp:1139] Performed allocation for 0 agents in 33065ns [22:45:38]W: [Step 10/10] I0619 22:45:38.512940 24667 slave.cpp:5017] Querying resource estimator for oversubscribable resources [22:45:38]W: [Step 10/10] I0619 22:45:38.513025 24666 sched.cpp:723] Framework registered with 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.513056 24666 sched.cpp:737] Scheduler::registered took 18725ns [22:45:38]W: [Step 10/10] I0619 22:45:38.513074 24669 status_update_manager.cpp:174] Pausing sending status updates [22:45:38]W: [Step 10/10] I0619 22:45:38.513089 24667 slave.cpp:967] New master detected at master@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.513105 24667 slave.cpp:1029] Authenticating with master master@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.513120 24667 slave.cpp:1040] Using default CRAM-MD5 authenticatee [22:45:38]W: [Step 10/10] I0619 22:45:38.513169 24667 slave.cpp:1002] Detecting new master [22:45:38]W: [Step 10/10] I0619 22:45:38.513192 24663 authenticatee.cpp:121] Creating new client SASL connection [22:45:38]W: [Step 10/10] I0619 22:45:38.513260 24667 slave.cpp:5031] Received oversubscribable resources from the resource estimator [22:45:38]W: [Step 10/10] I0619 22:45:38.513324 24666 master.cpp:5943] Authenticating slave(469)@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.513365 24666 authenticator.cpp:414] Starting authentication session for crammd5_authenticatee(958)@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.513423 24665 authenticator.cpp:98] Creating new server SASL connection [22:45:38]W: [Step 10/10] I0619 22:45:38.513484 24665 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 [22:45:38]W: [Step 10/10] I0619 22:45:38.513494 24665 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' [22:45:38]W: [Step 10/10] I0619 22:45:38.513525 24665 authenticator.cpp:204] Received SASL authentication start [22:45:38]W: [Step 10/10] I0619 22:45:38.513563 24665 authenticator.cpp:326] Authentication requires more steps [22:45:38]W: [Step 10/10] I0619 22:45:38.513594 24665 authenticatee.cpp:259] Received SASL authentication step [22:45:38]W: [Step 10/10] I0619 22:45:38.513635 24665 authenticator.cpp:232] Received SASL authentication step [22:45:38]W: [Step 10/10] I0619 22:45:38.513653 24665 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-247' server FQDN: 'ip-172-30-2-247' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false [22:45:38]W: [Step 10/10] I0619 22:45:38.513661 24665 auxprop.cpp:179] Looking up auxiliary property '*userPassword' [22:45:38]W: [Step 10/10] I0619 22:45:38.513667 24665 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' [22:45:38]W: [Step 10/10] I0619 22:45:38.513674 24665 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-247' server FQDN: 'ip-172-30-2-247' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true [22:45:38]W: [Step 10/10] I0619 22:45:38.513677 24665 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true [22:45:38]W: [Step 10/10] I0619 22:45:38.513680 24665 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true [22:45:38]W: [Step 10/10] I0619 22:45:38.513689 24665 authenticator.cpp:318] Authentication success [22:45:38]W: [Step 10/10] I0619 
22:45:38.513727 24665 authenticatee.cpp:299] Authentication success [22:45:38]W: [Step 10/10] I0619 22:45:38.513737 24664 authenticator.cpp:432] Authentication session cleanup for crammd5_authenticatee(958)@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.513754 24666 master.cpp:5973] Successfully authenticated principal 'test-principal' at slave(469)@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.513859 24669 slave.cpp:1108] Successfully authenticated with master master@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.513921 24669 slave.cpp:1511] Will retry registration in 834760ns if necessary [22:45:38]W: [Step 10/10] I0619 22:45:38.513974 24666 master.cpp:4653] Registering agent at slave(469)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) with id 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 [22:45:38]W: [Step 10/10] I0619 22:45:38.514077 24668 registrar.cpp:464] Applied 1 operations in 12262ns; attempting to update the 'registry' [22:45:38]W: [Step 10/10] I0619 22:45:38.514245 24666 log.cpp:577] Attempting to append 395 bytes to the log [22:45:38]W: [Step 10/10] I0619 22:45:38.514282 24666 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 [22:45:38]W: [Step 10/10] I0619 22:45:38.514566 24662 replica.cpp:537] Replica received write request for position 3 from (18487)@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.515151 24665 slave.cpp:1511] Will retry registration in 1.465145ms if necessary [22:45:38]W: [Step 10/10] I0619 22:45:38.515202 24667 master.cpp:4641] Ignoring register agent message from slave(469)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) as admission is already in progress [22:45:38]W: [Step 10/10] I0619 22:45:38.517513 24663 slave.cpp:1511] Will retry registration in 70.844019ms if necessary [22:45:38]W: [Step 10/10] I0619 22:45:38.517555 24664 master.cpp:4641] Ignoring register agent message from slave(469)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) as admission is already in progress [22:45:38]W: [Step 10/10] I0619 22:45:38.518628 24662 leveldb.cpp:341] Persisting action (414 bytes) to leveldb took 4.043654ms [22:45:38]W: [Step 10/10] I0619 22:45:38.518643 24662 replica.cpp:712] Persisted action at 3 [22:45:38]W: [Step 10/10] I0619 22:45:38.518877 24665 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 [22:45:38]W: [Step 10/10] I0619 22:45:38.520764 24665 leveldb.cpp:341] Persisting action (416 bytes) to leveldb took 1.869511ms [22:45:38]W: [Step 10/10] I0619 22:45:38.520779 24665 replica.cpp:712] Persisted action at 3 [22:45:38]W: [Step 10/10] I0619 22:45:38.520784 24665 replica.cpp:697] Replica learned APPEND action at position 3 [22:45:38]W: [Step 10/10] I0619 22:45:38.521023 24663 registrar.cpp:509] Successfully updated the 'registry' in 6.930176ms [22:45:38]W: [Step 10/10] I0619 22:45:38.521083 24668 log.cpp:596] Attempting to truncate the log to 3 [22:45:38]W: [Step 10/10] I0619 22:45:38.521152 24665 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 [22:45:38]W: [Step 10/10] I0619 22:45:38.521239 24667 master.cpp:4721] Registered agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 at slave(469)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [22:45:38]W: [Step 10/10] I0619 22:45:38.521272 24665 slave.cpp:3747] Received ping from slave-observer(424)@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.521280 24664 hierarchical.cpp:473] Added 
agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 (ip-172-30-2-247.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) [22:45:38]W: [Step 10/10] I0619 22:45:38.521340 24665 slave.cpp:1152] Registered with master master@172.30.2.247:42024; given agent ID 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 [22:45:38]W: [Step 10/10] I0619 22:45:38.521354 24665 fetcher.cpp:86] Clearing fetcher cache [22:45:38]W: [Step 10/10] I0619 22:45:38.521428 24664 hierarchical.cpp:1583] No inverse offers to send out! [22:45:38]W: [Step 10/10] I0619 22:45:38.521455 24664 hierarchical.cpp:1162] Performed allocation for agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 in 131318ns [22:45:38]W: [Step 10/10] I0619 22:45:38.521443 24669 replica.cpp:537] Replica received write request for position 4 from (18488)@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.521466 24662 status_update_manager.cpp:181] Resuming sending status updates [22:45:38]W: [Step 10/10] I0619 22:45:38.521502 24668 master.cpp:5772] Sending 1 offers to framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 (default) at scheduler-17f04bf5-5b53-40c9-8a93-abe9a7966897@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.521553 24665 slave.cpp:1175] Checkpointing SlaveInfo to '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_VerifyCheckpointedInfo_IIWOru/meta/slaves/64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0/slave.info' [22:45:38]W: [Step 10/10] I0619 22:45:38.521667 24665 slave.cpp:1212] Forwarding total oversubscribed resources [22:45:38]W: [Step 10/10] I0619 22:45:38.521709 24665 master.cpp:5066] Received update of agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 at slave(469)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) with total oversubscribed resources [22:45:38]W: [Step 10/10] I0619 22:45:38.521725 24668 sched.cpp:897] Scheduler::resourceOffers took 35814ns [22:45:38]W: [Step 10/10] I0619 22:45:38.521827 24665 hierarchical.cpp:531] Agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 (ip-172-30-2-247.mesosphere.io) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) [22:45:38]W: [Step 10/10] I0619 22:45:38.521860 24665 hierarchical.cpp:1488] No allocations performed [22:45:38]W: [Step 10/10] I0619 22:45:38.521870 24665 hierarchical.cpp:1583] No inverse offers to send out! 
[22:45:38]W: [Step 10/10] I0619 22:45:38.521884 24665 hierarchical.cpp:1162] Performed allocation for agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 in 36469ns [22:45:38]W: [Step 10/10] I0619 22:45:38.521885 24647 resources.cpp:572] Parsing resources as JSON failed: cpus:1;mem:128 [22:45:38]W: [Step 10/10] Trying semicolon-delimited string format instead [22:45:38]W: [Step 10/10] I0619 22:45:38.522244 24666 master.cpp:3457] Processing ACCEPT call for offers: [ 64f1f7ac-e810-4fb1-b549-6e29fc62622b-O0 ] on agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 at slave(469)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) for framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 (default) at scheduler-17f04bf5-5b53-40c9-8a93-abe9a7966897@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.522267 24666 master.cpp:3095] Authorizing framework principal 'test-principal' to launch task 123bdde2-b542-4206-9554-249c053f63d2 [22:45:38]W: [Step 10/10] I0619 22:45:38.522642 24666 master.hpp:177] Adding task 123bdde2-b542-4206-9554-249c053f63d2 with resources cpus(*):1; mem(*):128 on agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 (ip-172-30-2-247.mesosphere.io) [22:45:38]W: [Step 10/10] I0619 22:45:38.522666 24666 master.cpp:3946] Launching task 123bdde2-b542-4206-9554-249c053f63d2 of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 (default) at scheduler-17f04bf5-5b53-40c9-8a93-abe9a7966897@172.30.2.247:42024 with resources cpus(*):1; mem(*):128 on agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 at slave(469)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) [22:45:38]W: [Step 10/10] I0619 22:45:38.522780 24662 hierarchical.cpp:891] Recovered cpus(*):1; mem(*):896; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):1; mem(*):128) on agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 from framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.522799 24667 slave.cpp:1551] Got assigned task 123bdde2-b542-4206-9554-249c053f63d2 for framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.522804 24662 hierarchical.cpp:928] Framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 filtered agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 for 5secs [22:45:38]W: [Step 10/10] I0619 22:45:38.522893 24667 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 [22:45:38]W: [Step 10/10] Trying semicolon-delimited string format instead [22:45:38]W: [Step 10/10] I0619 22:45:38.523059 24667 slave.cpp:1670] Launching task 123bdde2-b542-4206-9554-249c053f63d2 for framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.523108 24667 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 [22:45:38]W: [Step 10/10] Trying semicolon-delimited string format instead [22:45:38]W: [Step 10/10] I0619 22:45:38.523439 24669 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.965378ms [22:45:38]W: [Step 10/10] I0619 22:45:38.523454 24669 replica.cpp:712] Persisted action at 4 [22:45:38]W: [Step 10/10] I0619 22:45:38.523521 24667 paths.cpp:528] Trying to chown '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_VerifyCheckpointedInfo_IIWOru/slaves/64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0/frameworks/64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000/executors/123bdde2-b542-4206-9554-249c053f63d2/runs/e533d091-9fc2-4161-b6b3-4c99a88be466' to user 'root' [22:45:38]W: [Step 10/10] I0619 22:45:38.526239 24665 replica.cpp:691] Replica received 
learned notice for position 4 from @0.0.0.0:0 [22:45:38]W: [Step 10/10] I0619 22:45:38.528328 24665 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 2.028744ms [22:45:38]W: [Step 10/10] I0619 22:45:38.528360 24665 leveldb.cpp:399] Deleting ~2 keys from leveldb took 16691ns [22:45:38]W: [Step 10/10] I0619 22:45:38.528368 24665 replica.cpp:712] Persisted action at 4 [22:45:38]W: [Step 10/10] I0619 22:45:38.528374 24665 replica.cpp:697] Replica learned TRUNCATE action at position 4 [22:45:38]W: [Step 10/10] I0619 22:45:38.528923 24667 slave.cpp:5734] Launching executor 123bdde2-b542-4206-9554-249c053f63d2 of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_VerifyCheckpointedInfo_IIWOru/slaves/64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0/frameworks/64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000/executors/123bdde2-b542-4206-9554-249c053f63d2/runs/e533d091-9fc2-4161-b6b3-4c99a88be466' [22:45:38]W: [Step 10/10] I0619 22:45:38.529093 24667 slave.cpp:1896] Queuing task '123bdde2-b542-4206-9554-249c053f63d2' for executor '123bdde2-b542-4206-9554-249c053f63d2' of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.529100 24669 containerizer.cpp:773] Starting container 'e533d091-9fc2-4161-b6b3-4c99a88be466' for executor '123bdde2-b542-4206-9554-249c053f63d2' of framework '64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000' [22:45:38]W: [Step 10/10] I0619 22:45:38.529126 24667 slave.cpp:920] Successfully attached file '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_VerifyCheckpointedInfo_IIWOru/slaves/64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0/frameworks/64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000/executors/123bdde2-b542-4206-9554-249c053f63d2/runs/e533d091-9fc2-4161-b6b3-4c99a88be466' [22:45:38]W: [Step 10/10] I0619 22:45:38.529799 24663 containerizer.cpp:1120] Overwriting environment variable 'LIBPROCESS_IP', original: '172.30.2.247', new: '0.0.0.0', for container e533d091-9fc2-4161-b6b3-4c99a88be466 [22:45:38]W: [Step 10/10] I0619 22:45:38.530079 24666 containerizer.cpp:1267] Launching 'mesos-containerizer' with flags '--command=""""{""""shell"""":true,""""value"""":""""\/mnt\/teamcity\/work\/4240ba9ddd0997c3\/build\/src\/mesos-executor""""}"""" --commands=""""{""""commands"""":[]}"""" --help=""""false"""" --pipe_read=""""96"""" --pipe_write=""""106"""" --sandbox=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_VerifyCheckpointedInfo_IIWOru/slaves/64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0/frameworks/64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000/executors/123bdde2-b542-4206-9554-249c053f63d2/runs/e533d091-9fc2-4161-b6b3-4c99a88be466"""" --user=""""root""""' [22:45:38]W: [Step 10/10] I0619 22:45:38.530154 24666 linux_launcher.cpp:281] Cloning child process with flags = CLONE_NEWUTS | CLONE_NEWNS [22:45:38]W: [Step 10/10] I0619 22:45:38.533272 24662 cni.cpp:683] Bind mounted '/proc/7922/ns/net' to '/var/run/mesos/isolators/network/cni/e533d091-9fc2-4161-b6b3-4c99a88be466/ns' for container e533d091-9fc2-4161-b6b3-4c99a88be466 [22:45:38]W: [Step 10/10] I0619 22:45:38.533452 24662 cni.cpp:977] Invoking CNI plugin 'mockPlugin' with network configuration '{""""args"""":{""""org.apache.mesos"""":{""""network_info"""":{""""name"""":""""__MESOS_TEST__""""}}},""""name"""":""""__MESOS_TEST__"""",""""type"""":""""mockPlugin""""}' [22:45:38]W: [Step 10/10] I0619 22:45:38.606812 24663 cni.cpp:1066] Got assigned IPv4 address '172.17.42.1/16' from CNI network '__MESOS_TEST__' for 
container e533d091-9fc2-4161-b6b3-4c99a88be466 [22:45:38]W: [Step 10/10] I0619 22:45:38.607293 24666 cni.cpp:808] DNS nameservers for container e533d091-9fc2-4161-b6b3-4c99a88be466 are: [22:45:38]W: [Step 10/10] nameserver 172.30.0.2 [22:45:38]W: [Step 10/10] Failed to synchronize with agent (it's probably exited) [22:45:38]W: [Step 10/10] E0619 22:45:38.707609 24662 slave.cpp:4039] Container 'e533d091-9fc2-4161-b6b3-4c99a88be466' for executor '123bdde2-b542-4206-9554-249c053f63d2' of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 failed to start: Collect failed: Failed to setup hostname and network files: WARNING: Logging before InitGoogleLogging() is written to STDERR [22:45:38]W: [Step 10/10] I0619 22:45:38.645313 7938 cni.cpp:1449] Set hostname to 'e533d091-9fc2-4161-b6b3-4c99a88be466' [22:45:38]W: [Step 10/10] Mount point '/etc/hostname' does not exist on the host filesystem [22:45:38]W: [Step 10/10] I0619 22:45:38.707772 24669 containerizer.cpp:1576] Destroying container 'e533d091-9fc2-4161-b6b3-4c99a88be466' [22:45:38]W: [Step 10/10] I0619 22:45:38.707787 24669 containerizer.cpp:1624] Waiting for the isolators to complete for container 'e533d091-9fc2-4161-b6b3-4c99a88be466' [22:45:38]W: [Step 10/10] I0619 22:45:38.708878 24667 cgroups.cpp:2676] Freezing cgroup /cgroup/freezer/mesos/e533d091-9fc2-4161-b6b3-4c99a88be466 [22:45:38]W: [Step 10/10] I0619 22:45:38.807951 24664 containerizer.cpp:1812] Executor for container 'e533d091-9fc2-4161-b6b3-4c99a88be466' has exited [22:45:38]W: [Step 10/10] I0619 22:45:38.810672 24666 cgroups.cpp:1409] Successfully froze cgroup /cgroup/freezer/mesos/e533d091-9fc2-4161-b6b3-4c99a88be466 after 101.766144ms [22:45:38]W: [Step 10/10] I0619 22:45:38.811637 24668 cgroups.cpp:2694] Thawing cgroup /cgroup/freezer/mesos/e533d091-9fc2-4161-b6b3-4c99a88be466 [22:45:38]W: [Step 10/10] I0619 22:45:38.812523 24664 cgroups.cpp:1438] Successfully thawed cgroup /cgroup/freezer/mesos/e533d091-9fc2-4161-b6b3-4c99a88be466 after 864us [22:45:38]W: [Step 10/10] I0619 22:45:38.908664 24668 cni.cpp:1217] Unmounted the network namespace handle '/var/run/mesos/isolators/network/cni/e533d091-9fc2-4161-b6b3-4c99a88be466/ns' for container e533d091-9fc2-4161-b6b3-4c99a88be466 [22:45:38]W: [Step 10/10] I0619 22:45:38.908843 24668 cni.cpp:1228] Removed the container directory '/var/run/mesos/isolators/network/cni/e533d091-9fc2-4161-b6b3-4c99a88be466' [22:45:38]W: [Step 10/10] I0619 22:45:38.909222 24669 provisioner.cpp:411] Ignoring destroy request for unknown container e533d091-9fc2-4161-b6b3-4c99a88be466 [22:45:38]W: [Step 10/10] I0619 22:45:38.909346 24664 slave.cpp:4152] Executor '123bdde2-b542-4206-9554-249c053f63d2' of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 exited with status 1 [22:45:38]W: [Step 10/10] I0619 22:45:38.909437 24664 slave.cpp:3267] Handling status update TASK_FAILED (UUID: 2b27727d-e7cd-4aea-b5a6-a83c83df5f01) for task 123bdde2-b542-4206-9554-249c053f63d2 of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 from @0.0.0.0:0 [22:45:38]W: [Step 10/10] I0619 22:45:38.909620 24664 slave.cpp:6074] Terminating task 123bdde2-b542-4206-9554-249c053f63d2 [22:45:38]W: [Step 10/10] W0619 22:45:38.909713 24665 containerizer.cpp:1418] Ignoring update for unknown container: e533d091-9fc2-4161-b6b3-4c99a88be466 [22:45:38]W: [Step 10/10] I0619 22:45:38.909871 24666 status_update_manager.cpp:320] Received status update TASK_FAILED (UUID: 2b27727d-e7cd-4aea-b5a6-a83c83df5f01) for task 123bdde2-b542-4206-9554-249c053f63d2 of framework 
64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.909888 24666 status_update_manager.cpp:497] Creating StatusUpdate stream for task 123bdde2-b542-4206-9554-249c053f63d2 of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.910080 24666 status_update_manager.cpp:374] Forwarding update TASK_FAILED (UUID: 2b27727d-e7cd-4aea-b5a6-a83c83df5f01) for task 123bdde2-b542-4206-9554-249c053f63d2 of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 to the agent [22:45:38]W: [Step 10/10] I0619 22:45:38.910163 24665 slave.cpp:3665] Forwarding the update TASK_FAILED (UUID: 2b27727d-e7cd-4aea-b5a6-a83c83df5f01) for task 123bdde2-b542-4206-9554-249c053f63d2 of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 to master@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.910253 24665 slave.cpp:3559] Status update manager successfully handled status update TASK_FAILED (UUID: 2b27727d-e7cd-4aea-b5a6-a83c83df5f01) for task 123bdde2-b542-4206-9554-249c053f63d2 of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.910490 24667 master.cpp:5211] Status update TASK_FAILED (UUID: 2b27727d-e7cd-4aea-b5a6-a83c83df5f01) for task 123bdde2-b542-4206-9554-249c053f63d2 of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 from agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 at slave(469)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) [22:45:38]W: [Step 10/10] I0619 22:45:38.910512 24667 master.cpp:5259] Forwarding status update TASK_FAILED (UUID: 2b27727d-e7cd-4aea-b5a6-a83c83df5f01) for task 123bdde2-b542-4206-9554-249c053f63d2 of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.910560 24667 master.cpp:6871] Updating the state of task 123bdde2-b542-4206-9554-249c053f63d2 of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 (latest state: TASK_FAILED, status update state: TASK_FAILED) [22:45:38] : [Step 10/10] ../../src/tests/containerizer/cni_isolator_tests.cpp:292: Failure [22:45:38]W: [Step 10/10] I0619 22:45:38.910698 24668 hierarchical.cpp:891] Recovered cpus(*):1; mem(*):128 (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 from framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38] : [Step 10/10] Value of: statusRunning->state() [22:45:38]W: [Step 10/10] I0619 22:45:38.910755 24666 sched.cpp:1005] Scheduler::statusUpdate took 50939ns [22:45:38] : [Step 10/10] Actual: TASK_FAILED [22:45:38] : [Step 10/10] Expected: TASK_RUNNING [22:45:38] : [Step 10/10] ../../src/tests/containerizer/cni_isolator_tests.cpp:296: Failure [22:45:38]W: [Step 10/10] I0619 22:45:38.910995 24662 master.cpp:4365] Processing ACKNOWLEDGE call 2b27727d-e7cd-4aea-b5a6-a83c83df5f01 for task 123bdde2-b542-4206-9554-249c053f63d2 of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 (default) at scheduler-17f04bf5-5b53-40c9-8a93-abe9a7966897@172.30.2.247:42024 on agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 [22:45:38] : [Step 10/10] Value of: containers.get().size() [22:45:38] : [Step 10/10] Actual: 0 [22:45:38]W: [Step 10/10] I0619 22:45:38.911026 24662 master.cpp:6937] Removing task 123bdde2-b542-4206-9554-249c053f63d2 with resources cpus(*):1; mem(*):128 of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 on agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 at slave(469)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) [22:45:38] : [Step 10/10] Expected: 1u [22:45:38] : [Step 
10/10] Which is: 1 [22:45:38]W: [Step 10/10] I0619 22:45:38.911234 24665 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 2b27727d-e7cd-4aea-b5a6-a83c83df5f01) for task 123bdde2-b542-4206-9554-249c053f63d2 of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.911278 24665 status_update_manager.cpp:528] Cleaning up status update stream for task 123bdde2-b542-4206-9554-249c053f63d2 of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.911402 24669 master.cpp:1406] Framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 (default) at scheduler-17f04bf5-5b53-40c9-8a93-abe9a7966897@172.30.2.247:42024 disconnected [22:45:38]W: [Step 10/10] I0619 22:45:38.911418 24669 master.cpp:2840] Disconnecting framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 (default) at scheduler-17f04bf5-5b53-40c9-8a93-abe9a7966897@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.911414 24665 slave.cpp:2653] Status update manager successfully handled status update acknowledgement (UUID: 2b27727d-e7cd-4aea-b5a6-a83c83df5f01) for task 123bdde2-b542-4206-9554-249c053f63d2 of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.911433 24669 master.cpp:2864] Deactivating framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 (default) at scheduler-17f04bf5-5b53-40c9-8a93-abe9a7966897@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.911453 24665 slave.cpp:6115] Completing task 123bdde2-b542-4206-9554-249c053f63d2 [22:45:38]W: [Step 10/10] I0619 22:45:38.911470 24665 slave.cpp:4256] Cleaning up executor '123bdde2-b542-4206-9554-249c053f63d2' of framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.911548 24669 master.cpp:1419] Giving framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 (default) at scheduler-17f04bf5-5b53-40c9-8a93-abe9a7966897@172.30.2.247:42024 0ns to failover [22:45:38]W: [Step 10/10] I0619 22:45:38.911583 24662 hierarchical.cpp:375] Deactivated framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.911640 24669 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_VerifyCheckpointedInfo_IIWOru/slaves/64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0/frameworks/64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000/executors/123bdde2-b542-4206-9554-249c053f63d2/runs/e533d091-9fc2-4161-b6b3-4c99a88be466' for gc 6.99998944950222days in the future [22:45:38]W: [Step 10/10] I0619 22:45:38.911680 24665 slave.cpp:4344] Cleaning up framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.911689 24669 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_VerifyCheckpointedInfo_IIWOru/slaves/64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0/frameworks/64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000/executors/123bdde2-b542-4206-9554-249c053f63d2' for gc 6.99998944832296days in the future [22:45:38]W: [Step 10/10] I0619 22:45:38.911738 24664 status_update_manager.cpp:282] Closing status update streams for framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.911805 24662 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_VerifyCheckpointedInfo_IIWOru/slaves/64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0/frameworks/64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000' for gc 6.99998944754667days in the future [22:45:38]W: [Step 10/10] I0619 22:45:38.911918 24647 slave.cpp:839] Agent terminating [22:45:38]W: 
[Step 10/10] I0619 22:45:38.911991 24662 master.cpp:1367] Agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 at slave(469)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) disconnected [22:45:38]W: [Step 10/10] I0619 22:45:38.912009 24662 master.cpp:2899] Disconnecting agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 at slave(469)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) [22:45:38]W: [Step 10/10] I0619 22:45:38.912029 24662 master.cpp:2918] Deactivating agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 at slave(469)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) [22:45:38]W: [Step 10/10] I0619 22:45:38.912135 24665 hierarchical.cpp:560] Agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 deactivated [22:45:38]W: [Step 10/10] I0619 22:45:38.912824 24669 master.cpp:5624] Framework failover timeout, removing framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 (default) at scheduler-17f04bf5-5b53-40c9-8a93-abe9a7966897@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.912842 24669 master.cpp:6354] Removing framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 (default) at scheduler-17f04bf5-5b53-40c9-8a93-abe9a7966897@172.30.2.247:42024 [22:45:38]W: [Step 10/10] I0619 22:45:38.913030 24669 hierarchical.cpp:326] Removed framework 64f1f7ac-e810-4fb1-b549-6e29fc62622b-0000 [22:45:38]W: [Step 10/10] I0619 22:45:38.913905 24647 master.cpp:1214] Master terminating [22:45:38]W: [Step 10/10] I0619 22:45:38.914031 24664 hierarchical.cpp:505] Removed agent 64f1f7ac-e810-4fb1-b549-6e29fc62622b-S0 [22:45:38] : [Step 10/10] [ FAILED ] CniIsolatorTest.ROOT_VerifyCheckpointedInfo (457 ms) [22:45:39] : [Step 10/10] [ RUN ] CniIsolatorTest.ROOT_SlaveRecovery [22:45:39]W: [Step 10/10] I0619 22:45:39.224643 24647 cluster.cpp:155] Creating default 'local' authorizer [22:45:39]W: [Step 10/10] I0619 22:45:39.232614 24647 leveldb.cpp:174] Opened db in 7.839626ms [22:45:39]W: [Step 10/10] I0619 22:45:39.235198 24647 leveldb.cpp:181] Compacted db in 2.563679ms [22:45:39]W: [Step 10/10] I0619 22:45:39.235219 24647 leveldb.cpp:196] Created db iterator in 4353ns [22:45:39]W: [Step 10/10] I0619 22:45:39.235224 24647 leveldb.cpp:202] Seeked to beginning of db in 668ns [22:45:39]W: [Step 10/10] I0619 22:45:39.235231 24647 leveldb.cpp:271] Iterated through 0 keys in the db in 399ns [22:45:39]W: [Step 10/10] I0619 22:45:39.235246 24647 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned [22:45:39]W: [Step 10/10] I0619 22:45:39.235555 24662 recover.cpp:451] Starting replica recovery [22:45:39]W: [Step 10/10] I0619 22:45:39.235777 24663 recover.cpp:477] Replica is in EMPTY status [22:45:39]W: [Step 10/10] I0619 22:45:39.236134 24669 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (18550)@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.236197 24663 recover.cpp:197] Received a recover response from a replica in EMPTY status [22:45:39]W: [Step 10/10] I0619 22:45:39.236351 24667 recover.cpp:568] Updating replica status to STARTING [22:45:39]W: [Step 10/10] I0619 22:45:39.236580 24668 master.cpp:382] Master 032cd99a-1cdc-42d4-b94a-f7b00f37fb52 (ip-172-30-2-247.mesosphere.io) started on 172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.236594 24668 master.cpp:384] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" 
--authenticate_http=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/ghfuib/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/ghfuib/master"""" --zk_session_timeout=""""10secs"""" [22:45:39]W: [Step 10/10] I0619 22:45:39.236723 24668 master.cpp:434] Master only allowing authenticated frameworks to register [22:45:39]W: [Step 10/10] I0619 22:45:39.236729 24668 master.cpp:448] Master only allowing authenticated agents to register [22:45:39]W: [Step 10/10] I0619 22:45:39.236732 24668 master.cpp:461] Master only allowing authenticated HTTP frameworks to register [22:45:39]W: [Step 10/10] I0619 22:45:39.236737 24668 credentials.hpp:37] Loading credentials for authentication from '/tmp/ghfuib/credentials' [22:45:39]W: [Step 10/10] I0619 22:45:39.236829 24668 master.cpp:506] Using default 'crammd5' authenticator [22:45:39]W: [Step 10/10] I0619 22:45:39.236871 24668 master.cpp:578] Using default 'basic' HTTP authenticator [22:45:39]W: [Step 10/10] I0619 22:45:39.236946 24668 master.cpp:658] Using default 'basic' HTTP framework authenticator [22:45:39]W: [Step 10/10] I0619 22:45:39.236991 24668 master.cpp:705] Authorization enabled [22:45:39]W: [Step 10/10] I0619 22:45:39.237077 24663 whitelist_watcher.cpp:77] No whitelist given [22:45:39]W: [Step 10/10] I0619 22:45:39.237159 24665 hierarchical.cpp:142] Initialized hierarchical allocator process [22:45:39]W: [Step 10/10] I0619 22:45:39.237638 24667 master.cpp:1969] The newly elected leader is master@172.30.2.247:42024 with id 032cd99a-1cdc-42d4-b94a-f7b00f37fb52 [22:45:39]W: [Step 10/10] I0619 22:45:39.237650 24667 master.cpp:1982] Elected as the leading master! 
[22:45:39]W: [Step 10/10] I0619 22:45:39.237655 24667 master.cpp:1669] Recovering from registrar [22:45:39]W: [Step 10/10] I0619 22:45:39.237700 24669 registrar.cpp:332] Recovering registrar [22:45:39]W: [Step 10/10] I0619 22:45:39.239017 24662 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 2.616259ms [22:45:39]W: [Step 10/10] I0619 22:45:39.239032 24662 replica.cpp:320] Persisted replica status to STARTING [22:45:39]W: [Step 10/10] I0619 22:45:39.239084 24662 recover.cpp:477] Replica is in STARTING status [22:45:39]W: [Step 10/10] I0619 22:45:39.239437 24669 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (18553)@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.239538 24662 recover.cpp:197] Received a recover response from a replica in STARTING status [22:45:39]W: [Step 10/10] I0619 22:45:39.239672 24663 recover.cpp:568] Updating replica status to VOTING [22:45:39]W: [Step 10/10] I0619 22:45:39.241654 24662 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.871972ms [22:45:39]W: [Step 10/10] I0619 22:45:39.241670 24662 replica.cpp:320] Persisted replica status to VOTING [22:45:39]W: [Step 10/10] I0619 22:45:39.241703 24662 recover.cpp:582] Successfully joined the Paxos group [22:45:39]W: [Step 10/10] I0619 22:45:39.241745 24662 recover.cpp:466] Recover process terminated [22:45:39]W: [Step 10/10] I0619 22:45:39.241880 24662 log.cpp:553] Attempting to start the writer [22:45:39]W: [Step 10/10] I0619 22:45:39.242295 24668 replica.cpp:493] Replica received implicit promise request from (18554)@172.30.2.247:42024 with proposal 1 [22:45:39]W: [Step 10/10] I0619 22:45:39.244303 24668 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.98443ms [22:45:39]W: [Step 10/10] I0619 22:45:39.244318 24668 replica.cpp:342] Persisted promised to 1 [22:45:39]W: [Step 10/10] I0619 22:45:39.244529 24663 coordinator.cpp:238] Coordinator attempting to fill missing positions [22:45:39]W: [Step 10/10] I0619 22:45:39.245007 24664 replica.cpp:388] Replica received explicit promise request from (18555)@172.30.2.247:42024 for position 0 with proposal 2 [22:45:39]W: [Step 10/10] I0619 22:45:39.246898 24664 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 1.869865ms [22:45:39]W: [Step 10/10] I0619 22:45:39.246915 24664 replica.cpp:712] Persisted action at 0 [22:45:39]W: [Step 10/10] I0619 22:45:39.247295 24666 replica.cpp:537] Replica received write request for position 0 from (18556)@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.247320 24666 leveldb.cpp:436] Reading position from leveldb took 10783ns [22:45:39]W: [Step 10/10] I0619 22:45:39.249264 24666 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 1.93015ms [22:45:39]W: [Step 10/10] I0619 22:45:39.249279 24666 replica.cpp:712] Persisted action at 0 [22:45:39]W: [Step 10/10] I0619 22:45:39.249492 24663 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 [22:45:39]W: [Step 10/10] I0619 22:45:39.251349 24663 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.840655ms [22:45:39]W: [Step 10/10] I0619 22:45:39.251364 24663 replica.cpp:712] Persisted action at 0 [22:45:39]W: [Step 10/10] I0619 22:45:39.251369 24663 replica.cpp:697] Replica learned NOP action at position 0 [22:45:39]W: [Step 10/10] I0619 22:45:39.251634 24668 log.cpp:569] Writer started with ending position 0 [22:45:39]W: [Step 10/10] I0619 22:45:39.251905 24666 leveldb.cpp:436] Reading position from leveldb took 11014ns [22:45:39]W: 
[Step 10/10] I0619 22:45:39.252132 24664 registrar.cpp:365] Successfully fetched the registry (0B) in 14.413312ms [22:45:39]W: [Step 10/10] I0619 22:45:39.252176 24664 registrar.cpp:464] Applied 1 operations in 5009ns; attempting to update the 'registry' [22:45:39]W: [Step 10/10] I0619 22:45:39.252378 24663 log.cpp:577] Attempting to append 209 bytes to the log [22:45:39]W: [Step 10/10] I0619 22:45:39.252437 24669 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 [22:45:39]W: [Step 10/10] I0619 22:45:39.252768 24666 replica.cpp:537] Replica received write request for position 1 from (18557)@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.254878 24666 leveldb.cpp:341] Persisting action (228 bytes) to leveldb took 2.087874ms [22:45:39]W: [Step 10/10] I0619 22:45:39.254894 24666 replica.cpp:712] Persisted action at 1 [22:45:39]W: [Step 10/10] I0619 22:45:39.255100 24664 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 [22:45:39]W: [Step 10/10] I0619 22:45:39.256983 24664 leveldb.cpp:341] Persisting action (230 bytes) to leveldb took 1.863178ms [22:45:39]W: [Step 10/10] I0619 22:45:39.256999 24664 replica.cpp:712] Persisted action at 1 [22:45:39]W: [Step 10/10] I0619 22:45:39.257004 24664 replica.cpp:697] Replica learned APPEND action at position 1 [22:45:39]W: [Step 10/10] I0619 22:45:39.257231 24663 registrar.cpp:509] Successfully updated the 'registry' in 5.034752ms [22:45:39]W: [Step 10/10] I0619 22:45:39.257283 24663 registrar.cpp:395] Successfully recovered registrar [22:45:39]W: [Step 10/10] I0619 22:45:39.257304 24665 log.cpp:596] Attempting to truncate the log to 1 [22:45:39]W: [Step 10/10] I0619 22:45:39.257431 24666 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 [22:45:39]W: [Step 10/10] I0619 22:45:39.257462 24668 master.cpp:1777] Recovered 0 agents from the Registry (170B) ; allowing 10mins for agents to re-register [22:45:39]W: [Step 10/10] I0619 22:45:39.257484 24662 hierarchical.cpp:169] Skipping recovery of hierarchical allocator: nothing to recover [22:45:39]W: [Step 10/10] I0619 22:45:39.257690 24668 replica.cpp:537] Replica received write request for position 2 from (18558)@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.259577 24668 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.867119ms [22:45:39]W: [Step 10/10] I0619 22:45:39.259593 24668 replica.cpp:712] Persisted action at 2 [22:45:39]W: [Step 10/10] I0619 22:45:39.259788 24667 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 [22:45:39]W: [Step 10/10] I0619 22:45:39.261890 24667 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 2.084656ms [22:45:39]W: [Step 10/10] I0619 22:45:39.261920 24667 leveldb.cpp:399] Deleting ~1 keys from leveldb took 13997ns [22:45:39]W: [Step 10/10] I0619 22:45:39.261929 24667 replica.cpp:712] Persisted action at 2 [22:45:39]W: [Step 10/10] I0619 22:45:39.261934 24667 replica.cpp:697] Replica learned TRUNCATE action at position 2 [22:45:39]W: [Step 10/10] I0619 22:45:39.269104 24647 containerizer.cpp:201] Using isolation: network/cni,filesystem/posix [22:45:39]W: [Step 10/10] I0619 22:45:39.272172 24647 linux_launcher.cpp:101] Using /cgroup/freezer as the freezer hierarchy for the Linux launcher [22:45:39]W: [Step 10/10] I0619 22:45:39.273219 24647 cluster.cpp:432] Creating default 'local' authorizer [22:45:39]W: [Step 10/10] I0619 22:45:39.273654 24662 slave.cpp:203] Agent started on 471)@172.30.2.247:42024 
[22:45:39]W: [Step 10/10] I0619 22:45:39.273664 24662 slave.cpp:204] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""true"""" --authenticatee=""""crammd5"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --http_credentials=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""network/cni"""" --launcher_dir=""""/mnt/teamcity/work/4240ba9ddd0997c3/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --network_cni_config_dir=""""/tmp/ghfuib/configs"""" --network_cni_plugins_dir=""""/tmp/ghfuib/plugins"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ"""" [22:45:39]W: [Step 10/10] I0619 22:45:39.273874 24662 credentials.hpp:86] Loading credential for authentication from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/credential' [22:45:39]W: [Step 10/10] I0619 22:45:39.273952 24662 slave.cpp:341] Agent using credential for: test-principal [22:45:39]W: [Step 10/10] I0619 22:45:39.273967 24662 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/http_credentials' [22:45:39]W: [Step 10/10] I0619 22:45:39.274041 24662 slave.cpp:393] Using default 'basic' HTTP authenticator [22:45:39]W: [Step 10/10] I0619 22:45:39.274193 24662 resources.cpp:572] Parsing resources as JSON failed: cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000] [22:45:39]W: 
[Step 10/10] Trying semicolon-delimited string format instead [22:45:39]W: [Step 10/10] I0619 22:45:39.274448 24647 sched.cpp:224] Version: 1.0.0 [22:45:39]W: [Step 10/10] I0619 22:45:39.274459 24662 slave.cpp:592] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [22:45:39]W: [Step 10/10] I0619 22:45:39.274492 24662 slave.cpp:600] Agent attributes: [ ] [22:45:39]W: [Step 10/10] I0619 22:45:39.274500 24662 slave.cpp:605] Agent hostname: ip-172-30-2-247.mesosphere.io [22:45:39]W: [Step 10/10] I0619 22:45:39.274618 24669 sched.cpp:328] New master detected at master@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.274714 24669 sched.cpp:394] Authenticating with master master@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.274724 24669 sched.cpp:401] Using default CRAM-MD5 authenticatee [22:45:39]W: [Step 10/10] I0619 22:45:39.274826 24667 authenticatee.cpp:121] Creating new client SASL connection [22:45:39]W: [Step 10/10] I0619 22:45:39.274855 24662 state.cpp:57] Recovering state from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/meta' [22:45:39]W: [Step 10/10] I0619 22:45:39.274950 24667 master.cpp:5943] Authenticating scheduler-fb3e96c6-1106-4910-80d7-e83d75960307@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.275002 24669 authenticator.cpp:414] Starting authentication session for crammd5_authenticatee(961)@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.275041 24667 status_update_manager.cpp:200] Recovering status update manager [22:45:39]W: [Step 10/10] I0619 22:45:39.275116 24668 authenticator.cpp:98] Creating new server SASL connection [22:45:39]W: [Step 10/10] I0619 22:45:39.275132 24662 containerizer.cpp:514] Recovering containerizer [22:45:39]W: [Step 10/10] I0619 22:45:39.275185 24668 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 [22:45:39]W: [Step 10/10] I0619 22:45:39.275197 24668 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' [22:45:39]W: [Step 10/10] I0619 22:45:39.275246 24666 authenticator.cpp:204] Received SASL authentication start [22:45:39]W: [Step 10/10] I0619 22:45:39.275322 24666 authenticator.cpp:326] Authentication requires more steps [22:45:39]W: [Step 10/10] I0619 22:45:39.275370 24666 authenticatee.cpp:259] Received SASL authentication step [22:45:39]W: [Step 10/10] I0619 22:45:39.275445 24667 authenticator.cpp:232] Received SASL authentication step [22:45:39]W: [Step 10/10] I0619 22:45:39.275462 24667 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-247' server FQDN: 'ip-172-30-2-247' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false [22:45:39]W: [Step 10/10] I0619 22:45:39.275468 24667 auxprop.cpp:179] Looking up auxiliary property '*userPassword' [22:45:39]W: [Step 10/10] I0619 22:45:39.275485 24667 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' [22:45:39]W: [Step 10/10] I0619 22:45:39.275492 24667 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-247' server FQDN: 'ip-172-30-2-247' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true [22:45:39]W: [Step 10/10] I0619 22:45:39.275497 24667 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true [22:45:39]W: [Step 10/10] I0619 22:45:39.275501 24667 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true [22:45:39]W: [Step 10/10] I0619 22:45:39.275511 
24667 authenticator.cpp:318] Authentication success [22:45:39]W: [Step 10/10] I0619 22:45:39.275563 24667 authenticatee.cpp:299] Authentication success [22:45:39]W: [Step 10/10] I0619 22:45:39.275574 24664 authenticator.cpp:432] Authentication session cleanup for crammd5_authenticatee(961)@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.275586 24665 master.cpp:5973] Successfully authenticated principal 'test-principal' at scheduler-fb3e96c6-1106-4910-80d7-e83d75960307@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.275640 24666 sched.cpp:484] Successfully authenticated with master master@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.275653 24666 sched.cpp:800] Sending SUBSCRIBE call to master@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.275758 24666 sched.cpp:833] Will retry registration in 1.75141928secs if necessary [22:45:39]W: [Step 10/10] I0619 22:45:39.275781 24664 master.cpp:2539] Received SUBSCRIBE call for framework 'default' at scheduler-fb3e96c6-1106-4910-80d7-e83d75960307@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.275804 24664 master.cpp:2008] Authorizing framework principal 'test-principal' to receive offers for role '*' [22:45:39]W: [Step 10/10] W0619 22:45:39.275826 24665 cni.cpp:503] The checkpointed CNI plugin output '/var/run/mesos/isolators/network/cni/422a6a27-4327-4dc1-9a4c-7de578226eab/__MESOS_TEST__/eth0/network.info' for container 422a6a27-4327-4dc1-9a4c-7de578226eab does not exist [22:45:39]W: [Step 10/10] I0619 22:45:39.275856 24665 cni.cpp:407] Removing unknown orphaned container 422a6a27-4327-4dc1-9a4c-7de578226eab [22:45:39]W: [Step 10/10] I0619 22:45:39.275918 24662 master.cpp:2615] Subscribing framework default with checkpointing enabled and capabilities [ ] [22:45:39]W: [Step 10/10] I0619 22:45:39.278825 24667 hierarchical.cpp:264] Added framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.278898 24667 hierarchical.cpp:1488] No allocations performed [22:45:39]W: [Step 10/10] I0619 22:45:39.278906 24667 hierarchical.cpp:1583] No inverse offers to send out! 
[22:45:39]W: [Step 10/10] I0619 22:45:39.278916 24667 hierarchical.cpp:1139] Performed allocation for 0 agents in 28334ns [22:45:39]W: [Step 10/10] I0619 22:45:39.278957 24662 sched.cpp:723] Framework registered with 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.279021 24662 sched.cpp:737] Scheduler::registered took 46037ns [22:45:39]W: [Step 10/10] I0619 22:45:39.279216 24669 provisioner.cpp:253] Provisioner recovery complete [22:45:39]W: [Step 10/10] I0619 22:45:39.279381 24663 slave.cpp:4845] Finished recovery [22:45:39]W: [Step 10/10] I0619 22:45:39.279583 24663 slave.cpp:5017] Querying resource estimator for oversubscribable resources [22:45:39]W: [Step 10/10] I0619 22:45:39.279724 24667 status_update_manager.cpp:174] Pausing sending status updates [22:45:39]W: [Step 10/10] I0619 22:45:39.279791 24668 slave.cpp:967] New master detected at master@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.279808 24668 slave.cpp:1029] Authenticating with master master@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.279826 24668 slave.cpp:1040] Using default CRAM-MD5 authenticatee [22:45:39]W: [Step 10/10] I0619 22:45:39.279878 24668 slave.cpp:1002] Detecting new master [22:45:39]W: [Step 10/10] I0619 22:45:39.279916 24666 authenticatee.cpp:121] Creating new client SASL connection [22:45:39]W: [Step 10/10] I0619 22:45:39.279953 24668 slave.cpp:5031] Received oversubscribable resources from the resource estimator [22:45:39]W: [Step 10/10] I0619 22:45:39.280045 24666 master.cpp:5943] Authenticating slave(471)@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.280129 24665 authenticator.cpp:414] Starting authentication session for crammd5_authenticatee(962)@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.280186 24665 authenticator.cpp:98] Creating new server SASL connection [22:45:39]W: [Step 10/10] I0619 22:45:39.280266 24665 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 [22:45:39]W: [Step 10/10] I0619 22:45:39.280279 24665 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' [22:45:39]W: [Step 10/10] I0619 22:45:39.280308 24665 authenticator.cpp:204] Received SASL authentication start [22:45:39]W: [Step 10/10] I0619 22:45:39.280345 24665 authenticator.cpp:326] Authentication requires more steps [22:45:39]W: [Step 10/10] I0619 22:45:39.280383 24665 authenticatee.cpp:259] Received SASL authentication step [22:45:39]W: [Step 10/10] I0619 22:45:39.280447 24669 authenticator.cpp:232] Received SASL authentication step [22:45:39]W: [Step 10/10] I0619 22:45:39.280468 24669 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-247' server FQDN: 'ip-172-30-2-247' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false [22:45:39]W: [Step 10/10] I0619 22:45:39.280474 24669 auxprop.cpp:179] Looking up auxiliary property '*userPassword' [22:45:39]W: [Step 10/10] I0619 22:45:39.280483 24669 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' [22:45:39]W: [Step 10/10] I0619 22:45:39.280488 24669 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-247' server FQDN: 'ip-172-30-2-247' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true [22:45:39]W: [Step 10/10] I0619 22:45:39.280493 24669 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true [22:45:39]W: [Step 10/10] I0619 22:45:39.280496 24669 auxprop.cpp:129] Skipping auxiliary property 
'*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true [22:45:39]W: [Step 10/10] I0619 22:45:39.280504 24669 authenticator.cpp:318] Authentication success [22:45:39]W: [Step 10/10] I0619 22:45:39.280544 24669 authenticatee.cpp:299] Authentication success [22:45:39]W: [Step 10/10] I0619 22:45:39.280568 24668 authenticator.cpp:432] Authentication session cleanup for crammd5_authenticatee(962)@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.280596 24665 master.cpp:5973] Successfully authenticated principal 'test-principal' at slave(471)@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.280673 24669 slave.cpp:1108] Successfully authenticated with master master@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.280725 24669 slave.cpp:1511] Will retry registration in 8.06966ms if necessary [22:45:39]W: [Step 10/10] I0619 22:45:39.280796 24667 master.cpp:4653] Registering agent at slave(471)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) with id 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 [22:45:39]W: [Step 10/10] I0619 22:45:39.280905 24663 registrar.cpp:464] Applied 1 operations in 11081ns; attempting to update the 'registry' [22:45:39]W: [Step 10/10] I0619 22:45:39.281116 24669 log.cpp:577] Attempting to append 395 bytes to the log [22:45:39]W: [Step 10/10] I0619 22:45:39.281182 24664 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 [22:45:39]W: [Step 10/10] I0619 22:45:39.281452 24663 replica.cpp:537] Replica received write request for position 3 from (18575)@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.283769 24663 leveldb.cpp:341] Persisting action (414 bytes) to leveldb took 2.297945ms [22:45:39]W: [Step 10/10] I0619 22:45:39.283785 24663 replica.cpp:712] Persisted action at 3 [22:45:39]W: [Step 10/10] I0619 22:45:39.283993 24663 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 [22:45:39]W: [Step 10/10] I0619 22:45:39.285805 24663 leveldb.cpp:341] Persisting action (416 bytes) to leveldb took 1.793213ms [22:45:39]W: [Step 10/10] I0619 22:45:39.285820 24663 replica.cpp:712] Persisted action at 3 [22:45:39]W: [Step 10/10] I0619 22:45:39.285826 24663 replica.cpp:697] Replica learned APPEND action at position 3 [22:45:39]W: [Step 10/10] I0619 22:45:39.286088 24668 registrar.cpp:509] Successfully updated the 'registry' in 5.161984ms [22:45:39]W: [Step 10/10] I0619 22:45:39.286118 24666 log.cpp:596] Attempting to truncate the log to 3 [22:45:39]W: [Step 10/10] I0619 22:45:39.286172 24666 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 [22:45:39]W: [Step 10/10] I0619 22:45:39.286332 24667 slave.cpp:3747] Received ping from slave-observer(426)@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.286325 24665 master.cpp:4721] Registered agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 at slave(471)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [22:45:39]W: [Step 10/10] I0619 22:45:39.286382 24668 hierarchical.cpp:473] Added agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 (ip-172-30-2-247.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) [22:45:39]W: [Step 10/10] I0619 22:45:39.286411 24667 slave.cpp:1152] Registered with master master@172.30.2.247:42024; given agent ID 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 [22:45:39]W: [Step 10/10] I0619 22:45:39.286424 24667 fetcher.cpp:86] Clearing fetcher cache [22:45:39]W: [Step 10/10] I0619 
22:45:39.286480 24665 status_update_manager.cpp:181] Resuming sending status updates [22:45:39]W: [Step 10/10] I0619 22:45:39.286545 24669 replica.cpp:537] Replica received write request for position 4 from (18576)@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.286556 24668 hierarchical.cpp:1583] No inverse offers to send out! [22:45:39]W: [Step 10/10] I0619 22:45:39.286579 24668 hierarchical.cpp:1162] Performed allocation for agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 in 176726ns [22:45:39]W: [Step 10/10] I0619 22:45:39.286625 24667 slave.cpp:1175] Checkpointing SlaveInfo to '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/meta/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/slave.info' [22:45:39]W: [Step 10/10] I0619 22:45:39.286660 24663 master.cpp:5772] Sending 1 offers to framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 (default) at scheduler-fb3e96c6-1106-4910-80d7-e83d75960307@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.286782 24667 slave.cpp:1212] Forwarding total oversubscribed resources [22:45:39]W: [Step 10/10] I0619 22:45:39.286842 24667 master.cpp:5066] Received update of agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 at slave(471)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) with total oversubscribed resources [22:45:39]W: [Step 10/10] I0619 22:45:39.286912 24663 sched.cpp:897] Scheduler::resourceOffers took 41729ns [22:45:39]W: [Step 10/10] I0619 22:45:39.286981 24667 hierarchical.cpp:531] Agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 (ip-172-30-2-247.mesosphere.io) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) [22:45:39]W: [Step 10/10] I0619 22:45:39.287026 24667 hierarchical.cpp:1488] No allocations performed [22:45:39]W: [Step 10/10] I0619 22:45:39.287036 24667 hierarchical.cpp:1583] No inverse offers to send out! 
[22:45:39]W: [Step 10/10] I0619 22:45:39.287051 24667 hierarchical.cpp:1162] Performed allocation for agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 in 40073ns [22:45:39]W: [Step 10/10] I0619 22:45:39.287082 24647 resources.cpp:572] Parsing resources as JSON failed: cpus:1;mem:128 [22:45:39]W: [Step 10/10] Trying semicolon-delimited string format instead [22:45:39]W: [Step 10/10] I0619 22:45:39.287443 24668 master.cpp:3457] Processing ACCEPT call for offers: [ 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-O0 ] on agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 at slave(471)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) for framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 (default) at scheduler-fb3e96c6-1106-4910-80d7-e83d75960307@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.287467 24668 master.cpp:3095] Authorizing framework principal 'test-principal' to launch task 44eba68b-0f7c-437f-935f-26a52ac3f64b [22:45:39]W: [Step 10/10] I0619 22:45:39.287804 24667 master.hpp:177] Adding task 44eba68b-0f7c-437f-935f-26a52ac3f64b with resources cpus(*):1; mem(*):128 on agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 (ip-172-30-2-247.mesosphere.io) [22:45:39]W: [Step 10/10] I0619 22:45:39.287829 24667 master.cpp:3946] Launching task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 (default) at scheduler-fb3e96c6-1106-4910-80d7-e83d75960307@172.30.2.247:42024 with resources cpus(*):1; mem(*):128 on agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 at slave(471)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) [22:45:39]W: [Step 10/10] I0619 22:45:39.287947 24664 hierarchical.cpp:891] Recovered cpus(*):1; mem(*):896; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):1; mem(*):128) on agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 from framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.287981 24665 slave.cpp:1551] Got assigned task 44eba68b-0f7c-437f-935f-26a52ac3f64b for framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.287981 24664 hierarchical.cpp:928] Framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 filtered agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 for 5secs [22:45:39]W: [Step 10/10] I0619 22:45:39.288043 24665 slave.cpp:5654] Checkpointing FrameworkInfo to '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/meta/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/framework.info' [22:45:39]W: [Step 10/10] I0619 22:45:39.288200 24665 slave.cpp:5665] Checkpointing framework pid 'scheduler-fb3e96c6-1106-4910-80d7-e83d75960307@172.30.2.247:42024' to '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/meta/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/framework.pid' [22:45:39]W: [Step 10/10] I0619 22:45:39.288331 24665 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 [22:45:39]W: [Step 10/10] Trying semicolon-delimited string format instead [22:45:39]W: [Step 10/10] I0619 22:45:39.288467 24669 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.901433ms [22:45:39]W: [Step 10/10] I0619 22:45:39.288480 24669 replica.cpp:712] Persisted action at 4 [22:45:39]W: [Step 10/10] I0619 22:45:39.288480 24665 slave.cpp:1670] Launching task 44eba68b-0f7c-437f-935f-26a52ac3f64b for framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 
[22:45:39]W: [Step 10/10] I0619 22:45:39.288516 24665 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 [22:45:39]W: [Step 10/10] Trying semicolon-delimited string format instead [22:45:39]W: [Step 10/10] I0619 22:45:39.288709 24669 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 [22:45:39]W: [Step 10/10] I0619 22:45:39.288784 24665 paths.cpp:528] Trying to chown '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/executors/44eba68b-0f7c-437f-935f-26a52ac3f64b/runs/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d' to user 'root' [22:45:39]W: [Step 10/10] I0619 22:45:39.290657 24669 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 1.915037ms [22:45:39]W: [Step 10/10] I0619 22:45:39.290714 24669 leveldb.cpp:399] Deleting ~2 keys from leveldb took 27510ns [22:45:39]W: [Step 10/10] I0619 22:45:39.290726 24669 replica.cpp:712] Persisted action at 4 [22:45:39]W: [Step 10/10] I0619 22:45:39.290736 24669 replica.cpp:697] Replica learned TRUNCATE action at position 4 [22:45:39]W: [Step 10/10] I0619 22:45:39.292919 24665 slave.cpp:6136] Checkpointing ExecutorInfo to '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/meta/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/executors/44eba68b-0f7c-437f-935f-26a52ac3f64b/executor.info' [22:45:39]W: [Step 10/10] I0619 22:45:39.293200 24665 slave.cpp:5734] Launching executor 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/executors/44eba68b-0f7c-437f-935f-26a52ac3f64b/runs/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d' [22:45:39]W: [Step 10/10] I0619 22:45:39.293373 24669 containerizer.cpp:773] Starting container 'ef0b6221-1073-42f0-adfc-cfe75ddb3a5d' for executor '44eba68b-0f7c-437f-935f-26a52ac3f64b' of framework '032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000' [22:45:39]W: [Step 10/10] I0619 22:45:39.293403 24665 slave.cpp:6159] Checkpointing TaskInfo to '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/meta/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/executors/44eba68b-0f7c-437f-935f-26a52ac3f64b/runs/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d/tasks/44eba68b-0f7c-437f-935f-26a52ac3f64b/task.info' [22:45:39]W: [Step 10/10] I0619 22:45:39.293581 24665 slave.cpp:1896] Queuing task '44eba68b-0f7c-437f-935f-26a52ac3f64b' for executor '44eba68b-0f7c-437f-935f-26a52ac3f64b' of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.293622 24665 slave.cpp:920] Successfully attached file '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/executors/44eba68b-0f7c-437f-935f-26a52ac3f64b/runs/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d' [22:45:39]W: [Step 10/10] I0619 22:45:39.294059 24662 containerizer.cpp:1120] Overwriting environment variable 'LIBPROCESS_IP', original: '172.30.2.247', new: '0.0.0.0', for container ef0b6221-1073-42f0-adfc-cfe75ddb3a5d [22:45:39]W: [Step 10/10] I0619 22:45:39.294361 24664 containerizer.cpp:1267] Launching 
'mesos-containerizer' with flags '--command=""""{""""shell"""":true,""""value"""":""""\/mnt\/teamcity\/work\/4240ba9ddd0997c3\/build\/src\/mesos-executor""""}"""" --commands=""""{""""commands"""":[]}"""" --help=""""false"""" --pipe_read=""""96"""" --pipe_write=""""107"""" --sandbox=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/executors/44eba68b-0f7c-437f-935f-26a52ac3f64b/runs/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d"""" --user=""""root""""' [22:45:39]W: [Step 10/10] I0619 22:45:39.294427 24664 linux_launcher.cpp:281] Cloning child process with flags = CLONE_NEWUTS | CLONE_NEWNS [22:45:39]W: [Step 10/10] I0619 22:45:39.296911 24664 containerizer.cpp:1302] Checkpointing executor's forked pid 7982 to '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/meta/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/executors/44eba68b-0f7c-437f-935f-26a52ac3f64b/runs/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d/pids/forked.pid' [22:45:39]W: [Step 10/10] I0619 22:45:39.297456 24663 cni.cpp:683] Bind mounted '/proc/7982/ns/net' to '/var/run/mesos/isolators/network/cni/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d/ns' for container ef0b6221-1073-42f0-adfc-cfe75ddb3a5d [22:45:39]W: [Step 10/10] I0619 22:45:39.297610 24663 cni.cpp:977] Invoking CNI plugin 'mockPlugin' with network configuration '{""""args"""":{""""org.apache.mesos"""":{""""network_info"""":{""""name"""":""""__MESOS_TEST__""""}}},""""name"""":""""__MESOS_TEST__"""",""""type"""":""""mockPlugin""""}' [22:45:39]W: [Step 10/10] I0619 22:45:39.311471 24667 cni.cpp:1066] Got assigned IPv4 address '172.17.42.1/16' from CNI network '__MESOS_TEST__' for container ef0b6221-1073-42f0-adfc-cfe75ddb3a5d [22:45:39]W: [Step 10/10] I0619 22:45:39.311671 24667 cni.cpp:1217] Unmounted the network namespace handle '/var/run/mesos/isolators/network/cni/422a6a27-4327-4dc1-9a4c-7de578226eab/ns' for container 422a6a27-4327-4dc1-9a4c-7de578226eab [22:45:39]W: [Step 10/10] I0619 22:45:39.311774 24667 cni.cpp:1228] Removed the container directory '/var/run/mesos/isolators/network/cni/422a6a27-4327-4dc1-9a4c-7de578226eab' [22:45:39]W: [Step 10/10] I0619 22:45:39.311911 24667 cni.cpp:808] DNS nameservers for container ef0b6221-1073-42f0-adfc-cfe75ddb3a5d are: [22:45:39]W: [Step 10/10] nameserver 172.30.0.2 [22:45:39]W: [Step 10/10] EFailed to synchronize with agent (it's probably exited)0619 22:45:39.412294 24666 slave.cpp:4039] Container 'ef0b6221-1073-42f0-adfc-cfe75ddb3a5d' for executor '44eba68b-0f7c-437f-935f-26a52ac3f64b' of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 failed to start: Collect failed: Failed to setup hostname and network files: WARNING: Logging before InitGoogleLogging() is written to STDERR [22:45:39]W: [Step 10/10] I0619 22:45:39.352311 7998 cni.cpp:1449] Set hostname to 'ef0b6221-1073-42f0-adfc-cfe75ddb3a5d' [22:45:39]W: [Step 10/10] Mount point '/etc/hostname' does not exist on the host filesystem [22:45:39]W: [Step 10/10] [22:45:39]W: [Step 10/10] I0619 22:45:39.412427 24664 containerizer.cpp:1576] Destroying container 'ef0b6221-1073-42f0-adfc-cfe75ddb3a5d' [22:45:39]W: [Step 10/10] I0619 22:45:39.412444 24664 containerizer.cpp:1624] Waiting for the isolators to complete for container 'ef0b6221-1073-42f0-adfc-cfe75ddb3a5d' [22:45:39]W: [Step 10/10] I0619 22:45:39.413815 24662 cgroups.cpp:2676] Freezing cgroup 
/cgroup/freezer/mesos/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d [22:45:39]W: [Step 10/10] I0619 22:45:39.512704 24665 containerizer.cpp:1812] Executor for container 'ef0b6221-1073-42f0-adfc-cfe75ddb3a5d' has exited [22:45:39]W: [Step 10/10] I0619 22:45:39.516521 24662 cgroups.cpp:1409] Successfully froze cgroup /cgroup/freezer/mesos/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d after 102.685184ms [22:45:39]W: [Step 10/10] I0619 22:45:39.517462 24664 cgroups.cpp:2694] Thawing cgroup /cgroup/freezer/mesos/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d [22:45:39]W: [Step 10/10] I0619 22:45:39.518301 24662 cgroups.cpp:1438] Successfully thawed cgroup /cgroup/freezer/mesos/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d after 813824ns [22:45:39]W: [Step 10/10] I0619 22:45:39.614039 24664 cni.cpp:1217] Unmounted the network namespace handle '/var/run/mesos/isolators/network/cni/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d/ns' for container ef0b6221-1073-42f0-adfc-cfe75ddb3a5d [22:45:39]W: [Step 10/10] I0619 22:45:39.614209 24664 cni.cpp:1228] Removed the container directory '/var/run/mesos/isolators/network/cni/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d' [22:45:39]W: [Step 10/10] I0619 22:45:39.614532 24662 provisioner.cpp:411] Ignoring destroy request for unknown container ef0b6221-1073-42f0-adfc-cfe75ddb3a5d [22:45:39]W: [Step 10/10] I0619 22:45:39.614665 24668 slave.cpp:4152] Executor '44eba68b-0f7c-437f-935f-26a52ac3f64b' of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 exited with status 1 [22:45:39]W: [Step 10/10] I0619 22:45:39.614742 24668 slave.cpp:3267] Handling status update TASK_FAILED (UUID: addda641-2dda-4d21-bc97-f2d8f9e28fe7) for task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 from @0.0.0.0:0 [22:45:39]W: [Step 10/10] I0619 22:45:39.614964 24669 slave.cpp:6074] Terminating task 44eba68b-0f7c-437f-935f-26a52ac3f64b [22:45:39]W: [Step 10/10] W0619 22:45:39.615054 24669 containerizer.cpp:1418] Ignoring update for unknown container: ef0b6221-1073-42f0-adfc-cfe75ddb3a5d [22:45:39]W: [Step 10/10] I0619 22:45:39.615197 24668 status_update_manager.cpp:320] Received status update TASK_FAILED (UUID: addda641-2dda-4d21-bc97-f2d8f9e28fe7) for task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.615211 24668 status_update_manager.cpp:497] Creating StatusUpdate stream for task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.615552 24668 status_update_manager.cpp:824] Checkpointing UPDATE for status update TASK_FAILED (UUID: addda641-2dda-4d21-bc97-f2d8f9e28fe7) for task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.623800 24668 status_update_manager.cpp:374] Forwarding update TASK_FAILED (UUID: addda641-2dda-4d21-bc97-f2d8f9e28fe7) for task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 to the agent [22:45:39]W: [Step 10/10] I0619 22:45:39.623888 24662 slave.cpp:3665] Forwarding the update TASK_FAILED (UUID: addda641-2dda-4d21-bc97-f2d8f9e28fe7) for task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 to master@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.623988 24662 slave.cpp:3559] Status update manager successfully handled status update TASK_FAILED (UUID: addda641-2dda-4d21-bc97-f2d8f9e28fe7) for task 
44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.624162 24668 master.cpp:5211] Status update TASK_FAILED (UUID: addda641-2dda-4d21-bc97-f2d8f9e28fe7) for task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 from agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 at slave(471)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) [22:45:39]W: [Step 10/10] I0619 22:45:39.624181 24668 master.cpp:5259] Forwarding status update TASK_FAILED (UUID: addda641-2dda-4d21-bc97-f2d8f9e28fe7) for task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.624243 24668 master.cpp:6871] Updating the state of task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 (latest state: TASK_FAILED, status update state: TASK_FAILED) [22:45:39]W: [Step 10/10] I0619 22:45:39.624356 24666 hierarchical.cpp:891] Recovered cpus(*):1; mem(*):128 (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 from framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.624442 24669 sched.cpp:1005] Scheduler::statusUpdate took 38999ns [22:45:39]W: [Step 10/10] I0619 22:45:39.624614 24664 master.cpp:4365] Processing ACKNOWLEDGE call addda641-2dda-4d21-bc97-f2d8f9e28fe7 for task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 (default) at scheduler-fb3e96c6-1106-4910-80d7-e83d75960307@172.30.2.247:42024 on agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 [22:45:39] : [Step 10/10] ../../src/tests/containerizer/cni_isolator_tests.cpp:487: Failure [22:45:39]W: [Step 10/10] I0619 22:45:39.624634 24664 master.cpp:6937] Removing task 44eba68b-0f7c-437f-935f-26a52ac3f64b with resources cpus(*):1; mem(*):128 of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 on agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 at slave(471)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) [22:45:39] : [Step 10/10] Value of: statusRunning->state() [22:45:39] : [Step 10/10] Actual: TASK_FAILED [22:45:39]W: [Step 10/10] I0619 22:45:39.624804 24665 status_update_manager.cpp:392] Received status update acknowledgement (UUID: addda641-2dda-4d21-bc97-f2d8f9e28fe7) for task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39] : [Step 10/10] Expected: TASK_RUNNING [22:45:39]W: [Step 10/10] I0619 22:45:39.624847 24665 status_update_manager.cpp:824] Checkpointing ACK for status update TASK_FAILED (UUID: addda641-2dda-4d21-bc97-f2d8f9e28fe7) for task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.628197 24665 status_update_manager.cpp:528] Cleaning up status update stream for task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.628276 24665 slave.cpp:2653] Status update manager successfully handled status update acknowledgement (UUID: addda641-2dda-4d21-bc97-f2d8f9e28fe7) for task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.628291 24665 slave.cpp:6115] Completing task 44eba68b-0f7c-437f-935f-26a52ac3f64b [22:45:39]W: [Step 10/10] I0619 22:45:39.628304 24665 slave.cpp:4256] 
Cleaning up executor '44eba68b-0f7c-437f-935f-26a52ac3f64b' of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.628487 24669 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/executors/44eba68b-0f7c-437f-935f-26a52ac3f64b/runs/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d' for gc 6.99999272627556days in the future [22:45:39]W: [Step 10/10] I0619 22:45:39.628530 24669 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/executors/44eba68b-0f7c-437f-935f-26a52ac3f64b' for gc 6.99999272575111days in the future [22:45:39]W: [Step 10/10] I0619 22:45:39.628592 24663 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/meta/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/executors/44eba68b-0f7c-437f-935f-26a52ac3f64b/runs/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d' for gc 6.99999272505778days in the future [22:45:39]W: [Step 10/10] I0619 22:45:39.628623 24665 slave.cpp:4344] Cleaning up framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.628635 24663 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/meta/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/executors/44eba68b-0f7c-437f-935f-26a52ac3f64b' for gc 6.9999927244563days in the future [22:45:39]W: [Step 10/10] I0619 22:45:39.628679 24663 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000' for gc 6.99999272379556days in the future [22:45:39]W: [Step 10/10] I0619 22:45:39.628686 24667 status_update_manager.cpp:282] Closing status update streams for framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.628693 24665 slave.cpp:839] Agent terminating [22:45:39]W: [Step 10/10] I0619 22:45:39.628718 24663 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/meta/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000' for gc 6.9999927235763days in the future [22:45:39]W: [Step 10/10] I0619 22:45:39.628746 24665 master.cpp:1367] Agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 at slave(471)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) disconnected [22:45:39]W: [Step 10/10] I0619 22:45:39.628760 24665 master.cpp:2899] Disconnecting agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 at slave(471)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) [22:45:39]W: [Step 10/10] I0619 22:45:39.628783 24665 master.cpp:2918] Deactivating agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 at slave(471)@172.30.2.247:42024 (ip-172-30-2-247.mesosphere.io) [22:45:39]W: [Step 10/10] I0619 22:45:39.628832 24665 hierarchical.cpp:560] Agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 deactivated [22:45:39]W: [Step 10/10] I0619 22:45:39.629724 24647 containerizer.cpp:201] Using isolation: network/cni,filesystem/posix [22:45:39]W: [Step 10/10] I0619 22:45:39.633214 24647 linux_launcher.cpp:101] Using /cgroup/freezer as the freezer hierarchy for the Linux launcher [22:45:39]W: [Step 10/10] I0619 
22:45:39.634099 24647 cluster.cpp:432] Creating default 'local' authorizer [22:45:39]W: [Step 10/10] I0619 22:45:39.634585 24664 slave.cpp:203] Agent started on 472)@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.634599 24664 slave.cpp:204] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""true"""" --authenticatee=""""crammd5"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --http_credentials=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""network/cni"""" --launcher_dir=""""/mnt/teamcity/work/4240ba9ddd0997c3/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --network_cni_config_dir=""""/tmp/ghfuib/configs"""" --network_cni_plugins_dir=""""/tmp/ghfuib/plugins"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ"""" [22:45:39]W: [Step 10/10] I0619 22:45:39.634822 24664 credentials.hpp:86] Loading credential for authentication from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/credential' [22:45:39]W: [Step 10/10] I0619 22:45:39.634886 24664 slave.cpp:341] Agent using credential for: test-principal [22:45:39]W: [Step 10/10] I0619 22:45:39.634897 24664 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/http_credentials' [22:45:39]W: [Step 10/10] I0619 22:45:39.634953 24664 slave.cpp:393] Using default 'basic' HTTP 
authenticator [22:45:39]W: [Step 10/10] I0619 22:45:39.635114 24664 resources.cpp:572] Parsing resources as JSON failed: cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000] [22:45:39]W: [Step 10/10] Trying semicolon-delimited string format instead [22:45:39]W: [Step 10/10] I0619 22:45:39.635422 24664 slave.cpp:592] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [22:45:39]W: [Step 10/10] I0619 22:45:39.635444 24664 slave.cpp:600] Agent attributes: [ ] [22:45:39]W: [Step 10/10] I0619 22:45:39.635449 24664 slave.cpp:605] Agent hostname: ip-172-30-2-247.mesosphere.io [22:45:39]W: [Step 10/10] I0619 22:45:39.635941 24668 state.cpp:57] Recovering state from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/meta' [22:45:39]W: [Step 10/10] I0619 22:45:39.635964 24668 state.cpp:697] No checkpointed resources found at '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/meta/resources/resources.info' [22:45:39]W: [Step 10/10] W0619 22:45:39.636117 24669 master.cpp:4232] Cannot kill task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 (default) at scheduler-fb3e96c6-1106-4910-80d7-e83d75960307@172.30.2.247:42024 because it is unknown; performing reconciliation [22:45:39]W: [Step 10/10] I0619 22:45:39.636142 24669 master.cpp:5510] Performing explicit task state reconciliation for 1 tasks of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 (default) at scheduler-fb3e96c6-1106-4910-80d7-e83d75960307@172.30.2.247:42024 [22:45:39] : [Step 10/10] ../../src/tests/containerizer/cni_isolator_tests.cpp:504: Failure [22:45:39]W: [Step 10/10] I0619 22:45:39.636162 24669 master.cpp:5600] Sending explicit reconciliation state TASK_LOST for task 44eba68b-0f7c-437f-935f-26a52ac3f64b of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 (default) at scheduler-fb3e96c6-1106-4910-80d7-e83d75960307@172.30.2.247:42024 [22:45:39] : [Step 10/10] Value of: statusKilled->state() [22:45:39]W: [Step 10/10] I0619 22:45:39.636317 24669 sched.cpp:1005] Scheduler::statusUpdate took 28596ns [22:45:39] : [Step 10/10] Actual: TASK_LOST [22:45:39] : [Step 10/10] Expected: TASK_KILLED [22:45:39]W: [Step 10/10] I0619 22:45:39.636548 24647 sched.cpp:1964] Asked to stop the driver [22:45:39]W: [Step 10/10] I0619 22:45:39.636605 24669 sched.cpp:1167] Stopping framework '032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000' [22:45:39]W: [Step 10/10] I0619 22:45:39.636728 24662 master.cpp:6342] Processing TEARDOWN call for framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 (default) at scheduler-fb3e96c6-1106-4910-80d7-e83d75960307@172.30.2.247:42024 [22:45:39]W: [Step 10/10] W0619 22:45:39.636739 24668 state.cpp:544] Failed to find executor libprocess pid/http marker file [22:45:39]W: [Step 10/10] I0619 22:45:39.636747 24662 master.cpp:6354] Removing framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 (default) at scheduler-fb3e96c6-1106-4910-80d7-e83d75960307@172.30.2.247:42024 [22:45:39]W: [Step 10/10] I0619 22:45:39.636823 24664 hierarchical.cpp:375] Deactivated framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.636973 24667 hierarchical.cpp:326] Removed framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.637154 24663 fetcher.cpp:86] Clearing fetcher cache [22:45:39]W: [Step 10/10] I0619 22:45:39.637192 24663 slave.cpp:4933] Recovering framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.637208 24663 
slave.cpp:5858] Recovering executor '44eba68b-0f7c-437f-935f-26a52ac3f64b' of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.637316 24663 slave.cpp:6074] Terminating task 44eba68b-0f7c-437f-935f-26a52ac3f64b [22:45:39]W: [Step 10/10] I0619 22:45:39.637331 24663 slave.cpp:6115] Completing task 44eba68b-0f7c-437f-935f-26a52ac3f64b [22:45:39]W: [Step 10/10] I0619 22:45:39.637421 24666 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/executors/44eba68b-0f7c-437f-935f-26a52ac3f64b/runs/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d' for gc 6.99999262296296days in the future [22:45:39]W: [Step 10/10] I0619 22:45:39.637454 24666 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/meta/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/executors/44eba68b-0f7c-437f-935f-26a52ac3f64b/runs/ef0b6221-1073-42f0-adfc-cfe75ddb3a5d' for gc 6.99999262252444days in the future [22:45:39]W: [Step 10/10] I0619 22:45:39.637460 24663 slave.cpp:4344] Cleaning up framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.637477 24666 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/executors/44eba68b-0f7c-437f-935f-26a52ac3f64b' for gc 6.99999262231407days in the future [22:45:39]W: [Step 10/10] I0619 22:45:39.637514 24666 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/meta/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000/executors/44eba68b-0f7c-437f-935f-26a52ac3f64b' for gc 6.99999262212741days in the future [22:45:39]W: [Step 10/10] I0619 22:45:39.637534 24667 status_update_manager.cpp:282] Closing status update streams for framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.637547 24666 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000' for gc 6.99999262155259days in the future [22:45:39]W: [Step 10/10] I0619 22:45:39.637574 24666 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_SlaveRecovery_UbK9bJ/meta/slaves/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0/frameworks/032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000' for gc 6.99999262134222days in the future [22:45:39]W: [Step 10/10] I0619 22:45:39.637648 24668 status_update_manager.cpp:200] Recovering status update manager [22:45:39]W: [Step 10/10] I0619 22:45:39.637660 24668 status_update_manager.cpp:208] Recovering executor '44eba68b-0f7c-437f-935f-26a52ac3f64b' of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 [22:45:39]W: [Step 10/10] I0619 22:45:39.637676 24668 status_update_manager.cpp:233] Skipping recovering updates of executor '44eba68b-0f7c-437f-935f-26a52ac3f64b' of framework 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-0000 because its latest run ef0b6221-1073-42f0-adfc-cfe75ddb3a5d is completed [22:45:39]W: [Step 10/10] I0619 22:45:39.637729 24663 slave.cpp:839] Agent terminating [22:45:39]W: [Step 10/10] I0619 22:45:39.639163 24647 master.cpp:1214] Master terminating [22:45:39]W: [Step 10/10] I0619 22:45:39.639279 24667 
hierarchical.cpp:505] Removed agent 032cd99a-1cdc-42d4-b94a-f7b00f37fb52-S0 [22:45:39] : [Step 10/10] [ FAILED ] CniIsolatorTest.ROOT_SlaveRecovery (418 ms) ",0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5670","06/21/2016 02:46:46",2,"MemoryPressureMesosTest.CGROUPS_ROOT_SlaveRecovery is flaky. """""," [03:36:29] : [Step 10/10] [ RUN ] MemoryPressureMesosTest.CGROUPS_ROOT_SlaveRecovery [03:36:29]W: [Step 10/10] I0618 03:36:29.461802 2797 cluster.cpp:155] Creating default 'local' authorizer [03:36:29]W: [Step 10/10] I0618 03:36:29.469468 2797 leveldb.cpp:174] Opened db in 7.527163ms [03:36:29]W: [Step 10/10] I0618 03:36:29.470188 2797 leveldb.cpp:181] Compacted db in 699544ns [03:36:29]W: [Step 10/10] I0618 03:36:29.470206 2797 leveldb.cpp:196] Created db iterator in 4293ns [03:36:29]W: [Step 10/10] I0618 03:36:29.470211 2797 leveldb.cpp:202] Seeked to beginning of db in 535ns [03:36:29]W: [Step 10/10] I0618 03:36:29.470216 2797 leveldb.cpp:271] Iterated through 0 keys in the db in 321ns [03:36:29]W: [Step 10/10] I0618 03:36:29.470230 2797 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned [03:36:29]W: [Step 10/10] I0618 03:36:29.470510 2815 recover.cpp:451] Starting replica recovery [03:36:29]W: [Step 10/10] I0618 03:36:29.470592 2817 recover.cpp:477] Replica is in EMPTY status [03:36:29]W: [Step 10/10] I0618 03:36:29.471029 2813 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (19800)@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.471139 2816 recover.cpp:197] Received a recover response from a replica in EMPTY status [03:36:29]W: [Step 10/10] I0618 03:36:29.471271 2818 recover.cpp:568] Updating replica status to STARTING [03:36:29]W: [Step 10/10] I0618 03:36:29.471606 2811 master.cpp:382] Master 6d44b7c1-ac0b-4409-97df-a53fa2e39d09 (ip-172-30-2-29.mesosphere.io) started on 172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.471619 2811 master.cpp:384] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/baXWq5/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/baXWq5/master"""" --zk_session_timeout=""""10secs"""" [03:36:29]W: [Step 10/10] I0618 03:36:29.471745 2811 master.cpp:434] Master only allowing authenticated frameworks to register [03:36:29]W: [Step 10/10] I0618 03:36:29.471753 2811 master.cpp:448] Master only allowing authenticated agents to 
register [03:36:29]W: [Step 10/10] I0618 03:36:29.471757 2811 master.cpp:461] Master only allowing authenticated HTTP frameworks to register [03:36:29]W: [Step 10/10] I0618 03:36:29.471761 2811 credentials.hpp:37] Loading credentials for authentication from '/tmp/baXWq5/credentials' [03:36:29]W: [Step 10/10] I0618 03:36:29.471829 2811 master.cpp:506] Using default 'crammd5' authenticator [03:36:29]W: [Step 10/10] I0618 03:36:29.471868 2811 master.cpp:578] Using default 'basic' HTTP authenticator [03:36:29]W: [Step 10/10] I0618 03:36:29.471941 2811 master.cpp:658] Using default 'basic' HTTP framework authenticator [03:36:29]W: [Step 10/10] I0618 03:36:29.471977 2811 master.cpp:705] Authorization enabled [03:36:29]W: [Step 10/10] I0618 03:36:29.472034 2817 hierarchical.cpp:142] Initialized hierarchical allocator process [03:36:29]W: [Step 10/10] I0618 03:36:29.472038 2814 whitelist_watcher.cpp:77] No whitelist given [03:36:29]W: [Step 10/10] I0618 03:36:29.472506 2811 master.cpp:1969] The newly elected leader is master@172.30.2.29:37328 with id 6d44b7c1-ac0b-4409-97df-a53fa2e39d09 [03:36:29]W: [Step 10/10] I0618 03:36:29.472522 2811 master.cpp:1982] Elected as the leading master! [03:36:29]W: [Step 10/10] I0618 03:36:29.472527 2811 master.cpp:1669] Recovering from registrar [03:36:29]W: [Step 10/10] I0618 03:36:29.472573 2812 registrar.cpp:332] Recovering registrar [03:36:29]W: [Step 10/10] I0618 03:36:29.473511 2816 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 2.195002ms [03:36:29]W: [Step 10/10] I0618 03:36:29.473527 2816 replica.cpp:320] Persisted replica status to STARTING [03:36:29]W: [Step 10/10] I0618 03:36:29.473578 2816 recover.cpp:477] Replica is in STARTING status [03:36:29]W: [Step 10/10] I0618 03:36:29.473877 2815 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (19803)@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.473989 2814 recover.cpp:197] Received a recover response from a replica in STARTING status [03:36:29]W: [Step 10/10] I0618 03:36:29.474126 2817 recover.cpp:568] Updating replica status to VOTING [03:36:29]W: [Step 10/10] I0618 03:36:29.474735 2811 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 547332ns [03:36:29]W: [Step 10/10] I0618 03:36:29.474748 2811 replica.cpp:320] Persisted replica status to VOTING [03:36:29]W: [Step 10/10] I0618 03:36:29.474783 2811 recover.cpp:582] Successfully joined the Paxos group [03:36:29]W: [Step 10/10] I0618 03:36:29.474829 2811 recover.cpp:466] Recover process terminated [03:36:29]W: [Step 10/10] I0618 03:36:29.474969 2818 log.cpp:553] Attempting to start the writer [03:36:29]W: [Step 10/10] I0618 03:36:29.475361 2811 replica.cpp:493] Replica received implicit promise request from (19804)@172.30.2.29:37328 with proposal 1 [03:36:29]W: [Step 10/10] I0618 03:36:29.475944 2811 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 559444ns [03:36:29]W: [Step 10/10] I0618 03:36:29.475956 2811 replica.cpp:342] Persisted promised to 1 [03:36:29]W: [Step 10/10] I0618 03:36:29.476215 2815 coordinator.cpp:238] Coordinator attempting to fill missing positions [03:36:29]W: [Step 10/10] I0618 03:36:29.476660 2816 replica.cpp:388] Replica received explicit promise request from (19805)@172.30.2.29:37328 for position 0 with proposal 2 [03:36:29]W: [Step 10/10] I0618 03:36:29.477262 2816 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 584333ns [03:36:29]W: [Step 10/10] I0618 03:36:29.477273 2816 replica.cpp:712] Persisted action at 0 
[03:36:29]W: [Step 10/10] I0618 03:36:29.477699 2815 replica.cpp:537] Replica received write request for position 0 from (19806)@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.477726 2815 leveldb.cpp:436] Reading position from leveldb took 8842ns [03:36:29]W: [Step 10/10] I0618 03:36:29.478277 2815 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 537361ns [03:36:29]W: [Step 10/10] I0618 03:36:29.478291 2815 replica.cpp:712] Persisted action at 0 [03:36:29]W: [Step 10/10] I0618 03:36:29.478569 2811 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 [03:36:29]W: [Step 10/10] I0618 03:36:29.479132 2811 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 545208ns [03:36:29]W: [Step 10/10] I0618 03:36:29.479146 2811 replica.cpp:712] Persisted action at 0 [03:36:29]W: [Step 10/10] I0618 03:36:29.479152 2811 replica.cpp:697] Replica learned NOP action at position 0 [03:36:29]W: [Step 10/10] I0618 03:36:29.479317 2814 log.cpp:569] Writer started with ending position 0 [03:36:29]W: [Step 10/10] I0618 03:36:29.479568 2811 leveldb.cpp:436] Reading position from leveldb took 8325ns [03:36:29]W: [Step 10/10] I0618 03:36:29.479786 2814 registrar.cpp:365] Successfully fetched the registry (0B) in 7.192064ms [03:36:29]W: [Step 10/10] I0618 03:36:29.479822 2814 registrar.cpp:464] Applied 1 operations in 3018ns; attempting to update the 'registry' [03:36:29]W: [Step 10/10] I0618 03:36:29.479995 2818 log.cpp:577] Attempting to append 205 bytes to the log [03:36:29]W: [Step 10/10] I0618 03:36:29.480044 2818 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 [03:36:29]W: [Step 10/10] I0618 03:36:29.480309 2811 replica.cpp:537] Replica received write request for position 1 from (19807)@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.480928 2811 leveldb.cpp:341] Persisting action (224 bytes) to leveldb took 596433ns [03:36:29]W: [Step 10/10] I0618 03:36:29.480942 2811 replica.cpp:712] Persisted action at 1 [03:36:29]W: [Step 10/10] I0618 03:36:29.481148 2815 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 [03:36:29]W: [Step 10/10] I0618 03:36:29.481710 2815 leveldb.cpp:341] Persisting action (226 bytes) to leveldb took 545656ns [03:36:29]W: [Step 10/10] I0618 03:36:29.481722 2815 replica.cpp:712] Persisted action at 1 [03:36:29]W: [Step 10/10] I0618 03:36:29.481727 2815 replica.cpp:697] Replica learned APPEND action at position 1 [03:36:29]W: [Step 10/10] I0618 03:36:29.481958 2816 registrar.cpp:509] Successfully updated the 'registry' in 2.119168ms [03:36:29]W: [Step 10/10] I0618 03:36:29.482014 2816 registrar.cpp:395] Successfully recovered registrar [03:36:29]W: [Step 10/10] I0618 03:36:29.482045 2817 log.cpp:596] Attempting to truncate the log to 1 [03:36:29]W: [Step 10/10] I0618 03:36:29.482117 2817 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 [03:36:29]W: [Step 10/10] I0618 03:36:29.482166 2816 master.cpp:1777] Recovered 0 agents from the Registry (166B) ; allowing 10mins for agents to re-register [03:36:29]W: [Step 10/10] I0618 03:36:29.482177 2817 hierarchical.cpp:169] Skipping recovery of hierarchical allocator: nothing to recover [03:36:29]W: [Step 10/10] I0618 03:36:29.482404 2817 replica.cpp:537] Replica received write request for position 2 from (19808)@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.482975 2817 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 552763ns [03:36:29]W: [Step 10/10] I0618 
03:36:29.482986 2817 replica.cpp:712] Persisted action at 2 [03:36:29]W: [Step 10/10] I0618 03:36:29.483301 2813 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 [03:36:29]W: [Step 10/10] I0618 03:36:29.483870 2813 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 547529ns [03:36:29]W: [Step 10/10] I0618 03:36:29.483896 2813 leveldb.cpp:399] Deleting ~1 keys from leveldb took 12161ns [03:36:29]W: [Step 10/10] I0618 03:36:29.483904 2813 replica.cpp:712] Persisted action at 2 [03:36:29]W: [Step 10/10] I0618 03:36:29.483911 2813 replica.cpp:697] Replica learned TRUNCATE action at position 2 [03:36:29]W: [Step 10/10] I0618 03:36:29.492995 2797 containerizer.cpp:201] Using isolation: cgroups/mem,filesystem/posix,network/cni [03:36:29]W: [Step 10/10] I0618 03:36:29.496548 2797 linux_launcher.cpp:101] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher [03:36:29]W: [Step 10/10] I0618 03:36:29.503572 2797 cluster.cpp:432] Creating default 'local' authorizer [03:36:29]W: [Step 10/10] I0618 03:36:29.503936 2817 slave.cpp:203] Agent started on 488)@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.503952 2817 slave.cpp:204] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""true"""" --authenticatee=""""crammd5"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos_test_ecfecccd-6714-4ec7-b5eb-a3071b772617"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --http_credentials=""""/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""cgroups/mem"""" --launcher_dir=""""/mnt/teamcity/work/4240ba9ddd0997c3/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" 
--revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL"""" [03:36:29]W: [Step 10/10] I0618 03:36:29.504148 2817 credentials.hpp:86] Loading credential for authentication from '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/credential' [03:36:29]W: [Step 10/10] I0618 03:36:29.504189 2817 slave.cpp:341] Agent using credential for: test-principal [03:36:29]W: [Step 10/10] I0618 03:36:29.504199 2817 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/http_credentials' [03:36:29]W: [Step 10/10] I0618 03:36:29.504245 2817 slave.cpp:393] Using default 'basic' HTTP authenticator [03:36:29]W: [Step 10/10] I0618 03:36:29.504410 2797 sched.cpp:224] Version: 1.0.0 [03:36:29]W: [Step 10/10] I0618 03:36:29.504416 2817 resources.cpp:572] Parsing resources as JSON failed: cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000] [03:36:29]W: [Step 10/10] Trying semicolon-delimited string format instead [03:36:29]W: [Step 10/10] I0618 03:36:29.504580 2818 sched.cpp:328] New master detected at master@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.504613 2818 sched.cpp:394] Authenticating with master master@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.504622 2818 sched.cpp:401] Using default CRAM-MD5 authenticatee [03:36:29]W: [Step 10/10] I0618 03:36:29.504649 2817 slave.cpp:592] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [03:36:29]W: [Step 10/10] I0618 03:36:29.504673 2817 slave.cpp:600] Agent attributes: [ ] [03:36:29]W: [Step 10/10] I0618 03:36:29.504678 2817 slave.cpp:605] Agent hostname: ip-172-30-2-29.mesosphere.io [03:36:29]W: [Step 10/10] I0618 03:36:29.504703 2816 authenticatee.cpp:121] Creating new client SASL connection [03:36:29]W: [Step 10/10] I0618 03:36:29.504830 2818 master.cpp:5943] Authenticating scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.504887 2816 authenticator.cpp:414] Starting authentication session for crammd5_authenticatee(991)@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.504982 2811 authenticator.cpp:98] Creating new server SASL connection [03:36:29]W: [Step 10/10] I0618 03:36:29.505004 2816 state.cpp:57] Recovering state from '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/meta' [03:36:29]W: [Step 10/10] I0618 03:36:29.505105 2813 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 [03:36:29]W: [Step 10/10] I0618 03:36:29.505131 2813 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' [03:36:29]W: [Step 10/10] I0618 03:36:29.505138 2818 status_update_manager.cpp:200] Recovering status update manager [03:36:29]W: [Step 10/10] I0618 03:36:29.505167 2813 authenticator.cpp:204] Received SASL authentication start [03:36:29]W: [Step 10/10] I0618 03:36:29.505200 2813 authenticator.cpp:326] Authentication requires more steps [03:36:29]W: [Step 10/10] I0618 03:36:29.505200 2814 containerizer.cpp:514] Recovering containerizer [03:36:29]W: [Step 10/10] I0618 03:36:29.505241 2813 authenticatee.cpp:259] Received SASL authentication step [03:36:29]W: 
[Step 10/10] I0618 03:36:29.505300 2812 authenticator.cpp:232] Received SASL authentication step [03:36:29]W: [Step 10/10] I0618 03:36:29.505317 2812 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-29.mesosphere.io' server FQDN: 'ip-172-30-2-29.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false [03:36:29]W: [Step 10/10] I0618 03:36:29.505323 2812 auxprop.cpp:179] Looking up auxiliary property '*userPassword' [03:36:29]W: [Step 10/10] I0618 03:36:29.505331 2812 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' [03:36:29]W: [Step 10/10] I0618 03:36:29.505337 2812 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-29.mesosphere.io' server FQDN: 'ip-172-30-2-29.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true [03:36:29]W: [Step 10/10] I0618 03:36:29.505342 2812 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true [03:36:29]W: [Step 10/10] I0618 03:36:29.505347 2812 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true [03:36:29]W: [Step 10/10] I0618 03:36:29.505355 2812 authenticator.cpp:318] Authentication success [03:36:29]W: [Step 10/10] I0618 03:36:29.505399 2813 authenticatee.cpp:299] Authentication success [03:36:29]W: [Step 10/10] I0618 03:36:29.505421 2811 authenticator.cpp:432] Authentication session cleanup for crammd5_authenticatee(991)@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.505436 2812 master.cpp:5973] Successfully authenticated principal 'test-principal' at scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.505534 2816 sched.cpp:484] Successfully authenticated with master master@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.505553 2816 sched.cpp:800] Sending SUBSCRIBE call to master@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.505591 2816 sched.cpp:833] Will retry registration in 11.319315ms if necessary [03:36:29]W: [Step 10/10] I0618 03:36:29.505672 2815 master.cpp:2539] Received SUBSCRIBE call for framework 'default' at scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.505702 2815 master.cpp:2008] Authorizing framework principal 'test-principal' to receive offers for role '*' [03:36:29]W: [Step 10/10] I0618 03:36:29.505854 2818 master.cpp:2615] Subscribing framework default with checkpointing enabled and capabilities [ ] [03:36:29]W: [Step 10/10] I0618 03:36:29.506031 2818 sched.cpp:723] Framework registered with 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:29]W: [Step 10/10] I0618 03:36:29.506050 2816 hierarchical.cpp:264] Added framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:29]W: [Step 10/10] I0618 03:36:29.506072 2816 hierarchical.cpp:1488] No allocations performed [03:36:29]W: [Step 10/10] I0618 03:36:29.506073 2818 sched.cpp:737] Scheduler::registered took 28711ns [03:36:29]W: [Step 10/10] I0618 03:36:29.506093 2816 hierarchical.cpp:1583] No inverse offers to send out! 
[03:36:29]W: [Step 10/10] I0618 03:36:29.506126 2816 hierarchical.cpp:1139] Performed allocation for 0 agents in 59667ns [03:36:29]W: [Step 10/10] I0618 03:36:29.506428 2818 provisioner.cpp:253] Provisioner recovery complete [03:36:29]W: [Step 10/10] I0618 03:36:29.506570 2815 slave.cpp:4845] Finished recovery [03:36:29]W: [Step 10/10] I0618 03:36:29.506747 2815 slave.cpp:5017] Querying resource estimator for oversubscribable resources [03:36:29]W: [Step 10/10] I0618 03:36:29.506878 2813 slave.cpp:967] New master detected at master@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.506886 2814 status_update_manager.cpp:174] Pausing sending status updates [03:36:29]W: [Step 10/10] I0618 03:36:29.506903 2813 slave.cpp:1029] Authenticating with master master@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.506924 2813 slave.cpp:1040] Using default CRAM-MD5 authenticatee [03:36:29]W: [Step 10/10] I0618 03:36:29.506976 2813 slave.cpp:1002] Detecting new master [03:36:29]W: [Step 10/10] I0618 03:36:29.506989 2816 authenticatee.cpp:121] Creating new client SASL connection [03:36:29]W: [Step 10/10] I0618 03:36:29.507069 2813 slave.cpp:5031] Received oversubscribable resources from the resource estimator [03:36:29]W: [Step 10/10] I0618 03:36:29.507145 2815 master.cpp:5943] Authenticating slave(488)@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.507202 2811 authenticator.cpp:414] Starting authentication session for crammd5_authenticatee(992)@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.507264 2817 authenticator.cpp:98] Creating new server SASL connection [03:36:29]W: [Step 10/10] I0618 03:36:29.507374 2817 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 [03:36:29]W: [Step 10/10] I0618 03:36:29.507387 2817 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' [03:36:29]W: [Step 10/10] I0618 03:36:29.507433 2813 authenticator.cpp:204] Received SASL authentication start [03:36:29]W: [Step 10/10] I0618 03:36:29.507467 2813 authenticator.cpp:326] Authentication requires more steps [03:36:29]W: [Step 10/10] I0618 03:36:29.507511 2813 authenticatee.cpp:259] Received SASL authentication step [03:36:29]W: [Step 10/10] I0618 03:36:29.507578 2811 authenticator.cpp:232] Received SASL authentication step [03:36:29]W: [Step 10/10] I0618 03:36:29.507597 2811 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-29.mesosphere.io' server FQDN: 'ip-172-30-2-29.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false [03:36:29]W: [Step 10/10] I0618 03:36:29.507606 2811 auxprop.cpp:179] Looking up auxiliary property '*userPassword' [03:36:29]W: [Step 10/10] I0618 03:36:29.507617 2811 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' [03:36:29]W: [Step 10/10] I0618 03:36:29.507629 2811 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-29.mesosphere.io' server FQDN: 'ip-172-30-2-29.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true [03:36:29]W: [Step 10/10] I0618 03:36:29.507640 2811 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true [03:36:29]W: [Step 10/10] I0618 03:36:29.507648 2811 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true [03:36:29]W: [Step 10/10] I0618 03:36:29.507686 2811 authenticator.cpp:318] 
Authentication success [03:36:29]W: [Step 10/10] I0618 03:36:29.507750 2817 authenticatee.cpp:299] Authentication success [03:36:29]W: [Step 10/10] I0618 03:36:29.507766 2811 authenticator.cpp:432] Authentication session cleanup for crammd5_authenticatee(992)@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.507786 2813 master.cpp:5973] Successfully authenticated principal 'test-principal' at slave(488)@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.507863 2817 slave.cpp:1108] Successfully authenticated with master master@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.507910 2817 slave.cpp:1511] Will retry registration in 10.588836ms if necessary [03:36:29]W: [Step 10/10] I0618 03:36:29.507966 2812 master.cpp:4653] Registering agent at slave(488)@172.30.2.29:37328 (ip-172-30-2-29.mesosphere.io) with id 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 [03:36:29]W: [Step 10/10] I0618 03:36:29.508059 2817 registrar.cpp:464] Applied 1 operations in 13429ns; attempting to update the 'registry' [03:36:29]W: [Step 10/10] I0618 03:36:29.508244 2812 log.cpp:577] Attempting to append 390 bytes to the log [03:36:29]W: [Step 10/10] I0618 03:36:29.508296 2817 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 [03:36:29]W: [Step 10/10] I0618 03:36:29.508546 2815 replica.cpp:537] Replica received write request for position 3 from (19831)@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.509158 2815 leveldb.cpp:341] Persisting action (409 bytes) to leveldb took 589901ns [03:36:29]W: [Step 10/10] I0618 03:36:29.509171 2815 replica.cpp:712] Persisted action at 3 [03:36:29]W: [Step 10/10] I0618 03:36:29.509403 2815 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 [03:36:29]W: [Step 10/10] I0618 03:36:29.509980 2815 leveldb.cpp:341] Persisting action (411 bytes) to leveldb took 558737ns [03:36:29]W: [Step 10/10] I0618 03:36:29.509992 2815 replica.cpp:712] Persisted action at 3 [03:36:29]W: [Step 10/10] I0618 03:36:29.509999 2815 replica.cpp:697] Replica learned APPEND action at position 3 [03:36:29]W: [Step 10/10] I0618 03:36:29.510262 2818 registrar.cpp:509] Successfully updated the 'registry' in 2.178048ms [03:36:29]W: [Step 10/10] I0618 03:36:29.510313 2811 log.cpp:596] Attempting to truncate the log to 3 [03:36:29]W: [Step 10/10] I0618 03:36:29.510375 2817 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 [03:36:29]W: [Step 10/10] I0618 03:36:29.510486 2818 slave.cpp:3747] Received ping from slave-observer(447)@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.510519 2816 master.cpp:4721] Registered agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 at slave(488)@172.30.2.29:37328 (ip-172-30-2-29.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [03:36:29]W: [Step 10/10] I0618 03:36:29.510540 2818 slave.cpp:1152] Registered with master master@172.30.2.29:37328; given agent ID 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 [03:36:29]W: [Step 10/10] I0618 03:36:29.510577 2818 fetcher.cpp:86] Clearing fetcher cache [03:36:29]W: [Step 10/10] I0618 03:36:29.510577 2815 hierarchical.cpp:473] Added agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 (ip-172-30-2-29.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) [03:36:29]W: [Step 10/10] I0618 03:36:29.510639 2811 replica.cpp:537] Replica received write request for position 4 from (19832)@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.510658 2816 
status_update_manager.cpp:181] Resuming sending status updates [03:36:29]W: [Step 10/10] I0618 03:36:29.510730 2815 hierarchical.cpp:1583] No inverse offers to send out! [03:36:29]W: [Step 10/10] I0618 03:36:29.510747 2815 hierarchical.cpp:1162] Performed allocation for agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 in 127305ns [03:36:29]W: [Step 10/10] I0618 03:36:29.510766 2818 slave.cpp:1175] Checkpointing SlaveInfo to '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/meta/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/slave.info' [03:36:29]W: [Step 10/10] I0618 03:36:29.510848 2816 master.cpp:5772] Sending 1 offers to framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 (default) at scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.510892 2818 slave.cpp:1212] Forwarding total oversubscribed resources [03:36:29]W: [Step 10/10] I0618 03:36:29.510956 2818 master.cpp:5066] Received update of agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 at slave(488)@172.30.2.29:37328 (ip-172-30-2-29.mesosphere.io) with total oversubscribed resources [03:36:29]W: [Step 10/10] I0618 03:36:29.510987 2817 sched.cpp:897] Scheduler::resourceOffers took 30391ns [03:36:29]W: [Step 10/10] I0618 03:36:29.511080 2816 hierarchical.cpp:531] Agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 (ip-172-30-2-29.mesosphere.io) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) [03:36:29]W: [Step 10/10] I0618 03:36:29.511124 2816 hierarchical.cpp:1488] No allocations performed [03:36:29]W: [Step 10/10] I0618 03:36:29.511132 2797 resources.cpp:572] Parsing resources as JSON failed: cpus:1;mem:256;disk:1024 [03:36:29]W: [Step 10/10] Trying semicolon-delimited string format instead [03:36:29]W: [Step 10/10] I0618 03:36:29.511133 2816 hierarchical.cpp:1583] No inverse offers to send out! 
[03:36:29]W: [Step 10/10] I0618 03:36:29.511167 2816 hierarchical.cpp:1162] Performed allocation for agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 in 57933ns [03:36:29]W: [Step 10/10] I0618 03:36:29.511201 2811 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 542938ns [03:36:29]W: [Step 10/10] I0618 03:36:29.511214 2811 replica.cpp:712] Persisted action at 4 [03:36:29]W: [Step 10/10] I0618 03:36:29.511431 2818 master.cpp:3457] Processing ACCEPT call for offers: [ 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-O0 ] on agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 at slave(488)@172.30.2.29:37328 (ip-172-30-2-29.mesosphere.io) for framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 (default) at scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.511461 2818 master.cpp:3095] Authorizing framework principal 'test-principal' to launch task e9fcbad2-73bf-409e-9f71-023b826b5286 [03:36:29]W: [Step 10/10] I0618 03:36:29.511560 2816 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 [03:36:29]W: [Step 10/10] I0618 03:36:29.511827 2811 master.hpp:177] Adding task e9fcbad2-73bf-409e-9f71-023b826b5286 with resources cpus(*):1; mem(*):256; disk(*):1024 on agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 (ip-172-30-2-29.mesosphere.io) [03:36:29]W: [Step 10/10] I0618 03:36:29.511859 2811 master.cpp:3946] Launching task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 (default) at scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328 with resources cpus(*):1; mem(*):256; disk(*):1024 on agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 at slave(488)@172.30.2.29:37328 (ip-172-30-2-29.mesosphere.io) [03:36:29]W: [Step 10/10] I0618 03:36:29.511968 2814 slave.cpp:1551] Got assigned task e9fcbad2-73bf-409e-9f71-023b826b5286 for framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:29]W: [Step 10/10] I0618 03:36:29.511984 2815 hierarchical.cpp:891] Recovered cpus(*):1; mem(*):768; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):1; mem(*):256; disk(*):1024) on agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 from framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:29]W: [Step 10/10] I0618 03:36:29.512009 2815 hierarchical.cpp:928] Framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 filtered agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 for 5secs [03:36:29]W: [Step 10/10] I0618 03:36:29.512022 2814 slave.cpp:5654] Checkpointing FrameworkInfo to '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/meta/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000/framework.info' [03:36:29]W: [Step 10/10] I0618 03:36:29.512127 2816 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 544409ns [03:36:29]W: [Step 10/10] I0618 03:36:29.512138 2814 slave.cpp:5665] Checkpointing framework pid 'scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328' to '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/meta/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000/framework.pid' [03:36:29]W: [Step 10/10] I0618 03:36:29.512153 2816 leveldb.cpp:399] Deleting ~2 keys from leveldb took 13134ns [03:36:29]W: [Step 10/10] I0618 03:36:29.512162 2816 replica.cpp:712] Persisted action at 4 [03:36:29]W: [Step 10/10] I0618 03:36:29.512167 2816 replica.cpp:697] 
Replica learned TRUNCATE action at position 4 [03:36:29]W: [Step 10/10] I0618 03:36:29.512245 2814 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 [03:36:29]W: [Step 10/10] Trying semicolon-delimited string format instead [03:36:29]W: [Step 10/10] I0618 03:36:29.512377 2814 slave.cpp:1670] Launching task e9fcbad2-73bf-409e-9f71-023b826b5286 for framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:29]W: [Step 10/10] I0618 03:36:29.512408 2814 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 [03:36:29]W: [Step 10/10] Trying semicolon-delimited string format instead [03:36:29]W: [Step 10/10] I0618 03:36:29.512596 2814 paths.cpp:528] Trying to chown '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000/executors/e9fcbad2-73bf-409e-9f71-023b826b5286/runs/8a5ba23c-d1d2-4708-ab2f-40a6c269ef82' to user 'root' [03:36:29]W: [Step 10/10] I0618 03:36:29.517411 2814 slave.cpp:6136] Checkpointing ExecutorInfo to '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/meta/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000/executors/e9fcbad2-73bf-409e-9f71-023b826b5286/executor.info' [03:36:29]W: [Step 10/10] I0618 03:36:29.517659 2814 slave.cpp:5734] Launching executor e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000/executors/e9fcbad2-73bf-409e-9f71-023b826b5286/runs/8a5ba23c-d1d2-4708-ab2f-40a6c269ef82' [03:36:29]W: [Step 10/10] I0618 03:36:29.517853 2814 slave.cpp:6159] Checkpointing TaskInfo to '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/meta/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000/executors/e9fcbad2-73bf-409e-9f71-023b826b5286/runs/8a5ba23c-d1d2-4708-ab2f-40a6c269ef82/tasks/e9fcbad2-73bf-409e-9f71-023b826b5286/task.info' [03:36:29]W: [Step 10/10] I0618 03:36:29.517861 2818 containerizer.cpp:773] Starting container '8a5ba23c-d1d2-4708-ab2f-40a6c269ef82' for executor 'e9fcbad2-73bf-409e-9f71-023b826b5286' of framework '6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000' [03:36:29]W: [Step 10/10] I0618 03:36:29.518013 2814 slave.cpp:1896] Queuing task 'e9fcbad2-73bf-409e-9f71-023b826b5286' for executor 'e9fcbad2-73bf-409e-9f71-023b826b5286' of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:29]W: [Step 10/10] I0618 03:36:29.518056 2814 slave.cpp:920] Successfully attached file '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000/executors/e9fcbad2-73bf-409e-9f71-023b826b5286/runs/8a5ba23c-d1d2-4708-ab2f-40a6c269ef82' [03:36:29]W: [Step 10/10] I0618 03:36:29.519455 2817 mem.cpp:602] Started listening for OOM events for container 8a5ba23c-d1d2-4708-ab2f-40a6c269ef82 [03:36:29]W: [Step 10/10] I0618 03:36:29.519815 2817 mem.cpp:722] Started listening on low memory pressure events for container 8a5ba23c-d1d2-4708-ab2f-40a6c269ef82 [03:36:29]W: [Step 10/10] I0618 03:36:29.520133 2817 mem.cpp:722] Started listening on medium memory 
pressure events for container 8a5ba23c-d1d2-4708-ab2f-40a6c269ef82 [03:36:29]W: [Step 10/10] I0618 03:36:29.520447 2817 mem.cpp:722] Started listening on critical memory pressure events for container 8a5ba23c-d1d2-4708-ab2f-40a6c269ef82 [03:36:29]W: [Step 10/10] I0618 03:36:29.520769 2817 mem.cpp:353] Updated 'memory.soft_limit_in_bytes' to 288MB for container 8a5ba23c-d1d2-4708-ab2f-40a6c269ef82 [03:36:29]W: [Step 10/10] I0618 03:36:29.521339 2817 mem.cpp:388] Updated 'memory.limit_in_bytes' to 288MB for container 8a5ba23c-d1d2-4708-ab2f-40a6c269ef82 [03:36:29]W: [Step 10/10] I0618 03:36:29.521926 2816 containerizer.cpp:1267] Launching 'mesos-containerizer' with flags '--command=""""{""""shell"""":true,""""value"""":""""\/mnt\/teamcity\/work\/4240ba9ddd0997c3\/build\/src\/mesos-executor""""}"""" --commands=""""{""""commands"""":[]}"""" --help=""""false"""" --pipe_read=""""119"""" --pipe_write=""""120"""" --sandbox=""""/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000/executors/e9fcbad2-73bf-409e-9f71-023b826b5286/runs/8a5ba23c-d1d2-4708-ab2f-40a6c269ef82"""" --user=""""root""""' [03:36:29]W: [Step 10/10] I0618 03:36:29.521984 2816 linux_launcher.cpp:281] Cloning child process with flags = [03:36:29]W: [Step 10/10] I0618 03:36:29.544052 2816 containerizer.cpp:1302] Checkpointing executor's forked pid 20673 to '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/meta/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000/executors/e9fcbad2-73bf-409e-9f71-023b826b5286/runs/8a5ba23c-d1d2-4708-ab2f-40a6c269ef82/pids/forked.pid' [03:36:29]W: [Step 10/10] WARNING: Logging before InitGoogleLogging() is written to STDERR [03:36:29]W: [Step 10/10] I0618 03:36:29.603862 20687 process.cpp:1060] libprocess is initialized on 172.30.2.29:44617 with 8 worker threads [03:36:29]W: [Step 10/10] I0618 03:36:29.605692 20687 logging.cpp:199] Logging to STDERR [03:36:29]W: [Step 10/10] I0618 03:36:29.606240 20687 exec.cpp:161] Version: 1.0.0 [03:36:29]W: [Step 10/10] I0618 03:36:29.606302 20704 exec.cpp:211] Executor started at: executor(1)@172.30.2.29:44617 with pid 20687 [03:36:29]W: [Step 10/10] I0618 03:36:29.606724 2814 slave.cpp:2884] Got registration for executor 'e9fcbad2-73bf-409e-9f71-023b826b5286' of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 from executor(1)@172.30.2.29:44617 [03:36:29]W: [Step 10/10] I0618 03:36:29.606885 2814 slave.cpp:2970] Checkpointing executor pid 'executor(1)@172.30.2.29:44617' to '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/meta/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000/executors/e9fcbad2-73bf-409e-9f71-023b826b5286/runs/8a5ba23c-d1d2-4708-ab2f-40a6c269ef82/pids/libprocess.pid' [03:36:29]W: [Step 10/10] I0618 03:36:29.607306 20703 exec.cpp:236] Executor registered on agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 [03:36:29]W: [Step 10/10] I0618 03:36:29.607925 2815 mem.cpp:353] Updated 'memory.soft_limit_in_bytes' to 288MB for container 8a5ba23c-d1d2-4708-ab2f-40a6c269ef82 [03:36:29]W: [Step 10/10] I0618 03:36:29.608141 20703 exec.cpp:248] Executor::registered took 89576ns [03:36:29]W: [Step 10/10] I0618 03:36:29.608538 2816 slave.cpp:2061] Sending queued task 'e9fcbad2-73bf-409e-9f71-023b826b5286' to executor 'e9fcbad2-73bf-409e-9f71-023b826b5286' of 
framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 at executor(1)@172.30.2.29:44617 [03:36:29]W: [Step 10/10] I0618 03:36:29.608767 20705 exec.cpp:323] Executor asked to run task 'e9fcbad2-73bf-409e-9f71-023b826b5286' [03:36:29]W: [Step 10/10] I0618 03:36:29.608811 20705 exec.cpp:332] Executor::launchTask took 26475ns [03:36:29] : [Step 10/10] Received SUBSCRIBED event [03:36:29] : [Step 10/10] Subscribed executor on ip-172-30-2-29.mesosphere.io [03:36:29] : [Step 10/10] Received LAUNCH event [03:36:29] : [Step 10/10] Starting task e9fcbad2-73bf-409e-9f71-023b826b5286 [03:36:29] : [Step 10/10] Forked command at 20710 [03:36:29] : [Step 10/10] sh -c 'while true; do dd count=512 bs=1M if=/dev/zero of=./temp; done' [03:36:29]W: [Step 10/10] I0618 03:36:29.611716 20705 exec.cpp:546] Executor sending status update TASK_RUNNING (UUID: bea75e2e-9827-4410-9864-288f29c0a618) for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:29]W: [Step 10/10] I0618 03:36:29.611974 2815 slave.cpp:3267] Handling status update TASK_RUNNING (UUID: bea75e2e-9827-4410-9864-288f29c0a618) for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 from executor(1)@172.30.2.29:44617 [03:36:29]W: [Step 10/10] I0618 03:36:29.612499 2818 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: bea75e2e-9827-4410-9864-288f29c0a618) for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:29]W: [Step 10/10] I0618 03:36:29.612527 2818 status_update_manager.cpp:497] Creating StatusUpdate stream for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:29]W: [Step 10/10] I0618 03:36:29.612751 2818 status_update_manager.cpp:824] Checkpointing UPDATE for status update TASK_RUNNING (UUID: bea75e2e-9827-4410-9864-288f29c0a618) for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:29]W: [Step 10/10] I0618 03:36:29.725725 2818 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: bea75e2e-9827-4410-9864-288f29c0a618) for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 to the agent [03:36:29]W: [Step 10/10] I0618 03:36:29.725908 2817 slave.cpp:3665] Forwarding the update TASK_RUNNING (UUID: bea75e2e-9827-4410-9864-288f29c0a618) for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 to master@172.30.2.29:37328 [03:36:29]W: [Step 10/10] I0618 03:36:29.725999 2817 slave.cpp:3559] Status update manager successfully handled status update TASK_RUNNING (UUID: bea75e2e-9827-4410-9864-288f29c0a618) for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:29]W: [Step 10/10] I0618 03:36:29.726016 2817 slave.cpp:3575] Sending acknowledgement for status update TASK_RUNNING (UUID: bea75e2e-9827-4410-9864-288f29c0a618) for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 to executor(1)@172.30.2.29:44617 [03:36:29]W: [Step 10/10] I0618 03:36:29.726124 2813 master.cpp:5211] Status update TASK_RUNNING (UUID: bea75e2e-9827-4410-9864-288f29c0a618) for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 from agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 at slave(488)@172.30.2.29:37328 (ip-172-30-2-29.mesosphere.io) [03:36:29]W: [Step 10/10] 
I0618 03:36:29.726157 2813 master.cpp:5259] Forwarding status update TASK_RUNNING (UUID: bea75e2e-9827-4410-9864-288f29c0a618) for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:29]W: [Step 10/10] I0618 03:36:29.726238 2813 master.cpp:6871] Updating the state of task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) [03:36:29]W: [Step 10/10] I0618 03:36:29.726300 20701 exec.cpp:369] Executor received status update acknowledgement bea75e2e-9827-4410-9864-288f29c0a618 for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:29]W: [Step 10/10] I0618 03:36:29.726363 2818 sched.cpp:1005] Scheduler::statusUpdate took 77055ns [03:36:29]W: [Step 10/10] I0618 03:36:29.726517 2814 master.cpp:4365] Processing ACKNOWLEDGE call bea75e2e-9827-4410-9864-288f29c0a618 for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 (default) at scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328 on agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 [03:36:29]W: [Step 10/10] I0618 03:36:29.726757 2816 status_update_manager.cpp:392] Received status update acknowledgement (UUID: bea75e2e-9827-4410-9864-288f29c0a618) for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:29]W: [Step 10/10] I0618 03:36:29.726812 2816 status_update_manager.cpp:824] Checkpointing ACK for status update TASK_RUNNING (UUID: bea75e2e-9827-4410-9864-288f29c0a618) for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:30]W: [Step 10/10] I0618 03:36:30.472790 2817 hierarchical.cpp:1674] Filtered offer with cpus(*):1; mem(*):768; ports(*):[31000-32000] on agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 for framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:30]W: [Step 10/10] I0618 03:36:30.472841 2817 hierarchical.cpp:1488] No allocations performed [03:36:30]W: [Step 10/10] I0618 03:36:30.472847 2817 hierarchical.cpp:1583] No inverse offers to send out! [03:36:30]W: [Step 10/10] I0618 03:36:30.472864 2817 hierarchical.cpp:1139] Performed allocation for 1 agents in 181038ns [03:36:31]W: [Step 10/10] I0618 03:36:31.474026 2814 hierarchical.cpp:1674] Filtered offer with cpus(*):1; mem(*):768; ports(*):[31000-32000] on agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 for framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:31]W: [Step 10/10] I0618 03:36:31.474076 2814 hierarchical.cpp:1488] No allocations performed [03:36:31]W: [Step 10/10] I0618 03:36:31.474083 2814 hierarchical.cpp:1583] No inverse offers to send out! [03:36:31]W: [Step 10/10] I0618 03:36:31.474097 2814 hierarchical.cpp:1139] Performed allocation for 1 agents in 180187ns [03:36:32]W: [Step 10/10] I0618 03:36:32.475332 2817 hierarchical.cpp:1674] Filtered offer with cpus(*):1; mem(*):768; ports(*):[31000-32000] on agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 for framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:32]W: [Step 10/10] I0618 03:36:32.475383 2817 hierarchical.cpp:1488] No allocations performed [03:36:32]W: [Step 10/10] I0618 03:36:32.475389 2817 hierarchical.cpp:1583] No inverse offers to send out! 
[03:36:32]W: [Step 10/10] I0618 03:36:32.475402 2817 hierarchical.cpp:1139] Performed allocation for 1 agents in 176560ns [03:36:33]W: [Step 10/10] I0618 03:36:33.476011 2814 hierarchical.cpp:1674] Filtered offer with cpus(*):1; mem(*):768; ports(*):[31000-32000] on agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 for framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:33]W: [Step 10/10] I0618 03:36:33.476059 2814 hierarchical.cpp:1488] No allocations performed [03:36:33]W: [Step 10/10] I0618 03:36:33.476066 2814 hierarchical.cpp:1583] No inverse offers to send out! [03:36:33]W: [Step 10/10] I0618 03:36:33.476080 2814 hierarchical.cpp:1139] Performed allocation for 1 agents in 194002ns [03:36:33]W: [Step 10/10] 512+0 records in [03:36:33]W: [Step 10/10] 512+0 records out [03:36:33]W: [Step 10/10] 536870912 bytes (537 MB, 512 MiB) copied, 4.23412 s, 127 MB/s [03:36:34]W: [Step 10/10] I0618 03:36:34.477355 2814 hierarchical.cpp:1674] Filtered offer with cpus(*):1; mem(*):768; ports(*):[31000-32000] on agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 for framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:34]W: [Step 10/10] I0618 03:36:34.477406 2814 hierarchical.cpp:1488] No allocations performed [03:36:34]W: [Step 10/10] I0618 03:36:34.477413 2814 hierarchical.cpp:1583] No inverse offers to send out! [03:36:34]W: [Step 10/10] I0618 03:36:34.477427 2814 hierarchical.cpp:1139] Performed allocation for 1 agents in 184403ns [03:36:35]W: [Step 10/10] I0618 03:36:35.477726 2811 hierarchical.cpp:1583] No inverse offers to send out! [03:36:35]W: [Step 10/10] I0618 03:36:35.477774 2811 hierarchical.cpp:1139] Performed allocation for 1 agents in 202326ns [03:36:35]W: [Step 10/10] I0618 03:36:35.477824 2818 master.cpp:5772] Sending 1 offers to framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 (default) at scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328 [03:36:35]W: [Step 10/10] I0618 03:36:35.477948 2818 sched.cpp:897] Scheduler::resourceOffers took 9712ns [03:36:36]W: [Step 10/10] I0618 03:36:36.478219 2814 hierarchical.cpp:1488] No allocations performed [03:36:36]W: [Step 10/10] I0618 03:36:36.478235 2814 hierarchical.cpp:1583] No inverse offers to send out! [03:36:36]W: [Step 10/10] I0618 03:36:36.478245 2814 hierarchical.cpp:1139] Performed allocation for 1 agents in 47187ns [03:36:37]W: [Step 10/10] I0618 03:36:37.478663 2811 hierarchical.cpp:1488] No allocations performed [03:36:37]W: [Step 10/10] I0618 03:36:37.478678 2811 hierarchical.cpp:1583] No inverse offers to send out! [03:36:37]W: [Step 10/10] I0618 03:36:37.478693 2811 hierarchical.cpp:1139] Performed allocation for 1 agents in 45629ns [03:36:38]W: [Step 10/10] I0618 03:36:38.479481 2817 hierarchical.cpp:1488] No allocations performed [03:36:38]W: [Step 10/10] I0618 03:36:38.479516 2817 hierarchical.cpp:1583] No inverse offers to send out! [03:36:38]W: [Step 10/10] I0618 03:36:38.479532 2817 hierarchical.cpp:1139] Performed allocation for 1 agents in 98966ns [03:36:39]W: [Step 10/10] I0618 03:36:39.480494 2813 hierarchical.cpp:1488] No allocations performed [03:36:39]W: [Step 10/10] I0618 03:36:39.480526 2813 hierarchical.cpp:1583] No inverse offers to send out! [03:36:39]W: [Step 10/10] I0618 03:36:39.480543 2813 hierarchical.cpp:1139] Performed allocation for 1 agents in 87017ns [03:36:40]W: [Step 10/10] I0618 03:36:40.481472 2812 hierarchical.cpp:1488] No allocations performed [03:36:40]W: [Step 10/10] I0618 03:36:40.481504 2812 hierarchical.cpp:1583] No inverse offers to send out! 
[03:36:40]W: [Step 10/10] I0618 03:36:40.481519 2812 hierarchical.cpp:1139] Performed allocation for 1 agents in 122806ns [03:36:41]W: [Step 10/10] I0618 03:36:41.482342 2813 hierarchical.cpp:1488] No allocations performed [03:36:41]W: [Step 10/10] I0618 03:36:41.482378 2813 hierarchical.cpp:1583] No inverse offers to send out! [03:36:41]W: [Step 10/10] I0618 03:36:41.482393 2813 hierarchical.cpp:1139] Performed allocation for 1 agents in 98739ns [03:36:42]W: [Step 10/10] I0618 03:36:42.483055 2817 hierarchical.cpp:1488] No allocations performed [03:36:42]W: [Step 10/10] I0618 03:36:42.483083 2817 hierarchical.cpp:1583] No inverse offers to send out! [03:36:42]W: [Step 10/10] I0618 03:36:42.483095 2817 hierarchical.cpp:1139] Performed allocation for 1 agents in 73620ns [03:36:43]W: [Step 10/10] I0618 03:36:43.483800 2811 hierarchical.cpp:1488] No allocations performed [03:36:43]W: [Step 10/10] I0618 03:36:43.483837 2811 hierarchical.cpp:1583] No inverse offers to send out! [03:36:43]W: [Step 10/10] I0618 03:36:43.483853 2811 hierarchical.cpp:1139] Performed allocation for 1 agents in 103486ns [03:36:44]W: [Step 10/10] I0618 03:36:44.484480 2818 hierarchical.cpp:1488] No allocations performed [03:36:44]W: [Step 10/10] I0618 03:36:44.484508 2818 hierarchical.cpp:1583] No inverse offers to send out! [03:36:44]W: [Step 10/10] I0618 03:36:44.484522 2818 hierarchical.cpp:1139] Performed allocation for 1 agents in 76447ns [03:36:44]W: [Step 10/10] I0618 03:36:44.507843 2815 slave.cpp:5017] Querying resource estimator for oversubscribable resources [03:36:44]W: [Step 10/10] I0618 03:36:44.507937 2815 slave.cpp:5031] Received oversubscribable resources from the resource estimator [03:36:44]W: [Step 10/10] I0618 03:36:44.511128 2812 slave.cpp:3747] Received ping from slave-observer(447)@172.30.2.29:37328 [03:36:44] : [Step 10/10] ../../src/tests/containerizer/memory_pressure_tests.cpp:263: Failure [03:36:44] : [Step 10/10] Failed to wait 15secs for _statusUpdateAcknowledgement [03:36:44]W: [Step 10/10] I0618 03:36:44.727337 2815 master.cpp:1406] Framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 (default) at scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328 disconnected [03:36:44]W: [Step 10/10] I0618 03:36:44.727363 2815 master.cpp:2840] Disconnecting framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 (default) at scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328 [03:36:44]W: [Step 10/10] I0618 03:36:44.727396 2815 master.cpp:2864] Deactivating framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 (default) at scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328 [03:36:44]W: [Step 10/10] I0618 03:36:44.727478 2814 hierarchical.cpp:375] Deactivated framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:44]W: [Step 10/10] W0618 03:36:44.727489 2815 master.hpp:1967] Master attempted to send message to disconnected framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 (default) at scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328 [03:36:44]W: [Step 10/10] I0618 03:36:44.727519 2815 master.cpp:1419] Giving framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 (default) at scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328 0ns to failover [03:36:44]W: [Step 10/10] I0618 03:36:44.727556 2814 hierarchical.cpp:891] Recovered cpus(*):1; mem(*):768; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):1; mem(*):256; disk(*):1024) on agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 
from framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:44]W: [Step 10/10] I0618 03:36:44.727741 2814 containerizer.cpp:1576] Destroying container '8a5ba23c-d1d2-4708-ab2f-40a6c269ef82' [03:36:44]W: [Step 10/10] I0618 03:36:44.728740 2813 master.cpp:5624] Framework failover timeout, removing framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 (default) at scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328 [03:36:44]W: [Step 10/10] I0618 03:36:44.728765 2813 master.cpp:6354] Removing framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 (default) at scheduler-3e992438-052b-45f0-af6a-851091145739@172.30.2.29:37328 [03:36:44]W: [Step 10/10] I0618 03:36:44.728817 2813 master.cpp:6871] Updating the state of task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 (latest state: TASK_KILLED, status update state: TASK_KILLED) [03:36:44]W: [Step 10/10] I0618 03:36:44.728827 2817 slave.cpp:2274] Asked to shut down framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 by master@172.30.2.29:37328 [03:36:44]W: [Step 10/10] I0618 03:36:44.728853 2817 slave.cpp:2299] Shutting down framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:44]W: [Step 10/10] I0618 03:36:44.728869 2817 slave.cpp:4470] Shutting down executor 'e9fcbad2-73bf-409e-9f71-023b826b5286' of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 at executor(1)@172.30.2.29:44617 [03:36:44]W: [Step 10/10] I0618 03:36:44.728896 2811 cgroups.cpp:2676] Freezing cgroup /sys/fs/cgroup/freezer/mesos_test_ecfecccd-6714-4ec7-b5eb-a3071b772617/8a5ba23c-d1d2-4708-ab2f-40a6c269ef82 [03:36:44]W: [Step 10/10] I0618 03:36:44.728937 2815 hierarchical.cpp:891] Recovered cpus(*):1; mem(*):256; disk(*):1024 (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 from framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:44] : [Step 10/10] Received SHUTDOWN event [03:36:44] : [Step 10/10] Shutting down [03:36:44]W: [Step 10/10] I0618 03:36:44.728950 2813 master.cpp:6937] Removing task e9fcbad2-73bf-409e-9f71-023b826b5286 with resources cpus(*):1; mem(*):256; disk(*):1024 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 on agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 at slave(488)@172.30.2.29:37328 (ip-172-30-2-29.mesosphere.io) [03:36:44] : [Step 10/10] Sending SIGTERM to process tree at pid 20710 [03:36:44]W: [Step 10/10] I0618 03:36:44.729131 20707 exec.cpp:410] Executor asked to shutdown [03:36:44]W: [Step 10/10] I0618 03:36:44.729141 2815 hierarchical.cpp:326] Removed framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:44]W: [Step 10/10] I0618 03:36:44.729179 20707 exec.cpp:425] Executor::shutdown took 6153ns [03:36:44]W: [Step 10/10] I0618 03:36:44.729199 20707 exec.cpp:92] Scheduling shutdown of the executor in 5secs [03:36:45]W: [Step 10/10] I0618 03:36:45.485015 2818 hierarchical.cpp:1488] No allocations performed [03:36:45]W: [Step 10/10] I0618 03:36:45.485038 2818 hierarchical.cpp:1139] Performed allocation for 1 agents in 47043ns [03:36:46]W: [Step 10/10] I0618 03:36:46.485332 2811 hierarchical.cpp:1488] No allocations performed [03:36:46]W: [Step 10/10] I0618 03:36:46.485350 2811 hierarchical.cpp:1139] Performed allocation for 1 agents in 33542ns [03:36:47]W: [Step 10/10] I0618 03:36:47.486548 2817 hierarchical.cpp:1488] No allocations performed [03:36:47]W: [Step 10/10] I0618 03:36:47.486588 2817 hierarchical.cpp:1139] Performed allocation for 1 agents in 84621ns [03:36:48]W: [Step 10/10] 
I0618 03:36:48.487707 2813 hierarchical.cpp:1488] No allocations performed [03:36:48]W: [Step 10/10] I0618 03:36:48.487751 2813 hierarchical.cpp:1139] Performed allocation for 1 agents in 83039ns [03:36:49]W: [Step 10/10] I0618 03:36:49.488706 2812 hierarchical.cpp:1488] No allocations performed [03:36:49]W: [Step 10/10] I0618 03:36:49.488745 2812 hierarchical.cpp:1139] Performed allocation for 1 agents in 78192ns [03:36:49]W: [Step 10/10] I0618 03:36:49.729018 2811 slave.cpp:4543] Killing executor 'e9fcbad2-73bf-409e-9f71-023b826b5286' of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 at executor(1)@172.30.2.29:44617 [03:36:50]W: [Step 10/10] I0618 03:36:50.489168 2817 hierarchical.cpp:1488] No allocations performed [03:36:50]W: [Step 10/10] I0618 03:36:50.489207 2817 hierarchical.cpp:1139] Performed allocation for 1 agents in 87236ns [03:36:51]W: [Step 10/10] I0618 03:36:51.369570 2818 slave.cpp:2653] Status update manager successfully handled status update acknowledgement (UUID: bea75e2e-9827-4410-9864-288f29c0a618) for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:51]W: [Step 10/10] I0618 03:36:51.430644 2813 cgroups.cpp:1409] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos_test_ecfecccd-6714-4ec7-b5eb-a3071b772617/8a5ba23c-d1d2-4708-ab2f-40a6c269ef82 after 6.70171904secs [03:36:51]W: [Step 10/10] I0618 03:36:51.431812 2818 cgroups.cpp:2694] Thawing cgroup /sys/fs/cgroup/freezer/mesos_test_ecfecccd-6714-4ec7-b5eb-a3071b772617/8a5ba23c-d1d2-4708-ab2f-40a6c269ef82 [03:36:51]W: [Step 10/10] I0618 03:36:51.432981 2817 cgroups.cpp:1438] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos_test_ecfecccd-6714-4ec7-b5eb-a3071b772617/8a5ba23c-d1d2-4708-ab2f-40a6c269ef82 after 1.140992ms [03:36:51]W: [Step 10/10] I0618 03:36:51.433709 2816 slave.cpp:3793] executor(1)@172.30.2.29:44617 exited [03:36:51]W: [Step 10/10] I0618 03:36:51.443989 2813 containerizer.cpp:1812] Executor for container '8a5ba23c-d1d2-4708-ab2f-40a6c269ef82' has exited [03:36:51]W: [Step 10/10] I0618 03:36:51.446597 2818 provisioner.cpp:411] Ignoring destroy request for unknown container 8a5ba23c-d1d2-4708-ab2f-40a6c269ef82 [03:36:51]W: [Step 10/10] I0618 03:36:51.446734 2813 slave.cpp:4152] Executor 'e9fcbad2-73bf-409e-9f71-023b826b5286' of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 terminated with signal Killed [03:36:51]W: [Step 10/10] I0618 03:36:51.446758 2813 slave.cpp:4256] Cleaning up executor 'e9fcbad2-73bf-409e-9f71-023b826b5286' of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 at executor(1)@172.30.2.29:44617 [03:36:51]W: [Step 10/10] I0618 03:36:51.446943 2812 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000/executors/e9fcbad2-73bf-409e-9f71-023b826b5286/runs/8a5ba23c-d1d2-4708-ab2f-40a6c269ef82' for gc 6.99999482767407days in the future [03:36:51]W: [Step 10/10] I0618 03:36:51.447018 2813 slave.cpp:4344] Cleaning up framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:51]W: [Step 10/10] I0618 03:36:51.447038 2812 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000/executors/e9fcbad2-73bf-409e-9f71-023b826b5286' for gc 6.9999948270963days in the future [03:36:51]W: [Step 10/10] I0618 03:36:51.447082 2816 
status_update_manager.cpp:282] Closing status update streams for framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:51]W: [Step 10/10] I0618 03:36:51.447098 2816 status_update_manager.cpp:528] Cleaning up status update stream for task e9fcbad2-73bf-409e-9f71-023b826b5286 of framework 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000 [03:36:51]W: [Step 10/10] I0618 03:36:51.447100 2812 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/meta/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000/executors/e9fcbad2-73bf-409e-9f71-023b826b5286/runs/8a5ba23c-d1d2-4708-ab2f-40a6c269ef82' for gc 6.99999482669037days in the future [03:36:51]W: [Step 10/10] I0618 03:36:51.447103 2813 slave.cpp:839] Agent terminating [03:36:51]W: [Step 10/10] I0618 03:36:51.447149 2812 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/meta/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000/executors/e9fcbad2-73bf-409e-9f71-023b826b5286' for gc 6.99999482630815days in the future [03:36:51]W: [Step 10/10] I0618 03:36:51.447190 2816 master.cpp:1367] Agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 at slave(488)@172.30.2.29:37328 (ip-172-30-2-29.mesosphere.io) disconnected [03:36:51]W: [Step 10/10] I0618 03:36:51.447209 2816 master.cpp:2899] Disconnecting agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 at slave(488)@172.30.2.29:37328 (ip-172-30-2-29.mesosphere.io) [03:36:51]W: [Step 10/10] I0618 03:36:51.447211 2812 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000' for gc 6.99999482555556days in the future [03:36:51]W: [Step 10/10] I0618 03:36:51.447237 2816 master.cpp:2918] Deactivating agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 at slave(488)@172.30.2.29:37328 (ip-172-30-2-29.mesosphere.io) [03:36:51]W: [Step 10/10] I0618 03:36:51.447254 2812 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_SlaveRecovery_MBzwwL/meta/slaves/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0/frameworks/6d44b7c1-ac0b-4409-97df-a53fa2e39d09-0000' for gc 6.99999482534815days in the future [03:36:51]W: [Step 10/10] I0618 03:36:51.447300 2816 hierarchical.cpp:560] Agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 deactivated [03:36:51]W: [Step 10/10] I0618 03:36:51.448766 2797 master.cpp:1214] Master terminating [03:36:51]W: [Step 10/10] I0618 03:36:51.448875 2814 hierarchical.cpp:505] Removed agent 6d44b7c1-ac0b-4409-97df-a53fa2e39d09-S0 [03:36:51]W: [Step 10/10] I0618 03:36:51.460062 2813 cgroups.cpp:2676] Freezing cgroup /sys/fs/cgroup/freezer/mesos_test_ecfecccd-6714-4ec7-b5eb-a3071b772617 [03:36:51]W: [Step 10/10] I0618 03:36:51.562192 2816 cgroups.cpp:1409] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos_test_ecfecccd-6714-4ec7-b5eb-a3071b772617 after 102.104064ms [03:36:51]W: [Step 10/10] I0618 03:36:51.563100 2816 cgroups.cpp:2694] Thawing cgroup /sys/fs/cgroup/freezer/mesos_test_ecfecccd-6714-4ec7-b5eb-a3071b772617 [03:36:51]W: [Step 10/10] I0618 03:36:51.564021 2815 cgroups.cpp:1438] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos_test_ecfecccd-6714-4ec7-b5eb-a3071b772617 after 901888ns [03:36:51] : [Step 10/10] [ FAILED ] MemoryPressureMesosTest.CGROUPS_ROOT_SlaveRecovery (22119 ms) 
",0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5671","06/21/2016 02:50:40",2,"MemoryPressureMesosTest.CGROUPS_ROOT_Statistics is flaky. """""," [00:48:29] : [Step 10/10] [ RUN ] MemoryPressureMesosTest.CGROUPS_ROOT_Statistics [00:48:29]W: [Step 10/10] 1+0 records in [00:48:29]W: [Step 10/10] 1+0 records out [00:48:29]W: [Step 10/10] 1048576 bytes (1.0 MB) copied, 0.000517638 s, 2.0 GB/s [00:48:30]W: [Step 10/10] I0617 00:48:30.000998 25413 cluster.cpp:155] Creating default 'local' authorizer [00:48:30]W: [Step 10/10] I0617 00:48:30.020459 25413 leveldb.cpp:174] Opened db in 19.338463ms [00:48:30]W: [Step 10/10] I0617 00:48:30.022897 25413 leveldb.cpp:181] Compacted db in 2.416906ms [00:48:30]W: [Step 10/10] I0617 00:48:30.022919 25413 leveldb.cpp:196] Created db iterator in 4037ns [00:48:30]W: [Step 10/10] I0617 00:48:30.022927 25413 leveldb.cpp:202] Seeked to beginning of db in 769ns [00:48:30]W: [Step 10/10] I0617 00:48:30.022932 25413 leveldb.cpp:271] Iterated through 0 keys in the db in 390ns [00:48:30]W: [Step 10/10] I0617 00:48:30.022944 25413 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned [00:48:30]W: [Step 10/10] I0617 00:48:30.023272 25432 recover.cpp:451] Starting replica recovery [00:48:30]W: [Step 10/10] I0617 00:48:30.023425 25434 recover.cpp:477] Replica is in EMPTY status [00:48:30]W: [Step 10/10] I0617 00:48:30.023748 25434 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (19361)@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.023849 25429 recover.cpp:197] Received a recover response from a replica in EMPTY status [00:48:30]W: [Step 10/10] I0617 00:48:30.024019 25435 recover.cpp:568] Updating replica status to STARTING [00:48:30]W: [Step 10/10] I0617 00:48:30.024338 25432 master.cpp:382] Master 0e92ffa4-4f26-4cea-84d3-9c67612de1bd (ip-172-30-2-56.mesosphere.io) started on 172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.024348 25432 master.cpp:384] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/jBjY5p/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/jBjY5p/master"""" --zk_session_timeout=""""10secs"""" [00:48:30]W: [Step 10/10] I0617 00:48:30.024502 25432 master.cpp:434] Master only allowing authenticated frameworks to register [00:48:30]W: [Step 10/10] I0617 00:48:30.024508 25432 master.cpp:448] Master only allowing 
authenticated agents to register [00:48:30]W: [Step 10/10] I0617 00:48:30.024513 25432 master.cpp:461] Master only allowing authenticated HTTP frameworks to register [00:48:30]W: [Step 10/10] I0617 00:48:30.024516 25432 credentials.hpp:37] Loading credentials for authentication from '/tmp/jBjY5p/credentials' [00:48:30]W: [Step 10/10] I0617 00:48:30.024603 25432 master.cpp:506] Using default 'crammd5' authenticator [00:48:30]W: [Step 10/10] I0617 00:48:30.024644 25432 master.cpp:578] Using default 'basic' HTTP authenticator [00:48:30]W: [Step 10/10] I0617 00:48:30.024701 25432 master.cpp:658] Using default 'basic' HTTP framework authenticator [00:48:30]W: [Step 10/10] I0617 00:48:30.024770 25432 master.cpp:705] Authorization enabled [00:48:30]W: [Step 10/10] I0617 00:48:30.024883 25435 whitelist_watcher.cpp:77] No whitelist given [00:48:30]W: [Step 10/10] I0617 00:48:30.024885 25434 hierarchical.cpp:142] Initialized hierarchical allocator process [00:48:30]W: [Step 10/10] I0617 00:48:30.025539 25433 master.cpp:1969] The newly elected leader is master@172.30.2.56:53790 with id 0e92ffa4-4f26-4cea-84d3-9c67612de1bd [00:48:30]W: [Step 10/10] I0617 00:48:30.025555 25433 master.cpp:1982] Elected as the leading master! [00:48:30]W: [Step 10/10] I0617 00:48:30.025560 25433 master.cpp:1669] Recovering from registrar [00:48:30]W: [Step 10/10] I0617 00:48:30.025611 25432 registrar.cpp:332] Recovering registrar [00:48:30]W: [Step 10/10] I0617 00:48:30.026397 25431 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 2.288187ms [00:48:30]W: [Step 10/10] I0617 00:48:30.026438 25431 replica.cpp:320] Persisted replica status to STARTING [00:48:30]W: [Step 10/10] I0617 00:48:30.026486 25431 recover.cpp:477] Replica is in STARTING status [00:48:30]W: [Step 10/10] I0617 00:48:30.026793 25432 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (19364)@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.026897 25429 recover.cpp:197] Received a recover response from a replica in STARTING status [00:48:30]W: [Step 10/10] I0617 00:48:30.027031 25428 recover.cpp:568] Updating replica status to VOTING [00:48:30]W: [Step 10/10] I0617 00:48:30.028960 25432 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.874668ms [00:48:30]W: [Step 10/10] I0617 00:48:30.028975 25432 replica.cpp:320] Persisted replica status to VOTING [00:48:30]W: [Step 10/10] I0617 00:48:30.029007 25432 recover.cpp:582] Successfully joined the Paxos group [00:48:30]W: [Step 10/10] I0617 00:48:30.029047 25432 recover.cpp:466] Recover process terminated [00:48:30]W: [Step 10/10] I0617 00:48:30.029209 25430 log.cpp:553] Attempting to start the writer [00:48:30]W: [Step 10/10] I0617 00:48:30.029614 25429 replica.cpp:493] Replica received implicit promise request from (19365)@172.30.2.56:53790 with proposal 1 [00:48:30]W: [Step 10/10] I0617 00:48:30.031486 25429 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.850474ms [00:48:30]W: [Step 10/10] I0617 00:48:30.031502 25429 replica.cpp:342] Persisted promised to 1 [00:48:30]W: [Step 10/10] I0617 00:48:30.031726 25431 coordinator.cpp:238] Coordinator attempting to fill missing positions [00:48:30]W: [Step 10/10] I0617 00:48:30.032245 25428 replica.cpp:388] Replica received explicit promise request from (19366)@172.30.2.56:53790 for position 0 with proposal 2 [00:48:30]W: [Step 10/10] I0617 00:48:30.034101 25428 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 1.831441ms [00:48:30]W: [Step 10/10] I0617 
00:48:30.034117 25428 replica.cpp:712] Persisted action at 0 [00:48:30]W: [Step 10/10] I0617 00:48:30.034561 25433 replica.cpp:537] Replica received write request for position 0 from (19367)@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.034589 25433 leveldb.cpp:436] Reading position from leveldb took 10586ns [00:48:30]W: [Step 10/10] I0617 00:48:30.036419 25433 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 1.817267ms [00:48:30]W: [Step 10/10] I0617 00:48:30.036434 25433 replica.cpp:712] Persisted action at 0 [00:48:30]W: [Step 10/10] I0617 00:48:30.036679 25429 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 [00:48:30]W: [Step 10/10] I0617 00:48:30.038661 25429 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.96521ms [00:48:30]W: [Step 10/10] I0617 00:48:30.038677 25429 replica.cpp:712] Persisted action at 0 [00:48:30]W: [Step 10/10] I0617 00:48:30.038682 25429 replica.cpp:697] Replica learned NOP action at position 0 [00:48:30]W: [Step 10/10] I0617 00:48:30.038839 25435 log.cpp:569] Writer started with ending position 0 [00:48:30]W: [Step 10/10] I0617 00:48:30.039198 25433 leveldb.cpp:436] Reading position from leveldb took 10572ns [00:48:30]W: [Step 10/10] I0617 00:48:30.039412 25433 registrar.cpp:365] Successfully fetched the registry (0B) in 13.778944ms [00:48:30]W: [Step 10/10] I0617 00:48:30.039448 25433 registrar.cpp:464] Applied 1 operations in 4778ns; attempting to update the 'registry' [00:48:30]W: [Step 10/10] I0617 00:48:30.039643 25428 log.cpp:577] Attempting to append 205 bytes to the log [00:48:30]W: [Step 10/10] I0617 00:48:30.039696 25432 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 [00:48:30]W: [Step 10/10] I0617 00:48:30.039945 25430 replica.cpp:537] Replica received write request for position 1 from (19368)@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.041738 25430 leveldb.cpp:341] Persisting action (224 bytes) to leveldb took 1.771112ms [00:48:30]W: [Step 10/10] I0617 00:48:30.041754 25430 replica.cpp:712] Persisted action at 1 [00:48:30]W: [Step 10/10] I0617 00:48:30.041977 25432 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 [00:48:30]W: [Step 10/10] I0617 00:48:30.043805 25432 leveldb.cpp:341] Persisting action (226 bytes) to leveldb took 1.810425ms [00:48:30]W: [Step 10/10] I0617 00:48:30.043820 25432 replica.cpp:712] Persisted action at 1 [00:48:30]W: [Step 10/10] I0617 00:48:30.043825 25432 replica.cpp:697] Replica learned APPEND action at position 1 [00:48:30]W: [Step 10/10] I0617 00:48:30.044040 25430 registrar.cpp:509] Successfully updated the 'registry' in 4.556032ms [00:48:30]W: [Step 10/10] I0617 00:48:30.044100 25430 registrar.cpp:395] Successfully recovered registrar [00:48:30]W: [Step 10/10] I0617 00:48:30.044124 25428 log.cpp:596] Attempting to truncate the log to 1 [00:48:30]W: [Step 10/10] I0617 00:48:30.044215 25431 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 [00:48:30]W: [Step 10/10] I0617 00:48:30.044244 25430 master.cpp:1777] Recovered 0 agents from the Registry (166B) ; allowing 10mins for agents to re-register [00:48:30]W: [Step 10/10] I0617 00:48:30.044317 25433 hierarchical.cpp:169] Skipping recovery of hierarchical allocator: nothing to recover [00:48:30]W: [Step 10/10] I0617 00:48:30.044497 25433 replica.cpp:537] Replica received write request for position 2 from (19369)@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.046368 25433 
leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.851883ms [00:48:30]W: [Step 10/10] I0617 00:48:30.046383 25433 replica.cpp:712] Persisted action at 2 [00:48:30]W: [Step 10/10] I0617 00:48:30.046583 25430 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 [00:48:30]W: [Step 10/10] I0617 00:48:30.048426 25430 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 1.821628ms [00:48:30]W: [Step 10/10] I0617 00:48:30.048455 25430 leveldb.cpp:399] Deleting ~1 keys from leveldb took 14283ns [00:48:30]W: [Step 10/10] I0617 00:48:30.048463 25430 replica.cpp:712] Persisted action at 2 [00:48:30]W: [Step 10/10] I0617 00:48:30.048468 25430 replica.cpp:697] Replica learned TRUNCATE action at position 2 [00:48:30]W: [Step 10/10] I0617 00:48:30.055145 25413 containerizer.cpp:203] Using isolation: cgroups/mem,filesystem/posix,network/cni [00:48:30]W: [Step 10/10] I0617 00:48:30.058349 25413 linux_launcher.cpp:101] Using /cgroup/freezer as the freezer hierarchy for the Linux launcher [00:48:30]W: [Step 10/10] I0617 00:48:30.069301 25413 cluster.cpp:432] Creating default 'local' authorizer [00:48:30]W: [Step 10/10] I0617 00:48:30.069707 25431 slave.cpp:203] Agent started on 485)@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.069718 25431 slave.cpp:204] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""true"""" --authenticatee=""""crammd5"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos_test_d7ff4961-cb6d-4d51-bb21-10129a5c5572"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_Statistics_AF5X0p/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_Statistics_AF5X0p/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --http_credentials=""""/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_Statistics_AF5X0p/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""cgroups/mem"""" --launcher_dir=""""/mnt/teamcity/work/4240ba9ddd0997c3/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" 
--resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_Statistics_AF5X0p"""" [00:48:30]W: [Step 10/10] I0617 00:48:30.069916 25431 credentials.hpp:86] Loading credential for authentication from '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_Statistics_AF5X0p/credential' [00:48:30]W: [Step 10/10] I0617 00:48:30.069967 25431 slave.cpp:341] Agent using credential for: test-principal [00:48:30]W: [Step 10/10] I0617 00:48:30.069984 25431 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_Statistics_AF5X0p/http_credentials' [00:48:30]W: [Step 10/10] I0617 00:48:30.070050 25431 slave.cpp:393] Using default 'basic' HTTP authenticator [00:48:30]W: [Step 10/10] I0617 00:48:30.070127 25431 resources.cpp:572] Parsing resources as JSON failed: cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000] [00:48:30]W: [Step 10/10] Trying semicolon-delimited string format instead [00:48:30]W: [Step 10/10] I0617 00:48:30.070282 25431 slave.cpp:592] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [00:48:30]W: [Step 10/10] I0617 00:48:30.070309 25431 slave.cpp:600] Agent attributes: [ ] [00:48:30]W: [Step 10/10] I0617 00:48:30.070314 25431 slave.cpp:605] Agent hostname: ip-172-30-2-56.mesosphere.io [00:48:30]W: [Step 10/10] I0617 00:48:30.070484 25413 sched.cpp:224] Version: 1.0.0 [00:48:30]W: [Step 10/10] I0617 00:48:30.070667 25433 sched.cpp:328] New master detected at master@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.070711 25429 state.cpp:57] Recovering state from '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_Statistics_AF5X0p/meta' [00:48:30]W: [Step 10/10] I0617 00:48:30.070749 25433 sched.cpp:394] Authenticating with master master@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.070758 25433 sched.cpp:401] Using default CRAM-MD5 authenticatee [00:48:30]W: [Step 10/10] I0617 00:48:30.070793 25430 status_update_manager.cpp:200] Recovering status update manager [00:48:30]W: [Step 10/10] I0617 00:48:30.070904 25432 authenticatee.cpp:121] Creating new client SASL connection [00:48:30]W: [Step 10/10] I0617 00:48:30.070914 25430 containerizer.cpp:518] Recovering containerizer [00:48:30]W: [Step 10/10] I0617 00:48:30.071049 25432 master.cpp:5943] Authenticating scheduler-21f8a988-6288-4ec1-9d6a-b66ae746896a@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.071105 25428 authenticator.cpp:414] Starting authentication session for crammd5_authenticatee(984)@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.071164 25434 authenticator.cpp:98] Creating new server SASL connection [00:48:30]W: [Step 10/10] I0617 00:48:30.071241 25434 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 [00:48:30]W: [Step 10/10] I0617 00:48:30.071254 25434 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' [00:48:30]W: [Step 10/10] I0617 00:48:30.071292 25434 authenticator.cpp:204] Received SASL authentication start [00:48:30]W: [Step 10/10] I0617 00:48:30.071336 25434 authenticator.cpp:326] Authentication requires more steps [00:48:30]W: [Step 10/10] I0617 
00:48:30.071374 25434 authenticatee.cpp:259] Received SASL authentication step [00:48:30]W: [Step 10/10] I0617 00:48:30.071553 25434 authenticator.cpp:232] Received SASL authentication step [00:48:30]W: [Step 10/10] I0617 00:48:30.071574 25434 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-56' server FQDN: 'ip-172-30-2-56' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false [00:48:30]W: [Step 10/10] I0617 00:48:30.071586 25434 auxprop.cpp:179] Looking up auxiliary property '*userPassword' [00:48:30]W: [Step 10/10] I0617 00:48:30.071594 25434 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' [00:48:30]W: [Step 10/10] I0617 00:48:30.071604 25434 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-56' server FQDN: 'ip-172-30-2-56' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true [00:48:30]W: [Step 10/10] I0617 00:48:30.071615 25434 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true [00:48:30]W: [Step 10/10] I0617 00:48:30.071619 25434 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true [00:48:30]W: [Step 10/10] I0617 00:48:30.071630 25434 authenticator.cpp:318] Authentication success [00:48:30]W: [Step 10/10] I0617 00:48:30.071684 25428 authenticator.cpp:432] Authentication session cleanup for crammd5_authenticatee(984)@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.071687 25431 authenticatee.cpp:299] Authentication success [00:48:30]W: [Step 10/10] I0617 00:48:30.071704 25434 master.cpp:5973] Successfully authenticated principal 'test-principal' at scheduler-21f8a988-6288-4ec1-9d6a-b66ae746896a@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.071826 25431 sched.cpp:484] Successfully authenticated with master master@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.071841 25431 sched.cpp:800] Sending SUBSCRIBE call to master@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.071954 25431 sched.cpp:833] Will retry registration in 731.385085ms if necessary [00:48:30]W: [Step 10/10] I0617 00:48:30.071996 25434 master.cpp:2539] Received SUBSCRIBE call for framework 'default' at scheduler-21f8a988-6288-4ec1-9d6a-b66ae746896a@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.072013 25434 master.cpp:2008] Authorizing framework principal 'test-principal' to receive offers for role '*' [00:48:30]W: [Step 10/10] I0617 00:48:30.072180 25430 master.cpp:2615] Subscribing framework default with checkpointing disabled and capabilities [ ] [00:48:30]W: [Step 10/10] I0617 00:48:30.072305 25429 hierarchical.cpp:264] Added framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:30]W: [Step 10/10] I0617 00:48:30.072326 25429 hierarchical.cpp:1488] No allocations performed [00:48:30]W: [Step 10/10] I0617 00:48:30.072335 25429 hierarchical.cpp:1583] No inverse offers to send out! 
[00:48:30]W: [Step 10/10] I0617 00:48:30.072347 25429 hierarchical.cpp:1139] Performed allocation for 0 agents in 26673ns [00:48:30]W: [Step 10/10] I0617 00:48:30.072351 25431 provisioner.cpp:253] Provisioner recovery complete [00:48:30]W: [Step 10/10] I0617 00:48:30.072371 25430 sched.cpp:723] Framework registered with 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:30]W: [Step 10/10] I0617 00:48:30.072403 25430 sched.cpp:737] Scheduler::registered took 11852ns [00:48:30]W: [Step 10/10] I0617 00:48:30.072587 25433 slave.cpp:4840] Finished recovery [00:48:30]W: [Step 10/10] I0617 00:48:30.072760 25433 slave.cpp:5012] Querying resource estimator for oversubscribable resources [00:48:30]W: [Step 10/10] I0617 00:48:30.072865 25431 status_update_manager.cpp:174] Pausing sending status updates [00:48:30]W: [Step 10/10] I0617 00:48:30.072893 25432 slave.cpp:962] New master detected at master@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.072906 25432 slave.cpp:1024] Authenticating with master master@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.072917 25432 slave.cpp:1035] Using default CRAM-MD5 authenticatee [00:48:30]W: [Step 10/10] I0617 00:48:30.072948 25432 slave.cpp:997] Detecting new master [00:48:30]W: [Step 10/10] I0617 00:48:30.072976 25432 slave.cpp:5026] Received oversubscribable resources from the resource estimator [00:48:30]W: [Step 10/10] I0617 00:48:30.072974 25435 authenticatee.cpp:121] Creating new client SASL connection [00:48:30]W: [Step 10/10] I0617 00:48:30.073099 25434 master.cpp:5943] Authenticating slave(485)@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.073142 25434 authenticator.cpp:414] Starting authentication session for crammd5_authenticatee(985)@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.073213 25431 authenticator.cpp:98] Creating new server SASL connection [00:48:30]W: [Step 10/10] I0617 00:48:30.073268 25431 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 [00:48:30]W: [Step 10/10] I0617 00:48:30.073287 25431 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' [00:48:30]W: [Step 10/10] I0617 00:48:30.073320 25431 authenticator.cpp:204] Received SASL authentication start [00:48:30]W: [Step 10/10] I0617 00:48:30.073353 25431 authenticator.cpp:326] Authentication requires more steps [00:48:30]W: [Step 10/10] I0617 00:48:30.073390 25431 authenticatee.cpp:259] Received SASL authentication step [00:48:30]W: [Step 10/10] I0617 00:48:30.073444 25435 authenticator.cpp:232] Received SASL authentication step [00:48:30]W: [Step 10/10] I0617 00:48:30.073460 25435 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-56' server FQDN: 'ip-172-30-2-56' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false [00:48:30]W: [Step 10/10] I0617 00:48:30.073465 25435 auxprop.cpp:179] Looking up auxiliary property '*userPassword' [00:48:30]W: [Step 10/10] I0617 00:48:30.073472 25435 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' [00:48:30]W: [Step 10/10] I0617 00:48:30.073477 25435 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-56' server FQDN: 'ip-172-30-2-56' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true [00:48:30]W: [Step 10/10] I0617 00:48:30.073480 25435 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true [00:48:30]W: [Step 10/10] I0617 00:48:30.073484 25435 auxprop.cpp:129] Skipping auxiliary property 
'*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true [00:48:30]W: [Step 10/10] I0617 00:48:30.073493 25435 authenticator.cpp:318] Authentication success [00:48:30]W: [Step 10/10] I0617 00:48:30.073534 25431 authenticatee.cpp:299] Authentication success [00:48:30]W: [Step 10/10] I0617 00:48:30.073561 25435 master.cpp:5973] Successfully authenticated principal 'test-principal' at slave(485)@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.073590 25433 authenticator.cpp:432] Authentication session cleanup for crammd5_authenticatee(985)@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.073698 25431 slave.cpp:1103] Successfully authenticated with master master@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.073742 25431 slave.cpp:1506] Will retry registration in 17.704164ms if necessary [00:48:30]W: [Step 10/10] I0617 00:48:30.073786 25434 master.cpp:4653] Registering agent at slave(485)@172.30.2.56:53790 (ip-172-30-2-56.mesosphere.io) with id 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 [00:48:30]W: [Step 10/10] I0617 00:48:30.073874 25434 registrar.cpp:464] Applied 1 operations in 9493ns; attempting to update the 'registry' [00:48:30]W: [Step 10/10] I0617 00:48:30.074077 25430 log.cpp:577] Attempting to append 390 bytes to the log [00:48:30]W: [Step 10/10] I0617 00:48:30.074152 25432 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 [00:48:30]W: [Step 10/10] I0617 00:48:30.074385 25431 replica.cpp:537] Replica received write request for position 3 from (19391)@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.076269 25431 leveldb.cpp:341] Persisting action (409 bytes) to leveldb took 1.86243ms [00:48:30]W: [Step 10/10] I0617 00:48:30.076284 25431 replica.cpp:712] Persisted action at 3 [00:48:30]W: [Step 10/10] I0617 00:48:30.076551 25434 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 [00:48:30]W: [Step 10/10] I0617 00:48:30.078383 25434 leveldb.cpp:341] Persisting action (411 bytes) to leveldb took 1.815955ms [00:48:30]W: [Step 10/10] I0617 00:48:30.078398 25434 replica.cpp:712] Persisted action at 3 [00:48:30]W: [Step 10/10] I0617 00:48:30.078404 25434 replica.cpp:697] Replica learned APPEND action at position 3 [00:48:30]W: [Step 10/10] I0617 00:48:30.078703 25432 registrar.cpp:509] Successfully updated the 'registry' in 4.813056ms [00:48:30]W: [Step 10/10] I0617 00:48:30.078745 25429 log.cpp:596] Attempting to truncate the log to 3 [00:48:30]W: [Step 10/10] I0617 00:48:30.078806 25433 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 [00:48:30]W: [Step 10/10] I0617 00:48:30.078909 25431 master.cpp:4721] Registered agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 at slave(485)@172.30.2.56:53790 (ip-172-30-2-56.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [00:48:30]W: [Step 10/10] I0617 00:48:30.078928 25428 slave.cpp:3742] Received ping from slave-observer(439)@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.078991 25430 hierarchical.cpp:473] Added agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 (ip-172-30-2-56.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) [00:48:30]W: [Step 10/10] I0617 00:48:30.079001 25428 slave.cpp:1147] Registered with master master@172.30.2.56:53790; given agent ID 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 [00:48:30]W: [Step 10/10] I0617 00:48:30.079020 25428 fetcher.cpp:86] Clearing fetcher cache [00:48:30]W: [Step 10/10] I0617 00:48:30.079093 
25430 hierarchical.cpp:1583] No inverse offers to send out! [00:48:30]W: [Step 10/10] I0617 00:48:30.079111 25430 hierarchical.cpp:1162] Performed allocation for agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 in 100093ns [00:48:30]W: [Step 10/10] I0617 00:48:30.079150 25435 replica.cpp:537] Replica received write request for position 4 from (19392)@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.079233 25429 master.cpp:5772] Sending 1 offers to framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 (default) at scheduler-21f8a988-6288-4ec1-9d6a-b66ae746896a@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.079259 25434 status_update_manager.cpp:181] Resuming sending status updates [00:48:30]W: [Step 10/10] I0617 00:48:30.079263 25428 slave.cpp:1170] Checkpointing SlaveInfo to '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_Statistics_AF5X0p/meta/slaves/0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0/slave.info' [00:48:30]W: [Step 10/10] I0617 00:48:30.079396 25428 slave.cpp:1207] Forwarding total oversubscribed resources [00:48:30]W: [Step 10/10] I0617 00:48:30.079427 25429 sched.cpp:897] Scheduler::resourceOffers took 25735ns [00:48:30]W: [Step 10/10] I0617 00:48:30.079448 25428 master.cpp:5066] Received update of agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 at slave(485)@172.30.2.56:53790 (ip-172-30-2-56.mesosphere.io) with total oversubscribed resources [00:48:30]W: [Step 10/10] I0617 00:48:30.079608 25413 resources.cpp:572] Parsing resources as JSON failed: cpus:1;mem:256;disk:1024 [00:48:30]W: [Step 10/10] Trying semicolon-delimited string format instead [00:48:30]W: [Step 10/10] I0617 00:48:30.079612 25434 hierarchical.cpp:531] Agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 (ip-172-30-2-56.mesosphere.io) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) [00:48:30]W: [Step 10/10] I0617 00:48:30.079645 25434 hierarchical.cpp:1488] No allocations performed [00:48:30]W: [Step 10/10] I0617 00:48:30.079651 25434 hierarchical.cpp:1583] No inverse offers to send out! 
[00:48:30]W: [Step 10/10] I0617 00:48:30.079660 25434 hierarchical.cpp:1162] Performed allocation for agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 in 26873ns [00:48:30]W: [Step 10/10] I0617 00:48:30.079957 25428 master.cpp:3457] Processing ACCEPT call for offers: [ 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-O0 ] on agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 at slave(485)@172.30.2.56:53790 (ip-172-30-2-56.mesosphere.io) for framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 (default) at scheduler-21f8a988-6288-4ec1-9d6a-b66ae746896a@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.079979 25428 master.cpp:3095] Authorizing framework principal 'test-principal' to launch task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb [00:48:30]W: [Step 10/10] I0617 00:48:30.080334 25432 master.hpp:178] Adding task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb with resources cpus(*):1; mem(*):256; disk(*):1024 on agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 (ip-172-30-2-56.mesosphere.io) [00:48:30]W: [Step 10/10] I0617 00:48:30.080365 25432 master.cpp:3946] Launching task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 (default) at scheduler-21f8a988-6288-4ec1-9d6a-b66ae746896a@172.30.2.56:53790 with resources cpus(*):1; mem(*):256; disk(*):1024 on agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 at slave(485)@172.30.2.56:53790 (ip-172-30-2-56.mesosphere.io) [00:48:30]W: [Step 10/10] I0617 00:48:30.080495 25429 hierarchical.cpp:891] Recovered cpus(*):1; mem(*):768; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):1; mem(*):256; disk(*):1024) on agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 from framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:30]W: [Step 10/10] I0617 00:48:30.080507 25428 slave.cpp:1546] Got assigned task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb for framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:30]W: [Step 10/10] I0617 00:48:30.080528 25429 hierarchical.cpp:928] Framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 filtered agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 for 5secs [00:48:30]W: [Step 10/10] I0617 00:48:30.080602 25428 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 [00:48:30]W: [Step 10/10] Trying semicolon-delimited string format instead [00:48:30]W: [Step 10/10] I0617 00:48:30.080718 25428 slave.cpp:1665] Launching task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb for framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:30]W: [Step 10/10] I0617 00:48:30.080747 25428 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 [00:48:30]W: [Step 10/10] Trying semicolon-delimited string format instead [00:48:30]W: [Step 10/10] I0617 00:48:30.081048 25428 paths.cpp:528] Trying to chown '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_Statistics_AF5X0p/slaves/0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0/frameworks/0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000/executors/497f3d0f-ed77-4aef-9326-cf76c2dcbafb/runs/9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8' to user 'root' [00:48:30]W: [Step 10/10] I0617 00:48:30.082818 25435 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 3.508394ms [00:48:30]W: [Step 10/10] I0617 00:48:30.082859 25435 replica.cpp:712] Persisted action at 4 [00:48:30]W: [Step 10/10] I0617 00:48:30.083400 25435 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 [00:48:30]W: [Step 10/10] I0617 00:48:30.085247 25435 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 1.827229ms 
[00:48:30]W: [Step 10/10] I0617 00:48:30.085294 25435 leveldb.cpp:399] Deleting ~2 keys from leveldb took 30113ns [00:48:30]W: [Step 10/10] I0617 00:48:30.085304 25435 replica.cpp:712] Persisted action at 4 [00:48:30]W: [Step 10/10] I0617 00:48:30.085310 25435 replica.cpp:697] Replica learned TRUNCATE action at position 4 [00:48:30]W: [Step 10/10] I0617 00:48:30.085690 25428 slave.cpp:5729] Launching executor 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_Statistics_AF5X0p/slaves/0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0/frameworks/0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000/executors/497f3d0f-ed77-4aef-9326-cf76c2dcbafb/runs/9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8' [00:48:30]W: [Step 10/10] I0617 00:48:30.085846 25428 slave.cpp:1891] Queuing task '497f3d0f-ed77-4aef-9326-cf76c2dcbafb' for executor '497f3d0f-ed77-4aef-9326-cf76c2dcbafb' of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:30]W: [Step 10/10] I0617 00:48:30.085849 25429 containerizer.cpp:777] Starting container '9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8' for executor '497f3d0f-ed77-4aef-9326-cf76c2dcbafb' of framework '0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000' [00:48:30]W: [Step 10/10] I0617 00:48:30.085898 25428 slave.cpp:915] Successfully attached file '/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_Statistics_AF5X0p/slaves/0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0/frameworks/0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000/executors/497f3d0f-ed77-4aef-9326-cf76c2dcbafb/runs/9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8' [00:48:30]W: [Step 10/10] I0617 00:48:30.087308 25428 mem.cpp:602] Started listening for OOM events for container 9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8 [00:48:30]W: [Step 10/10] I0617 00:48:30.087671 25428 mem.cpp:722] Started listening on low memory pressure events for container 9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8 [00:48:30]W: [Step 10/10] I0617 00:48:30.088007 25428 mem.cpp:722] Started listening on medium memory pressure events for container 9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8 [00:48:30]W: [Step 10/10] I0617 00:48:30.088412 25428 mem.cpp:722] Started listening on critical memory pressure events for container 9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8 [00:48:30]W: [Step 10/10] I0617 00:48:30.088750 25428 mem.cpp:353] Updated 'memory.soft_limit_in_bytes' to 288MB for container 9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8 [00:48:30]W: [Step 10/10] I0617 00:48:30.089221 25428 mem.cpp:388] Updated 'memory.limit_in_bytes' to 288MB for container 9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8 [00:48:30]W: [Step 10/10] I0617 00:48:30.089759 25430 containerizer.cpp:1271] Launching 'mesos-containerizer' with flags '--command=""""{""""shell"""":true,""""value"""":""""\/mnt\/teamcity\/work\/4240ba9ddd0997c3\/build\/src\/mesos-executor""""}"""" --commands=""""{""""commands"""":[]}"""" --help=""""false"""" --pipe_read=""""110"""" --pipe_write=""""111"""" --sandbox=""""/mnt/teamcity/temp/buildTmp/MemoryPressureMesosTest_CGROUPS_ROOT_Statistics_AF5X0p/slaves/0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0/frameworks/0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000/executors/497f3d0f-ed77-4aef-9326-cf76c2dcbafb/runs/9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8"""" --user=""""root""""' [00:48:30]W: [Step 10/10] I0617 00:48:30.089825 25430 linux_launcher.cpp:281] Cloning child process with flags = [00:48:30]W: [Step 10/10] WARNING: Logging before InitGoogleLogging() is written to STDERR [00:48:30]W: [Step 
10/10] I0617 00:48:30.153952 10096 process.cpp:1060] libprocess is initialized on 172.30.2.56:34658 with 8 worker threads [00:48:30]W: [Step 10/10] I0617 00:48:30.156230 10096 logging.cpp:199] Logging to STDERR [00:48:30]W: [Step 10/10] I0617 00:48:30.157129 10096 exec.cpp:161] Version: 1.0.0 [00:48:30]W: [Step 10/10] I0617 00:48:30.157197 10125 exec.cpp:211] Executor started at: executor(1)@172.30.2.56:34658 with pid 10096 [00:48:30]W: [Step 10/10] I0617 00:48:30.157687 25431 slave.cpp:2879] Got registration for executor '497f3d0f-ed77-4aef-9326-cf76c2dcbafb' of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 from executor(1)@172.30.2.56:34658 [00:48:30]W: [Step 10/10] I0617 00:48:30.158280 10129 exec.cpp:236] Executor registered on agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 [00:48:30]W: [Step 10/10] I0617 00:48:30.158689 25433 mem.cpp:353] Updated 'memory.soft_limit_in_bytes' to 288MB for container 9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8 [00:48:30]W: [Step 10/10] I0617 00:48:30.159274 25435 slave.cpp:2056] Sending queued task '497f3d0f-ed77-4aef-9326-cf76c2dcbafb' to executor '497f3d0f-ed77-4aef-9326-cf76c2dcbafb' of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 at executor(1)@172.30.2.56:34658 [00:48:30]W: [Step 10/10] I0617 00:48:30.159399 10129 exec.cpp:248] Executor::registered took 64598ns [00:48:30]W: [Step 10/10] I0617 00:48:30.159651 10128 exec.cpp:323] Executor asked to run task '497f3d0f-ed77-4aef-9326-cf76c2dcbafb' [00:48:30]W: [Step 10/10] I0617 00:48:30.159704 10128 exec.cpp:332] Executor::launchTask took 30558ns [00:48:30] : [Step 10/10] Received SUBSCRIBED event [00:48:30] : [Step 10/10] Subscribed executor on ip-172-30-2-56.mesosphere.io [00:48:30] : [Step 10/10] Received LAUNCH event [00:48:30] : [Step 10/10] Starting task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb [00:48:30] : [Step 10/10] sh -c 'while true; do dd count=512 bs=1M if=/dev/zero of=./temp; done' [00:48:30] : [Step 10/10] Forked command at 10134 [00:48:30]W: [Step 10/10] I0617 00:48:30.163949 10126 exec.cpp:546] Executor sending status update TASK_RUNNING (UUID: 3e8d37f5-2ac4-4b5e-ba0a-83215b0eaae1) for task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:30]W: [Step 10/10] I0617 00:48:30.164324 25431 slave.cpp:3262] Handling status update TASK_RUNNING (UUID: 3e8d37f5-2ac4-4b5e-ba0a-83215b0eaae1) for task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 from executor(1)@172.30.2.56:34658 [00:48:30]W: [Step 10/10] I0617 00:48:30.164824 25428 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 3e8d37f5-2ac4-4b5e-ba0a-83215b0eaae1) for task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:30]W: [Step 10/10] I0617 00:48:30.164849 25428 status_update_manager.cpp:497] Creating StatusUpdate stream for task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:30]W: [Step 10/10] I0617 00:48:30.165026 25428 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: 3e8d37f5-2ac4-4b5e-ba0a-83215b0eaae1) for task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 to the agent [00:48:30]W: [Step 10/10] I0617 00:48:30.165132 25433 slave.cpp:3660] Forwarding the update TASK_RUNNING (UUID: 3e8d37f5-2ac4-4b5e-ba0a-83215b0eaae1) for task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 to master@172.30.2.56:53790 
[00:48:30]W: [Step 10/10] I0617 00:48:30.165230 25433 slave.cpp:3554] Status update manager successfully handled status update TASK_RUNNING (UUID: 3e8d37f5-2ac4-4b5e-ba0a-83215b0eaae1) for task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:30]W: [Step 10/10] I0617 00:48:30.165251 25433 slave.cpp:3570] Sending acknowledgement for status update TASK_RUNNING (UUID: 3e8d37f5-2ac4-4b5e-ba0a-83215b0eaae1) for task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 to executor(1)@172.30.2.56:34658 [00:48:30]W: [Step 10/10] I0617 00:48:30.165329 25430 master.cpp:5211] Status update TASK_RUNNING (UUID: 3e8d37f5-2ac4-4b5e-ba0a-83215b0eaae1) for task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 from agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 at slave(485)@172.30.2.56:53790 (ip-172-30-2-56.mesosphere.io) [00:48:30]W: [Step 10/10] I0617 00:48:30.165349 25430 master.cpp:5259] Forwarding status update TASK_RUNNING (UUID: 3e8d37f5-2ac4-4b5e-ba0a-83215b0eaae1) for task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:30]W: [Step 10/10] I0617 00:48:30.165410 25430 master.cpp:6871] Updating the state of task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) [00:48:30]W: [Step 10/10] I0617 00:48:30.165560 10128 exec.cpp:369] Executor received status update acknowledgement 3e8d37f5-2ac4-4b5e-ba0a-83215b0eaae1 for task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:30]W: [Step 10/10] I0617 00:48:30.165628 25432 sched.cpp:1005] Scheduler::statusUpdate took 78385ns [00:48:30]W: [Step 10/10] I0617 00:48:30.165765 25432 master.cpp:4365] Processing ACKNOWLEDGE call 3e8d37f5-2ac4-4b5e-ba0a-83215b0eaae1 for task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 (default) at scheduler-21f8a988-6288-4ec1-9d6a-b66ae746896a@172.30.2.56:53790 on agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 [00:48:30]W: [Step 10/10] I0617 00:48:30.165927 25428 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 3e8d37f5-2ac4-4b5e-ba0a-83215b0eaae1) for task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:30]W: [Step 10/10] I0617 00:48:30.166052 25428 slave.cpp:2648] Status update manager successfully handled status update acknowledgement (UUID: 3e8d37f5-2ac4-4b5e-ba0a-83215b0eaae1) for task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:30]W: [Step 10/10] I0617 00:48:30.583686 25428 master.cpp:4269] Telling agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 at slave(485)@172.30.2.56:53790 (ip-172-30-2-56.mesosphere.io) to kill task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 (default) at scheduler-21f8a988-6288-4ec1-9d6a-b66ae746896a@172.30.2.56:53790 [00:48:30]W: [Step 10/10] I0617 00:48:30.583760 25428 slave.cpp:2086] Asked to kill task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:30]W: [Step 10/10] I0617 00:48:30.584074 10125 exec.cpp:343] Executor asked to kill task '497f3d0f-ed77-4aef-9326-cf76c2dcbafb' [00:48:30]W: [Step 10/10] I0617 00:48:30.584121 10125 exec.cpp:352] Executor::killTask took 27333ns [00:48:30]W: [Step 10/10] 
I0617 00:48:30.959868 25430 mem.cpp:625] OOM notifier is triggered for container 9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8 [00:48:30]W: [Step 10/10] I0617 00:48:30.959900 25430 mem.cpp:644] OOM detected for container 9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8 [00:48:30]W: [Step 10/10] I0617 00:48:30.960912 25430 mem.cpp:685] Memory limit exceeded: Requested: 288MB Maximum Used: 288MB [00:48:30]W: [Step 10/10] [00:48:30]W: [Step 10/10] MEMORY STATISTICS: [00:48:30]W: [Step 10/10] cache 297218048 [00:48:30]W: [Step 10/10] rss 4771840 [00:48:30]W: [Step 10/10] rss_huge 0 [00:48:30]W: [Step 10/10] mapped_file 0 [00:48:30]W: [Step 10/10] pgpgin 75849 [00:48:30]W: [Step 10/10] pgpgout 2121 [00:48:30]W: [Step 10/10] pgfault 19539 [00:48:30]W: [Step 10/10] pgmajfault 0 [00:48:30]W: [Step 10/10] inactive_anon 0 [00:48:30]W: [Step 10/10] active_anon 4771840 [00:48:30]W: [Step 10/10] inactive_file 296955904 [00:48:30]W: [Step 10/10] active_file 253952 [00:48:30]W: [Step 10/10] unevictable 0 [00:48:30]W: [Step 10/10] hierarchical_memory_limit 301989888 [00:48:30]W: [Step 10/10] total_cache 297218048 [00:48:30]W: [Step 10/10] total_rss 4771840 [00:48:30]W: [Step 10/10] total_rss_huge 0 [00:48:30]W: [Step 10/10] total_mapped_file 0 [00:48:30]W: [Step 10/10] total_pgpgin 75849 [00:48:30]W: [Step 10/10] total_pgpgout 2121 [00:48:30]W: [Step 10/10] total_pgfault 19539 [00:48:30]W: [Step 10/10] total_pgmajfault 0 [00:48:30]W: [Step 10/10] total_inactive_anon 0 [00:48:30]W: [Step 10/10] total_active_anon 4771840 [00:48:30]W: [Step 10/10] total_inactive_file 296873984 [00:48:30]W: [Step 10/10] total_active_file 253952 [00:48:30]W: [Step 10/10] total_unevictable 0 [00:48:30]W: [Step 10/10] I0617 00:48:30.961012 25430 containerizer.cpp:1833] Container 9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8 has reached its limit for resource mem(*):288 and will be terminated [00:48:30]W: [Step 10/10] I0617 00:48:30.961050 25430 containerizer.cpp:1580] Destroying container '9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8' [00:48:30]W: [Step 10/10] I0617 00:48:30.962447 25431 cgroups.cpp:2676] Freezing cgroup /cgroup/freezer/mesos_test_d7ff4961-cb6d-4d51-bb21-10129a5c5572/9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8 [00:48:45] : [Step 10/10] ../../src/tests/containerizer/memory_pressure_tests.cpp:172: Failure [00:48:45] : [Step 10/10] Failed to wait 15secs for killed [00:48:45]W: [Step 10/10] I0617 00:48:45.585013 25429 master.cpp:1406] Framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 (default) at scheduler-21f8a988-6288-4ec1-9d6a-b66ae746896a@172.30.2.56:53790 disconnected [00:48:45]W: [Step 10/10] I0617 00:48:45.585052 25429 master.cpp:2840] Disconnecting framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 (default) at scheduler-21f8a988-6288-4ec1-9d6a-b66ae746896a@172.30.2.56:53790 [00:48:45]W: [Step 10/10] I0617 00:48:45.585072 25429 master.cpp:2864] Deactivating framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 (default) at scheduler-21f8a988-6288-4ec1-9d6a-b66ae746896a@172.30.2.56:53790 [00:48:45]W: [Step 10/10] I0617 00:48:45.585110 25429 master.cpp:1419] Giving framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 (default) at scheduler-21f8a988-6288-4ec1-9d6a-b66ae746896a@172.30.2.56:53790 0ns to failover [00:48:45]W: [Step 10/10] I0617 00:48:45.585193 25432 hierarchical.cpp:375] Deactivated framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:45]W: [Step 10/10] I0617 00:48:45.585247 25431 master.cpp:5624] Framework failover timeout, removing framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 (default) at 
scheduler-21f8a988-6288-4ec1-9d6a-b66ae746896a@172.30.2.56:53790 [00:48:45]W: [Step 10/10] I0617 00:48:45.585269 25431 master.cpp:6354] Removing framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 (default) at scheduler-21f8a988-6288-4ec1-9d6a-b66ae746896a@172.30.2.56:53790 [00:48:45]W: [Step 10/10] I0617 00:48:45.585325 25431 master.cpp:6871] Updating the state of task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 (latest state: TASK_KILLED, status update state: TASK_KILLED) [00:48:45]W: [Step 10/10] I0617 00:48:45.585352 25434 slave.cpp:2269] Asked to shut down framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 by master@172.30.2.56:53790 [00:48:45] : [Step 10/10] ../../src/tests/containerizer/memory_pressure_tests.cpp:128: Failure [00:48:45]W: [Step 10/10] I0617 00:48:45.585373 25434 slave.cpp:2294] Shutting down framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:45] : [Step 10/10] Actual function call count doesn't match EXPECT_CALL(sched, statusUpdate(&driver, _))... [00:48:45] : [Step 10/10] Expected: to be called at least twice [00:48:45]W: [Step 10/10] I0617 00:48:45.585387 25434 slave.cpp:4465] Shutting down executor '497f3d0f-ed77-4aef-9326-cf76c2dcbafb' of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 at executor(1)@172.30.2.56:34658 [00:48:45] : [Step 10/10] Actual: called once - unsatisfied and active [00:48:45]W: [Step 10/10] I0617 00:48:45.585476 25429 hierarchical.cpp:891] Recovered cpus(*):1; mem(*):256; disk(*):1024 (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 from framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:48:45]W: [Step 10/10] I0617 00:48:45.585492 25431 master.cpp:6937] Removing task 497f3d0f-ed77-4aef-9326-cf76c2dcbafb with resources cpus(*):1; mem(*):256; disk(*):1024 of framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 on agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 at slave(485)@172.30.2.56:53790 (ip-172-30-2-56.mesosphere.io) [00:48:45]W: [Step 10/10] I0617 00:48:45.585698 25431 hierarchical.cpp:326] Removed framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 [00:49:00] : [Step 10/10] ../../src/tests/cluster.cpp:551: Failure [00:49:00] : [Step 10/10] Failed to wait 15secs for wait [00:49:00]W: [Step 10/10] I0617 00:49:00.596581 25413 slave.cpp:834] Agent terminating [00:49:00]W: [Step 10/10] I0617 00:49:00.596611 25413 slave.cpp:2269] Asked to shut down framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 by @0.0.0.0:0 [00:49:00]W: [Step 10/10] W0617 00:49:00.596624 25413 slave.cpp:2290] Ignoring shutdown framework 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-0000 because it is terminating [00:49:00]W: [Step 10/10] I0617 00:49:00.596742 25428 master.cpp:1367] Agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 at slave(485)@172.30.2.56:53790 (ip-172-30-2-56.mesosphere.io) disconnected [00:49:00]W: [Step 10/10] I0617 00:49:00.596761 25428 master.cpp:2899] Disconnecting agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 at slave(485)@172.30.2.56:53790 (ip-172-30-2-56.mesosphere.io) [00:49:00]W: [Step 10/10] I0617 00:49:00.596807 25428 master.cpp:2918] Deactivating agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 at slave(485)@172.30.2.56:53790 (ip-172-30-2-56.mesosphere.io) [00:49:00]W: [Step 10/10] I0617 00:49:00.596863 25428 hierarchical.cpp:560] Agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 deactivated [00:49:00]W: [Step 10/10] I0617 00:49:00.598708 25413 master.cpp:1214] Master terminating [00:49:00]W: [Step 10/10] I0617 
00:49:00.598848 25431 hierarchical.cpp:505] Removed agent 0e92ffa4-4f26-4cea-84d3-9c67612de1bd-S0 [00:49:00]W: [Step 10/10] I0617 00:49:00.601809 25433 cgroups.cpp:2676] Freezing cgroup /cgroup/freezer/mesos_test_d7ff4961-cb6d-4d51-bb21-10129a5c5572/9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8 [00:49:00]W: [Step 10/10] I0617 00:49:00.602758 25434 cgroups.cpp:1409] Successfully froze cgroup /cgroup/freezer/mesos_test_d7ff4961-cb6d-4d51-bb21-10129a5c5572/9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8 after 0ns [00:49:00]W: [Step 10/10] I0617 00:49:00.603759 25431 cgroups.cpp:2694] Thawing cgroup /cgroup/freezer/mesos_test_d7ff4961-cb6d-4d51-bb21-10129a5c5572/9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8 [00:49:00]W: [Step 10/10] I0617 00:49:00.604717 25433 cgroups.cpp:1438] Successfully thawed cgroup /cgroup/freezer/mesos_test_d7ff4961-cb6d-4d51-bb21-10129a5c5572/9b1880d4-4b70-4eaf-b72e-cf207c4ee6b8 after 0ns [00:49:00]W: [Step 10/10] E0617 00:49:00.605662 25436 process.cpp:2050] Failed to shutdown socket with fd 111: Transport endpoint is not connected [00:49:15] : [Step 10/10] ../../src/tests/mesos.cpp:937: Failure [00:49:15] : [Step 10/10] Failed to wait 15secs for cgroups::destroy(hierarchy, cgroup) [00:49:15] : [Step 10/10] [ FAILED ] MemoryPressureMesosTest.CGROUPS_ROOT_Statistics (45618 ms) ",0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5673","06/21/2016 03:02:25",3,"Port mapping isolator may cause a segfault if its bind mount root does not exist. ""A check is needed in the port mapping isolator for its bind mount root. Otherwise, a nonexistent port mapping bind mount root may cause a segmentation fault in some cases. Here is the test log: """," [00:57:42] : [Step 10/10] [----------] 11 tests from PortMappingIsolatorTest [00:57:42] : [Step 10/10] [ RUN ] PortMappingIsolatorTest.ROOT_NC_ContainerToContainerTCP [00:57:42]W: [Step 10/10] I0604 00:57:42.723029 24841 port_mapping_tests.cpp:229] Using eth0 as the public interface [00:57:42]W: [Step 10/10] I0604 00:57:42.723348 24841 port_mapping_tests.cpp:237] Using lo as the loopback interface [00:57:42]W: [Step 10/10] I0604 00:57:42.735090 24841 resources.cpp:572] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ephemeral_ports:[30001-30999];ports:[31000-32000] [00:57:42]W: [Step 10/10] Trying semicolon-delimited string format instead [00:57:42]W: [Step 10/10] I0604 00:57:42.736006 24841 port_mapping.cpp:1557] Using eth0 as the public interface [00:57:42]W: [Step 10/10] I0604 00:57:42.736331 24841 port_mapping.cpp:1582] Using lo as the loopback interface [00:57:42]W: [Step 10/10] I0604 00:57:42.737501 24841 port_mapping.cpp:1869] /proc/sys/net/ipv4/neigh/default/gc_thresh3 = '1024' [00:57:42]W: [Step 10/10] I0604 00:57:42.737545 24841 port_mapping.cpp:1869] /proc/sys/net/ipv4/neigh/default/gc_thresh1 = '128' [00:57:42]W: [Step 10/10] I0604 00:57:42.737578 24841 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_wmem = '4096 16384 4194304' [00:57:42]W: [Step 10/10] I0604 00:57:42.737608 24841 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_synack_retries = '5' [00:57:42]W: [Step 10/10] I0604 00:57:42.737637 24841 port_mapping.cpp:1869] /proc/sys/net/core/rmem_max = '212992' [00:57:42]W: [Step 10/10] I0604 00:57:42.737666 24841 port_mapping.cpp:1869] /proc/sys/net/core/somaxconn = '128' [00:57:42]W: [Step 10/10] I0604 00:57:42.737694 24841 port_mapping.cpp:1869] /proc/sys/net/core/wmem_max = '212992' [00:57:42]W: [Step 10/10] I0604 00:57:42.737720 24841 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_rmem = '4096 
87380 6291456' [00:57:42]W: [Step 10/10] I0604 00:57:42.737746 24841 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_keepalive_time = '7200' [00:57:42]W: [Step 10/10] I0604 00:57:42.737772 24841 port_mapping.cpp:1869] /proc/sys/net/ipv4/neigh/default/gc_thresh2 = '512' [00:57:42]W: [Step 10/10] I0604 00:57:42.737798 24841 port_mapping.cpp:1869] /proc/sys/net/core/netdev_max_backlog = '1000' [00:57:42]W: [Step 10/10] I0604 00:57:42.737828 24841 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_keepalive_intvl = '75' [00:57:42]W: [Step 10/10] I0604 00:57:42.737854 24841 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_keepalive_probes = '9' [00:57:42]W: [Step 10/10] I0604 00:57:42.737879 24841 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_max_syn_backlog = '512' [00:57:42]W: [Step 10/10] I0604 00:57:42.737905 24841 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_retries2 = '15' [00:57:42]W: [Step 10/10] F0604 00:57:42.737968 24841 port_mapping_tests.cpp:448] CHECK_SOME(isolator): Failed to get realpath for bind mount root '/var/run/netns': Not found [00:57:42]W: [Step 10/10] *** Check failure stack trace: *** [00:57:42]W: [Step 10/10] @ 0x7f8bd52583d2 google::LogMessage::Fail() [00:57:42]W: [Step 10/10] @ 0x7f8bd525832b google::LogMessage::SendToLog() [00:57:42]W: [Step 10/10] @ 0x7f8bd5257d21 google::LogMessage::Flush() [00:57:42]W: [Step 10/10] @ 0x7f8bd525ab92 google::LogMessageFatal::~LogMessageFatal() [00:57:42]W: [Step 10/10] @ 0xa62171 _CheckFatal::~_CheckFatal() [00:57:42]W: [Step 10/10] @ 0x1931b17 mesos::internal::tests::PortMappingIsolatorTest_ROOT_NC_ContainerToContainerTCP_Test::TestBody() [00:57:42]W: [Step 10/10] @ 0x19e17b6 testing::internal::HandleSehExceptionsInMethodIfSupported<>() [00:57:42]W: [Step 10/10] @ 0x19dc864 testing::internal::HandleExceptionsInMethodIfSupported<>() [00:57:42]W: [Step 10/10] @ 0x19bd2ae testing::Test::Run() [00:57:42]W: [Step 10/10] @ 0x19bda66 testing::TestInfo::Run() [00:57:42]W: [Step 10/10] @ 0x19be0b7 testing::TestCase::Run() [00:57:42]W: [Step 10/10] @ 0x19c4bf5 testing::internal::UnitTestImpl::RunAllTests() [00:57:42]W: [Step 10/10] @ 0x19e247d testing::internal::HandleSehExceptionsInMethodIfSupported<>() [00:57:42]W: [Step 10/10] @ 0x19dd3a4 testing::internal::HandleExceptionsInMethodIfSupported<>() [00:57:42]W: [Step 10/10] @ 0x19c38d1 testing::UnitTest::Run() [00:57:42]W: [Step 10/10] @ 0xfd28cb RUN_ALL_TESTS() [00:57:42]W: [Step 10/10] @ 0xfd24b1 main [00:57:42]W: [Step 10/10] @ 0x7f8bceb89580 __libc_start_main [00:57:42]W: [Step 10/10] @ 0xa607c9 _start [00:57:43]W: [Step 10/10] /mnt/teamcity/temp/agentTmp/custom_script659125926639545396: line 3: 24841 Aborted (core dumped) GLOG_v=1 ./bin/mesos-tests.sh --verbose --gtest_filter=""""$GTEST_FILTER"""" [00:57:43]W: [Step 10/10] Process exited with code 134 ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5674","06/21/2016 03:19:03",3,"Port mapping isolator may fail in 'isolate' method. ""Port mapping isolator may return a failure in the 'isolate' method if a symlink to the network namespace handle using that ContainerId already exists. We should overwrite the symlink if it exists. 
This affects a couple test failures: Here is an example failure test log: """," PortMappingIsolatorTest.ROOT_TooManyContainers PortMappingIsolatorTest.ROOT_ContainerARPExternal PortMappingIsolatorTest.ROOT_ContainerCMPInternal PortMappingIsolatorTest.ROOT_NC_HostToContainerTCP [00:28:37] : [Step 10/10] [ RUN ] PortMappingIsolatorTest.ROOT_TooManyContainers [00:28:37]W: [Step 10/10] I0606 00:28:37.046444 24846 port_mapping_tests.cpp:229] Using eth0 as the public interface [00:28:37]W: [Step 10/10] I0606 00:28:37.046728 24846 port_mapping_tests.cpp:237] Using lo as the loopback interface [00:28:37]W: [Step 10/10] I0606 00:28:37.058758 24846 resources.cpp:572] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ephemeral_ports:[30001-30999];ports:[31000-32000] [00:28:37]W: [Step 10/10] Trying semicolon-delimited string format instead [00:28:37]W: [Step 10/10] I0606 00:28:37.059711 24846 port_mapping.cpp:1557] Using eth0 as the public interface [00:28:37]W: [Step 10/10] I0606 00:28:37.059998 24846 port_mapping.cpp:1582] Using lo as the loopback interface [00:28:37]W: [Step 10/10] I0606 00:28:37.061126 24846 port_mapping.cpp:1869] /proc/sys/net/ipv4/neigh/default/gc_thresh3 = '1024' [00:28:37]W: [Step 10/10] I0606 00:28:37.061172 24846 port_mapping.cpp:1869] /proc/sys/net/ipv4/neigh/default/gc_thresh1 = '128' [00:28:37]W: [Step 10/10] I0606 00:28:37.061206 24846 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_wmem = '4096 16384 4194304' [00:28:37]W: [Step 10/10] I0606 00:28:37.061256 24846 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_synack_retries = '5' [00:28:37]W: [Step 10/10] I0606 00:28:37.061297 24846 port_mapping.cpp:1869] /proc/sys/net/core/rmem_max = '212992' [00:28:37]W: [Step 10/10] I0606 00:28:37.061331 24846 port_mapping.cpp:1869] /proc/sys/net/core/somaxconn = '128' [00:28:37]W: [Step 10/10] I0606 00:28:37.061360 24846 port_mapping.cpp:1869] /proc/sys/net/core/wmem_max = '212992' [00:28:37]W: [Step 10/10] I0606 00:28:37.061390 24846 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_rmem = '4096 87380 6291456' [00:28:37]W: [Step 10/10] I0606 00:28:37.061419 24846 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_keepalive_time = '7200' [00:28:37]W: [Step 10/10] I0606 00:28:37.061450 24846 port_mapping.cpp:1869] /proc/sys/net/ipv4/neigh/default/gc_thresh2 = '512' [00:28:37]W: [Step 10/10] I0606 00:28:37.061480 24846 port_mapping.cpp:1869] /proc/sys/net/core/netdev_max_backlog = '1000' [00:28:37]W: [Step 10/10] I0606 00:28:37.061511 24846 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_keepalive_intvl = '75' [00:28:37]W: [Step 10/10] I0606 00:28:37.061540 24846 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_keepalive_probes = '9' [00:28:37]W: [Step 10/10] I0606 00:28:37.061569 24846 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_max_syn_backlog = '512' [00:28:37]W: [Step 10/10] I0606 00:28:37.061599 24846 port_mapping.cpp:1869] /proc/sys/net/ipv4/tcp_retries2 = '15' [00:28:37]W: [Step 10/10] I0606 00:28:37.069964 24846 linux_launcher.cpp:101] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher [00:28:37]W: [Step 10/10] I0606 00:28:37.070144 24846 resources.cpp:572] Parsing resources as JSON failed: ports:[31000-31499] [00:28:37]W: [Step 10/10] Trying semicolon-delimited string format instead [00:28:37]W: [Step 10/10] I0606 00:28:37.070677 24867 port_mapping.cpp:2512] Using non-ephemeral ports {[31000,31500)} and ephemeral ports [30208,30720) for container container1 of executor '' [00:28:37]W: [Step 10/10] I0606 00:28:37.071688 24846 linux_launcher.cpp:281] 
Cloning child process with flags = CLONE_NEWNS | CLONE_NEWNET [00:28:37]W: [Step 10/10] I0606 00:28:37.084079 24863 port_mapping.cpp:2576] Bind mounted '/proc/11997/ns/net' to '/run/netns/11997' for container container1 [00:28:37] : [Step 10/10] ../../src/tests/containerizer/port_mapping_tests.cpp:1438: Failure [00:28:37] : [Step 10/10] (isolator.get()->isolate(containerId1, pid.get())).failure(): Failed to symlink the network namespace handle '/var/run/mesos/netns/container1' -> '/run/netns/11997': File exists [00:28:37] : [Step 10/10] [ FAILED ] PortMappingIsolatorTest.ROOT_TooManyContainers (57 ms) ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0 +"MESOS-5675","06/21/2016 12:41:24",3,"Add support for master capabilities ""Right now, frameworks can advertise their capabilities to the master via the {{FrameworkInfo}} they use for registration/re-registration. This allows masters to provide backward compatibility for old frameworks that don't support new functionality. To allow new frameworks to support backward compatibility with old masters, the inverse concept would be useful: masters would tell frameworks which capabilities are supported by the master, which the frameworks could then use to decide whether to use features only supported by more recent versions of the master. For now, frameworks can workaround this by looking at the master's version number, but that seems a bit fragile and hacky.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5691","06/23/2016 02:58:55",5,"SSL downgrade support will leak sockets in CLOSE_WAIT status ""Repro steps: 1) Start a master: 2) Start an agent with SSL and downgrade enabled: 3) Start a framework that launches lots of executors, one after another: 4) Check FDs, repeatedly The number of sockets in {{CLOSE_WAIT}} will increase linearly with the number of launched executors."""," bin/mesos-master.sh --work_dir=/tmp/master # Taken from http://mesos.apache.org/documentation/latest/ssl/ openssl genrsa -des3 -f4 -passout pass:some_password -out key.pem 4096 openssl req -new -x509 -passin pass:some_password -days 365 -key key.pem -out cert.pem SSL_KEY_FILE=key.pem SSL_CERT_FILE=cert.pem SSL_ENABLED=true SSL_SUPPORT_DOWNGRADE=true sudo -E bin/mesos-agent.sh --master=localhost:5050 --work_dir=/tmp/agent sudo src/balloon-framework --master=localhost:5050 --task_memory=64mb --task_memory_usage_limit=256mb --long_running sudo lsof -i | grep mesos | grep CLOSE_WAIT | wc -l ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5697","06/23/2016 19:57:25",3,"Support file volume in mesos containerizer. ""Currently in mesos containerizer, the host_path volume (to be bind mounted from a host path) specified in ContainerInfo can only be a directory. We should also support the volume type as a file.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-5698","06/23/2016 21:48:08",5,"Quota sorter not updated for resource changes at agent. ""Consider this sequence of events: 1. Slave connects, with 128MB of disk. 2. Master offers resources at slave to framework 3. Framework creates a dynamic reservation for 1MB and a persistent volume of the same size on the slave's resources. => This invokes {{Master::apply}}, which invokes {{allocator->updateAllocation}}, which invokes {{Sorter::update()}} on the framework sorter and role sorter. 
If the framework's role has a configured quota, it also invokes {{update}} on the quota role sorter -- in this case, the framework's role has no quota, so the quota role sorter is *not* updated. => {{DRFSorter::update}} updates the *total* resources at a given slave, among other state updates. New total resources will be 127MB of unreserved disk and 1MB of reserved disk with a volume. Note that the quota role sorter still thinks the slave has 128MB of unreserved disk. 4. The slave is removed from the cluster. {{HierarchicalAllocatorProcess::removeSlave}} invokes: {{slaves\[slaveId\].total.nonRevocable()}} is 127MB of unreserved disk and 1MB of reserved disk with a volume. When we remove this from the quota role sorter, we're left with total resources on the reserved slave of 1MB of unreserved disk, since that is the result of subtracting <127MB unreserved, 1MB reserved+volume> from <128MB unreserved>. The implications of this can't be good: at minimum, we're leaking resources for removed slaves in the quota role sorter. We're also introducing an inconsistency between {{total_.resources\[slaveId\]}} and {{total_.scalarQuantities}}, since the latter has already stripped out volume/reservation information."""," roleSorter->remove(slaveId, slaves[slaveId].total); quotaRoleSorter->remove(slaveId, slaves[slaveId].total.nonRevocable()); ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5699","06/24/2016 00:58:11",1,"Create new documentation for Mesos networking. ""With the introduction of CNI and Docker support for user-defined networks, there are quite a few options within Mesos for IP-per-container solutions for container networking. We therefore need to re-write networking documentation for Mesos highlighting all the networking support that Mesos provides for orchestrating containers on IP networks.""","",0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5704","06/24/2016 13:00:30",3,"Fine-grained authorization on /frameworks ""Even if ACLs were defined for the actions VIEW_FRAMEWORKS, VIEW_EXECUTORS and VIEW_TASKS, the data these actions were supposed to protect could still be leaked through the master's /frameworks endpoint, since it didn't enable any authorization mechanism.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5705","06/24/2016 13:05:42",5,"ZK credential is exposed in /flags and /state ""Mesos allows zk credentials to be embedded in the zk url, but exposes these credentials in the /flags and /state endpoints. Even though /state is authorized, it only filters out frameworks/tasks, so the top-level flags are shown to any authenticated user. """"zk"""": """"zk://dcos_mesos_master:my_secret_password@127.0.0.1:2181/mesos"""", We need to find some way to hide this data, or even add a first-class VIEW_FLAGS acl that applies to any endpoint that exposes flags.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5706","06/24/2016 13:09:34",2,"GET_ENDPOINT_WITH_PATH authz doesn't make sense for /flags ""The master or agent flags are exposed in /state as well as /flags, so any user who wants to disable/control access to the flags likely intends to control access to flags no matter what endpoint exposes them. As such, /flags is a poor candidate for GET_ENDPOINT_WITH_PATH authz, since we care more about protecting the flag data than the specific endpoint path. 
We should remove the GET_ENDPOINT authz from master and agent /flags until we can come up with a better solution, perhaps a first-class VIEW_FLAGS acl.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5707","06/24/2016 13:12:38",3,"LocalAuthorizer should error if passed a GET_ENDPOINT ACL with an unhandled path ""Since GET_ENDPOINT_WITH_PATH doesn't (yet) work with any arbitrary path, we should a) validate --acls and error if GET_ENDPOINT_WITH_PATH has a path object that doesn't match an endpoint that uses this authz strategy. b) document exactly which endpoints support GET_ENDPOINT_WITH_PATH""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5708","06/24/2016 13:15:21",3,"Add authz to /files/debug ""The /files/debug endpoint exposes the attached master/agent log paths and every attached sandbox path, which includes the frameworkId and executorId. Even if sandboxes are protected, we still don't want to expose this information to unauthorized users.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5709","06/24/2016 13:18:58",3,"Authorization for /roles ""The /roles endpoint exposes the list of all roles and their weights, as well as the list of all frameworkIds registered with each role. This is a superset of the information exposed on GET /weights, which we already protect. We should protect the data in /roles the same way. - Should we reuse VIEW_FRAMEWORK with role (from /state)? - Should we add a new VIEW_ROLE and adapt GET_WEIGHTS to use it?""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5711","06/24/2016 13:38:36",2,"Update AUTHORIZATION strings in endpoint help ""The endpoint help macros support AUTHENTICATION and AUTHORIZATION sections. We added AUTHORIZATION help for some of the newer endpoints, but not the previously authenticated endpoints. Authorization endpoints needing help string updates: Master::Http::CREATE_VOLUMES_HELP Master::Http::DESTROY_VOLUMES_HELP Master::Http::RESERVE_HELP Master::Http::STATE_HELP Master::Http::STATESUMMARY_HELP Master::Http::TEARDOWN_HELP Master::Http::TASKS_HELP Master::Http::UNRESERVE_HELP Slave::Http::STATE_HELP""","",0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5712","06/24/2016 13:40:56",1,"Document exactly what is handled by GET_ENDPOINTS_WITH_PATH acl ""Users may expect that the GET_ENDPOINT_WITH_PATH acl can be used with any Mesos endpoint, but that is not (yet) the case. We should clearly document the list of applicable endpoints, in authorization.md and probably even upgrades.md.""","",0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5716","06/27/2016 08:23:45",3,"Document docker private registry with authentication support in Unified Containerizer. ""Add documentation for docker private registry with authentication support in unified containerizer. This is the basic support for docker private registry.""","",0,1,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5723","06/27/2016 19:40:57",2,"SSL-enabled libprocess will leak incoming links to forks ""Encountered two different buggy behaviors that can be tracked down to the same underlying problem. Repro #1 (non-crashy): (1) Start a master. Doesn't matter if SSL is enabled or not. 
(2) Start an agent, with SSL enabled. Downgrade support has the same problem. The master/agent {{link}} to one another. (3) Run a sleep task. Keep this alive. If you inspect FDs at this point, you'll notice the task has inherited the {{link}} FD (master -> agent). (4) Restart the agent. Due to (3), the master's {{link}} stays open. (5) Check master's logs for the agent's re-registration message. (6) Check the agent's logs for re-registration. The message will not appear. The master is actually using the old {{link}} which is not connected to the agent. ---- Repro #2 (crashy): (1) Start a master. Doesn't matter if SSL is enabled or not. (2) Start an agent, with SSL enabled. Downgrade support has the same problem. (3) Run ~100 sleep tasks one after the other, keep them all alive. Each task links back to the agent. Due to an FD leak, each task will inherit the incoming links from all other actors... (4) At some point, the agent will run out of FDs and kernel panic. ---- It appears that the SSL socket {{accept}} call is missing {{os::nonblock}} and {{os::cloexec}} calls: https://github.com/apache/mesos/blob/4b91d936f50885b6a66277e26ea3c32fe942cf1a/3rdparty/libprocess/src/libevent_ssl_socket.cpp#L794-L806 For reference, here's {{poll}} socket's {{accept}}: https://github.com/apache/mesos/blob/4b91d936f50885b6a66277e26ea3c32fe942cf1a/3rdparty/libprocess/src/poll_socket.cpp#L53-L75 ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5727","06/28/2016 00:51:10",5,"Command executor health check does not work when the task specifies container image. ""Since we launch the task after pivot_root, we no longer have access to the mesos-health-check binary. The solution is to refactor health check to be a library (libprocess) so that it does not depend on the underlying filesystem. One note here is that we should strive to keep both the command executor and the task in the same mount namespace so that Mesos CLI tooling does not need to find the mount namespace for the task. It just needs to find the corresponding pid for the executor. This statement is *arguable*, see the comment below.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5740","06/29/2016 03:24:43",3,"Consider adding `relink` functionality to libprocess ""Currently we don't have the {{relink}} functionality in libprocess, i.e., a way to create a new persistent connection between actors, even if a connection already exists. This can benefit us in a couple of ways: - The application may have more information on the state of a connection than libprocess does, as libprocess only checks if the connection is alive or not. For example, a linkee may accept a connection, then fork, pass the connection to a child, and subsequently exit. As the connection is still active, libprocess may not detect the exit. - Sometimes, the {{ExitedEvent}} might be delayed or might be dropped due to the remote instance being unavailable (e.g., partition, network intermediaries not sending RST's etc). 
""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5745","06/29/2016 15:15:04",3,"AuthenticationTest.UnauthenticatedSlave fails with clang++3.8 ""With {{clang++-3.8}}, {{make check}} fails with the following message: """," [ RUN ] AuthenticationTest.UnauthenticatedSlave *** Aborted at 1467208613 (unix time) try """"date -d @1467208613"""" if you are using GNU date *** PC: @ 0x10b7f5a8b std::__1::__tree<>::__assign_multi<>() *** SIGSEGV (@0x0) received by PID 40053 (TID 0x7fff73aaf000) stack trace: *** @ 0x7fff8af4252a _sigtramp @ 0x110216a00 (unknown) @ 0x10b7f5881 mesos::internal::logging::Flags::operator=() @ 0x10b7f3076 mesos::internal::slave::Flags::operator=() @ 0x10b7f1cbf mesos::internal::tests::cluster::Slave::start() @ 0x10bf1a2d1 mesos::internal::tests::MesosTest::StartSlave() @ 0x10b7511b9 mesos::internal::tests::AuthenticationTest_UnauthenticatedSlave_Test::TestBody() @ 0x10c703caa testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x10c703b0a testing::Test::Run() @ 0x10c704b02 testing::TestInfo::Run() @ 0x10c7053c3 testing::TestCase::Run() @ 0x10c70cefb testing::internal::UnitTestImpl::RunAllTests() @ 0x10c70ca43 testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x10c70c95e testing::UnitTest::Run() @ 0x10bbe44f3 main @ 0x7fff9071a5ad start make[3]: *** [check-local] Segmentation fault: 11 make[2]: *** [check-am] Error 2 make[1]: *** [check] Error 2 make: *** [check-recursive] Error 1 ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5748","06/30/2016 00:49:53",2,"Potential segfault in `link` and `send` when linking to a remote process ""There is a race in the SocketManager, between a remote {{link}} and disconnection of the underlying socket. We potentially segfault here: https://github.com/apache/mesos/blob/215e79f571a989e998488077d713c28c7528926e/3rdparty/libprocess/src/process.cpp#L1512 {{\*socket}} dereferences the shared pointer underpinning the {{Socket*}} object. However, the code above this line actually has ownership of the pointer: https://github.com/apache/mesos/blob/215e79f571a989e998488077d713c28c7528926e/3rdparty/libprocess/src/process.cpp#L1494-L1499 If the socket dies during the link, the {{ignore_recv_data}} may delete the Socket underneath {{link}}: https://github.com/apache/mesos/blob/215e79f571a989e998488077d713c28c7528926e/3rdparty/libprocess/src/process.cpp#L1399-L1411 ---- The same race exists for {{send}}. This race was discovered while running a new test in repetition: https://reviews.apache.org/r/49175/ On OSX, I hit the race consistently every 500-800 repetitions: """," 3rdparty/libprocess/libprocess-tests --gtest_filter=""""ProcessRemoteLinkTest.RemoteLink"""" --gtest_break_on_failure --gtest_repeat=1000 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5753","06/30/2016 22:22:10",8,"Command executor should use `mesos-containerizer launch` to launch user task. ""Currently, command executor and `mesos-containerizer launch` share a lot of the logic. Command executor should in fact, just use `mesos-containerizer launch` to launch the user task. 
Potentially, `mesos-containerizer launch` can also be used by a custom executor to launch user tasks.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5759","07/01/2016 02:07:17",1,"ProcessRemoteLinkTest.RemoteUseStaleLink and RemoteStaleLinkRelink are flaky ""{{ProcessRemoteLinkTest.RemoteUseStaleLink}} and {{ProcessRemoteLinkTest.RemoteStaleLinkRelink}} are failing occasionally with the error: There appears to be a race between establishing a socket connection and the test calling {{::shutdown}} on the socket. Under some circumstances, the {{::shutdown}} may actually result in failing the future in {{SocketManager::link_connect}} with an error and thereby trigger {{SocketManager::close}}."""," [ RUN ] ProcessRemoteLinkTest.RemoteStaleLinkRelink WARNING: Logging before InitGoogleLogging() is written to STDERR I0630 07:42:34.661110 18888 process.cpp:1066] libprocess is initialized on 172.17.0.2:56294 with 16 worker threads E0630 07:42:34.666393 18765 process.cpp:2104] Failed to shutdown socket with fd 7: Transport endpoint is not connected /mesos/3rdparty/libprocess/src/tests/process_tests.cpp:1059: Failure Value of: exitedPid.isPending() Actual: false Expected: true [ FAILED ] ProcessRemoteLinkTest.RemoteStaleLinkRelink (56 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5782","07/05/2016 07:51:42",2,"Renamed 'commands' to 'pre_exec_commands' in ContainerLaunchInfo. ""Currently the 'commands' in isolator.proto ContainerLaunchInfo is somewhat confusing. It is a command (which can be any script or shell command) that is executed before launch. We should rename 'commands' to 'pre_exec_commands' in ContainerLaunchInfo and add comments.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5792","07/05/2016 23:54:06",8,"Add mesos tests to CMake (make check) ""Provide CMakeLists.txt and configuration files to build mesos tests using CMake.""","",0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5802","07/07/2016 16:53:59",2,"SlaveAuthorizerTest/0.ViewFlags is flaky. 
"""""," [15:24:47] : [Step 10/10] [ RUN ] SlaveAuthorizerTest/0.ViewFlags [15:24:47]W: [Step 10/10] I0707 15:24:47.025609 25322 containerizer.cpp:196] Using isolation: posix/cpu,posix/mem,filesystem/posix,network/cni [15:24:47]W: [Step 10/10] I0707 15:24:47.030421 25322 linux_launcher.cpp:101] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher [15:24:47]W: [Step 10/10] I0707 15:24:47.032060 25339 slave.cpp:205] Agent started on 335)@172.30.2.7:43076 [15:24:47]W: [Step 10/10] I0707 15:24:47.032078 25339 slave.cpp:206] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/mnt/teamcity/temp/buildTmp/SlaveAuthorizerTest_0_ViewFlags_OsJb5C/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/mnt/teamcity/temp/buildTmp/SlaveAuthorizerTest_0_ViewFlags_OsJb5C/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""true"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --http_credentials=""""/mnt/teamcity/temp/buildTmp/SlaveAuthorizerTest_0_ViewFlags_OsJb5C/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mnt/teamcity/work/4240ba9ddd0997c3/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/mnt/teamcity/temp/buildTmp/SlaveAuthorizerTest_0_ViewFlags_OsJb5C"""" --xfs_project_range=""""[5000-10000]"""" [15:24:47]W: [Step 10/10] I0707 15:24:47.032306 25339 credentials.hpp:86] Loading credential for authentication from '/mnt/teamcity/temp/buildTmp/SlaveAuthorizerTest_0_ViewFlags_OsJb5C/credential' [15:24:47]W: [Step 10/10] I0707 15:24:47.032424 25339 slave.cpp:343] Agent using credential for: test-principal [15:24:47]W: [Step 10/10] I0707 
15:24:47.032441 25339 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/SlaveAuthorizerTest_0_ViewFlags_OsJb5C/http_credentials' [15:24:47]W: [Step 10/10] I0707 15:24:47.032528 25339 slave.cpp:395] Using default 'basic' HTTP authenticator [15:24:47]W: [Step 10/10] I0707 15:24:47.032754 25339 resources.cpp:572] Parsing resources as JSON failed: cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000] [15:24:47]W: [Step 10/10] Trying semicolon-delimited string format instead [15:24:47]W: [Step 10/10] I0707 15:24:47.032838 25339 resources.cpp:572] Parsing resources as JSON failed: cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000] [15:24:47]W: [Step 10/10] Trying semicolon-delimited string format instead [15:24:47]W: [Step 10/10] I0707 15:24:47.032968 25339 slave.cpp:594] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [15:24:47]W: [Step 10/10] I0707 15:24:47.032994 25339 slave.cpp:602] Agent attributes: [ ] [15:24:47]W: [Step 10/10] I0707 15:24:47.032999 25339 slave.cpp:607] Agent hostname: ip-172-30-2-7.ec2.internal.mesosphere.io [15:24:47]W: [Step 10/10] I0707 15:24:47.033291 25339 process.cpp:3322] Handling HTTP event for process 'slave(335)' with path: '/slave(335)/flags' [15:24:47]W: [Step 10/10] I0707 15:24:47.033329 25343 state.cpp:57] Recovering state from '/mnt/teamcity/temp/buildTmp/SlaveAuthorizerTest_0_ViewFlags_OsJb5C/meta' [15:24:47]W: [Step 10/10] I0707 15:24:47.033576 25342 status_update_manager.cpp:200] Recovering status update manager [15:24:47] : [Step 10/10] ../../src/tests/slave_authorization_tests.cpp:316: Failure [15:24:47]W: [Step 10/10] I0707 15:24:47.033604 25340 http.cpp:269] HTTP GET for /slave(335)/flags from 172.30.2.7:33866 [15:24:47] : [Step 10/10] Value of: (response).get().status [15:24:47] : [Step 10/10] Actual: """"503 Service Unavailable"""" [15:24:47]W: [Step 10/10] I0707 15:24:47.033687 25340 containerizer.cpp:522] Recovering containerizer [15:24:47] : [Step 10/10] Expected: OK().status [15:24:47] : [Step 10/10] Which is: """"200 OK"""" [15:24:47]W: [Step 10/10] I0707 15:24:47.034953 25340 process.cpp:3322] Handling HTTP event for process 'slave(335)' with path: '/slave(335)/state' [15:24:47] : [Step 10/10] Agent has not finished recovery [15:24:47] : [Step 10/10] ../../src/tests/slave_authorization_tests.cpp:320: Failure [15:24:47]W: [Step 10/10] I0707 15:24:47.035152 25343 http.cpp:269] HTTP GET for /slave(335)/state from 172.30.2.7:33868 [15:24:47] : [Step 10/10] parse: syntax error at line 1 near: Agent has not finished recovery [15:24:47]W: [Step 10/10] I0707 15:24:47.035768 25341 slave.cpp:841] Agent terminating [15:24:47]W: [Step 10/10] I0707 15:24:47.036150 25337 provisioner.cpp:253] Provisioner recovery complete [15:24:47] : [Step 10/10] [ FAILED ] SlaveAuthorizerTest/0.ViewFlags, where TypeParam = mesos::internal::LocalAuthorizer (14 ms) ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5804","07/07/2016 21:15:52",3,"ExamplesTest.DynamicReservationFramework is flaky ""Showed up on ASF CI: https://builds.apache.org/job/Mesos/BUILDTOOL=autotools,COMPILER=clang,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A14.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-6)/2466/changes Logs from a previous good run: """," [ RUN ] ExamplesTest.DynamicReservationFramework Using temporary directory '/tmp/ExamplesTest_DynamicReservationFramework_xp2TU9' 
/mesos/mesos-1.0.0/src/tests/dynamic_reservation_framework_test.sh: line 19: /mesos/mesos-1.0.0/_build/src/colors.sh: No such file or directory /mesos/mesos-1.0.0/src/tests/dynamic_reservation_framework_test.sh: line 20: /mesos/mesos-1.0.0/_build/src/atexit.sh: No such file or directory WARNING: Logging before InitGoogleLogging() is written to STDERR I0707 19:30:31.102650 29946 resources.cpp:572] Parsing resources as JSON failed: cpus:1;mem:128 Trying semicolon-delimited string format instead I0707 19:30:31.125845 29946 process.cpp:1066] libprocess is initialized on 172.17.0.7:37568 with 16 worker threads I0707 19:30:31.125954 29946 logging.cpp:199] Logging to STDERR I0707 19:30:31.237936 29946 leveldb.cpp:174] Opened db in 101.67046ms I0707 19:30:31.272083 29946 leveldb.cpp:181] Compacted db in 34.088797ms I0707 19:30:31.272655 29946 leveldb.cpp:196] Created db iterator in 104307ns I0707 19:30:31.272855 29946 leveldb.cpp:202] Seeked to beginning of db in 20581ns I0707 19:30:31.273027 29946 leveldb.cpp:271] Iterated through 0 keys in the db in 13839ns I0707 19:30:31.273460 29946 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0707 19:30:31.277535 29979 recover.cpp:451] Starting replica recovery I0707 19:30:31.279044 29979 recover.cpp:477] Replica is in EMPTY status I0707 19:30:31.285576 29984 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (3)@172.17.0.7:37568 I0707 19:30:31.290812 29983 recover.cpp:197] Received a recover response from a replica in EMPTY status I0707 19:30:31.300268 29972 recover.cpp:568] Updating replica status to STARTING I0707 19:30:31.307143 29946 local.cpp:255] Creating default 'local' authorizer I0707 19:30:31.324632 29972 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 23.394808ms I0707 19:30:31.325036 29972 replica.cpp:320] Persisted replica status to STARTING I0707 19:30:31.325812 29972 recover.cpp:477] Replica is in STARTING status I0707 19:30:31.328284 29972 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (5)@172.17.0.7:37568 I0707 19:30:31.328945 29972 recover.cpp:197] Received a recover response from a replica in STARTING status I0707 19:30:31.329859 29972 recover.cpp:568] Updating replica status to VOTING I0707 19:30:31.335539 29974 master.cpp:382] Master 443ee691-d272-454c-90fe-959c95948252 (89b080073abb) started on 172.17.0.7:37568 I0707 19:30:31.335839 29974 master.cpp:384] Flags at startup: --acls=""""permissive: true register_frameworks { principals { type: ANY } roles { type: SOME values: """"test"""" } } """" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""false"""" --authenticate_frameworks=""""false"""" --authenticate_http=""""false"""" --authenticate_http_frameworks=""""false"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/ExamplesTest_DynamicReservationFramework_xp2TU9/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" 
--registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""20secs"""" --registry_strict=""""false"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/mesos/mesos-1.0.0/src/webui"""" --work_dir=""""/tmp/mesos-zPIQS8"""" --zk_session_timeout=""""10secs"""" I0707 19:30:31.337158 29974 master.cpp:436] Master allowing unauthenticated frameworks to register I0707 19:30:31.337323 29974 master.cpp:450] Master allowing unauthenticated agents to register I0707 19:30:31.337527 29974 master.cpp:464] Master allowing HTTP frameworks to register without authentication I0707 19:30:31.337689 29974 credentials.hpp:37] Loading credentials for authentication from '/tmp/ExamplesTest_DynamicReservationFramework_xp2TU9/credentials' W0707 19:30:31.337962 29974 credentials.hpp:52] Permissions on credentials file '/tmp/ExamplesTest_DynamicReservationFramework_xp2TU9/credentials' are too open. It is recommended that your credentials file is NOT accessible by others. I0707 19:30:31.338336 29974 master.cpp:506] Using default 'crammd5' authenticator I0707 19:30:31.338723 29974 authenticator.cpp:519] Initializing server SASL I0707 19:30:31.340744 29974 auxprop.cpp:73] Initialized in-memory auxiliary property plugin I0707 19:30:31.341084 29974 master.cpp:705] Authorization enabled I0707 19:30:31.342696 29971 hierarchical.cpp:151] Initialized hierarchical allocator process I0707 19:30:31.342895 29977 whitelist_watcher.cpp:77] No whitelist given I0707 19:30:31.358129 29972 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 27.780299ms I0707 19:30:31.358496 29972 replica.cpp:320] Persisted replica status to VOTING I0707 19:30:31.358949 29972 recover.cpp:582] Successfully joined the Paxos group I0707 19:30:31.359601 29972 recover.cpp:466] Recover process terminated I0707 19:30:31.365345 29946 containerizer.cpp:196] Using isolation: filesystem/posix,posix/cpu,posix/mem,network/cni W0707 19:30:31.368975 29946 backend.cpp:75] Failed to create 'aufs' backend: AufsBackend requires root privileges, but is running as user mesos W0707 19:30:31.369699 29946 backend.cpp:75] Failed to create 'bind' backend: BindBackend requires root privileges I0707 19:30:31.393633 29977 slave.cpp:205] Agent started on 1)@172.17.0.7:37568 I0707 19:30:31.394129 29977 slave.cpp:206] Flags at startup: --acls=""""permissive: true register_frameworks { principals { type: ANY } roles { type: SOME values: """"test"""" } } """" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""false"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" 
--fetcher_cache_dir=""""/tmp/mesos/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""filesystem/posix,posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/mesos/mesos-1.0.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""1secs"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/mesos-zPIQS8/0"""" I0707 19:30:31.395762 29977 resources.cpp:572] Parsing resources as JSON failed: Trying semicolon-delimited string format instead I0707 19:30:31.396198 29977 resources.cpp:572] Parsing resources as JSON failed: Trying semicolon-delimited string format instead I0707 19:30:31.397099 29977 slave.cpp:594] Agent resources: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 19:30:31.397364 29977 slave.cpp:602] Agent attributes: [ ] I0707 19:30:31.397557 29977 slave.cpp:607] Agent hostname: 89b080073abb I0707 19:30:31.403342 29981 state.cpp:57] Recovering state from '/tmp/mesos-zPIQS8/0/meta' I0707 19:30:31.411643 29973 status_update_manager.cpp:200] Recovering status update manager I0707 19:30:31.412467 29983 containerizer.cpp:522] Recovering containerizer I0707 19:30:31.417868 29975 provisioner.cpp:253] Provisioner recovery complete I0707 19:30:31.419260 29977 slave.cpp:4856] Finished recovery I0707 19:30:31.420929 29977 slave.cpp:5028] Querying resource estimator for oversubscribable resources I0707 19:30:31.422238 29970 status_update_manager.cpp:174] Pausing sending status updates I0707 19:30:31.422533 29977 slave.cpp:969] New master detected at master@172.17.0.7:37568 I0707 19:30:31.422721 29977 slave.cpp:990] No credentials provided. Attempting to register without authentication I0707 19:30:31.422902 29977 slave.cpp:1001] Detecting new master I0707 19:30:31.423362 29977 slave.cpp:5042] Received oversubscribable resources from the resource estimator I0707 19:30:31.429898 29974 master.cpp:1973] The newly elected leader is master@172.17.0.7:37568 with id 443ee691-d272-454c-90fe-959c95948252 I0707 19:30:31.429949 29974 master.cpp:1986] Elected as the leading master! 
I0707 19:30:31.429968 29974 master.cpp:1673] Recovering from registrar I0707 19:30:31.431020 29976 registrar.cpp:332] Recovering registrar I0707 19:30:31.433168 29971 log.cpp:553] Attempting to start the writer I0707 19:30:31.439359 29982 replica.cpp:493] Replica received implicit promise request from (21)@172.17.0.7:37568 with proposal 1 I0707 19:30:31.441862 29946 containerizer.cpp:196] Using isolation: filesystem/posix,posix/cpu,posix/mem,network/cni W0707 19:30:31.443104 29946 backend.cpp:75] Failed to create 'aufs' backend: AufsBackend requires root privileges, but is running as user mesos W0707 19:30:31.443366 29946 backend.cpp:75] Failed to create 'bind' backend: BindBackend requires root privileges I0707 19:30:31.457201 29975 slave.cpp:205] Agent started on 2)@172.17.0.7:37568 I0707 19:30:31.457254 29975 slave.cpp:206] Flags at startup: --acls=""""permissive: true register_frameworks { principals { type: ANY } roles { type: SOME values: """"test"""" } } """" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""false"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/mesos/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""filesystem/posix,posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/mesos/mesos-1.0.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""1secs"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/mesos-zPIQS8/1"""" I0707 19:30:31.458678 29982 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 19.283309ms I0707 19:30:31.458717 29982 replica.cpp:342] Persisted promised to 1 I0707 19:30:31.461284 29969 coordinator.cpp:238] Coordinator attempting to fill missing positions I0707 19:30:31.461690 29975 resources.cpp:572] Parsing resources as JSON 
failed: Trying semicolon-delimited string format instead I0707 19:30:31.461866 29975 resources.cpp:572] Parsing resources as JSON failed: Trying semicolon-delimited string format instead I0707 19:30:31.462319 29975 slave.cpp:594] Agent resources: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 19:30:31.462396 29975 slave.cpp:602] Agent attributes: [ ] I0707 19:30:31.464599 29975 slave.cpp:607] Agent hostname: 89b080073abb I0707 19:30:31.466464 29978 replica.cpp:388] Replica received explicit promise request from (33)@172.17.0.7:37568 for position 0 with proposal 2 I0707 19:30:31.468361 29975 state.cpp:57] Recovering state from '/tmp/mesos-zPIQS8/1/meta' I0707 19:30:31.468951 29975 status_update_manager.cpp:200] Recovering status update manager I0707 19:30:31.469187 29975 containerizer.cpp:522] Recovering containerizer I0707 19:30:31.472386 29969 provisioner.cpp:253] Provisioner recovery complete I0707 19:30:31.473125 29969 slave.cpp:4856] Finished recovery I0707 19:30:31.473996 29969 slave.cpp:5028] Querying resource estimator for oversubscribable resources I0707 19:30:31.474643 29982 slave.cpp:969] New master detected at master@172.17.0.7:37568 I0707 19:30:31.474673 29982 slave.cpp:990] No credentials provided. Attempting to register without authentication I0707 19:30:31.474726 29982 slave.cpp:1001] Detecting new master I0707 19:30:31.474833 29982 status_update_manager.cpp:174] Pausing sending status updates I0707 19:30:31.475157 29969 slave.cpp:5042] Received oversubscribable resources from the resource estimator I0707 19:30:31.479303 29946 containerizer.cpp:196] Using isolation: filesystem/posix,posix/cpu,posix/mem,network/cni W0707 19:30:31.484933 29946 backend.cpp:75] Failed to create 'aufs' backend: AufsBackend requires root privileges, but is running as user mesos W0707 19:30:31.485230 29946 backend.cpp:75] Failed to create 'bind' backend: BindBackend requires root privileges I0707 19:30:31.492482 29978 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 25.968225ms I0707 19:30:31.492543 29978 replica.cpp:712] Persisted action at 0 I0707 19:30:31.495333 29972 replica.cpp:537] Replica received write request for position 0 from (46)@172.17.0.7:37568 I0707 19:30:31.495918 29972 leveldb.cpp:436] Reading position from leveldb took 553942ns I0707 19:30:31.505445 29973 slave.cpp:205] Agent started on 3)@172.17.0.7:37568 I0707 19:30:31.505492 29973 slave.cpp:206] Flags at startup: --acls=""""permissive: true register_frameworks { principals { type: ANY } roles { type: SOME values: """"test"""" } } """" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""false"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" 
--enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/mesos/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""filesystem/posix,posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/mesos/mesos-1.0.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""1secs"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/mesos-zPIQS8/2"""" I0707 19:30:31.506813 29973 resources.cpp:572] Parsing resources as JSON failed: Trying semicolon-delimited string format instead I0707 19:30:31.506990 29973 resources.cpp:572] Parsing resources as JSON failed: Trying semicolon-delimited string format instead I0707 19:30:31.507602 29973 slave.cpp:594] Agent resources: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 19:30:31.507680 29973 slave.cpp:602] Agent attributes: [ ] I0707 19:30:31.507695 29973 slave.cpp:607] Agent hostname: 89b080073abb I0707 19:30:31.510499 29973 state.cpp:57] Recovering state from '/tmp/mesos-zPIQS8/2/meta' I0707 19:30:31.511034 29973 status_update_manager.cpp:200] Recovering status update manager I0707 19:30:31.511270 29973 containerizer.cpp:522] Recovering containerizer I0707 19:30:31.514657 29984 provisioner.cpp:253] Provisioner recovery complete I0707 19:30:31.515745 29970 slave.cpp:4856] Finished recovery I0707 19:30:31.516332 29970 slave.cpp:5028] Querying resource estimator for oversubscribable resources I0707 19:30:31.517103 29970 slave.cpp:969] New master detected at master@172.17.0.7:37568 I0707 19:30:31.517134 29970 slave.cpp:990] No credentials provided. Attempting to register without authentication I0707 19:30:31.517190 29970 slave.cpp:1001] Detecting new master I0707 19:30:31.517294 29970 slave.cpp:5042] Received oversubscribable resources from the resource estimator I0707 19:30:31.517375 29970 status_update_manager.cpp:174] Pausing sending status updates I0707 19:30:31.519979 29946 sched.cpp:226] Version: 1.0.0 I0707 19:30:31.521474 29980 sched.cpp:330] New master detected at master@172.17.0.7:37568 I0707 19:30:31.521586 29980 sched.cpp:341] No credentials provided. 
Attempting to register without authentication I0707 19:30:31.521613 29980 sched.cpp:820] Sending SUBSCRIBE call to master@172.17.0.7:37568 I0707 19:30:31.521769 29980 sched.cpp:853] Will retry registration in 898.210224ms if necessary I0707 19:30:31.521977 29980 master.cpp:1500] Dropping 'mesos.scheduler.Call' message since not recovered yet I0707 19:30:31.522469 29972 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 26.469135ms I0707 19:30:31.522514 29972 replica.cpp:712] Persisted action at 0 I0707 19:30:31.523948 29980 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 I0707 19:30:31.538797 29972 slave.cpp:1529] Will retry registration in 1.972934225secs if necessary I0707 19:30:31.538925 29972 master.cpp:1500] Dropping 'mesos.internal.RegisterSlaveMessage' message since not recovered yet I0707 19:30:31.555934 29980 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 31.978704ms I0707 19:30:31.556016 29980 replica.cpp:712] Persisted action at 0 I0707 19:30:31.556066 29980 replica.cpp:697] Replica learned NOP action at position 0 I0707 19:30:31.557960 29980 log.cpp:569] Writer started with ending position 0 I0707 19:30:31.561957 29976 leveldb.cpp:436] Reading position from leveldb took 90775ns I0707 19:30:31.566825 29979 slave.cpp:1529] Will retry registration in 382.223275ms if necessary I0707 19:30:31.566967 29979 master.cpp:1500] Dropping 'mesos.internal.RegisterSlaveMessage' message since not recovered yet I0707 19:30:31.582073 29981 registrar.cpp:365] Successfully fetched the registry (0B) in 150.98496ms I0707 19:30:31.582437 29981 registrar.cpp:464] Applied 1 operations in 94170ns; attempting to update the 'registry' I0707 19:30:31.587924 29975 log.cpp:577] Attempting to append 168 bytes to the log I0707 19:30:31.588234 29975 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 I0707 19:30:31.589561 29978 replica.cpp:537] Replica received write request for position 1 from (51)@172.17.0.7:37568 I0707 19:30:31.621119 29978 leveldb.cpp:341] Persisting action (187 bytes) to leveldb took 31.540172ms I0707 19:30:31.621209 29978 replica.cpp:712] Persisted action at 1 I0707 19:30:31.623564 29978 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 I0707 19:30:31.656234 29978 leveldb.cpp:341] Persisting action (189 bytes) to leveldb took 32.657222ms I0707 19:30:31.656424 29978 replica.cpp:712] Persisted action at 1 I0707 19:30:31.656786 29978 replica.cpp:697] Replica learned APPEND action at position 1 I0707 19:30:31.660815 29978 registrar.cpp:509] Successfully updated the 'registry' in 78.219008ms I0707 19:30:31.661057 29978 registrar.cpp:395] Successfully recovered registrar I0707 19:30:31.661593 29978 log.cpp:596] Attempting to truncate the log to 1 I0707 19:30:31.662271 29978 master.cpp:1781] Recovered 0 agents from the Registry (129B) ; allowing 10mins for agents to re-register I0707 19:30:31.662566 29978 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 I0707 19:30:31.663004 29978 hierarchical.cpp:178] Skipping recovery of hierarchical allocator: nothing to recover I0707 19:30:31.664005 29975 replica.cpp:537] Replica received write request for position 2 from (52)@172.17.0.7:37568 I0707 19:30:31.696493 29975 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 32.24974ms I0707 19:30:31.696583 29975 replica.cpp:712] Persisted action at 2 I0707 19:30:31.698271 29984 replica.cpp:691] Replica received learned notice for position 2 from 
@0.0.0.0:0 I0707 19:30:31.731513 29984 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 32.894448ms I0707 19:30:31.731775 29984 leveldb.cpp:399] Deleting ~1 keys from leveldb took 95908ns I0707 19:30:31.732022 29984 replica.cpp:712] Persisted action at 2 I0707 19:30:31.732120 29984 replica.cpp:697] Replica learned TRUNCATE action at position 2 I0707 19:30:31.950920 29984 slave.cpp:1529] Will retry registration in 3.638047644secs if necessary I0707 19:30:31.951601 29983 master.cpp:4676] Registering agent at slave(3)@172.17.0.7:37568 (89b080073abb) with id 443ee691-d272-454c-90fe-959c95948252-S0 I0707 19:30:31.953089 29974 registrar.cpp:464] Applied 1 operations in 182983ns; attempting to update the 'registry' I0707 19:30:31.957223 29983 log.cpp:577] Attempting to append 337 bytes to the log I0707 19:30:31.957545 29983 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 I0707 19:30:31.958920 29983 replica.cpp:537] Replica received write request for position 3 from (53)@172.17.0.7:37568 I0707 19:30:31.989977 29983 leveldb.cpp:341] Persisting action (356 bytes) to leveldb took 30.902846ms I0707 19:30:31.990154 29983 replica.cpp:712] Persisted action at 3 I0707 19:30:31.991781 29974 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 I0707 19:30:32.024132 29974 leveldb.cpp:341] Persisting action (358 bytes) to leveldb took 32.308737ms I0707 19:30:32.024305 29974 replica.cpp:712] Persisted action at 3 I0707 19:30:32.024449 29974 replica.cpp:697] Replica learned APPEND action at position 3 I0707 19:30:32.027683 29975 registrar.cpp:509] Successfully updated the 'registry' in 74.444032ms I0707 19:30:32.029734 29974 log.cpp:596] Attempting to truncate the log to 3 I0707 19:30:32.030093 29974 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 I0707 19:30:32.030804 29974 slave.cpp:3760] Received ping from slave-observer(1)@172.17.0.7:37568 I0707 19:30:32.031373 29974 slave.cpp:1169] Registered with master master@172.17.0.7:37568; given agent ID 443ee691-d272-454c-90fe-959c95948252-S0 I0707 19:30:32.031460 29974 fetcher.cpp:86] Clearing fetcher cache I0707 19:30:32.032008 29974 slave.cpp:1192] Checkpointing SlaveInfo to '/tmp/mesos-zPIQS8/2/meta/slaves/443ee691-d272-454c-90fe-959c95948252-S0/slave.info' I0707 19:30:32.031088 29975 master.cpp:4745] Registered agent 443ee691-d272-454c-90fe-959c95948252-S0 at slave(3)@172.17.0.7:37568 (89b080073abb) with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 19:30:32.033082 29975 hierarchical.cpp:478] Added agent 443ee691-d272-454c-90fe-959c95948252-S0 (89b080073abb) with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] (allocated: ) I0707 19:30:32.033608 29975 hierarchical.cpp:1537] No allocations performed I0707 19:30:32.033747 29975 hierarchical.cpp:1195] Performed allocation for agent 443ee691-d272-454c-90fe-959c95948252-S0 in 584676ns I0707 19:30:32.034116 29975 status_update_manager.cpp:181] Resuming sending status updates I0707 19:30:32.034010 29974 slave.cpp:1229] Forwarding total oversubscribed resources I0707 19:30:32.034950 29974 master.cpp:5128] Received update of agent 443ee691-d272-454c-90fe-959c95948252-S0 at slave(3)@172.17.0.7:37568 (89b080073abb) with total oversubscribed resources I0707 19:30:32.035320 29975 replica.cpp:537] Replica received write request for position 4 from (54)@172.17.0.7:37568 I0707 19:30:32.036041 29971 hierarchical.cpp:542] Agent 443ee691-d272-454c-90fe-959c95948252-S0 
(89b080073abb) updated with oversubscribed resources (total: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) I0707 19:30:32.036212 29971 hierarchical.cpp:1537] No allocations performed I0707 19:30:32.036327 29971 hierarchical.cpp:1195] Performed allocation for agent 443ee691-d272-454c-90fe-959c95948252-S0 in 212809ns I0707 19:30:32.196679 29976 master.cpp:4676] Registering agent at slave(2)@172.17.0.7:37568 (89b080073abb) with id 443ee691-d272-454c-90fe-959c95948252-S1 I0707 19:30:32.196384 29979 slave.cpp:1529] Will retry registration in 1.893622708secs if necessary I0707 19:30:32.197633 29976 registrar.cpp:464] Applied 1 operations in 273890ns; attempting to update the 'registry' I0707 19:30:32.343791 29979 hierarchical.cpp:1537] No allocations performed I0707 19:30:32.344105 29979 hierarchical.cpp:1172] Performed allocation for 1 agents in 555357ns I0707 19:30:32.373800 29975 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 338.056804ms I0707 19:30:32.373987 29975 replica.cpp:712] Persisted action at 4 I0707 19:30:32.387712 29973 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 I0707 19:30:32.420934 29981 sched.cpp:820] Sending SUBSCRIBE call to master@172.17.0.7:37568 I0707 19:30:32.421331 29981 sched.cpp:853] Will retry registration in 2.058099434secs if necessary I0707 19:30:32.421792 29981 master.cpp:2550] Received SUBSCRIBE call for framework 'Dynamic Reservation Framework (C++)' at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 I0707 19:30:32.421934 29981 master.cpp:2012] Authorizing framework principal 'test' to receive offers for role 'test' I0707 19:30:32.423535 29981 master.cpp:2626] Subscribing framework Dynamic Reservation Framework (C++) with checkpointing disabled and capabilities [ ] I0707 19:30:32.425323 29976 hierarchical.cpp:271] Added framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:32.426686 29973 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 38.63187ms I0707 19:30:32.426865 29973 leveldb.cpp:399] Deleting ~2 keys from leveldb took 95262ns I0707 19:30:32.426981 29973 replica.cpp:712] Persisted action at 4 I0707 19:30:32.427096 29973 replica.cpp:697] Replica learned TRUNCATE action at position 4 I0707 19:30:32.428614 29973 log.cpp:577] Attempting to append 503 bytes to the log I0707 19:30:32.426307 29981 sched.cpp:743] Framework registered with 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:32.428905 29981 dynamic_reservation_framework.cpp:73] Registered! I0707 19:30:32.429059 29981 sched.cpp:757] Scheduler::registered took 167468ns I0707 19:30:32.429239 29981 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 5 I0707 19:30:32.431745 29976 hierarchical.cpp:1632] No inverse offers to send out! 
I0707 19:30:32.432610 29984 master.cpp:5835] Sending 1 offers to framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 I0707 19:30:32.433627 29984 dynamic_reservation_framework.cpp:84] Received offer 443ee691-d272-454c-90fe-959c95948252-O0 with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 19:30:32.434248 29984 sched.cpp:917] Scheduler::resourceOffers took 642030ns I0707 19:30:32.436048 29984 master.cpp:3468] Processing ACCEPT call for offers: [ 443ee691-d272-454c-90fe-959c95948252-O0 ] on agent 443ee691-d272-454c-90fe-959c95948252-S0 at slave(3)@172.17.0.7:37568 (89b080073abb) for framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 I0707 19:30:32.436368 29984 master.cpp:3144] Authorizing principal 'test' to reserve resources 'cpus(test, test):1; mem(test, test):128' I0707 19:30:32.438547 29976 hierarchical.cpp:1172] Performed allocation for 1 agents in 12.203221ms I0707 19:30:32.432860 29981 replica.cpp:537] Replica received write request for position 5 from (55)@172.17.0.7:37568 I0707 19:30:32.439970 29984 master.cpp:3695] Applying RESERVE operation for resources cpus(test, test):1; mem(test, test):128 from framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 to agent 443ee691-d272-454c-90fe-959c95948252-S0 at slave(3)@172.17.0.7:37568 (89b080073abb) I0707 19:30:32.440765 29984 master.cpp:7098] Sending checkpointed resources cpus(test, test):1; mem(test, test):128 to agent 443ee691-d272-454c-90fe-959c95948252-S0 at slave(3)@172.17.0.7:37568 (89b080073abb) I0707 19:30:32.444211 29976 hierarchical.cpp:683] Updated allocation of framework 443ee691-d272-454c-90fe-959c95948252-0000 on agent 443ee691-d272-454c-90fe-959c95948252-S0 from cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] to cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128 I0707 19:30:32.444527 29984 slave.cpp:2600] Updated checkpointed resources from to cpus(test, test):1; mem(test, test):128 I0707 19:30:32.445664 29976 hierarchical.cpp:924] Recovered cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: ) on agent 443ee691-d272-454c-90fe-959c95948252-S0 from framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:32.467499 29981 leveldb.cpp:341] Persisting action (522 bytes) to leveldb took 28.613107ms I0707 19:30:32.467705 29981 replica.cpp:712] Persisted action at 5 I0707 19:30:32.483840 29971 replica.cpp:691] Replica received learned notice for position 5 from @0.0.0.0:0 I0707 19:30:32.511849 29971 leveldb.cpp:341] Persisting action (524 bytes) to leveldb took 27.859875ms I0707 19:30:32.512235 29971 replica.cpp:712] Persisted action at 5 I0707 19:30:32.512511 29971 replica.cpp:697] Replica learned APPEND action at position 5 I0707 19:30:32.516393 29971 registrar.cpp:509] Successfully updated the 'registry' in 318.636032ms I0707 19:30:32.517113 29971 log.cpp:596] Attempting to truncate the log to 5 I0707 19:30:32.518293 29971 master.cpp:4745] Registered agent 443ee691-d272-454c-90fe-959c95948252-S1 at 
slave(2)@172.17.0.7:37568 (89b080073abb) with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 19:30:32.518659 29971 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 6 I0707 19:30:32.519564 29971 hierarchical.cpp:478] Added agent 443ee691-d272-454c-90fe-959c95948252-S1 (89b080073abb) with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] (allocated: ) I0707 19:30:32.520804 29971 hierarchical.cpp:1632] No inverse offers to send out! I0707 19:30:32.521106 29971 hierarchical.cpp:1195] Performed allocation for agent 443ee691-d272-454c-90fe-959c95948252-S1 in 1.298948ms I0707 19:30:32.521436 29971 slave.cpp:1169] Registered with master master@172.17.0.7:37568; given agent ID 443ee691-d272-454c-90fe-959c95948252-S1 I0707 19:30:32.521669 29971 fetcher.cpp:86] Clearing fetcher cache I0707 19:30:32.522266 29971 slave.cpp:1192] Checkpointing SlaveInfo to '/tmp/mesos-zPIQS8/1/meta/slaves/443ee691-d272-454c-90fe-959c95948252-S1/slave.info' I0707 19:30:32.523712 29971 slave.cpp:1229] Forwarding total oversubscribed resources I0707 19:30:32.523080 29981 master.cpp:5835] Sending 1 offers to framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 I0707 19:30:32.524303 29979 status_update_manager.cpp:181] Resuming sending status updates I0707 19:30:32.524688 29981 master.cpp:5128] Received update of agent 443ee691-d272-454c-90fe-959c95948252-S1 at slave(2)@172.17.0.7:37568 (89b080073abb) with total oversubscribed resources I0707 19:30:32.525228 29970 dynamic_reservation_framework.cpp:84] Received offer 443ee691-d272-454c-90fe-959c95948252-O1 with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 19:30:32.525902 29970 sched.cpp:917] Scheduler::resourceOffers took 691140ns I0707 19:30:32.525388 29971 slave.cpp:3760] Received ping from slave-observer(2)@172.17.0.7:37568 I0707 19:30:32.527039 29982 replica.cpp:537] Replica received write request for position 6 from (58)@172.17.0.7:37568 I0707 19:30:32.528058 29981 master.cpp:3468] Processing ACCEPT call for offers: [ 443ee691-d272-454c-90fe-959c95948252-O1 ] on agent 443ee691-d272-454c-90fe-959c95948252-S1 at slave(2)@172.17.0.7:37568 (89b080073abb) for framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 I0707 19:30:32.528295 29979 hierarchical.cpp:542] Agent 443ee691-d272-454c-90fe-959c95948252-S1 (89b080073abb) updated with oversubscribed resources (total: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000]) I0707 19:30:32.529708 29979 hierarchical.cpp:1537] No allocations performed I0707 19:30:32.529754 29979 hierarchical.cpp:1632] No inverse offers to send out! 
I0707 19:30:32.529820 29979 hierarchical.cpp:1195] Performed allocation for agent 443ee691-d272-454c-90fe-959c95948252-S1 in 1.473485ms I0707 19:30:32.529899 29981 master.cpp:3144] Authorizing principal 'test' to reserve resources 'cpus(test, test):1; mem(test, test):128' I0707 19:30:32.531919 29984 master.cpp:3695] Applying RESERVE operation for resources cpus(test, test):1; mem(test, test):128 from framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 to agent 443ee691-d272-454c-90fe-959c95948252-S1 at slave(2)@172.17.0.7:37568 (89b080073abb) I0707 19:30:32.532374 29984 master.cpp:7098] Sending checkpointed resources cpus(test, test):1; mem(test, test):128 to agent 443ee691-d272-454c-90fe-959c95948252-S1 at slave(2)@172.17.0.7:37568 (89b080073abb) I0707 19:30:32.534451 29977 hierarchical.cpp:683] Updated allocation of framework 443ee691-d272-454c-90fe-959c95948252-0000 on agent 443ee691-d272-454c-90fe-959c95948252-S1 from cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] to cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128 I0707 19:30:32.537169 29980 hierarchical.cpp:924] Recovered cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: ) on agent 443ee691-d272-454c-90fe-959c95948252-S1 from framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:32.535399 29984 slave.cpp:2600] Updated checkpointed resources from to cpus(test, test):1; mem(test, test):128 I0707 19:30:32.554222 29982 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 27.170492ms I0707 19:30:32.554395 29982 replica.cpp:712] Persisted action at 6 I0707 19:30:32.556767 29970 replica.cpp:691] Replica received learned notice for position 6 from @0.0.0.0:0 I0707 19:30:32.579500 29970 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 22.578289ms I0707 19:30:32.579659 29970 leveldb.cpp:399] Deleting ~2 keys from leveldb took 86499ns I0707 19:30:32.579692 29970 replica.cpp:712] Persisted action at 6 I0707 19:30:32.579746 29970 replica.cpp:697] Replica learned TRUNCATE action at position 6 I0707 19:30:33.347929 29970 hierarchical.cpp:1632] No inverse offers to send out! 
I0707 19:30:33.349206 29970 hierarchical.cpp:1172] Performed allocation for 2 agents in 3.521151ms I0707 19:30:33.349076 29977 master.cpp:5835] Sending 2 offers to framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 I0707 19:30:33.350098 29977 dynamic_reservation_framework.cpp:84] Received offer 443ee691-d272-454c-90fe-959c95948252-O2 with cpus(test, test):1; mem(test, test):128; cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 19:30:33.350462 29977 dynamic_reservation_framework.cpp:150] Launching task 0 using offer 443ee691-d272-454c-90fe-959c95948252-O2 I0707 19:30:33.350879 29977 dynamic_reservation_framework.cpp:84] Received offer 443ee691-d272-454c-90fe-959c95948252-O3 with cpus(test, test):1; mem(test, test):128; cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 19:30:33.351199 29977 dynamic_reservation_framework.cpp:150] Launching task 1 using offer 443ee691-d272-454c-90fe-959c95948252-O3 I0707 19:30:33.351398 29977 sched.cpp:917] Scheduler::resourceOffers took 1.321372ms I0707 19:30:33.352967 29977 master.cpp:3468] Processing ACCEPT call for offers: [ 443ee691-d272-454c-90fe-959c95948252-O2 ] on agent 443ee691-d272-454c-90fe-959c95948252-S0 at slave(3)@172.17.0.7:37568 (89b080073abb) for framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 I0707 19:30:33.353111 29977 master.cpp:3106] Authorizing framework principal 'test' to launch task 0 I0707 19:30:33.355710 29977 master.cpp:3468] Processing ACCEPT call for offers: [ 443ee691-d272-454c-90fe-959c95948252-O3 ] on agent 443ee691-d272-454c-90fe-959c95948252-S1 at slave(2)@172.17.0.7:37568 (89b080073abb) for framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 I0707 19:30:33.355829 29977 master.cpp:3106] Authorizing framework principal 'test' to launch task 1 I0707 19:30:33.359690 29977 master.cpp:7565] Adding task 0 with resources cpus(test, test):1; mem(test, test):128 on agent 443ee691-d272-454c-90fe-959c95948252-S0 (89b080073abb) I0707 19:30:33.359900 29977 master.cpp:3957] Launching task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 with resources cpus(test, test):1; mem(test, test):128 on agent 443ee691-d272-454c-90fe-959c95948252-S0 at slave(3)@172.17.0.7:37568 (89b080073abb) I0707 19:30:33.360642 29970 slave.cpp:1569] Got assigned task 0 for framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:33.361142 29970 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0707 19:30:33.362133 29983 hierarchical.cpp:924] Recovered ports(*):[31000-32000]; disk(*):3.70122e+06; cpus(*):15; mem(*):47142 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: cpus(test, test):1; mem(test, test):128) on agent 443ee691-d272-454c-90fe-959c95948252-S0 from framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:33.362866 29970 slave.cpp:1688] Launching task 0 for framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:33.362995 29970 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying 
semicolon-delimited string format instead I0707 19:30:33.367036 29977 master.cpp:7565] Adding task 1 with resources cpus(test, test):1; mem(test, test):128 on agent 443ee691-d272-454c-90fe-959c95948252-S1 (89b080073abb) I0707 19:30:33.367182 29977 master.cpp:3957] Launching task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 with resources cpus(test, test):1; mem(test, test):128 on agent 443ee691-d272-454c-90fe-959c95948252-S1 at slave(2)@172.17.0.7:37568 (89b080073abb) I0707 19:30:33.367564 29978 slave.cpp:1569] Got assigned task 1 for framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:33.367811 29978 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0707 19:30:33.368335 29978 slave.cpp:1688] Launching task 1 for framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:33.368511 29978 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0707 19:30:33.373251 29970 paths.cpp:528] Trying to chown '/tmp/mesos-zPIQS8/2/slaves/443ee691-d272-454c-90fe-959c95948252-S0/frameworks/443ee691-d272-454c-90fe-959c95948252-0000/executors/0/runs/5f6efb5e-5357-4514-964a-5af3d0ec33f1' to user 'mesos' I0707 19:30:33.376608 29977 hierarchical.cpp:924] Recovered ports(*):[31000-32000]; disk(*):3.70122e+06; cpus(*):15; mem(*):47142 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: cpus(test, test):1; mem(test, test):128) on agent 443ee691-d272-454c-90fe-959c95948252-S1 from framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:33.376976 29978 paths.cpp:528] Trying to chown '/tmp/mesos-zPIQS8/1/slaves/443ee691-d272-454c-90fe-959c95948252-S1/frameworks/443ee691-d272-454c-90fe-959c95948252-0000/executors/1/runs/34287f06-c6d4-4ab6-b706-5abf0e314655' to user 'mesos' I0707 19:30:33.378888 29970 slave.cpp:5748] Launching executor 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/tmp/mesos-zPIQS8/2/slaves/443ee691-d272-454c-90fe-959c95948252-S0/frameworks/443ee691-d272-454c-90fe-959c95948252-0000/executors/0/runs/5f6efb5e-5357-4514-964a-5af3d0ec33f1' I0707 19:30:33.380379 29980 containerizer.cpp:781] Starting container '5f6efb5e-5357-4514-964a-5af3d0ec33f1' for executor '0' of framework '443ee691-d272-454c-90fe-959c95948252-0000' I0707 19:30:33.383648 29978 slave.cpp:5748] Launching executor 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/tmp/mesos-zPIQS8/1/slaves/443ee691-d272-454c-90fe-959c95948252-S1/frameworks/443ee691-d272-454c-90fe-959c95948252-0000/executors/1/runs/34287f06-c6d4-4ab6-b706-5abf0e314655' I0707 19:30:33.384750 29971 containerizer.cpp:781] Starting container '34287f06-c6d4-4ab6-b706-5abf0e314655' for executor '1' of framework '443ee691-d272-454c-90fe-959c95948252-0000' I0707 19:30:33.384395 29978 slave.cpp:1914] Queuing task '1' for executor '1' of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:33.385623 29978 slave.cpp:922] Successfully attached file '/tmp/mesos-zPIQS8/1/slaves/443ee691-d272-454c-90fe-959c95948252-S1/frameworks/443ee691-d272-454c-90fe-959c95948252-0000/executors/1/runs/34287f06-c6d4-4ab6-b706-5abf0e314655' I0707 19:30:33.399920 29979 containerizer.cpp:1284] Launching 
'mesos-containerizer' with flags '--command=""""{""""arguments"""":[""""mesos-executor"""",""""--launcher_dir=\/mesos\/mesos-1.0.0\/_build\/src""""],""""shell"""":false,""""value"""":""""\/mesos\/mesos-1.0.0\/_build\/src\/mesos-executor""""}"""" --help=""""false"""" --pipe_read=""""9"""" --pipe_write=""""12"""" --pre_exec_commands=""""[]"""" --unshare_namespace_mnt=""""false"""" --user=""""mesos"""" --working_directory=""""/tmp/mesos-zPIQS8/2/slaves/443ee691-d272-454c-90fe-959c95948252-S0/frameworks/443ee691-d272-454c-90fe-959c95948252-0000/executors/0/runs/5f6efb5e-5357-4514-964a-5af3d0ec33f1""""' I0707 19:30:33.403643 29971 containerizer.cpp:1284] Launching 'mesos-containerizer' with flags '--command=""""{""""arguments"""":[""""mesos-executor"""",""""--launcher_dir=\/mesos\/mesos-1.0.0\/_build\/src""""],""""shell"""":false,""""value"""":""""\/mesos\/mesos-1.0.0\/_build\/src\/mesos-executor""""}"""" --help=""""false"""" --pipe_read=""""13"""" --pipe_write=""""14"""" --pre_exec_commands=""""[]"""" --unshare_namespace_mnt=""""false"""" --user=""""mesos"""" --working_directory=""""/tmp/mesos-zPIQS8/1/slaves/443ee691-d272-454c-90fe-959c95948252-S1/frameworks/443ee691-d272-454c-90fe-959c95948252-0000/executors/1/runs/34287f06-c6d4-4ab6-b706-5abf0e314655""""' I0707 19:30:33.406067 29979 launcher.cpp:126] Forked child with pid '29991' for container '5f6efb5e-5357-4514-964a-5af3d0ec33f1' I0707 19:30:33.407141 29971 launcher.cpp:126] Forked child with pid '29992' for container '34287f06-c6d4-4ab6-b706-5abf0e314655' I0707 19:30:33.405388 29970 slave.cpp:1914] Queuing task '0' for executor '0' of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:33.408761 29970 slave.cpp:922] Successfully attached file '/tmp/mesos-zPIQS8/2/slaves/443ee691-d272-454c-90fe-959c95948252-S0/frameworks/443ee691-d272-454c-90fe-959c95948252-0000/executors/0/runs/5f6efb5e-5357-4514-964a-5af3d0ec33f1' I0707 19:30:33.512244 29979 slave.cpp:1529] Will retry registration in 109.135061ms if necessary I0707 19:30:33.512642 29979 master.cpp:4676] Registering agent at slave(1)@172.17.0.7:37568 (89b080073abb) with id 443ee691-d272-454c-90fe-959c95948252-S2 I0707 19:30:33.513365 29979 registrar.cpp:464] Applied 1 operations in 161276ns; attempting to update the 'registry' I0707 19:30:33.522032 29979 log.cpp:577] Attempting to append 669 bytes to the log I0707 19:30:33.522296 29979 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 7 I0707 19:30:33.525583 29975 replica.cpp:537] Replica received write request for position 7 from (67)@172.17.0.7:37568 I0707 19:30:33.556705 29975 leveldb.cpp:341] Persisting action (688 bytes) to leveldb took 31.103148ms I0707 19:30:33.556799 29975 replica.cpp:712] Persisted action at 7 I0707 19:30:33.558219 29982 replica.cpp:691] Replica received learned notice for position 7 from @0.0.0.0:0 I0707 19:30:33.590251 29982 leveldb.cpp:341] Persisting action (690 bytes) to leveldb took 32.026154ms I0707 19:30:33.590347 29982 replica.cpp:712] Persisted action at 7 I0707 19:30:33.590395 29982 replica.cpp:697] Replica learned APPEND action at position 7 I0707 19:30:33.595444 29975 log.cpp:596] Attempting to truncate the log to 7 I0707 19:30:33.595649 29975 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 8 I0707 19:30:33.596961 29975 replica.cpp:537] Replica received write request for position 8 from (68)@172.17.0.7:37568 I0707 19:30:33.595394 29982 registrar.cpp:509] Successfully updated the 'registry' in 81.936128ms I0707 19:30:33.598295 
29969 master.cpp:4745] Registered agent 443ee691-d272-454c-90fe-959c95948252-S2 at slave(1)@172.17.0.7:37568 (89b080073abb) with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 19:30:33.599622 29984 hierarchical.cpp:478] Added agent 443ee691-d272-454c-90fe-959c95948252-S2 (89b080073abb) with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] (allocated: ) I0707 19:30:33.600498 29978 slave.cpp:1169] Registered with master master@172.17.0.7:37568; given agent ID 443ee691-d272-454c-90fe-959c95948252-S2 I0707 19:30:33.600530 29978 fetcher.cpp:86] Clearing fetcher cache I0707 19:30:33.601135 29978 slave.cpp:1192] Checkpointing SlaveInfo to '/tmp/mesos-zPIQS8/0/meta/slaves/443ee691-d272-454c-90fe-959c95948252-S2/slave.info' I0707 19:30:33.601351 29981 status_update_manager.cpp:181] Resuming sending status updates I0707 19:30:33.601753 29978 slave.cpp:1229] Forwarding total oversubscribed resources I0707 19:30:33.601856 29978 slave.cpp:3760] Received ping from slave-observer(3)@172.17.0.7:37568 I0707 19:30:33.602028 29978 master.cpp:5128] Received update of agent 443ee691-d272-454c-90fe-959c95948252-S2 at slave(1)@172.17.0.7:37568 (89b080073abb) with total oversubscribed resources I0707 19:30:33.602550 29984 hierarchical.cpp:1632] No inverse offers to send out! I0707 19:30:33.602638 29984 hierarchical.cpp:1195] Performed allocation for agent 443ee691-d272-454c-90fe-959c95948252-S2 in 2.976114ms I0707 19:30:33.602772 29984 hierarchical.cpp:542] Agent 443ee691-d272-454c-90fe-959c95948252-S2 (89b080073abb) updated with oversubscribed resources (total: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000]) I0707 19:30:33.603118 29984 hierarchical.cpp:1537] No allocations performed I0707 19:30:33.603154 29984 hierarchical.cpp:1632] No inverse offers to send out! 
I0707 19:30:33.603205 29984 hierarchical.cpp:1195] Performed allocation for agent 443ee691-d272-454c-90fe-959c95948252-S2 in 384101ns I0707 19:30:33.604754 29984 master.cpp:5835] Sending 1 offers to framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 I0707 19:30:33.605154 29984 dynamic_reservation_framework.cpp:84] Received offer 443ee691-d272-454c-90fe-959c95948252-O4 with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 19:30:33.605554 29984 sched.cpp:917] Scheduler::resourceOffers took 415259ns I0707 19:30:33.606514 29984 master.cpp:3468] Processing ACCEPT call for offers: [ 443ee691-d272-454c-90fe-959c95948252-O4 ] on agent 443ee691-d272-454c-90fe-959c95948252-S2 at slave(1)@172.17.0.7:37568 (89b080073abb) for framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 I0707 19:30:33.606920 29984 master.cpp:3144] Authorizing principal 'test' to reserve resources 'cpus(test, test):1; mem(test, test):128' I0707 19:30:33.616320 29979 master.cpp:3695] Applying RESERVE operation for resources cpus(test, test):1; mem(test, test):128 from framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 to agent 443ee691-d272-454c-90fe-959c95948252-S2 at slave(1)@172.17.0.7:37568 (89b080073abb) I0707 19:30:33.616832 29979 master.cpp:7098] Sending checkpointed resources cpus(test, test):1; mem(test, test):128 to agent 443ee691-d272-454c-90fe-959c95948252-S2 at slave(1)@172.17.0.7:37568 (89b080073abb) I0707 19:30:33.620553 29979 hierarchical.cpp:683] Updated allocation of framework 443ee691-d272-454c-90fe-959c95948252-0000 on agent 443ee691-d272-454c-90fe-959c95948252-S2 from cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] to cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128 I0707 19:30:33.621477 29979 hierarchical.cpp:924] Recovered cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: ) on agent 443ee691-d272-454c-90fe-959c95948252-S2 from framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:33.626524 29969 slave.cpp:2600] Updated checkpointed resources from to cpus(test, test):1; mem(test, test):128 I0707 19:30:33.632431 29975 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 35.442219ms I0707 19:30:33.632493 29975 replica.cpp:712] Persisted action at 8 I0707 19:30:33.634073 29975 replica.cpp:691] Replica received learned notice for position 8 from @0.0.0.0:0 I0707 19:30:33.662283 29975 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 28.207233ms I0707 19:30:33.662469 29975 leveldb.cpp:399] Deleting ~2 keys from leveldb took 111311ns I0707 19:30:33.662504 29975 replica.cpp:712] Persisted action at 8 I0707 19:30:33.662557 29975 replica.cpp:697] Replica learned TRUNCATE action at position 8 I0707 19:30:33.750787 29992 exec.cpp:161] Version: 1.0.0 I0707 19:30:33.758561 29975 slave.cpp:2902] Got registration for executor '1' of framework 443ee691-d272-454c-90fe-959c95948252-0000 from executor(1)@172.17.0.7:38689 I0707 19:30:33.766023 29975 slave.cpp:2079] Sending queued task '1' 
to executor '1' of framework 443ee691-d272-454c-90fe-959c95948252-0000 at executor(1)@172.17.0.7:38689 I0707 19:30:33.774989 30060 exec.cpp:236] Executor registered on agent 443ee691-d272-454c-90fe-959c95948252-S1 Received SUBSCRIBED event Subscribed executor on 89b080073abb Received LAUNCH event Starting task 1 /mesos/mesos-1.0.0/_build/src/mesos-containerizer launch --command=""""{""""shell"""":true,""""value"""":""""echo hello""""}"""" --help=""""false"""" --unshare_namespace_mnt=""""false"""" Forked command at 30062 I0707 19:30:33.842072 29981 slave.cpp:3285] Handling status update TASK_RUNNING (UUID: 1a550a5a-a5c4-4d37-9018-f6a41ca4eb14) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 from executor(1)@172.17.0.7:38689 I0707 19:30:33.847599 29974 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 1a550a5a-a5c4-4d37-9018-f6a41ca4eb14) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:33.847682 29974 status_update_manager.cpp:497] Creating StatusUpdate stream for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:33.849678 29974 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: 1a550a5a-a5c4-4d37-9018-f6a41ca4eb14) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 to the agent I0707 19:30:33.850505 29973 slave.cpp:3678] Forwarding the update TASK_RUNNING (UUID: 1a550a5a-a5c4-4d37-9018-f6a41ca4eb14) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 to master@172.17.0.7:37568 I0707 19:30:33.850747 29973 slave.cpp:3572] Status update manager successfully handled status update TASK_RUNNING (UUID: 1a550a5a-a5c4-4d37-9018-f6a41ca4eb14) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:33.850805 29973 slave.cpp:3588] Sending acknowledgement for status update TASK_RUNNING (UUID: 1a550a5a-a5c4-4d37-9018-f6a41ca4eb14) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 to executor(1)@172.17.0.7:38689 I0707 19:30:33.851368 29973 master.cpp:5273] Status update TASK_RUNNING (UUID: 1a550a5a-a5c4-4d37-9018-f6a41ca4eb14) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 from agent 443ee691-d272-454c-90fe-959c95948252-S1 at slave(2)@172.17.0.7:37568 (89b080073abb) I0707 19:30:33.852810 29973 master.cpp:5321] Forwarding status update TASK_RUNNING (UUID: 1a550a5a-a5c4-4d37-9018-f6a41ca4eb14) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:33.853047 29973 master.cpp:6959] Updating the state of task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0707 19:30:33.853245 29973 dynamic_reservation_framework.cpp:211] Task 1 is in state TASK_RUNNING I0707 19:30:33.853276 29973 sched.cpp:1025] Scheduler::statusUpdate took 41044ns I0707 19:30:33.854579 29970 master.cpp:4388] Processing ACKNOWLEDGE call 1a550a5a-a5c4-4d37-9018-f6a41ca4eb14 for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 on agent 443ee691-d272-454c-90fe-959c95948252-S1 I0707 19:30:33.855051 29970 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 1a550a5a-a5c4-4d37-9018-f6a41ca4eb14) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:33.855461 29972 slave.cpp:2671] Status update manager successfully handled status update acknowledgement (UUID: 1a550a5a-a5c4-4d37-9018-f6a41ca4eb14) 
for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.054215 30083 exec.cpp:161] Version: 1.0.0 I0707 19:30:34.069156 29972 slave.cpp:2902] Got registration for executor '0' of framework 443ee691-d272-454c-90fe-959c95948252-0000 from executor(1)@172.17.0.7:37892 I0707 19:30:34.077040 29972 slave.cpp:2079] Sending queued task '0' to executor '0' of framework 443ee691-d272-454c-90fe-959c95948252-0000 at executor(1)@172.17.0.7:37892 I0707 19:30:34.082531 30081 exec.cpp:236] Executor registered on agent 443ee691-d272-454c-90fe-959c95948252-S0 Received SUBSCRIBED event Subscribed executor on 89b080073abb Received LAUNCH event Starting task 0 /mesos/mesos-1.0.0/_build/src/mesos-containerizer launch --command=""""{""""shell"""":true,""""value"""":""""echo hello""""}"""" --help=""""false"""" --unshare_namespace_mnt=""""false"""" Forked command at 30093 I0707 19:30:34.121834 29971 slave.cpp:3285] Handling status update TASK_RUNNING (UUID: 4b52fae2-8c9c-4dd5-8459-729d84b86a2e) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 from executor(1)@172.17.0.7:37892 I0707 19:30:34.125562 29982 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 4b52fae2-8c9c-4dd5-8459-729d84b86a2e) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.125622 29982 status_update_manager.cpp:497] Creating StatusUpdate stream for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.126216 29982 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: 4b52fae2-8c9c-4dd5-8459-729d84b86a2e) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 to the agent I0707 19:30:34.126847 29971 slave.cpp:3678] Forwarding the update TASK_RUNNING (UUID: 4b52fae2-8c9c-4dd5-8459-729d84b86a2e) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 to master@172.17.0.7:37568 I0707 19:30:34.127034 29971 slave.cpp:3572] Status update manager successfully handled status update TASK_RUNNING (UUID: 4b52fae2-8c9c-4dd5-8459-729d84b86a2e) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.127084 29971 slave.cpp:3588] Sending acknowledgement for status update TASK_RUNNING (UUID: 4b52fae2-8c9c-4dd5-8459-729d84b86a2e) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 to executor(1)@172.17.0.7:37892 I0707 19:30:34.128938 29971 master.cpp:5273] Status update TASK_RUNNING (UUID: 4b52fae2-8c9c-4dd5-8459-729d84b86a2e) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 from agent 443ee691-d272-454c-90fe-959c95948252-S0 at slave(3)@172.17.0.7:37568 (89b080073abb) I0707 19:30:34.128993 29971 master.cpp:5321] Forwarding status update TASK_RUNNING (UUID: 4b52fae2-8c9c-4dd5-8459-729d84b86a2e) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.129176 29971 master.cpp:6959] Updating the state of task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0707 19:30:34.129348 29971 dynamic_reservation_framework.cpp:211] Task 0 is in state TASK_RUNNING I0707 19:30:34.129369 29971 sched.cpp:1025] Scheduler::statusUpdate took 31771ns I0707 19:30:34.130749 29971 master.cpp:4388] Processing ACKNOWLEDGE call 4b52fae2-8c9c-4dd5-8459-729d84b86a2e for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 on agent 443ee691-d272-454c-90fe-959c95948252-S0 I0707 
19:30:34.131034 29971 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 4b52fae2-8c9c-4dd5-8459-729d84b86a2e) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.131378 29971 slave.cpp:2671] Status update manager successfully handled status update acknowledgement (UUID: 4b52fae2-8c9c-4dd5-8459-729d84b86a2e) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 hello Command exited with status 0 (pid: 30062) I0707 19:30:34.343664 29977 slave.cpp:3285] Handling status update TASK_FINISHED (UUID: 08fecbb0-8539-4123-916a-37cda28ec934) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 from executor(1)@172.17.0.7:38689 I0707 19:30:34.346058 29984 slave.cpp:6088] Terminating task 1 I0707 19:30:34.350986 29984 status_update_manager.cpp:320] Received status update TASK_FINISHED (UUID: 08fecbb0-8539-4123-916a-37cda28ec934) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.351263 29984 status_update_manager.cpp:374] Forwarding update TASK_FINISHED (UUID: 08fecbb0-8539-4123-916a-37cda28ec934) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 to the agent I0707 19:30:34.353025 29984 slave.cpp:3678] Forwarding the update TASK_FINISHED (UUID: 08fecbb0-8539-4123-916a-37cda28ec934) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 to master@172.17.0.7:37568 I0707 19:30:34.353231 29984 slave.cpp:3572] Status update manager successfully handled status update TASK_FINISHED (UUID: 08fecbb0-8539-4123-916a-37cda28ec934) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.353282 29984 slave.cpp:3588] Sending acknowledgement for status update TASK_FINISHED (UUID: 08fecbb0-8539-4123-916a-37cda28ec934) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 to executor(1)@172.17.0.7:38689 I0707 19:30:34.353657 29969 master.cpp:5273] Status update TASK_FINISHED (UUID: 08fecbb0-8539-4123-916a-37cda28ec934) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 from agent 443ee691-d272-454c-90fe-959c95948252-S1 at slave(2)@172.17.0.7:37568 (89b080073abb) I0707 19:30:34.353708 29969 master.cpp:5321] Forwarding status update TASK_FINISHED (UUID: 08fecbb0-8539-4123-916a-37cda28ec934) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.353883 29969 master.cpp:6959] Updating the state of task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 (latest state: TASK_FINISHED, status update state: TASK_FINISHED) I0707 19:30:34.354302 29969 dynamic_reservation_framework.cpp:208] Task 1 is finished at agent 443ee691-d272-454c-90fe-959c95948252-S1 I0707 19:30:34.354327 29969 sched.cpp:1025] Scheduler::statusUpdate took 42112ns I0707 19:30:34.354652 29969 master.cpp:4388] Processing ACKNOWLEDGE call 08fecbb0-8539-4123-916a-37cda28ec934 for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 on agent 443ee691-d272-454c-90fe-959c95948252-S1 I0707 19:30:34.354730 29969 master.cpp:7025] Removing task 1 with resources cpus(test, test):1; mem(test, test):128 of framework 443ee691-d272-454c-90fe-959c95948252-0000 on agent 443ee691-d272-454c-90fe-959c95948252-S1 at slave(2)@172.17.0.7:37568 (89b080073abb) I0707 19:30:34.357210 29977 hierarchical.cpp:1632] No inverse offers to send out! 
I0707 19:30:34.357326 29977 hierarchical.cpp:1172] Performed allocation for 3 agents in 7.731795ms I0707 19:30:34.358654 29969 master.cpp:5835] Sending 3 offers to framework 443ee691-d272-454c-90fe-959c95948252-0000 (Dynamic Reservation Framework (C++)) at scheduler-a956abb7-0f5d-46e3-a670-a3f684eccbb5@172.17.0.7:37568 I0707 19:30:34.359386 29969 dynamic_reservation_framework.cpp:84] Received offer 443ee691-d272-454c-90fe-959c95948252-O5 with cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 19:30:34.359557 29969 dynamic_reservation_framework.cpp:164] The task on 443ee691-d272-454c-90fe-959c95948252-S0 is running, waiting for task done I0707 19:30:34.359571 29969 dynamic_reservation_framework.cpp:84] Received offer 443ee691-d272-454c-90fe-959c95948252-O6 with cpus(test, test):1; mem(test, test):128; cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 19:30:34.359845 29969 dynamic_reservation_framework.cpp:150] Launching task 2 using offer 443ee691-d272-454c-90fe-959c95948252-O6 I0707 19:30:34.360051 29969 dynamic_reservation_framework.cpp:84] Received offer 443ee691-d272-454c-90fe-959c95948252-O7 with cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] F0707 19:30:34.360167 29969 dynamic_reservation_framework.cpp:135] Check failed: reserved.contains(taskResources) *** Check failure stack trace: *** I0707 19:30:34.361974 29975 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 08fecbb0-8539-4123-916a-37cda28ec934) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.362541 29975 status_update_manager.cpp:528] Cleaning up status update stream for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.362946 29977 hierarchical.cpp:924] Recovered cpus(test, test):1; mem(test, test):128 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: ports(*):[31000-32000]; disk(*):3.70122e+06; cpus(*):15; mem(*):47142) on agent 443ee691-d272-454c-90fe-959c95948252-S1 from framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.363346 29975 slave.cpp:2671] Status update manager successfully handled status update acknowledgement (UUID: 08fecbb0-8539-4123-916a-37cda28ec934) for task 1 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.363533 29975 slave.cpp:6129] Completing task 1 hello Command exited with status 0 (pid: 30093) I0707 19:30:34.421588 29981 slave.cpp:3285] Handling status update TASK_FINISHED (UUID: 14598eb7-e5a3-4aec-9b92-abe6c26957c9) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 from executor(1)@172.17.0.7:37892 I0707 19:30:34.424176 29981 slave.cpp:6088] Terminating task 0 I0707 19:30:34.427489 29970 status_update_manager.cpp:320] Received status update TASK_FINISHED (UUID: 14598eb7-e5a3-4aec-9b92-abe6c26957c9) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.427922 29970 status_update_manager.cpp:374] Forwarding update TASK_FINISHED (UUID: 14598eb7-e5a3-4aec-9b92-abe6c26957c9) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 to the agent I0707 19:30:34.428794 29970 slave.cpp:3678] Forwarding the update TASK_FINISHED (UUID: 14598eb7-e5a3-4aec-9b92-abe6c26957c9) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 to master@172.17.0.7:37568 I0707 19:30:34.429209 29970 slave.cpp:3572] Status update manager successfully handled status update TASK_FINISHED (UUID: 
14598eb7-e5a3-4aec-9b92-abe6c26957c9) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.429819 29970 slave.cpp:3588] Sending acknowledgement for status update TASK_FINISHED (UUID: 14598eb7-e5a3-4aec-9b92-abe6c26957c9) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 to executor(1)@172.17.0.7:37892 I0707 19:30:34.429739 29971 master.cpp:5273] Status update TASK_FINISHED (UUID: 14598eb7-e5a3-4aec-9b92-abe6c26957c9) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 from agent 443ee691-d272-454c-90fe-959c95948252-S0 at slave(3)@172.17.0.7:37568 (89b080073abb) I0707 19:30:34.430163 29971 master.cpp:5321] Forwarding status update TASK_FINISHED (UUID: 14598eb7-e5a3-4aec-9b92-abe6c26957c9) for task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 I0707 19:30:34.430559 29971 master.cpp:6959] Updating the state of task 0 of framework 443ee691-d272-454c-90fe-959c95948252-0000 (latest state: TASK_FINISHED, status update state: TASK_FINISHED) I0707 19:30:34.432127 29971 hierarchical.cpp:924] Recovered cpus(test, test):1; mem(test, test):128 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: ports(*):[31000-32000]; disk(*):3.70122e+06; cpus(*):15; mem(*):47142) on agent 443ee691-d272-454c-90fe-959c95948252-S0 from framework 443ee691-d272-454c-90fe-959c95948252-0000 @ 0x2b569c2093ed google::LogMessage::Fail() @ 0x2b569c2087ce google::LogMessage::SendToLog() @ 0x2b569c2090ad google::LogMessage::Flush() @ 0x2b569c20c528 google::LogMessageFatal::~LogMessageFatal() @ 0x45dba0 DynamicReservationScheduler::resourceOffers() @ 0x2b569b0a75d8 mesos::internal::SchedulerProcess::resourceOffers() @ 0x2b569b0bb3cf ProtobufProcess<>::handler2<>() @ 0x2b569b0bcdcd _ZNSt5_BindIFPFvPN5mesos8internal16SchedulerProcessEMS2_FvRKN7process4UPIDERKSt6vectorINS0_5OfferESaIS9_EERKS8_ISsSaISsEEEMNS1_21ResourceOffersMessageEKFRKN6google8protobuf16RepeatedPtrFieldIS9_EEvEMSK_KFRKNSN_ISsEEvES7_RKSsES3_SJ_SS_SX_St12_PlaceholderILi1EES12_ILi2EEEE6__callIvJS7_SZ_EJLm0ELm1ELm2ELm3ELm4ELm5EEEET_OSt5tupleIJDpT0_EESt12_Index_tupleIJXspT1_EEE @ 0x2b569b0bcc16 _ZNSt5_BindIFPFvPN5mesos8internal16SchedulerProcessEMS2_FvRKN7process4UPIDERKSt6vectorINS0_5OfferESaIS9_EERKS8_ISsSaISsEEEMNS1_21ResourceOffersMessageEKFRKN6google8protobuf16RepeatedPtrFieldIS9_EEvEMSK_KFRKNSN_ISsEEvES7_RKSsES3_SJ_SS_SX_St12_PlaceholderILi1EES12_ILi2EEEEclIJS7_SZ_EvEET0_DpOT_ @ 0x2b569b0bc9b7 std::_Function_handler<>::_M_invoke() @ 0x2b569a964bb0 std::function<>::operator()() @ 0x2b569b0a2acb ProtobufProcess<>::visit() @ 0x2b569b0a2be7 ProtobufProcess<>::visit() @ 0x2b569af25d8e process::MessageEvent::visit() @ 0x2b569a95aad1 process::ProcessBase::serve() @ 0x2b569c120984 process::ProcessManager::resume() @ 0x2b569c12b6fc process::ProcessManager::init_threads()::$_0::operator()() @ 0x2b569c12b605 _ZNSt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvE3$_0vEE9_M_invokeIJEEEvSt12_Index_tupleIJXspT_EEE @ 0x2b569c12b5d5 std::_Bind_simple<>::operator()() @ 0x2b569c12b5ac std::thread::_Impl<>::_M_run() @ 0x2b569d80aa60 (unknown) @ 0x2b569df81184 start_thread @ 0x2b569e29137d (unknown) ../../src/tests/script.cpp:80: Failure Failed dynamic_reservation_framework_test.sh terminated with signal Aborted [ FAILED ] ExamplesTest.DynamicReservationFramework (7146 ms) [ RUN ] ExamplesTest.DynamicReservationFramework Using temporary directory '/tmp/ExamplesTest_DynamicReservationFramework_mXcx0v' 
/mesos/mesos-1.0.0/src/tests/dynamic_reservation_framework_test.sh: line 19: /mesos/mesos-1.0.0/_build/src/colors.sh: No such file or directory /mesos/mesos-1.0.0/src/tests/dynamic_reservation_framework_test.sh: line 20: /mesos/mesos-1.0.0/_build/src/atexit.sh: No such file or directory WARNING: Logging before InitGoogleLogging() is written to STDERR I0707 18:06:58.103094 31500 resources.cpp:572] Parsing resources as JSON failed: cpus:1;mem:128 Trying semicolon-delimited string format instead I0707 18:06:58.116397 31500 process.cpp:1066] libprocess is initialized on 172.17.0.7:39581 with 16 worker threads I0707 18:06:58.116585 31500 logging.cpp:199] Logging to STDERR I0707 18:06:58.206984 31500 leveldb.cpp:174] Opened db in 84.981237ms I0707 18:06:58.240731 31500 leveldb.cpp:181] Compacted db in 33.702091ms I0707 18:06:58.240833 31500 leveldb.cpp:196] Created db iterator in 66372ns I0707 18:06:58.240985 31500 leveldb.cpp:202] Seeked to beginning of db in 4465ns I0707 18:06:58.241019 31500 leveldb.cpp:271] Iterated through 0 keys in the db in 450ns I0707 18:06:58.241217 31500 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0707 18:06:58.244328 31500 local.cpp:255] Creating default 'local' authorizer I0707 18:06:58.244483 31529 recover.cpp:451] Starting replica recovery I0707 18:06:58.245290 31529 recover.cpp:477] Replica is in EMPTY status I0707 18:06:58.248134 31529 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (4)@172.17.0.7:39581 I0707 18:06:58.249553 31534 recover.cpp:197] Received a recover response from a replica in EMPTY status I0707 18:06:58.250586 31525 recover.cpp:568] Updating replica status to STARTING I0707 18:06:58.252516 31533 master.cpp:382] Master 7892fbb2-1ac1-450f-8576-10c1df35f765 (753c2ae3a486) started on 172.17.0.7:39581 I0707 18:06:58.252545 31533 master.cpp:384] Flags at startup: --acls=""""permissive: true register_frameworks { principals { type: ANY } roles { type: SOME values: """"test"""" } } """" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""false"""" --authenticate_frameworks=""""false"""" --authenticate_http=""""false"""" --authenticate_http_frameworks=""""false"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/ExamplesTest_DynamicReservationFramework_mXcx0v/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""20secs"""" --registry_strict=""""false"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/mesos/mesos-1.0.0/src/webui"""" --work_dir=""""/tmp/mesos-IFR4rG"""" --zk_session_timeout=""""10secs"""" I0707 18:06:58.253370 31533 master.cpp:436] Master allowing unauthenticated frameworks to register I0707 18:06:58.253386 31533 master.cpp:450] Master allowing unauthenticated agents to register I0707 18:06:58.253397 31533 master.cpp:464] Master allowing HTTP frameworks to 
register without authentication I0707 18:06:58.253464 31533 credentials.hpp:37] Loading credentials for authentication from '/tmp/ExamplesTest_DynamicReservationFramework_mXcx0v/credentials' W0707 18:06:58.253582 31533 credentials.hpp:52] Permissions on credentials file '/tmp/ExamplesTest_DynamicReservationFramework_mXcx0v/credentials' are too open. It is recommended that your credentials file is NOT accessible by others. I0707 18:06:58.253777 31533 master.cpp:506] Using default 'crammd5' authenticator I0707 18:06:58.253957 31533 authenticator.cpp:519] Initializing server SASL I0707 18:06:58.259035 31533 auxprop.cpp:73] Initialized in-memory auxiliary property plugin I0707 18:06:58.259171 31533 master.cpp:705] Authorization enabled I0707 18:06:58.260638 31538 hierarchical.cpp:151] Initialized hierarchical allocator process I0707 18:06:58.260769 31538 whitelist_watcher.cpp:77] No whitelist given I0707 18:06:58.261309 31500 containerizer.cpp:196] Using isolation: filesystem/posix,posix/cpu,posix/mem,network/cni W0707 18:06:58.266885 31500 backend.cpp:75] Failed to create 'aufs' backend: AufsBackend requires root privileges, but is running as user mesos W0707 18:06:58.267235 31500 backend.cpp:75] Failed to create 'bind' backend: BindBackend requires root privileges I0707 18:06:58.274190 31525 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 23.019016ms I0707 18:06:58.274230 31525 replica.cpp:320] Persisted replica status to STARTING I0707 18:06:58.274695 31527 recover.cpp:477] Replica is in STARTING status I0707 18:06:58.276667 31531 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (15)@172.17.0.7:39581 I0707 18:06:58.277261 31538 recover.cpp:197] Received a recover response from a replica in STARTING status I0707 18:06:58.277840 31536 recover.cpp:568] Updating replica status to VOTING I0707 18:06:58.279667 31532 slave.cpp:205] Agent started on 1)@172.17.0.7:39581 I0707 18:06:58.279690 31532 slave.cpp:206] Flags at startup: --acls=""""permissive: true register_frameworks { principals { type: ANY } roles { type: SOME values: """"test"""" } } """" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""false"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/mesos/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --image_provisioner_backend=""""copy"""" 
--initialize_driver_logging=""""true"""" --isolation=""""filesystem/posix,posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/mesos/mesos-1.0.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""1secs"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/mesos-IFR4rG/0"""" I0707 18:06:58.280802 31532 resources.cpp:572] Parsing resources as JSON failed: Trying semicolon-delimited string format instead I0707 18:06:58.280975 31532 resources.cpp:572] Parsing resources as JSON failed: Trying semicolon-delimited string format instead I0707 18:06:58.281785 31532 slave.cpp:594] Agent resources: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:06:58.281886 31532 slave.cpp:602] Agent attributes: [ ] I0707 18:06:58.281918 31532 slave.cpp:607] Agent hostname: 753c2ae3a486 I0707 18:06:58.285878 31500 containerizer.cpp:196] Using isolation: filesystem/posix,posix/cpu,posix/mem,network/cni I0707 18:06:58.286325 31535 state.cpp:57] Recovering state from '/tmp/mesos-IFR4rG/0/meta' I0707 18:06:58.289796 31525 master.cpp:1973] The newly elected leader is master@172.17.0.7:39581 with id 7892fbb2-1ac1-450f-8576-10c1df35f765 I0707 18:06:58.289837 31525 master.cpp:1986] Elected as the leading master! I0707 18:06:58.289881 31525 master.cpp:1673] Recovering from registrar I0707 18:06:58.290081 31528 registrar.cpp:332] Recovering registrar I0707 18:06:58.298210 31532 status_update_manager.cpp:200] Recovering status update manager I0707 18:06:58.298463 31532 containerizer.cpp:522] Recovering containerizer I0707 18:06:58.310628 31530 provisioner.cpp:253] Provisioner recovery complete I0707 18:06:58.311688 31531 slave.cpp:4856] Finished recovery I0707 18:06:58.314482 31538 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 36.508519ms I0707 18:06:58.314538 31538 replica.cpp:320] Persisted replica status to VOTING I0707 18:06:58.314743 31538 recover.cpp:582] Successfully joined the Paxos group I0707 18:06:58.315083 31538 recover.cpp:466] Recover process terminated I0707 18:06:58.316979 31538 log.cpp:553] Attempting to start the writer W0707 18:06:58.319859 31500 backend.cpp:75] Failed to create 'aufs' backend: AufsBackend requires root privileges, but is running as user mesos W0707 18:06:58.320173 31500 backend.cpp:75] Failed to create 'bind' backend: BindBackend requires root privileges I0707 18:06:58.325480 31531 slave.cpp:5028] Querying resource estimator for oversubscribable resources I0707 18:06:58.326918 31525 slave.cpp:5042] Received oversubscribable resources from the resource estimator I0707 18:06:58.332331 31531 status_update_manager.cpp:174] Pausing sending status updates I0707 18:06:58.332393 31534 slave.cpp:969] New master detected at master@172.17.0.7:39581 I0707 18:06:58.332491 31534 slave.cpp:990] No credentials provided. 
Attempting to register without authentication I0707 18:06:58.332593 31534 slave.cpp:1001] Detecting new master I0707 18:06:58.333231 31539 replica.cpp:493] Replica received implicit promise request from (30)@172.17.0.7:39581 with proposal 1 I0707 18:06:58.340519 31527 slave.cpp:205] Agent started on 2)@172.17.0.7:39581 I0707 18:06:58.340543 31527 slave.cpp:206] Flags at startup: --acls=""""permissive: true register_frameworks { principals { type: ANY } roles { type: SOME values: """"test"""" } } """" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""false"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/mesos/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""filesystem/posix,posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/mesos/mesos-1.0.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""1secs"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/mesos-IFR4rG/1"""" I0707 18:06:58.341367 31527 resources.cpp:572] Parsing resources as JSON failed: Trying semicolon-delimited string format instead I0707 18:06:58.343765 31527 resources.cpp:572] Parsing resources as JSON failed: Trying semicolon-delimited string format instead I0707 18:06:58.582685 31527 slave.cpp:594] Agent resources: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:06:58.582808 31527 slave.cpp:602] Agent attributes: [ ] I0707 18:06:58.582825 31527 slave.cpp:607] Agent hostname: 753c2ae3a486 I0707 18:06:58.583901 31536 slave.cpp:1529] Will retry registration in 1.442011578secs if necessary I0707 18:06:58.584017 31536 master.cpp:1500] Dropping 'mesos.internal.RegisterSlaveMessage' message since not recovered yet I0707 18:06:58.586772 31527 state.cpp:57] Recovering state from 
'/tmp/mesos-IFR4rG/1/meta' I0707 18:06:58.587353 31527 status_update_manager.cpp:200] Recovering status update manager I0707 18:06:58.587736 31525 containerizer.cpp:522] Recovering containerizer I0707 18:06:58.590937 31500 containerizer.cpp:196] Using isolation: filesystem/posix,posix/cpu,posix/mem,network/cni I0707 18:06:58.592527 31529 provisioner.cpp:253] Provisioner recovery complete I0707 18:06:58.594597 31527 slave.cpp:4856] Finished recovery I0707 18:06:58.595170 31527 slave.cpp:5028] Querying resource estimator for oversubscribable resources I0707 18:06:58.596602 31529 slave.cpp:5042] Received oversubscribable resources from the resource estimator W0707 18:06:58.597026 31500 backend.cpp:75] Failed to create 'aufs' backend: AufsBackend requires root privileges, but is running as user mesos I0707 18:06:58.597048 31527 slave.cpp:969] New master detected at master@172.17.0.7:39581 I0707 18:06:58.597074 31527 slave.cpp:990] No credentials provided. Attempting to register without authentication I0707 18:06:58.597183 31529 status_update_manager.cpp:174] Pausing sending status updates W0707 18:06:58.597195 31500 backend.cpp:75] Failed to create 'bind' backend: BindBackend requires root privileges I0707 18:06:58.597316 31527 slave.cpp:1001] Detecting new master I0707 18:06:58.622475 31528 slave.cpp:205] Agent started on 3)@172.17.0.7:39581 I0707 18:06:58.622517 31528 slave.cpp:206] Flags at startup: --acls=""""permissive: true register_frameworks { principals { type: ANY } roles { type: SOME values: """"test"""" } } """" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""false"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/mesos/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""true"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""filesystem/posix,posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/mesos/mesos-1.0.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""1secs"""" --revocable_cpu_low_priority=""""true"""" 
--sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/mesos-IFR4rG/2"""" I0707 18:06:58.623533 31528 resources.cpp:572] Parsing resources as JSON failed: Trying semicolon-delimited string format instead I0707 18:06:58.623703 31528 resources.cpp:572] Parsing resources as JSON failed: Trying semicolon-delimited string format instead I0707 18:06:58.624135 31528 slave.cpp:594] Agent resources: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:06:58.624209 31528 slave.cpp:602] Agent attributes: [ ] I0707 18:06:58.624224 31528 slave.cpp:607] Agent hostname: 753c2ae3a486 I0707 18:06:58.628917 31533 state.cpp:57] Recovering state from '/tmp/mesos-IFR4rG/2/meta' I0707 18:06:58.629308 31533 status_update_manager.cpp:200] Recovering status update manager I0707 18:06:58.629530 31533 containerizer.cpp:522] Recovering containerizer I0707 18:06:58.631003 31500 sched.cpp:226] Version: 1.0.0 I0707 18:06:58.632382 31536 provisioner.cpp:253] Provisioner recovery complete I0707 18:06:58.632655 31530 sched.cpp:330] New master detected at master@172.17.0.7:39581 I0707 18:06:58.632758 31530 sched.cpp:341] No credentials provided. Attempting to register without authentication I0707 18:06:58.632786 31530 sched.cpp:820] Sending SUBSCRIBE call to master@172.17.0.7:39581 I0707 18:06:58.632921 31530 sched.cpp:853] Will retry registration in 1.972934225secs if necessary I0707 18:06:58.633108 31535 master.cpp:1500] Dropping 'mesos.scheduler.Call' message since not recovered yet I0707 18:06:58.633489 31530 slave.cpp:4856] Finished recovery I0707 18:06:58.633956 31530 slave.cpp:5028] Querying resource estimator for oversubscribable resources I0707 18:06:58.634172 31530 slave.cpp:969] New master detected at master@172.17.0.7:39581 I0707 18:06:58.634199 31530 slave.cpp:990] No credentials provided. 
Attempting to register without authentication I0707 18:06:58.634241 31530 slave.cpp:1001] Detecting new master I0707 18:06:58.634254 31538 status_update_manager.cpp:174] Pausing sending status updates I0707 18:06:58.634354 31530 slave.cpp:5042] Received oversubscribable resources from the resource estimator I0707 18:06:58.646832 31538 slave.cpp:1529] Will retry registration in 421.765838ms if necessary I0707 18:06:58.647001 31538 master.cpp:1500] Dropping 'mesos.internal.RegisterSlaveMessage' message since not recovered yet I0707 18:06:58.814708 31539 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 481.415169ms I0707 18:06:58.814787 31539 replica.cpp:342] Persisted promised to 1 I0707 18:06:58.816249 31539 coordinator.cpp:238] Coordinator attempting to fill missing positions I0707 18:06:58.820055 31525 replica.cpp:388] Replica received explicit promise request from (49)@172.17.0.7:39581 for position 0 with proposal 2 I0707 18:06:58.865041 31525 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 44.725594ms I0707 18:06:58.865135 31525 replica.cpp:712] Persisted action at 0 I0707 18:06:58.868293 31531 replica.cpp:537] Replica received write request for position 0 from (50)@172.17.0.7:39581 I0707 18:06:58.868521 31531 leveldb.cpp:436] Reading position from leveldb took 71896ns I0707 18:06:58.901504 31531 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 32.785731ms I0707 18:06:58.901585 31531 replica.cpp:712] Persisted action at 0 I0707 18:06:58.903040 31534 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 I0707 18:06:58.934504 31534 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 31.403773ms I0707 18:06:58.934590 31534 replica.cpp:712] Persisted action at 0 I0707 18:06:58.934640 31534 replica.cpp:697] Replica learned NOP action at position 0 I0707 18:06:58.936309 31534 log.cpp:569] Writer started with ending position 0 I0707 18:06:58.941576 31534 leveldb.cpp:436] Reading position from leveldb took 74550ns I0707 18:06:58.950364 31534 registrar.cpp:365] Successfully fetched the registry (0B) in 660.20608ms I0707 18:06:58.952555 31534 registrar.cpp:464] Applied 1 operations in 61140ns; attempting to update the 'registry' I0707 18:06:58.955271 31535 log.cpp:577] Attempting to append 168 bytes to the log I0707 18:06:58.955555 31534 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 I0707 18:06:58.958256 31534 replica.cpp:537] Replica received write request for position 1 from (51)@172.17.0.7:39581 I0707 18:06:59.000967 31534 leveldb.cpp:341] Persisting action (187 bytes) to leveldb took 42.408267ms I0707 18:06:59.001049 31534 replica.cpp:712] Persisted action at 1 I0707 18:06:59.004665 31535 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 I0707 18:06:59.051337 31535 leveldb.cpp:341] Persisting action (189 bytes) to leveldb took 46.611618ms I0707 18:06:59.051431 31535 replica.cpp:712] Persisted action at 1 I0707 18:06:59.051483 31535 replica.cpp:697] Replica learned APPEND action at position 1 I0707 18:06:59.054003 31537 registrar.cpp:509] Successfully updated the 'registry' in 101.27104ms I0707 18:06:59.054268 31537 registrar.cpp:395] Successfully recovered registrar I0707 18:06:59.054762 31525 log.cpp:596] Attempting to truncate the log to 1 I0707 18:06:59.055158 31537 master.cpp:1781] Recovered 0 agents from the Registry (129B) ; allowing 10mins for agents to re-register I0707 18:06:59.055218 31539 coordinator.cpp:348] Coordinator attempting to write 
TRUNCATE action at position 2 I0707 18:06:59.055346 31525 hierarchical.cpp:178] Skipping recovery of hierarchical allocator: nothing to recover I0707 18:06:59.056404 31539 replica.cpp:537] Replica received write request for position 2 from (52)@172.17.0.7:39581 I0707 18:06:59.069908 31537 slave.cpp:1529] Will retry registration in 2.057528722secs if necessary I0707 18:06:59.070539 31530 master.cpp:4676] Registering agent at slave(2)@172.17.0.7:39581 (753c2ae3a486) with id 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 I0707 18:06:59.071838 31530 registrar.cpp:464] Applied 1 operations in 139897ns; attempting to update the 'registry' I0707 18:06:59.101510 31539 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 44.978496ms I0707 18:06:59.101680 31539 replica.cpp:712] Persisted action at 2 I0707 18:06:59.102838 31527 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 I0707 18:06:59.165279 31527 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 62.339577ms I0707 18:06:59.165451 31527 leveldb.cpp:399] Deleting ~1 keys from leveldb took 89301ns I0707 18:06:59.165479 31527 replica.cpp:712] Persisted action at 2 I0707 18:06:59.165531 31527 replica.cpp:697] Replica learned TRUNCATE action at position 2 I0707 18:06:59.167273 31533 log.cpp:577] Attempting to append 337 bytes to the log I0707 18:06:59.167647 31526 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 I0707 18:06:59.169075 31526 replica.cpp:537] Replica received write request for position 3 from (53)@172.17.0.7:39581 I0707 18:06:59.224467 31526 leveldb.cpp:341] Persisting action (356 bytes) to leveldb took 55.235676ms I0707 18:06:59.224548 31526 replica.cpp:712] Persisted action at 3 I0707 18:06:59.226044 31526 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 I0707 18:06:59.262912 31525 hierarchical.cpp:1537] No allocations performed I0707 18:06:59.263056 31525 hierarchical.cpp:1172] Performed allocation for 0 agents in 345421ns I0707 18:06:59.283505 31526 leveldb.cpp:341] Persisting action (358 bytes) to leveldb took 57.362529ms I0707 18:06:59.283589 31526 replica.cpp:712] Persisted action at 3 I0707 18:06:59.283638 31526 replica.cpp:697] Replica learned APPEND action at position 3 I0707 18:06:59.287037 31538 registrar.cpp:509] Successfully updated the 'registry' in 215.051008ms I0707 18:06:59.287451 31533 log.cpp:596] Attempting to truncate the log to 3 I0707 18:06:59.288493 31539 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 I0707 18:06:59.289965 31532 replica.cpp:537] Replica received write request for position 4 from (54)@172.17.0.7:39581 I0707 18:06:59.291105 31533 master.cpp:4745] Registered agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:06:59.291481 31533 slave.cpp:1169] Registered with master master@172.17.0.7:39581; given agent ID 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 I0707 18:06:59.291512 31533 fetcher.cpp:86] Clearing fetcher cache I0707 18:06:59.291749 31539 hierarchical.cpp:478] Added agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 (753c2ae3a486) with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] (allocated: ) I0707 18:06:59.292026 31533 slave.cpp:1192] Checkpointing SlaveInfo to '/tmp/mesos-IFR4rG/1/meta/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S0/slave.info' I0707 18:06:59.292233 31525 status_update_manager.cpp:181] Resuming 
sending status updates I0707 18:06:59.292330 31539 hierarchical.cpp:1537] No allocations performed I0707 18:06:59.292726 31533 slave.cpp:1229] Forwarding total oversubscribed resources I0707 18:06:59.292819 31539 hierarchical.cpp:1195] Performed allocation for agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 in 968472ns I0707 18:06:59.292860 31533 slave.cpp:3760] Received ping from slave-observer(1)@172.17.0.7:39581 I0707 18:06:59.293042 31525 master.cpp:5128] Received update of agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) with total oversubscribed resources I0707 18:06:59.293349 31525 hierarchical.cpp:542] Agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 (753c2ae3a486) updated with oversubscribed resources (total: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) I0707 18:06:59.294507 31525 hierarchical.cpp:1537] No allocations performed I0707 18:06:59.294569 31525 hierarchical.cpp:1195] Performed allocation for agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 in 1.17629ms I0707 18:06:59.343272 31539 slave.cpp:1529] Will retry registration in 1.912221755secs if necessary I0707 18:06:59.343785 31528 master.cpp:4676] Registering agent at slave(3)@172.17.0.7:39581 (753c2ae3a486) with id 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 I0707 18:06:59.344733 31528 registrar.cpp:464] Applied 1 operations in 125766ns; attempting to update the 'registry' I0707 18:06:59.362001 31532 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 71.968888ms I0707 18:06:59.362097 31532 replica.cpp:712] Persisted action at 4 I0707 18:06:59.363359 31527 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 I0707 18:06:59.412232 31527 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 48.705675ms I0707 18:06:59.412684 31527 leveldb.cpp:399] Deleting ~2 keys from leveldb took 95246ns I0707 18:06:59.412853 31527 replica.cpp:712] Persisted action at 4 I0707 18:06:59.413059 31527 replica.cpp:697] Replica learned TRUNCATE action at position 4 I0707 18:06:59.414814 31536 log.cpp:577] Attempting to append 503 bytes to the log I0707 18:06:59.414932 31527 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 5 I0707 18:06:59.415906 31536 replica.cpp:537] Replica received write request for position 5 from (55)@172.17.0.7:39581 I0707 18:06:59.462564 31536 leveldb.cpp:341] Persisting action (522 bytes) to leveldb took 46.531781ms I0707 18:06:59.462729 31536 replica.cpp:712] Persisted action at 5 I0707 18:06:59.463939 31536 replica.cpp:691] Replica received learned notice for position 5 from @0.0.0.0:0 I0707 18:06:59.534988 31536 leveldb.cpp:341] Persisting action (524 bytes) to leveldb took 70.884462ms I0707 18:06:59.535076 31536 replica.cpp:712] Persisted action at 5 I0707 18:06:59.535125 31536 replica.cpp:697] Replica learned APPEND action at position 5 I0707 18:06:59.540696 31533 registrar.cpp:509] Successfully updated the 'registry' in 195.819008ms I0707 18:06:59.541802 31533 log.cpp:596] Attempting to truncate the log to 5 I0707 18:06:59.544545 31533 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 6 I0707 18:06:59.545786 31536 master.cpp:4745] Registered agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 at slave(3)@172.17.0.7:39581 (753c2ae3a486) with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:06:59.546710 31536 hierarchical.cpp:478] Added agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 (753c2ae3a486) with cpus(*):16; 
mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] (allocated: ) I0707 18:06:59.547562 31536 hierarchical.cpp:1537] No allocations performed I0707 18:06:59.547972 31536 hierarchical.cpp:1195] Performed allocation for agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 in 540048ns I0707 18:06:59.547883 31533 slave.cpp:1169] Registered with master master@172.17.0.7:39581; given agent ID 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 I0707 18:06:59.549479 31533 fetcher.cpp:86] Clearing fetcher cache I0707 18:06:59.549988 31533 slave.cpp:1192] Checkpointing SlaveInfo to '/tmp/mesos-IFR4rG/2/meta/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S1/slave.info' I0707 18:06:59.550889 31533 slave.cpp:1229] Forwarding total oversubscribed resources I0707 18:06:59.551502 31533 slave.cpp:3760] Received ping from slave-observer(2)@172.17.0.7:39581 I0707 18:06:59.550431 31536 replica.cpp:537] Replica received write request for position 6 from (56)@172.17.0.7:39581 I0707 18:06:59.551846 31533 status_update_manager.cpp:181] Resuming sending status updates I0707 18:06:59.552496 31533 master.cpp:5128] Received update of agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 at slave(3)@172.17.0.7:39581 (753c2ae3a486) with total oversubscribed resources I0707 18:06:59.552784 31533 hierarchical.cpp:542] Agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 (753c2ae3a486) updated with oversubscribed resources (total: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) I0707 18:06:59.553241 31533 hierarchical.cpp:1537] No allocations performed I0707 18:06:59.553311 31533 hierarchical.cpp:1195] Performed allocation for agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 in 481606ns I0707 18:06:59.586164 31536 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 34.414546ms I0707 18:06:59.586247 31536 replica.cpp:712] Persisted action at 6 I0707 18:06:59.587699 31536 replica.cpp:691] Replica received learned notice for position 6 from @0.0.0.0:0 I0707 18:06:59.619674 31536 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 31.87307ms I0707 18:06:59.619837 31536 leveldb.cpp:399] Deleting ~2 keys from leveldb took 82424ns I0707 18:06:59.619864 31536 replica.cpp:712] Persisted action at 6 I0707 18:06:59.619913 31536 replica.cpp:697] Replica learned TRUNCATE action at position 6 I0707 18:07:00.026949 31524 slave.cpp:1529] Will retry registration in 2.762006963secs if necessary I0707 18:07:00.027307 31524 master.cpp:4676] Registering agent at slave(1)@172.17.0.7:39581 (753c2ae3a486) with id 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 I0707 18:07:00.028275 31525 registrar.cpp:464] Applied 1 operations in 158666ns; attempting to update the 'registry' I0707 18:07:00.032213 31524 log.cpp:577] Attempting to append 669 bytes to the log I0707 18:07:00.032482 31525 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 7 I0707 18:07:00.034119 31525 replica.cpp:537] Replica received write request for position 7 from (57)@172.17.0.7:39581 I0707 18:07:00.048461 31525 leveldb.cpp:341] Persisting action (688 bytes) to leveldb took 14.18893ms I0707 18:07:00.048629 31525 replica.cpp:712] Persisted action at 7 I0707 18:07:00.050050 31530 replica.cpp:691] Replica received learned notice for position 7 from @0.0.0.0:0 I0707 18:07:00.113382 31530 leveldb.cpp:341] Persisting action (690 bytes) to leveldb took 63.230657ms I0707 18:07:00.113576 31530 replica.cpp:712] Persisted action at 7 I0707 18:07:00.113853 31530 replica.cpp:697] Replica learned APPEND action at position 7 I0707 18:07:00.117030 
31526 registrar.cpp:509] Successfully updated the 'registry' in 88.66816ms I0707 18:07:00.117674 31530 log.cpp:596] Attempting to truncate the log to 7 I0707 18:07:00.117910 31528 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 8 I0707 18:07:00.118896 31526 master.cpp:4745] Registered agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:07:00.119174 31526 replica.cpp:537] Replica received write request for position 8 from (58)@172.17.0.7:39581 I0707 18:07:00.119213 31531 slave.cpp:1169] Registered with master master@172.17.0.7:39581; given agent ID 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 I0707 18:07:00.119254 31531 fetcher.cpp:86] Clearing fetcher cache I0707 18:07:00.119482 31527 status_update_manager.cpp:181] Resuming sending status updates I0707 18:07:00.119736 31531 slave.cpp:1192] Checkpointing SlaveInfo to '/tmp/mesos-IFR4rG/0/meta/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S2/slave.info' I0707 18:07:00.120182 31531 slave.cpp:1229] Forwarding total oversubscribed resources I0707 18:07:00.120273 31531 slave.cpp:3760] Received ping from slave-observer(3)@172.17.0.7:39581 I0707 18:07:00.120461 31531 master.cpp:5128] Received update of agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) with total oversubscribed resources I0707 18:07:00.120740 31539 hierarchical.cpp:478] Added agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 (753c2ae3a486) with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] (allocated: ) I0707 18:07:00.120904 31539 hierarchical.cpp:1537] No allocations performed I0707 18:07:00.121006 31539 hierarchical.cpp:1195] Performed allocation for agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 in 222368ns I0707 18:07:00.121196 31539 hierarchical.cpp:542] Agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 (753c2ae3a486) updated with oversubscribed resources (total: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) I0707 18:07:00.121511 31539 hierarchical.cpp:1537] No allocations performed I0707 18:07:00.121609 31539 hierarchical.cpp:1195] Performed allocation for agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 in 178299ns I0707 18:07:00.168759 31526 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 49.379822ms I0707 18:07:00.168941 31526 replica.cpp:712] Persisted action at 8 I0707 18:07:00.173840 31535 replica.cpp:691] Replica received learned notice for position 8 from @0.0.0.0:0 I0707 18:07:00.227267 31535 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 53.368171ms I0707 18:07:00.227453 31535 leveldb.cpp:399] Deleting ~2 keys from leveldb took 98129ns I0707 18:07:00.227483 31535 replica.cpp:712] Persisted action at 8 I0707 18:07:00.227536 31535 replica.cpp:697] Replica learned TRUNCATE action at position 8 I0707 18:07:00.264490 31537 hierarchical.cpp:1537] No allocations performed I0707 18:07:00.264606 31537 hierarchical.cpp:1172] Performed allocation for 3 agents in 410062ns I0707 18:07:00.606685 31537 sched.cpp:820] Sending SUBSCRIBE call to master@172.17.0.7:39581 I0707 18:07:00.606863 31537 sched.cpp:853] Will retry registration in 3.19389747secs if necessary I0707 18:07:00.607161 31537 master.cpp:2550] Received SUBSCRIBE call for framework 'Dynamic Reservation Framework (C++)' at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:00.607316 31537 master.cpp:2012] Authorizing framework 
principal 'test' to receive offers for role 'test' I0707 18:07:00.608633 31537 master.cpp:2626] Subscribing framework Dynamic Reservation Framework (C++) with checkpointing disabled and capabilities [ ] I0707 18:07:00.610199 31535 hierarchical.cpp:271] Added framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:00.611109 31537 sched.cpp:743] Framework registered with 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:00.611136 31537 dynamic_reservation_framework.cpp:73] Registered! I0707 18:07:00.611152 31537 sched.cpp:757] Scheduler::registered took 18259ns I0707 18:07:00.613695 31535 hierarchical.cpp:1632] No inverse offers to send out! I0707 18:07:00.613819 31535 hierarchical.cpp:1172] Performed allocation for 3 agents in 3.590355ms I0707 18:07:00.615160 31535 master.cpp:5835] Sending 3 offers to framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:00.616071 31535 dynamic_reservation_framework.cpp:84] Received offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O0 with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:07:00.616686 31535 dynamic_reservation_framework.cpp:84] Received offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O1 with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:07:00.617288 31535 dynamic_reservation_framework.cpp:84] Received offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O2 with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:07:00.617588 31535 sched.cpp:917] Scheduler::resourceOffers took 1.516257ms I0707 18:07:00.619355 31531 master.cpp:3468] Processing ACCEPT call for offers: [ 7892fbb2-1ac1-450f-8576-10c1df35f765-O0 ] on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:00.619647 31531 master.cpp:3144] Authorizing principal 'test' to reserve resources 'cpus(test, test):1; mem(test, test):128' I0707 18:07:00.621302 31531 master.cpp:3468] Processing ACCEPT call for offers: [ 7892fbb2-1ac1-450f-8576-10c1df35f765-O1 ] on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 at slave(3)@172.17.0.7:39581 (753c2ae3a486) for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:00.621472 31531 master.cpp:3144] Authorizing principal 'test' to reserve resources 'cpus(test, test):1; mem(test, test):128' I0707 18:07:00.622601 31531 master.cpp:3468] Processing ACCEPT call for offers: [ 7892fbb2-1ac1-450f-8576-10c1df35f765-O2 ] on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:00.622714 31531 master.cpp:3144] Authorizing principal 'test' to reserve resources 'cpus(test, test):1; mem(test, test):128' I0707 18:07:00.623579 31531 master.cpp:3695] Applying RESERVE operation for resources cpus(test, test):1; mem(test, test):128 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 to agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at 
slave(1)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:00.624171 31531 master.cpp:7098] Sending checkpointed resources cpus(test, test):1; mem(test, test):128 to agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:00.624826 31531 master.cpp:3695] Applying RESERVE operation for resources cpus(test, test):1; mem(test, test):128 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 to agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 at slave(3)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:00.625102 31524 slave.cpp:2600] Updated checkpointed resources from to cpus(test, test):1; mem(test, test):128 I0707 18:07:00.625347 31531 master.cpp:7098] Sending checkpointed resources cpus(test, test):1; mem(test, test):128 to agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 at slave(3)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:00.625932 31531 master.cpp:3695] Applying RESERVE operation for resources cpus(test, test):1; mem(test, test):128 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 to agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:00.626421 31531 master.cpp:7098] Sending checkpointed resources cpus(test, test):1; mem(test, test):128 to agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:00.626845 31537 slave.cpp:2600] Updated checkpointed resources from to cpus(test, test):1; mem(test, test):128 I0707 18:07:00.627255 31538 hierarchical.cpp:683] Updated allocation of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 from cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] to cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128 I0707 18:07:00.627341 31531 slave.cpp:2600] Updated checkpointed resources from to cpus(test, test):1; mem(test, test):128 I0707 18:07:00.628175 31538 hierarchical.cpp:924] Recovered cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: ) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:00.630285 31538 hierarchical.cpp:683] Updated allocation of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 from cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] to cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128 I0707 18:07:00.631186 31538 hierarchical.cpp:924] Recovered cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: ) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:00.633533 31538 hierarchical.cpp:683] Updated allocation of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 from cpus(*):16; mem(*):47270; 
disk(*):3.70122e+06; ports(*):[31000-32000] to cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128 I0707 18:07:00.634449 31538 hierarchical.cpp:924] Recovered cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: ) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.268144 31537 hierarchical.cpp:1632] No inverse offers to send out! I0707 18:07:01.268594 31537 hierarchical.cpp:1172] Performed allocation for 3 agents in 3.471848ms I0707 18:07:01.269795 31532 master.cpp:5835] Sending 3 offers to framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:01.272722 31532 dynamic_reservation_framework.cpp:84] Received offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O3 with cpus(test, test):1; mem(test, test):128; cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:07:01.273185 31532 dynamic_reservation_framework.cpp:150] Launching task 0 using offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O3 I0707 18:07:01.274802 31532 dynamic_reservation_framework.cpp:84] Received offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O4 with cpus(test, test):1; mem(test, test):128; cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:07:01.275229 31532 dynamic_reservation_framework.cpp:150] Launching task 1 using offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O4 I0707 18:07:01.275583 31532 dynamic_reservation_framework.cpp:84] Received offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O5 with cpus(test, test):1; mem(test, test):128; cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:07:01.276630 31532 dynamic_reservation_framework.cpp:150] Launching task 2 using offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O5 I0707 18:07:01.277556 31532 sched.cpp:917] Scheduler::resourceOffers took 4.841343ms I0707 18:07:01.279577 31529 master.cpp:3468] Processing ACCEPT call for offers: [ 7892fbb2-1ac1-450f-8576-10c1df35f765-O3 ] on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:01.279718 31529 master.cpp:3106] Authorizing framework principal 'test' to launch task 0 I0707 18:07:01.282876 31529 master.cpp:3468] Processing ACCEPT call for offers: [ 7892fbb2-1ac1-450f-8576-10c1df35f765-O4 ] on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 at slave(3)@172.17.0.7:39581 (753c2ae3a486) for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:01.282953 31529 master.cpp:3106] Authorizing framework principal 'test' to launch task 1 I0707 18:07:01.287205 31529 master.cpp:3468] Processing ACCEPT call for offers: [ 7892fbb2-1ac1-450f-8576-10c1df35f765-O5 ] on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:01.287295 31529 
master.cpp:3106] Authorizing framework principal 'test' to launch task 2 I0707 18:07:01.292567 31529 master.cpp:7565] Adding task 0 with resources cpus(test, test):1; mem(test, test):128 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 (753c2ae3a486) I0707 18:07:01.292743 31529 master.cpp:3957] Launching task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 with resources cpus(test, test):1; mem(test, test):128 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:01.294683 31539 slave.cpp:1569] Got assigned task 0 for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.295222 31539 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0707 18:07:01.297250 31532 hierarchical.cpp:924] Recovered ports(*):[31000-32000]; disk(*):3.70122e+06; cpus(*):15; mem(*):47142 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: cpus(test, test):1; mem(test, test):128) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.298192 31539 slave.cpp:1688] Launching task 0 for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.298318 31539 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0707 18:07:01.302230 31529 master.cpp:7565] Adding task 1 with resources cpus(test, test):1; mem(test, test):128 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 (753c2ae3a486) I0707 18:07:01.303160 31539 paths.cpp:528] Trying to chown '/tmp/mesos-IFR4rG/1/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S0/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/0/runs/bdca1a15-bdb1-45bb-b19e-df07fb74e5db' to user 'mesos' I0707 18:07:01.302377 31529 master.cpp:3957] Launching task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 with resources cpus(test, test):1; mem(test, test):128 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 at slave(3)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:01.304913 31527 slave.cpp:1569] Got assigned task 1 for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.305199 31527 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0707 18:07:01.307775 31539 slave.cpp:5748] Launching executor 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/tmp/mesos-IFR4rG/1/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S0/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/0/runs/bdca1a15-bdb1-45bb-b19e-df07fb74e5db' I0707 18:07:01.309219 31527 slave.cpp:1688] Launching task 1 for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.309334 31527 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0707 18:07:01.309659 31536 containerizer.cpp:781] Starting container 'bdca1a15-bdb1-45bb-b19e-df07fb74e5db' for executor '0' of framework '7892fbb2-1ac1-450f-8576-10c1df35f765-0000' I0707 18:07:01.312726 31539 slave.cpp:1914] Queuing task '0' for executor '0' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 
I0707 18:07:01.312937 31539 slave.cpp:922] Successfully attached file '/tmp/mesos-IFR4rG/1/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S0/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/0/runs/bdca1a15-bdb1-45bb-b19e-df07fb74e5db' I0707 18:07:01.317831 31528 hierarchical.cpp:924] Recovered ports(*):[31000-32000]; disk(*):3.70122e+06; cpus(*):15; mem(*):47142 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: cpus(test, test):1; mem(test, test):128) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.319255 31529 master.cpp:7565] Adding task 2 with resources cpus(test, test):1; mem(test, test):128 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 (753c2ae3a486) I0707 18:07:01.319463 31529 master.cpp:3957] Launching task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 with resources cpus(test, test):1; mem(test, test):128 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:01.319934 31526 slave.cpp:1569] Got assigned task 2 for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.320222 31526 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0707 18:07:01.320818 31526 slave.cpp:1688] Launching task 2 for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.320926 31526 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0707 18:07:01.322808 31524 containerizer.cpp:1284] Launching 'mesos-containerizer' with flags '--command=""""{""""arguments"""":[""""mesos-executor"""",""""--launcher_dir=\/mesos\/mesos-1.0.0\/_build\/src""""],""""shell"""":false,""""value"""":""""\/mesos\/mesos-1.0.0\/_build\/src\/mesos-executor""""}"""" --help=""""false"""" --pipe_read=""""12"""" --pipe_write=""""13"""" --pre_exec_commands=""""[]"""" --unshare_namespace_mnt=""""false"""" --user=""""mesos"""" --working_directory=""""/tmp/mesos-IFR4rG/1/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S0/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/0/runs/bdca1a15-bdb1-45bb-b19e-df07fb74e5db""""' I0707 18:07:01.325748 31524 launcher.cpp:126] Forked child with pid '31544' for container 'bdca1a15-bdb1-45bb-b19e-df07fb74e5db' I0707 18:07:01.328867 31533 hierarchical.cpp:924] Recovered ports(*):[31000-32000]; disk(*):3.70122e+06; cpus(*):15; mem(*):47142 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: cpus(test, test):1; mem(test, test):128) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.351835 31526 paths.cpp:528] Trying to chown '/tmp/mesos-IFR4rG/0/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S2/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/2/runs/cffaea8f-effc-4388-902d-1ae39d1e5bfb' to user 'mesos' I0707 18:07:01.353647 31527 paths.cpp:528] Trying to chown '/tmp/mesos-IFR4rG/2/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S1/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/1/runs/cde8072a-fc80-426d-833f-57e3cf50f368' to user 'mesos' I0707 18:07:01.360844 31526 slave.cpp:5748] Launching executor 2 of framework 
7892fbb2-1ac1-450f-8576-10c1df35f765-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/tmp/mesos-IFR4rG/0/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S2/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/2/runs/cffaea8f-effc-4388-902d-1ae39d1e5bfb' I0707 18:07:01.362640 31525 containerizer.cpp:781] Starting container 'cffaea8f-effc-4388-902d-1ae39d1e5bfb' for executor '2' of framework '7892fbb2-1ac1-450f-8576-10c1df35f765-0000' I0707 18:07:01.370136 31525 containerizer.cpp:1284] Launching 'mesos-containerizer' with flags '--command=""""{""""arguments"""":[""""mesos-executor"""",""""--launcher_dir=\/mesos\/mesos-1.0.0\/_build\/src""""],""""shell"""":false,""""value"""":""""\/mesos\/mesos-1.0.0\/_build\/src\/mesos-executor""""}"""" --help=""""false"""" --pipe_read=""""9"""" --pipe_write=""""12"""" --pre_exec_commands=""""[]"""" --unshare_namespace_mnt=""""false"""" --user=""""mesos"""" --working_directory=""""/tmp/mesos-IFR4rG/0/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S2/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/2/runs/cffaea8f-effc-4388-902d-1ae39d1e5bfb""""' I0707 18:07:01.370545 31526 slave.cpp:1914] Queuing task '2' for executor '2' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.370671 31526 slave.cpp:922] Successfully attached file '/tmp/mesos-IFR4rG/0/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S2/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/2/runs/cffaea8f-effc-4388-902d-1ae39d1e5bfb' I0707 18:07:01.372215 31525 launcher.cpp:126] Forked child with pid '31562' for container 'cffaea8f-effc-4388-902d-1ae39d1e5bfb' I0707 18:07:01.385299 31527 slave.cpp:5748] Launching executor 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/tmp/mesos-IFR4rG/2/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S1/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/1/runs/cde8072a-fc80-426d-833f-57e3cf50f368' I0707 18:07:01.386274 31527 slave.cpp:1914] Queuing task '1' for executor '1' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.387212 31527 slave.cpp:922] Successfully attached file '/tmp/mesos-IFR4rG/2/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S1/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/1/runs/cde8072a-fc80-426d-833f-57e3cf50f368' I0707 18:07:01.386729 31536 containerizer.cpp:781] Starting container 'cde8072a-fc80-426d-833f-57e3cf50f368' for executor '1' of framework '7892fbb2-1ac1-450f-8576-10c1df35f765-0000' I0707 18:07:01.393424 31530 containerizer.cpp:1284] Launching 'mesos-containerizer' with flags '--command=""""{""""arguments"""":[""""mesos-executor"""",""""--launcher_dir=\/mesos\/mesos-1.0.0\/_build\/src""""],""""shell"""":false,""""value"""":""""\/mesos\/mesos-1.0.0\/_build\/src\/mesos-executor""""}"""" --help=""""false"""" --pipe_read=""""9"""" --pipe_write=""""12"""" --pre_exec_commands=""""[]"""" --unshare_namespace_mnt=""""false"""" --user=""""mesos"""" --working_directory=""""/tmp/mesos-IFR4rG/2/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S1/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/1/runs/cde8072a-fc80-426d-833f-57e3cf50f368""""' I0707 18:07:01.395277 31530 launcher.cpp:126] Forked child with pid '31567' for container 'cde8072a-fc80-426d-833f-57e3cf50f368' I0707 18:07:01.597575 31544 exec.cpp:161] Version: 1.0.0 I0707 18:07:01.603597 31530 slave.cpp:2902] Got registration for executor '0' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from 
executor(1)@172.17.0.7:59475 I0707 18:07:01.610890 31530 slave.cpp:2079] Sending queued task '0' to executor '0' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 at executor(1)@172.17.0.7:59475 I0707 18:07:01.610896 31623 exec.cpp:236] Executor registered on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 Received SUBSCRIBED event Subscribed executor on 753c2ae3a486 Received LAUNCH event Starting task 0 /mesos/mesos-1.0.0/_build/src/mesos-containerizer launch --command=""""{""""shell"""":true,""""value"""":""""echo hello""""}"""" --help=""""false"""" --unshare_namespace_mnt=""""false"""" Forked command at 31646 I0707 18:07:01.638350 31527 slave.cpp:3285] Handling status update TASK_RUNNING (UUID: 24b1f98c-f325-4ecc-b839-07580dcccd52) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from executor(1)@172.17.0.7:59475 I0707 18:07:01.643460 31532 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 24b1f98c-f325-4ecc-b839-07580dcccd52) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.643530 31532 status_update_manager.cpp:497] Creating StatusUpdate stream for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.644738 31532 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: 24b1f98c-f325-4ecc-b839-07580dcccd52) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to the agent I0707 18:07:01.646585 31527 slave.cpp:3678] Forwarding the update TASK_RUNNING (UUID: 24b1f98c-f325-4ecc-b839-07580dcccd52) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to master@172.17.0.7:39581 I0707 18:07:01.646842 31527 slave.cpp:3572] Status update manager successfully handled status update TASK_RUNNING (UUID: 24b1f98c-f325-4ecc-b839-07580dcccd52) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.646900 31527 slave.cpp:3588] Sending acknowledgement for status update TASK_RUNNING (UUID: 24b1f98c-f325-4ecc-b839-07580dcccd52) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to executor(1)@172.17.0.7:59475 I0707 18:07:01.647038 31538 master.cpp:5273] Status update TASK_RUNNING (UUID: 24b1f98c-f325-4ecc-b839-07580dcccd52) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:01.647094 31538 master.cpp:5321] Forwarding status update TASK_RUNNING (UUID: 24b1f98c-f325-4ecc-b839-07580dcccd52) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.647313 31538 master.cpp:6959] Updating the state of task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0707 18:07:01.647377 31527 dynamic_reservation_framework.cpp:211] Task 0 is in state TASK_RUNNING I0707 18:07:01.647423 31527 sched.cpp:1025] Scheduler::statusUpdate took 33812ns I0707 18:07:01.647711 31527 master.cpp:4388] Processing ACKNOWLEDGE call 24b1f98c-f325-4ecc-b839-07580dcccd52 for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 I0707 18:07:01.648241 31529 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 24b1f98c-f325-4ecc-b839-07580dcccd52) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.648927 31529 slave.cpp:2671] Status update 
manager successfully handled status update acknowledgement (UUID: 24b1f98c-f325-4ecc-b839-07580dcccd52) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.688079 31670 exec.cpp:161] Version: 1.0.0 I0707 18:07:01.701555 31529 slave.cpp:2902] Got registration for executor '2' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from executor(1)@172.17.0.7:51409 I0707 18:07:01.716229 31668 exec.cpp:236] Executor registered on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 I0707 18:07:01.719995 31529 slave.cpp:2079] Sending queued task '2' to executor '2' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 at executor(1)@172.17.0.7:51409 I0707 18:07:01.740218 31567 exec.cpp:161] Version: 1.0.0 I0707 18:07:01.745529 31534 slave.cpp:2902] Got registration for executor '1' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from executor(1)@172.17.0.7:34513 hello I0707 18:07:01.747939 31539 slave.cpp:2079] Sending queued task '1' to executor '1' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 at executor(1)@172.17.0.7:34513 I0707 18:07:01.749034 31677 exec.cpp:236] Executor registered on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 Received SUBSCRIBED event Subscribed executor on 753c2ae3a486 Received LAUNCH event Starting task 2 /mesos/mesos-1.0.0/_build/src/mesos-containerizer launch --command=""""{""""shell"""":true,""""value"""":""""echo hello""""}"""" --help=""""false"""" --unshare_namespace_mnt=""""false"""" Forked command at 31694 I0707 18:07:01.763070 31533 slave.cpp:3285] Handling status update TASK_RUNNING (UUID: c30ef1c4-d5f2-4556-875b-df44c8e586d5) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from executor(1)@172.17.0.7:51409 I0707 18:07:01.765763 31531 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: c30ef1c4-d5f2-4556-875b-df44c8e586d5) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.765916 31531 status_update_manager.cpp:497] Creating StatusUpdate stream for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.766629 31531 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: c30ef1c4-d5f2-4556-875b-df44c8e586d5) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to the agent I0707 18:07:01.767174 31531 slave.cpp:3678] Forwarding the update TASK_RUNNING (UUID: c30ef1c4-d5f2-4556-875b-df44c8e586d5) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to master@172.17.0.7:39581 I0707 18:07:01.767484 31531 slave.cpp:3572] Status update manager successfully handled status update TASK_RUNNING (UUID: c30ef1c4-d5f2-4556-875b-df44c8e586d5) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.767638 31531 slave.cpp:3588] Sending acknowledgement for status update TASK_RUNNING (UUID: c30ef1c4-d5f2-4556-875b-df44c8e586d5) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to executor(1)@172.17.0.7:51409 I0707 18:07:01.767709 31529 master.cpp:5273] Status update TASK_RUNNING (UUID: c30ef1c4-d5f2-4556-875b-df44c8e586d5) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:01.768045 31529 master.cpp:5321] Forwarding status update TASK_RUNNING (UUID: c30ef1c4-d5f2-4556-875b-df44c8e586d5) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.768604 31535 dynamic_reservation_framework.cpp:211] Task 2 is in state TASK_RUNNING 
I0707 18:07:01.768832 31535 sched.cpp:1025] Scheduler::statusUpdate took 272020ns I0707 18:07:01.769253 31529 master.cpp:6959] Updating the state of task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0707 18:07:01.769649 31529 master.cpp:4388] Processing ACKNOWLEDGE call c30ef1c4-d5f2-4556-875b-df44c8e586d5 for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 I0707 18:07:01.770125 31529 status_update_manager.cpp:392] Received status update acknowledgement (UUID: c30ef1c4-d5f2-4556-875b-df44c8e586d5) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.770845 31524 slave.cpp:2671] Status update manager successfully handled status update acknowledgement (UUID: c30ef1c4-d5f2-4556-875b-df44c8e586d5) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 Received SUBSCRIBED event Subscribed executor on 753c2ae3a486 Received LAUNCH event Starting task 1 /mesos/mesos-1.0.0/_build/src/mesos-containerizer launch --command=""""{""""shell"""":true,""""value"""":""""echo hello""""}"""" --help=""""false"""" --unshare_namespace_mnt=""""false"""" Forked command at 31702 I0707 18:07:01.786492 31528 slave.cpp:3285] Handling status update TASK_RUNNING (UUID: 41438479-6ebc-417c-a357-9446db661f27) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from executor(1)@172.17.0.7:34513 I0707 18:07:01.789589 31535 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 41438479-6ebc-417c-a357-9446db661f27) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.789639 31535 status_update_manager.cpp:497] Creating StatusUpdate stream for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.790278 31535 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: 41438479-6ebc-417c-a357-9446db661f27) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to the agent I0707 18:07:01.790938 31539 slave.cpp:3678] Forwarding the update TASK_RUNNING (UUID: 41438479-6ebc-417c-a357-9446db661f27) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to master@172.17.0.7:39581 I0707 18:07:01.791113 31539 slave.cpp:3572] Status update manager successfully handled status update TASK_RUNNING (UUID: 41438479-6ebc-417c-a357-9446db661f27) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.791162 31539 slave.cpp:3588] Sending acknowledgement for status update TASK_RUNNING (UUID: 41438479-6ebc-417c-a357-9446db661f27) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to executor(1)@172.17.0.7:34513 I0707 18:07:01.792708 31539 master.cpp:5273] Status update TASK_RUNNING (UUID: 41438479-6ebc-417c-a357-9446db661f27) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 at slave(3)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:01.792759 31539 master.cpp:5321] Forwarding status update TASK_RUNNING (UUID: 41438479-6ebc-417c-a357-9446db661f27) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.792935 31539 master.cpp:6959] Updating the state of task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0707 18:07:01.793094 31539 
dynamic_reservation_framework.cpp:211] Task 1 is in state TASK_RUNNING I0707 18:07:01.793119 31539 sched.cpp:1025] Scheduler::statusUpdate took 26767ns I0707 18:07:01.793341 31539 master.cpp:4388] Processing ACKNOWLEDGE call 41438479-6ebc-417c-a357-9446db661f27 for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 I0707 18:07:01.793712 31539 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 41438479-6ebc-417c-a357-9446db661f27) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.794035 31539 slave.cpp:2671] Status update manager successfully handled status update acknowledgement (UUID: 41438479-6ebc-417c-a357-9446db661f27) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 Command exited with status 0 (pid: 31646) I0707 18:07:01.849810 31535 slave.cpp:3285] Handling status update TASK_FINISHED (UUID: 2c0ce9f5-7167-4fb9-8636-ae0e465d08fc) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from executor(1)@172.17.0.7:59475 I0707 18:07:01.854346 31535 slave.cpp:6088] Terminating task 0 I0707 18:07:01.860373 31530 status_update_manager.cpp:320] Received status update TASK_FINISHED (UUID: 2c0ce9f5-7167-4fb9-8636-ae0e465d08fc) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.861690 31530 status_update_manager.cpp:374] Forwarding update TASK_FINISHED (UUID: 2c0ce9f5-7167-4fb9-8636-ae0e465d08fc) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to the agent I0707 18:07:01.862217 31530 slave.cpp:3678] Forwarding the update TASK_FINISHED (UUID: 2c0ce9f5-7167-4fb9-8636-ae0e465d08fc) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to master@172.17.0.7:39581 I0707 18:07:01.863664 31533 master.cpp:5273] Status update TASK_FINISHED (UUID: 2c0ce9f5-7167-4fb9-8636-ae0e465d08fc) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:01.863718 31533 master.cpp:5321] Forwarding status update TASK_FINISHED (UUID: 2c0ce9f5-7167-4fb9-8636-ae0e465d08fc) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.863888 31533 master.cpp:6959] Updating the state of task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (latest state: TASK_FINISHED, status update state: TASK_FINISHED) I0707 18:07:01.864351 31533 dynamic_reservation_framework.cpp:208] Task 0 is finished at agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 I0707 18:07:01.864377 31533 sched.cpp:1025] Scheduler::statusUpdate took 42319ns I0707 18:07:01.865952 31533 hierarchical.cpp:924] Recovered cpus(test, test):1; mem(test, test):128 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: ) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.866158 31533 master.cpp:4388] Processing ACKNOWLEDGE call 2c0ce9f5-7167-4fb9-8636-ae0e465d08fc for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 I0707 18:07:01.866231 31533 master.cpp:7025] Removing task 0 with resources cpus(test, test):1; mem(test, 
test):128 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:01.866461 31530 slave.cpp:3572] Status update manager successfully handled status update TASK_FINISHED (UUID: 2c0ce9f5-7167-4fb9-8636-ae0e465d08fc) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.866519 31530 slave.cpp:3588] Sending acknowledgement for status update TASK_FINISHED (UUID: 2c0ce9f5-7167-4fb9-8636-ae0e465d08fc) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to executor(1)@172.17.0.7:59475 I0707 18:07:01.869339 31533 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 2c0ce9f5-7167-4fb9-8636-ae0e465d08fc) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.869580 31533 status_update_manager.cpp:528] Cleaning up status update stream for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.870098 31533 slave.cpp:2671] Status update manager successfully handled status update acknowledgement (UUID: 2c0ce9f5-7167-4fb9-8636-ae0e465d08fc) for task 0 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.870162 31533 slave.cpp:6129] Completing task 0 hello Command exited with status 0 (pid: 31694) hello I0707 18:07:01.965605 31533 slave.cpp:3285] Handling status update TASK_FINISHED (UUID: 60c0ce74-18df-458f-bef4-1f82fb03412e) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from executor(1)@172.17.0.7:51409 I0707 18:07:01.968528 31535 slave.cpp:6088] Terminating task 2 I0707 18:07:01.970181 31539 status_update_manager.cpp:320] Received status update TASK_FINISHED (UUID: 60c0ce74-18df-458f-bef4-1f82fb03412e) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.970440 31539 status_update_manager.cpp:374] Forwarding update TASK_FINISHED (UUID: 60c0ce74-18df-458f-bef4-1f82fb03412e) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to the agent I0707 18:07:01.970832 31533 slave.cpp:3678] Forwarding the update TASK_FINISHED (UUID: 60c0ce74-18df-458f-bef4-1f82fb03412e) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to master@172.17.0.7:39581 I0707 18:07:01.971072 31533 slave.cpp:3572] Status update manager successfully handled status update TASK_FINISHED (UUID: 60c0ce74-18df-458f-bef4-1f82fb03412e) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.971158 31533 slave.cpp:3588] Sending acknowledgement for status update TASK_FINISHED (UUID: 60c0ce74-18df-458f-bef4-1f82fb03412e) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to executor(1)@172.17.0.7:51409 I0707 18:07:01.971346 31535 master.cpp:5273] Status update TASK_FINISHED (UUID: 60c0ce74-18df-458f-bef4-1f82fb03412e) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:01.971515 31535 master.cpp:5321] Forwarding status update TASK_FINISHED (UUID: 60c0ce74-18df-458f-bef4-1f82fb03412e) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.971796 31535 master.cpp:6959] Updating the state of task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (latest state: TASK_FINISHED, status update state: TASK_FINISHED) I0707 18:07:01.971863 31539 dynamic_reservation_framework.cpp:208] Task 2 is finished at agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 I0707 
18:07:01.971887 31539 sched.cpp:1025] Scheduler::statusUpdate took 36513ns I0707 18:07:01.972437 31535 master.cpp:4388] Processing ACKNOWLEDGE call 60c0ce74-18df-458f-bef4-1f82fb03412e for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 I0707 18:07:01.972524 31528 hierarchical.cpp:924] Recovered cpus(test, test):1; mem(test, test):128 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: ) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.972553 31535 master.cpp:7025] Removing task 2 with resources cpus(test, test):1; mem(test, test):128 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:01.973103 31535 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 60c0ce74-18df-458f-bef4-1f82fb03412e) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.973393 31535 status_update_manager.cpp:528] Cleaning up status update stream for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.973930 31535 slave.cpp:2671] Status update manager successfully handled status update acknowledgement (UUID: 60c0ce74-18df-458f-bef4-1f82fb03412e) for task 2 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.974081 31535 slave.cpp:6129] Completing task 2 Command exited with status 0 (pid: 31702) I0707 18:07:01.988955 31530 slave.cpp:3285] Handling status update TASK_FINISHED (UUID: 9e47c164-0f30-4b73-aab4-249a77bdc04d) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from executor(1)@172.17.0.7:34513 I0707 18:07:01.990218 31534 slave.cpp:6088] Terminating task 1 I0707 18:07:01.991665 31534 status_update_manager.cpp:320] Received status update TASK_FINISHED (UUID: 9e47c164-0f30-4b73-aab4-249a77bdc04d) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.991874 31534 status_update_manager.cpp:374] Forwarding update TASK_FINISHED (UUID: 9e47c164-0f30-4b73-aab4-249a77bdc04d) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to the agent I0707 18:07:01.992138 31531 slave.cpp:3678] Forwarding the update TASK_FINISHED (UUID: 9e47c164-0f30-4b73-aab4-249a77bdc04d) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to master@172.17.0.7:39581 I0707 18:07:01.992455 31531 slave.cpp:3572] Status update manager successfully handled status update TASK_FINISHED (UUID: 9e47c164-0f30-4b73-aab4-249a77bdc04d) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.992539 31531 slave.cpp:3588] Sending acknowledgement for status update TASK_FINISHED (UUID: 9e47c164-0f30-4b73-aab4-249a77bdc04d) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to executor(1)@172.17.0.7:34513 I0707 18:07:01.992573 31535 master.cpp:5273] Status update TASK_FINISHED (UUID: 9e47c164-0f30-4b73-aab4-249a77bdc04d) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 at slave(3)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:01.992619 31535 master.cpp:5321] Forwarding status update TASK_FINISHED (UUID: 9e47c164-0f30-4b73-aab4-249a77bdc04d) for task 1 of framework 
7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.992776 31535 master.cpp:6959] Updating the state of task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (latest state: TASK_FINISHED, status update state: TASK_FINISHED) I0707 18:07:01.993170 31535 dynamic_reservation_framework.cpp:208] Task 1 is finished at agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 I0707 18:07:01.993191 31535 sched.cpp:1025] Scheduler::statusUpdate took 32857ns I0707 18:07:01.993369 31535 master.cpp:4388] Processing ACKNOWLEDGE call 9e47c164-0f30-4b73-aab4-249a77bdc04d for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 I0707 18:07:01.993633 31531 hierarchical.cpp:924] Recovered cpus(test, test):1; mem(test, test):128 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: ) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.994060 31535 master.cpp:7025] Removing task 1 with resources cpus(test, test):1; mem(test, test):128 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 at slave(3)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:01.994530 31535 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 9e47c164-0f30-4b73-aab4-249a77bdc04d) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.994729 31535 status_update_manager.cpp:528] Cleaning up status update stream for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.995079 31537 slave.cpp:2671] Status update manager successfully handled status update acknowledgement (UUID: 9e47c164-0f30-4b73-aab4-249a77bdc04d) for task 1 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:01.995172 31537 slave.cpp:6129] Completing task 1 I0707 18:07:02.274876 31537 hierarchical.cpp:1632] No inverse offers to send out! 
I0707 18:07:02.275038 31537 hierarchical.cpp:1172] Performed allocation for 3 agents in 4.318297ms I0707 18:07:02.278640 31537 master.cpp:5835] Sending 3 offers to framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:02.280704 31524 dynamic_reservation_framework.cpp:84] Received offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O6 with cpus(test, test):1; mem(test, test):128; cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:07:02.280966 31524 dynamic_reservation_framework.cpp:150] Launching task 3 using offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O6 I0707 18:07:02.281510 31524 dynamic_reservation_framework.cpp:84] Received offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O7 with cpus(test, test):1; mem(test, test):128; cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:07:02.281695 31524 dynamic_reservation_framework.cpp:150] Launching task 4 using offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O7 I0707 18:07:02.282440 31524 dynamic_reservation_framework.cpp:84] Received offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O8 with cpus(test, test):1; mem(test, test):128; cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:07:02.282804 31524 sched.cpp:917] Scheduler::resourceOffers took 2.103878ms I0707 18:07:02.285826 31537 master.cpp:3468] Processing ACCEPT call for offers: [ 7892fbb2-1ac1-450f-8576-10c1df35f765-O6 ] on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:02.285923 31537 master.cpp:3106] Authorizing framework principal 'test' to launch task 3 I0707 18:07:02.291869 31537 master.cpp:3468] Processing ACCEPT call for offers: [ 7892fbb2-1ac1-450f-8576-10c1df35f765-O7 ] on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:02.291971 31537 master.cpp:3106] Authorizing framework principal 'test' to launch task 4 I0707 18:07:02.297622 31537 master.cpp:3468] Processing ACCEPT call for offers: [ 7892fbb2-1ac1-450f-8576-10c1df35f765-O8 ] on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 at slave(3)@172.17.0.7:39581 (753c2ae3a486) for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:02.299481 31537 master.cpp:3201] Authorizing principal 'test' to unreserve resources 'cpus(test, test):1; mem(test, test):128' I0707 18:07:02.304149 31537 master.cpp:7565] Adding task 3 with resources cpus(test, test):1; mem(test, test):128 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 (753c2ae3a486) I0707 18:07:02.304298 31537 master.cpp:3957] Launching task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 with resources cpus(test, test):1; mem(test, test):128 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:02.305838 31532 slave.cpp:1569] Got assigned task 3 for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.306040 
31532 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0707 18:07:02.307746 31532 slave.cpp:1688] Launching task 3 for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.307853 31532 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0707 18:07:02.312942 31532 paths.cpp:528] Trying to chown '/tmp/mesos-IFR4rG/1/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S0/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/3/runs/e31dfccd-5f2a-40e1-95a0-b5b253fac912' to user 'mesos' I0707 18:07:02.317132 31531 hierarchical.cpp:924] Recovered ports(*):[31000-32000]; disk(*):3.70122e+06; cpus(*):15; mem(*):47142 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: cpus(test, test):1; mem(test, test):128) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.318964 31532 slave.cpp:5748] Launching executor 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/tmp/mesos-IFR4rG/1/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S0/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/3/runs/e31dfccd-5f2a-40e1-95a0-b5b253fac912' I0707 18:07:02.319262 31537 master.cpp:7565] Adding task 4 with resources cpus(test, test):1; mem(test, test):128 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 (753c2ae3a486) I0707 18:07:02.319537 31537 master.cpp:3957] Launching task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 with resources cpus(test, test):1; mem(test, test):128 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:02.319990 31527 slave.cpp:1569] Got assigned task 4 for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.320173 31537 master.cpp:3747] Applying UNRESERVE operation for resources cpus(test, test):1; mem(test, test):128 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 to agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 at slave(3)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:02.321169 31539 hierarchical.cpp:924] Recovered ports(*):[31000-32000]; disk(*):3.70122e+06; cpus(*):15; mem(*):47142 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: cpus(test, test):1; mem(test, test):128) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.321388 31527 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0707 18:07:02.322088 31537 master.cpp:7098] Sending checkpointed resources to agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 at slave(3)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:02.322193 31527 slave.cpp:1688] Launching task 4 for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.322312 31527 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32 Trying semicolon-delimited string format instead I0707 18:07:02.322660 31537 slave.cpp:2600] Updated checkpointed resources from cpus(test, test):1; mem(test, test):128 
to I0707 18:07:02.323094 31527 paths.cpp:528] Trying to chown '/tmp/mesos-IFR4rG/0/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S2/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/4/runs/f4bdc7d6-e7b1-4bd7-84e3-a7353f8b3e84' to user 'mesos' I0707 18:07:02.324539 31532 slave.cpp:1914] Queuing task '3' for executor '3' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.324769 31528 containerizer.cpp:781] Starting container 'e31dfccd-5f2a-40e1-95a0-b5b253fac912' for executor '3' of framework '7892fbb2-1ac1-450f-8576-10c1df35f765-0000' I0707 18:07:02.324848 31532 slave.cpp:922] Successfully attached file '/tmp/mesos-IFR4rG/1/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S0/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/3/runs/e31dfccd-5f2a-40e1-95a0-b5b253fac912' I0707 18:07:02.326263 31539 hierarchical.cpp:683] Updated allocation of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 from cpus(test, test):1; mem(test, test):128; cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] to ports(*):[31000-32000]; disk(*):3.70122e+06; cpus(*):16; mem(*):47270 I0707 18:07:02.327944 31539 hierarchical.cpp:924] Recovered ports(*):[31000-32000]; disk(*):3.70122e+06; cpus(*):16; mem(*):47270 (total: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.330935 31527 slave.cpp:5748] Launching executor 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/tmp/mesos-IFR4rG/0/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S2/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/4/runs/f4bdc7d6-e7b1-4bd7-84e3-a7353f8b3e84' I0707 18:07:02.331660 31527 slave.cpp:1914] Queuing task '4' for executor '4' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.331791 31527 slave.cpp:922] Successfully attached file '/tmp/mesos-IFR4rG/0/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S2/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/4/runs/f4bdc7d6-e7b1-4bd7-84e3-a7353f8b3e84' I0707 18:07:02.331665 31536 containerizer.cpp:781] Starting container 'f4bdc7d6-e7b1-4bd7-84e3-a7353f8b3e84' for executor '4' of framework '7892fbb2-1ac1-450f-8576-10c1df35f765-0000' I0707 18:07:02.334722 31530 containerizer.cpp:1284] Launching 'mesos-containerizer' with flags '--command=""""{""""arguments"""":[""""mesos-executor"""",""""--launcher_dir=\/mesos\/mesos-1.0.0\/_build\/src""""],""""shell"""":false,""""value"""":""""\/mesos\/mesos-1.0.0\/_build\/src\/mesos-executor""""}"""" --help=""""false"""" --pipe_read=""""17"""" --pipe_write=""""18"""" --pre_exec_commands=""""[]"""" --unshare_namespace_mnt=""""false"""" --user=""""mesos"""" --working_directory=""""/tmp/mesos-IFR4rG/1/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S0/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/3/runs/e31dfccd-5f2a-40e1-95a0-b5b253fac912""""' I0707 18:07:02.335965 31530 launcher.cpp:126] Forked child with pid '31726' for container 'e31dfccd-5f2a-40e1-95a0-b5b253fac912' I0707 18:07:02.340095 31536 containerizer.cpp:1284] Launching 'mesos-containerizer' with flags '--command=""""{""""arguments"""":[""""mesos-executor"""",""""--launcher_dir=\/mesos\/mesos-1.0.0\/_build\/src""""],""""shell"""":false,""""value"""":""""\/mesos\/mesos-1.0.0\/_build\/src\/mesos-executor""""}"""" --help=""""false"""" 
--pipe_read=""""19"""" --pipe_write=""""20"""" --pre_exec_commands=""""[]"""" --unshare_namespace_mnt=""""false"""" --user=""""mesos"""" --working_directory=""""/tmp/mesos-IFR4rG/0/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S2/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/4/runs/f4bdc7d6-e7b1-4bd7-84e3-a7353f8b3e84""""' I0707 18:07:02.341588 31536 launcher.cpp:126] Forked child with pid '31727' for container 'f4bdc7d6-e7b1-4bd7-84e3-a7353f8b3e84' I0707 18:07:02.587986 31726 exec.cpp:161] Version: 1.0.0 I0707 18:07:02.592061 31727 exec.cpp:161] Version: 1.0.0 I0707 18:07:02.592990 31535 slave.cpp:2902] Got registration for executor '3' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from executor(1)@172.17.0.7:55420 I0707 18:07:02.595854 31533 slave.cpp:2902] Got registration for executor '4' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from executor(1)@172.17.0.7:56283 I0707 18:07:02.599692 31535 slave.cpp:2079] Sending queued task '3' to executor '3' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 at executor(1)@172.17.0.7:55420 I0707 18:07:02.600880 31811 exec.cpp:236] Executor registered on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 I0707 18:07:02.601768 31533 slave.cpp:2079] Sending queued task '4' to executor '4' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 at executor(1)@172.17.0.7:56283 I0707 18:07:02.603651 31792 exec.cpp:236] Executor registered on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 Received SUBSCRIBED event Subscribed executor on 753c2ae3a486 Received LAUNCH event Starting task 3 Received SUBSCRIBED event Subscribed executor on 753c2ae3a486 Received LAUNCH event Starting task 4 /mesos/mesos-1.0.0/_build/src/mesos-containerizer launch --command=""""{""""shell"""":true,""""value"""":""""echo hello""""}"""" --help=""""false"""" --unshare_namespace_mnt=""""false"""" Forked command at 31815 /mesos/mesos-1.0.0/_build/src/mesos-containerizer launch --command=""""{""""shell"""":true,""""value"""":""""echo hello""""}"""" --help=""""false"""" --unshare_namespace_mnt=""""false"""" Forked command at 31814 I0707 18:07:02.632139 31530 slave.cpp:3285] Handling status update TASK_RUNNING (UUID: 58bae39d-f2fa-4160-84d2-ddf3d1c728bf) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from executor(1)@172.17.0.7:55420 I0707 18:07:02.634009 31537 slave.cpp:3285] Handling status update TASK_RUNNING (UUID: f7349218-9f36-4867-8b7e-b980f525c673) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from executor(1)@172.17.0.7:56283 I0707 18:07:02.636554 31527 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: f7349218-9f36-4867-8b7e-b980f525c673) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.636744 31527 status_update_manager.cpp:497] Creating StatusUpdate stream for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.637475 31527 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: f7349218-9f36-4867-8b7e-b980f525c673) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to the agent I0707 18:07:02.640130 31527 slave.cpp:3678] Forwarding the update TASK_RUNNING (UUID: f7349218-9f36-4867-8b7e-b980f525c673) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to master@172.17.0.7:39581 I0707 18:07:02.641506 31527 slave.cpp:3572] Status update manager successfully handled status update TASK_RUNNING (UUID: f7349218-9f36-4867-8b7e-b980f525c673) for task 4 of framework 
7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.643518 31527 slave.cpp:3588] Sending acknowledgement for status update TASK_RUNNING (UUID: f7349218-9f36-4867-8b7e-b980f525c673) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to executor(1)@172.17.0.7:56283 I0707 18:07:02.642694 31529 master.cpp:5273] Status update TASK_RUNNING (UUID: f7349218-9f36-4867-8b7e-b980f525c673) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:02.644038 31529 master.cpp:5321] Forwarding status update TASK_RUNNING (UUID: f7349218-9f36-4867-8b7e-b980f525c673) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.644603 31524 dynamic_reservation_framework.cpp:211] Task 4 is in state TASK_RUNNING I0707 18:07:02.644876 31524 sched.cpp:1025] Scheduler::statusUpdate took 310563ns I0707 18:07:02.645233 31529 master.cpp:6959] Updating the state of task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0707 18:07:02.645633 31529 master.cpp:4388] Processing ACKNOWLEDGE call f7349218-9f36-4867-8b7e-b980f525c673 for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 I0707 18:07:02.648191 31530 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 58bae39d-f2fa-4160-84d2-ddf3d1c728bf) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.648598 31530 status_update_manager.cpp:497] Creating StatusUpdate stream for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.648950 31527 status_update_manager.cpp:392] Received status update acknowledgement (UUID: f7349218-9f36-4867-8b7e-b980f525c673) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.649775 31527 slave.cpp:2671] Status update manager successfully handled status update acknowledgement (UUID: f7349218-9f36-4867-8b7e-b980f525c673) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.650616 31530 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: 58bae39d-f2fa-4160-84d2-ddf3d1c728bf) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to the agent I0707 18:07:02.651362 31530 slave.cpp:3678] Forwarding the update TASK_RUNNING (UUID: 58bae39d-f2fa-4160-84d2-ddf3d1c728bf) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to master@172.17.0.7:39581 I0707 18:07:02.651851 31530 slave.cpp:3572] Status update manager successfully handled status update TASK_RUNNING (UUID: 58bae39d-f2fa-4160-84d2-ddf3d1c728bf) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.652447 31530 slave.cpp:3588] Sending acknowledgement for status update TASK_RUNNING (UUID: 58bae39d-f2fa-4160-84d2-ddf3d1c728bf) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to executor(1)@172.17.0.7:55420 I0707 18:07:02.652318 31538 master.cpp:5273] Status update TASK_RUNNING (UUID: 58bae39d-f2fa-4160-84d2-ddf3d1c728bf) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:02.652909 31538 master.cpp:5321] Forwarding status update TASK_RUNNING (UUID: 
58bae39d-f2fa-4160-84d2-ddf3d1c728bf) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.653316 31538 master.cpp:6959] Updating the state of task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0707 18:07:02.653714 31526 dynamic_reservation_framework.cpp:211] Task 3 is in state TASK_RUNNING I0707 18:07:02.653985 31526 sched.cpp:1025] Scheduler::statusUpdate took 274166ns I0707 18:07:02.654438 31526 master.cpp:4388] Processing ACKNOWLEDGE call 58bae39d-f2fa-4160-84d2-ddf3d1c728bf for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 I0707 18:07:02.656904 31525 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 58bae39d-f2fa-4160-84d2-ddf3d1c728bf) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.657546 31530 slave.cpp:2671] Status update manager successfully handled status update acknowledgement (UUID: 58bae39d-f2fa-4160-84d2-ddf3d1c728bf) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 hello hello Command exited with status 0 (pid: 31814) Command exited with status 0 (pid: 31815) I0707 18:07:02.834297 31526 slave.cpp:3285] Handling status update TASK_FINISHED (UUID: 5c12302d-6330-4df9-bf8f-d3443e6bccc6) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from executor(1)@172.17.0.7:55420 I0707 18:07:02.836560 31529 slave.cpp:6088] Terminating task 3 I0707 18:07:02.837365 31526 slave.cpp:3285] Handling status update TASK_FINISHED (UUID: 7e8d7fb0-613e-4a73-b07f-d3eee49496d5) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from executor(1)@172.17.0.7:56283 I0707 18:07:02.838392 31534 status_update_manager.cpp:320] Received status update TASK_FINISHED (UUID: 5c12302d-6330-4df9-bf8f-d3443e6bccc6) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.838690 31534 status_update_manager.cpp:374] Forwarding update TASK_FINISHED (UUID: 5c12302d-6330-4df9-bf8f-d3443e6bccc6) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to the agent I0707 18:07:02.839052 31537 slave.cpp:3678] Forwarding the update TASK_FINISHED (UUID: 5c12302d-6330-4df9-bf8f-d3443e6bccc6) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to master@172.17.0.7:39581 I0707 18:07:02.839562 31537 slave.cpp:3572] Status update manager successfully handled status update TASK_FINISHED (UUID: 5c12302d-6330-4df9-bf8f-d3443e6bccc6) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.839733 31526 master.cpp:5273] Status update TASK_FINISHED (UUID: 5c12302d-6330-4df9-bf8f-d3443e6bccc6) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:02.839855 31526 master.cpp:5321] Forwarding status update TASK_FINISHED (UUID: 5c12302d-6330-4df9-bf8f-d3443e6bccc6) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.839866 31537 slave.cpp:3588] Sending acknowledgement for status update TASK_FINISHED (UUID: 5c12302d-6330-4df9-bf8f-d3443e6bccc6) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to executor(1)@172.17.0.7:55420 I0707 18:07:02.840096 31526 master.cpp:6959] Updating the state of task 3 of framework 
7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (latest state: TASK_FINISHED, status update state: TASK_FINISHED) I0707 18:07:02.840512 31535 dynamic_reservation_framework.cpp:208] Task 3 is finished at agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 I0707 18:07:02.840540 31535 sched.cpp:1025] Scheduler::statusUpdate took 46734ns I0707 18:07:02.840790 31535 master.cpp:4388] Processing ACKNOWLEDGE call 5c12302d-6330-4df9-bf8f-d3443e6bccc6 for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 I0707 18:07:02.840873 31535 master.cpp:7025] Removing task 3 with resources cpus(test, test):1; mem(test, test):128 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:02.841006 31526 hierarchical.cpp:924] Recovered cpus(test, test):1; mem(test, test):128 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: ) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.841898 31532 slave.cpp:6088] Terminating task 4 I0707 18:07:02.843639 31535 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 5c12302d-6330-4df9-bf8f-d3443e6bccc6) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.843842 31535 status_update_manager.cpp:528] Cleaning up status update stream for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.844398 31535 slave.cpp:2671] Status update manager successfully handled status update acknowledgement (UUID: 5c12302d-6330-4df9-bf8f-d3443e6bccc6) for task 3 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.844481 31535 slave.cpp:6129] Completing task 3 I0707 18:07:02.845114 31532 status_update_manager.cpp:320] Received status update TASK_FINISHED (UUID: 7e8d7fb0-613e-4a73-b07f-d3eee49496d5) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.845590 31532 status_update_manager.cpp:374] Forwarding update TASK_FINISHED (UUID: 7e8d7fb0-613e-4a73-b07f-d3eee49496d5) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to the agent I0707 18:07:02.846128 31532 slave.cpp:3678] Forwarding the update TASK_FINISHED (UUID: 7e8d7fb0-613e-4a73-b07f-d3eee49496d5) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to master@172.17.0.7:39581 I0707 18:07:02.846406 31532 slave.cpp:3572] Status update manager successfully handled status update TASK_FINISHED (UUID: 7e8d7fb0-613e-4a73-b07f-d3eee49496d5) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.846519 31532 slave.cpp:3588] Sending acknowledgement for status update TASK_FINISHED (UUID: 7e8d7fb0-613e-4a73-b07f-d3eee49496d5) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 to executor(1)@172.17.0.7:56283 I0707 18:07:02.846554 31527 master.cpp:5273] Status update TASK_FINISHED (UUID: 7e8d7fb0-613e-4a73-b07f-d3eee49496d5) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 from agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:02.846730 31527 master.cpp:5321] Forwarding status update TASK_FINISHED (UUID: 7e8d7fb0-613e-4a73-b07f-d3eee49496d5) for task 4 of framework 
7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.846974 31527 master.cpp:6959] Updating the state of task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (latest state: TASK_FINISHED, status update state: TASK_FINISHED) I0707 18:07:02.847877 31524 hierarchical.cpp:924] Recovered cpus(test, test):1; mem(test, test):128 (total: cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000]; cpus(test, test):1; mem(test, test):128, allocated: ) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.848093 31532 dynamic_reservation_framework.cpp:208] Task 4 is finished at agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 I0707 18:07:02.848188 31532 dynamic_reservation_framework.cpp:226] All tasks done, waiting for unreserving resources I0707 18:07:02.848259 31532 sched.cpp:1025] Scheduler::statusUpdate took 173818ns I0707 18:07:02.848578 31532 master.cpp:4388] Processing ACKNOWLEDGE call 7e8d7fb0-613e-4a73-b07f-d3eee49496d5 for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 I0707 18:07:02.848701 31532 master.cpp:7025] Removing task 4 with resources cpus(test, test):1; mem(test, test):128 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:02.849139 31532 slave.cpp:3806] executor(1)@172.17.0.7:59475 exited I0707 18:07:02.849176 31532 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 7e8d7fb0-613e-4a73-b07f-d3eee49496d5) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.849354 31532 status_update_manager.cpp:528] Cleaning up status update stream for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.850083 31532 slave.cpp:2671] Status update manager successfully handled status update acknowledgement (UUID: 7e8d7fb0-613e-4a73-b07f-d3eee49496d5) for task 4 of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:02.850138 31532 slave.cpp:6129] Completing task 4 I0707 18:07:02.947499 31529 containerizer.cpp:1863] Executor for container 'bdca1a15-bdb1-45bb-b19e-df07fb74e5db' has exited I0707 18:07:02.947567 31529 containerizer.cpp:1622] Destroying container 'bdca1a15-bdb1-45bb-b19e-df07fb74e5db' I0707 18:07:02.960427 31539 provisioner.cpp:411] Ignoring destroy request for unknown container bdca1a15-bdb1-45bb-b19e-df07fb74e5db I0707 18:07:02.961381 31526 slave.cpp:4163] Executor '0' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 exited with status 0 I0707 18:07:02.961753 31526 slave.cpp:4267] Cleaning up executor '0' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 at executor(1)@172.17.0.7:59475 I0707 18:07:02.962579 31535 gc.cpp:55] Scheduling '/tmp/mesos-IFR4rG/1/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S0/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/0/runs/bdca1a15-bdb1-45bb-b19e-df07fb74e5db' for gc 6.99998886310222days in the future I0707 18:07:02.963564 31535 gc.cpp:55] Scheduling '/tmp/mesos-IFR4rG/1/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S0/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/0' for gc 6.99998885004741days in the future I0707 18:07:02.973430 31538 slave.cpp:3806] executor(1)@172.17.0.7:51409 exited I0707 18:07:02.995627 31536 slave.cpp:3806] executor(1)@172.17.0.7:34513 exited 
I0707 18:07:03.050102 31538 containerizer.cpp:1863] Executor for container 'cffaea8f-effc-4388-902d-1ae39d1e5bfb' has exited I0707 18:07:03.050295 31538 containerizer.cpp:1622] Destroying container 'cffaea8f-effc-4388-902d-1ae39d1e5bfb' I0707 18:07:03.052471 31531 containerizer.cpp:1863] Executor for container 'cde8072a-fc80-426d-833f-57e3cf50f368' has exited I0707 18:07:03.052614 31531 containerizer.cpp:1622] Destroying container 'cde8072a-fc80-426d-833f-57e3cf50f368' I0707 18:07:03.068886 31533 provisioner.cpp:411] Ignoring destroy request for unknown container cffaea8f-effc-4388-902d-1ae39d1e5bfb I0707 18:07:03.069725 31524 slave.cpp:4163] Executor '2' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 exited with status 0 I0707 18:07:03.069861 31524 slave.cpp:4267] Cleaning up executor '2' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 at executor(1)@172.17.0.7:51409 I0707 18:07:03.071491 31524 gc.cpp:55] Scheduling '/tmp/mesos-IFR4rG/0/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S2/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/2/runs/cffaea8f-effc-4388-902d-1ae39d1e5bfb' for gc 6.9999991881837days in the future I0707 18:07:03.071869 31524 gc.cpp:55] Scheduling '/tmp/mesos-IFR4rG/0/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S2/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/2' for gc 6.99999918568296days in the future I0707 18:07:03.084206 31528 provisioner.cpp:411] Ignoring destroy request for unknown container cde8072a-fc80-426d-833f-57e3cf50f368 I0707 18:07:03.084844 31530 slave.cpp:4163] Executor '1' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 exited with status 0 I0707 18:07:03.085223 31530 slave.cpp:4267] Cleaning up executor '1' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 at executor(1)@172.17.0.7:34513 I0707 18:07:03.085597 31526 gc.cpp:55] Scheduling '/tmp/mesos-IFR4rG/2/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S1/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/1/runs/cde8072a-fc80-426d-833f-57e3cf50f368' for gc 6.99999901009185days in the future I0707 18:07:03.085814 31530 slave.cpp:4355] Cleaning up framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:03.086110 31530 status_update_manager.cpp:282] Closing status update streams for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:03.086287 31526 gc.cpp:55] Scheduling '/tmp/mesos-IFR4rG/2/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S1/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/1' for gc 6.99999900754667days in the future I0707 18:07:03.086448 31526 gc.cpp:55] Scheduling '/tmp/mesos-IFR4rG/2/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S1/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000' for gc 6.99999900508148days in the future I0707 18:07:03.282027 31534 hierarchical.cpp:1632] No inverse offers to send out! 
I0707 18:07:03.282196 31534 hierarchical.cpp:1172] Performed allocation for 3 agents in 6.269452ms I0707 18:07:03.285627 31534 master.cpp:5835] Sending 3 offers to framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:03.288548 31534 dynamic_reservation_framework.cpp:84] Received offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O9 with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:07:03.288714 31534 dynamic_reservation_framework.cpp:84] Received offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O10 with cpus(test, test):1; mem(test, test):128; cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:07:03.289059 31534 dynamic_reservation_framework.cpp:84] Received offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O11 with cpus(test, test):1; mem(test, test):128; cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:07:03.289353 31534 sched.cpp:917] Scheduler::resourceOffers took 808575ns I0707 18:07:03.295822 31534 master.cpp:3468] Processing ACCEPT call for offers: [ 7892fbb2-1ac1-450f-8576-10c1df35f765-O10 ] on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:03.296022 31534 master.cpp:3201] Authorizing principal 'test' to unreserve resources 'cpus(test, test):1; mem(test, test):128' I0707 18:07:03.299430 31534 master.cpp:3468] Processing ACCEPT call for offers: [ 7892fbb2-1ac1-450f-8576-10c1df35f765-O11 ] on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:03.299563 31534 master.cpp:3201] Authorizing principal 'test' to unreserve resources 'cpus(test, test):1; mem(test, test):128' I0707 18:07:03.300375 31534 master.cpp:3747] Applying UNRESERVE operation for resources cpus(test, test):1; mem(test, test):128 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 to agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:03.302352 31534 master.cpp:7098] Sending checkpointed resources to agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 at slave(2)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:03.303211 31534 master.cpp:3747] Applying UNRESERVE operation for resources cpus(test, test):1; mem(test, test):128 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 to agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:03.303390 31527 slave.cpp:2600] Updated checkpointed resources from cpus(test, test):1; mem(test, test):128 to I0707 18:07:03.304319 31528 hierarchical.cpp:683] Updated allocation of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 from cpus(test, test):1; mem(test, test):128; cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] to ports(*):[31000-32000]; disk(*):3.70122e+06; cpus(*):16; mem(*):47270 I0707 18:07:03.305121 
31534 master.cpp:7098] Sending checkpointed resources to agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 at slave(1)@172.17.0.7:39581 (753c2ae3a486) I0707 18:07:03.305532 31531 slave.cpp:2600] Updated checkpointed resources from cpus(test, test):1; mem(test, test):128 to I0707 18:07:03.306602 31528 hierarchical.cpp:924] Recovered ports(*):[31000-32000]; disk(*):3.70122e+06; cpus(*):16; mem(*):47270 (total: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:03.311444 31528 hierarchical.cpp:683] Updated allocation of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 from cpus(test, test):1; mem(test, test):128; cpus(*):15; mem(*):47142; disk(*):3.70122e+06; ports(*):[31000-32000] to ports(*):[31000-32000]; disk(*):3.70122e+06; cpus(*):16; mem(*):47270 I0707 18:07:03.312047 31528 hierarchical.cpp:924] Recovered ports(*):[31000-32000]; disk(*):3.70122e+06; cpus(*):16; mem(*):47270 (total: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:03.841215 31530 slave.cpp:3806] executor(1)@172.17.0.7:55420 exited I0707 18:07:03.841861 31533 slave.cpp:3806] executor(1)@172.17.0.7:56283 exited I0707 18:07:03.857960 31525 containerizer.cpp:1863] Executor for container 'e31dfccd-5f2a-40e1-95a0-b5b253fac912' has exited I0707 18:07:03.858224 31525 containerizer.cpp:1622] Destroying container 'e31dfccd-5f2a-40e1-95a0-b5b253fac912' I0707 18:07:03.858165 31537 containerizer.cpp:1863] Executor for container 'f4bdc7d6-e7b1-4bd7-84e3-a7353f8b3e84' has exited I0707 18:07:03.858378 31537 containerizer.cpp:1622] Destroying container 'f4bdc7d6-e7b1-4bd7-84e3-a7353f8b3e84' I0707 18:07:03.866737 31532 provisioner.cpp:411] Ignoring destroy request for unknown container f4bdc7d6-e7b1-4bd7-84e3-a7353f8b3e84 I0707 18:07:03.867043 31527 slave.cpp:4163] Executor '4' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 exited with status 0 I0707 18:07:03.867420 31527 slave.cpp:4267] Cleaning up executor '4' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 at executor(1)@172.17.0.7:56283 I0707 18:07:03.867700 31536 provisioner.cpp:411] Ignoring destroy request for unknown container e31dfccd-5f2a-40e1-95a0-b5b253fac912 I0707 18:07:03.868448 31538 gc.cpp:55] Scheduling '/tmp/mesos-IFR4rG/0/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S2/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/4/runs/f4bdc7d6-e7b1-4bd7-84e3-a7353f8b3e84' for gc 6.99998995501037days in the future I0707 18:07:03.868978 31531 slave.cpp:4163] Executor '3' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 exited with status 0 I0707 18:07:03.869102 31531 slave.cpp:4267] Cleaning up executor '3' of framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 at executor(1)@172.17.0.7:55420 I0707 18:07:03.869125 31533 gc.cpp:55] Scheduling '/tmp/mesos-IFR4rG/0/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S2/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/4' for gc 6.99998994908444days in the future I0707 18:07:03.869405 31533 gc.cpp:55] Scheduling '/tmp/mesos-IFR4rG/1/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S0/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/3/runs/e31dfccd-5f2a-40e1-95a0-b5b253fac912' for gc 6.99998993851852days in the future I0707 18:07:03.869010 
31527 slave.cpp:4355] Cleaning up framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:03.869788 31526 status_update_manager.cpp:282] Closing status update streams for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:03.869946 31531 slave.cpp:4355] Cleaning up framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:03.870136 31539 status_update_manager.cpp:282] Closing status update streams for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:03.870043 31538 gc.cpp:55] Scheduling '/tmp/mesos-IFR4rG/1/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S0/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000/executors/3' for gc 6.99998993607704days in the future I0707 18:07:03.870403 31538 gc.cpp:55] Scheduling '/tmp/mesos-IFR4rG/1/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S0/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000' for gc 6.99998992845926days in the future I0707 18:07:03.870625 31533 gc.cpp:55] Scheduling '/tmp/mesos-IFR4rG/0/slaves/7892fbb2-1ac1-450f-8576-10c1df35f765-S2/frameworks/7892fbb2-1ac1-450f-8576-10c1df35f765-0000' for gc 6.99998993375111days in the future I0707 18:07:04.284621 31537 hierarchical.cpp:1632] No inverse offers to send out! I0707 18:07:04.284754 31537 hierarchical.cpp:1172] Performed allocation for 3 agents in 1.977235ms I0707 18:07:04.285658 31534 master.cpp:5835] Sending 2 offers to framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:04.286209 31534 dynamic_reservation_framework.cpp:84] Received offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O12 with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:07:04.286334 31534 dynamic_reservation_framework.cpp:84] Received offer 7892fbb2-1ac1-450f-8576-10c1df35f765-O13 with cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] I0707 18:07:04.286553 31534 sched.cpp:1987] Asked to stop the driver I0707 18:07:04.286667 31534 sched.cpp:917] Scheduler::resourceOffers took 458638ns I0707 18:07:04.286797 31534 sched.cpp:1187] Stopping framework '7892fbb2-1ac1-450f-8576-10c1df35f765-0000' I0707 18:07:04.287029 31539 master.cpp:6410] Processing TEARDOWN call for framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:04.287063 31539 master.cpp:6422] Removing framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 (Dynamic Reservation Framework (C++)) at scheduler-c51aa6e6-f5b6-4bfc-982c-9a71ea56a862@172.17.0.7:39581 I0707 18:07:04.288715 31539 hierarchical.cpp:382] Deactivated framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:04.289376 31539 hierarchical.cpp:924] Recovered cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] (total: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:04.290086 31539 hierarchical.cpp:924] Recovered cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] (total: cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:04.290809 31539 hierarchical.cpp:924] Recovered cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000] (total: 
cpus(*):16; mem(*):47270; disk(*):3.70122e+06; ports(*):[31000-32000], allocated: ) on agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 from framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:04.291211 31539 hierarchical.cpp:333] Removed framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:04.291306 31539 slave.cpp:2292] Asked to shut down framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 by master@172.17.0.7:39581 W0707 18:07:04.291337 31539 slave.cpp:2307] Cannot shut down unknown framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:04.291376 31539 slave.cpp:2292] Asked to shut down framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 by master@172.17.0.7:39581 W0707 18:07:04.291400 31539 slave.cpp:2307] Cannot shut down unknown framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:04.291467 31539 slave.cpp:2292] Asked to shut down framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 by master@172.17.0.7:39581 W0707 18:07:04.291494 31539 slave.cpp:2307] Cannot shut down unknown framework 7892fbb2-1ac1-450f-8576-10c1df35f765-0000 I0707 18:07:04.292186 31500 sched.cpp:1987] Asked to stop the driver I0707 18:07:04.292228 31500 sched.cpp:1990] Ignoring stop because the status of the driver is DRIVER_STOPPED I0707 18:07:04.292887 31537 master.cpp:1218] Master terminating I0707 18:07:04.293656 31525 hierarchical.cpp:510] Removed agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S2 I0707 18:07:04.293948 31525 hierarchical.cpp:510] Removed agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S1 I0707 18:07:04.294634 31526 hierarchical.cpp:510] Removed agent 7892fbb2-1ac1-450f-8576-10c1df35f765-S0 I0707 18:07:04.295132 31535 slave.cpp:3806] master@172.17.0.7:39581 exited I0707 18:07:04.295198 31534 slave.cpp:3806] master@172.17.0.7:39581 exited I0707 18:07:04.295245 31526 slave.cpp:3806] master@172.17.0.7:39581 exited W0707 18:07:04.300513 31535 slave.cpp:3811] Master disconnected! Waiting for a new master to be elected W0707 18:07:04.300745 31534 slave.cpp:3811] Master disconnected! Waiting for a new master to be elected W0707 18:07:04.300839 31526 slave.cpp:3811] Master disconnected! Waiting for a new master to be elected I0707 18:07:04.300978 31534 slave.cpp:841] Agent terminating I0707 18:07:04.305454 31538 slave.cpp:841] Agent terminating I0707 18:07:04.308804 31500 slave.cpp:841] Agent terminating [ OK ] ExamplesTest.DynamicReservationFramework (10418 ms) ",0,0,1,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5806","07/08/2016 01:10:10",5,"CNI isolator should prepare network related /etc/* files for containers using host mode but specify container images. ""Currently, the CNI isolator will just ignore those containers that want to join the host network (i.e., not specifying NetworkInfo). However, if the container specifies a container image, we need to make sure that it has access to host /etc/* files. We should perform the bind mount for the container. This is also what docker does when a container is running in host mode.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0 +"MESOS-5812","07/08/2016 19:48:56",3,"MasterAPITest.Subscribe is flaky ""This test seems to be flaky, although on Mac OS X and CentOS 7 the error a bit different. 
On Mac OS X: On CentOS 7 ""","[ RUN ] ContentType/MasterAPITest.Subscribe/0 I0708 11:42:48.474665 1927435008 cluster.cpp:155] Creating default 'local' authorizer I0708 11:42:48.480677 1927435008 leveldb.cpp:174] Opened db in 5727us I0708 11:42:48.481494 1927435008 leveldb.cpp:181] Compacted db in 722us I0708 11:42:48.481541 1927435008 leveldb.cpp:196] Created db iterator in 19us I0708 11:42:48.481572 1927435008 leveldb.cpp:202] Seeked to beginning of db in 9us I0708 11:42:48.481587 1927435008 leveldb.cpp:271] Iterated through 0 keys in the db in 7us I0708 11:42:48.481617 1927435008 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0708 11:42:48.482030 350982144 recover.cpp:451] Starting replica recovery I0708 11:42:48.482203 350982144 recover.cpp:477] Replica is in EMPTY status I0708 11:42:48.484107 348299264 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (3780)@127.0.0.1:50325 I0708 11:42:48.484318 350982144 recover.cpp:197] Received a recover response from a replica in EMPTY status I0708 11:42:48.484750 348835840 master.cpp:382] Master e055d60c-05ff-487e-82da-d0a43e52605c (localhost) started on 127.0.0.1:50325 I0708 11:42:48.484850 349908992 recover.cpp:568] Updating replica status to STARTING I0708 11:42:48.484788 348835840 master.cpp:384] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/private/tmp/Sn2Kf4/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/private/tmp/Sn2Kf4/master"""" --zk_session_timeout=""""10secs"""" W0708 11:42:48.485263 348835840 master.cpp:387] ************************************************** Master bound to loopback interface! Cannot communicate with remote schedulers or agents. You might want to set '--ip' flag to a routable IP address. 
************************************************** I0708 11:42:48.485291 348835840 master.cpp:434] Master only allowing authenticated frameworks to register I0708 11:42:48.485314 348835840 master.cpp:448] Master only allowing authenticated agents to register I0708 11:42:48.485335 348835840 master.cpp:461] Master only allowing authenticated HTTP frameworks to register I0708 11:42:48.485347 348835840 credentials.hpp:37] Loading credentials for authentication from '/private/tmp/Sn2Kf4/credentials' I0708 11:42:48.485373 349372416 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 397us I0708 11:42:48.485414 349372416 replica.cpp:320] Persisted replica status to STARTING I0708 11:42:48.485608 350982144 recover.cpp:477] Replica is in STARTING status I0708 11:42:48.485749 348835840 master.cpp:506] Using default 'crammd5' authenticator I0708 11:42:48.485852 348835840 master.cpp:578] Using default 'basic' HTTP authenticator I0708 11:42:48.486018 348835840 master.cpp:658] Using default 'basic' HTTP framework authenticator I0708 11:42:48.486140 348835840 master.cpp:705] Authorization enabled I0708 11:42:48.486486 350982144 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (3783)@127.0.0.1:50325 I0708 11:42:48.486758 352055296 recover.cpp:197] Received a recover response from a replica in STARTING status I0708 11:42:48.487176 350982144 recover.cpp:568] Updating replica status to VOTING I0708 11:42:48.487576 352055296 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 300us I0708 11:42:48.487658 352055296 replica.cpp:320] Persisted replica status to VOTING I0708 11:42:48.487736 350982144 recover.cpp:582] Successfully joined the Paxos group I0708 11:42:48.487951 350982144 recover.cpp:466] Recover process terminated I0708 11:42:48.489441 348835840 master.cpp:1973] The newly elected leader is master@127.0.0.1:50325 with id e055d60c-05ff-487e-82da-d0a43e52605c I0708 11:42:48.489518 348835840 master.cpp:1986] Elected as the leading master! 
I0708 11:42:48.489545 348835840 master.cpp:1673] Recovering from registrar I0708 11:42:48.489637 350982144 registrar.cpp:332] Recovering registrar I0708 11:42:48.490120 351518720 log.cpp:553] Attempting to start the writer I0708 11:42:48.491161 350445568 replica.cpp:493] Replica received implicit promise request from (3784)@127.0.0.1:50325 with proposal 1 I0708 11:42:48.491461 350445568 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 252us I0708 11:42:48.491528 350445568 replica.cpp:342] Persisted promised to 1 I0708 11:42:48.492337 348299264 coordinator.cpp:238] Coordinator attempting to fill missing positions I0708 11:42:48.493482 349372416 replica.cpp:388] Replica received explicit promise request from (3785)@127.0.0.1:50325 for position 0 with proposal 2 I0708 11:42:48.493854 349372416 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 283us I0708 11:42:48.493904 349372416 replica.cpp:712] Persisted action at 0 I0708 11:42:48.495302 348299264 replica.cpp:537] Replica received write request for position 0 from (3786)@127.0.0.1:50325 I0708 11:42:48.495455 348299264 leveldb.cpp:436] Reading position from leveldb took 45us I0708 11:42:48.495761 348299264 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 261us I0708 11:42:48.495803 348299264 replica.cpp:712] Persisted action at 0 I0708 11:42:48.496484 350445568 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 I0708 11:42:48.496795 350445568 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 255us I0708 11:42:48.496857 350445568 replica.cpp:712] Persisted action at 0 I0708 11:42:48.496896 350445568 replica.cpp:697] Replica learned NOP action at position 0 I0708 11:42:48.497445 350982144 log.cpp:569] Writer started with ending position 0 I0708 11:42:48.498523 350982144 leveldb.cpp:436] Reading position from leveldb took 80us I0708 11:42:48.499307 349908992 registrar.cpp:365] Successfully fetched the registry (0B) in 9.63712ms I0708 11:42:48.499464 349908992 registrar.cpp:464] Applied 1 operations in 36us; attempting to update the 'registry' I0708 11:42:48.499953 351518720 log.cpp:577] Attempting to append 159 bytes to the log I0708 11:42:48.500088 350982144 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 I0708 11:42:48.500880 348299264 replica.cpp:537] Replica received write request for position 1 from (3787)@127.0.0.1:50325 I0708 11:42:48.501186 348299264 leveldb.cpp:341] Persisting action (178 bytes) to leveldb took 259us I0708 11:42:48.501231 348299264 replica.cpp:712] Persisted action at 1 I0708 11:42:48.501786 351518720 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 I0708 11:42:48.502118 351518720 leveldb.cpp:341] Persisting action (180 bytes) to leveldb took 311us I0708 11:42:48.502260 351518720 replica.cpp:712] Persisted action at 1 I0708 11:42:48.502305 351518720 replica.cpp:697] Replica learned APPEND action at position 1 I0708 11:42:48.503475 349908992 registrar.cpp:509] Successfully updated the 'registry' in 3.944192ms I0708 11:42:48.503909 349908992 registrar.cpp:395] Successfully recovered registrar I0708 11:42:48.504003 350982144 log.cpp:596] Attempting to truncate the log to 1 I0708 11:42:48.504250 349372416 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 I0708 11:42:48.504546 350445568 master.cpp:1781] Recovered 0 agents from the Registry (121B) ; allowing 10mins for agents to re-register I0708 11:42:48.506022 352055296 replica.cpp:537] Replica 
received write request for position 2 from (3788)@127.0.0.1:50325 I0708 11:42:48.506479 352055296 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 320us I0708 11:42:48.506513 352055296 replica.cpp:712] Persisted action at 2 I0708 11:42:48.506978 351518720 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 I0708 11:42:48.507155 351518720 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 169us I0708 11:42:48.507237 351518720 leveldb.cpp:399] Deleting ~1 keys from leveldb took 37us I0708 11:42:48.507264 351518720 replica.cpp:712] Persisted action at 2 I0708 11:42:48.507285 351518720 replica.cpp:697] Replica learned TRUNCATE action at position 2 I0708 11:42:48.521363 1927435008 cluster.cpp:432] Creating default 'local' authorizer I0708 11:42:48.522498 350982144 slave.cpp:205] Agent started on 119)@127.0.0.1:50325 I0708 11:42:48.522538 350982144 slave.cpp:206] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/var/folders/ny/tcvyblqj43s2gdh2_895v9nw0000gp/T/ContentType_MasterAPITest_Subscribe_0_VaPndX/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/var/folders/ny/tcvyblqj43s2gdh2_895v9nw0000gp/T/ContentType_MasterAPITest_Subscribe_0_VaPndX/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --http_credentials=""""/var/folders/ny/tcvyblqj43s2gdh2_895v9nw0000gp/T/ContentType_MasterAPITest_Subscribe_0_VaPndX/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/Users/zhitao/Uber/sync/zhitao-mesos1.dev.uber.com/home/uber/mesos/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --version=""""false"""" --work_dir=""""/var/folders/ny/tcvyblqj43s2gdh2_895v9nw0000gp/T/ContentType_MasterAPITest_Subscribe_0_VaPndX"""" W0708 11:42:48.522903 350982144 slave.cpp:209] ************************************************** Agent bound to loopback interface! Cannot communicate with remote master(s). You might want to set '--ip' flag to a routable IP address. 
************************************************** I0708 11:42:48.522922 350982144 credentials.hpp:86] Loading credential for authentication from '/var/folders/ny/tcvyblqj43s2gdh2_895v9nw0000gp/T/ContentType_MasterAPITest_Subscribe_0_VaPndX/credential' W0708 11:42:48.522965 1927435008 scheduler.cpp:157] ************************************************** Scheduler driver bound to loopback interface! Cannot communicate with remote master(s). You might want to set 'LIBPROCESS_IP' environment variable to use a routable IP address. ************************************************** I0708 11:42:48.522992 1927435008 scheduler.cpp:172] Version: 1.0.0 I0708 11:42:48.523066 350982144 slave.cpp:343] Agent using credential for: test-principal I0708 11:42:48.523092 350982144 credentials.hpp:37] Loading credentials for authentication from '/var/folders/ny/tcvyblqj43s2gdh2_895v9nw0000gp/T/ContentType_MasterAPITest_Subscribe_0_VaPndX/http_credentials' I0708 11:42:48.523334 350982144 slave.cpp:395] Using default 'basic' HTTP authenticator I0708 11:42:48.523973 352055296 scheduler.cpp:461] New master detected at master@127.0.0.1:50325 I0708 11:42:48.524050 350982144 slave.cpp:594] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0708 11:42:48.524196 350982144 slave.cpp:602] Agent attributes: [ ] I0708 11:42:48.524224 350982144 slave.cpp:607] Agent hostname: localhost I0708 11:42:48.525522 350445568 state.cpp:57] Recovering state from '/var/folders/ny/tcvyblqj43s2gdh2_895v9nw0000gp/T/ContentType_MasterAPITest_Subscribe_0_VaPndX/meta' I0708 11:42:48.525853 350445568 status_update_manager.cpp:200] Recovering status update manager I0708 11:42:48.526165 350445568 slave.cpp:4856] Finished recovery I0708 11:42:48.527223 349372416 status_update_manager.cpp:174] Pausing sending status updates I0708 11:42:48.527231 352055296 slave.cpp:969] New master detected at master@127.0.0.1:50325 I0708 11:42:48.527276 352055296 slave.cpp:1028] Authenticating with master master@127.0.0.1:50325 I0708 11:42:48.527328 352055296 slave.cpp:1039] Using default CRAM-MD5 authenticatee I0708 11:42:48.527561 352055296 slave.cpp:1001] Detecting new master I0708 11:42:48.527582 348299264 authenticatee.cpp:121] Creating new client SASL connection I0708 11:42:48.528666 349908992 master.cpp:6006] Authenticating slave(119)@127.0.0.1:50325 I0708 11:42:48.528880 352055296 authenticator.cpp:98] Creating new server SASL connection I0708 11:42:48.529089 350445568 http.cpp:381] HTTP POST for /master/api/v1/scheduler from 127.0.0.1:50918 I0708 11:42:48.529233 350445568 master.cpp:2272] Received subscription request for HTTP framework 'default' I0708 11:42:48.529261 350445568 master.cpp:2012] Authorizing framework principal 'test-principal' to receive offers for role '*' I0708 11:42:48.529323 352055296 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I0708 11:42:48.529357 352055296 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I0708 11:42:48.529417 352055296 authenticator.cpp:204] Received SASL authentication start I0708 11:42:48.529503 352055296 authenticator.cpp:326] Authentication requires more steps I0708 11:42:48.529561 352055296 master.cpp:2370] Subscribing framework 'default' with checkpointing disabled and capabilities [ ] I0708 11:42:48.529721 349908992 authenticatee.cpp:259] Received SASL authentication step I0708 11:42:48.530005 348835840 authenticator.cpp:232] Received SASL authentication step I0708 11:42:48.530241 348835840 authenticator.cpp:318] 
Authentication success I0708 11:42:48.530254 350445568 hierarchical.cpp:271] Added framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 I0708 11:42:48.530900 349908992 authenticatee.cpp:299] Authentication success I0708 11:42:48.531186 350982144 master.cpp:6036] Successfully authenticated principal 'test-principal' at slave(119)@127.0.0.1:50325 I0708 11:42:48.531657 348299264 slave.cpp:1123] Successfully authenticated with master master@127.0.0.1:50325 I0708 11:42:48.531935 349372416 master.cpp:4676] Registering agent at slave(119)@127.0.0.1:50325 (localhost) with id e055d60c-05ff-487e-82da-d0a43e52605c-S0 I0708 11:42:48.532304 349908992 registrar.cpp:464] Applied 1 operations in 55us; attempting to update the 'registry' I0708 11:42:48.532908 348835840 log.cpp:577] Attempting to append 326 bytes to the log I0708 11:42:48.533015 352055296 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 I0708 11:42:48.533641 349372416 replica.cpp:537] Replica received write request for position 3 from (3798)@127.0.0.1:50325 I0708 11:42:48.533867 349372416 leveldb.cpp:341] Persisting action (345 bytes) to leveldb took 186us I0708 11:42:48.533917 349372416 replica.cpp:712] Persisted action at 3 I0708 11:42:48.537066 349908992 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 I0708 11:42:48.538169 349908992 leveldb.cpp:341] Persisting action (347 bytes) to leveldb took 914us I0708 11:42:48.538226 349908992 replica.cpp:712] Persisted action at 3 I0708 11:42:48.538255 349908992 replica.cpp:697] Replica learned APPEND action at position 3 I0708 11:42:48.539247 352055296 registrar.cpp:509] Successfully updated the 'registry' in 6.895104ms I0708 11:42:48.539302 348299264 log.cpp:596] Attempting to truncate the log to 3 I0708 11:42:48.539393 348299264 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 I0708 11:42:48.539798 348835840 master.cpp:4745] Registered agent e055d60c-05ff-487e-82da-d0a43e52605c-S0 at slave(119)@127.0.0.1:50325 (localhost) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0708 11:42:48.539881 348299264 hierarchical.cpp:478] Added agent e055d60c-05ff-487e-82da-d0a43e52605c-S0 (localhost) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) I0708 11:42:48.539901 349908992 slave.cpp:1169] Registered with master master@127.0.0.1:50325; given agent ID e055d60c-05ff-487e-82da-d0a43e52605c-S0 I0708 11:42:48.540287 350445568 status_update_manager.cpp:181] Resuming sending status updates I0708 11:42:48.540501 351518720 replica.cpp:537] Replica received write request for position 4 from (3799)@127.0.0.1:50325 I0708 11:42:48.540583 352055296 master.cpp:5835] Sending 1 offers to framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 (default) I0708 11:42:48.540798 351518720 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 247us I0708 11:42:48.540868 351518720 replica.cpp:712] Persisted action at 4 I0708 11:42:48.540895 349908992 slave.cpp:1229] Forwarding total oversubscribed resources I0708 11:42:48.541035 352055296 master.cpp:5128] Received update of agent e055d60c-05ff-487e-82da-d0a43e52605c-S0 at slave(119)@127.0.0.1:50325 (localhost) with total oversubscribed resources I0708 11:42:48.541291 349908992 hierarchical.cpp:542] Agent e055d60c-05ff-487e-82da-d0a43e52605c-S0 (localhost) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; 
ports(*):[31000-32000]) I0708 11:42:48.541630 350982144 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 I0708 11:42:48.541911 350982144 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 189us I0708 11:42:48.541965 350982144 leveldb.cpp:399] Deleting ~2 keys from leveldb took 28us I0708 11:42:48.541987 350982144 replica.cpp:712] Persisted action at 4 I0708 11:42:48.542006 350982144 replica.cpp:697] Replica learned TRUNCATE action at position 4 I0708 11:42:48.544836 352055296 http.cpp:381] HTTP POST for /master/api/v1 from 127.0.0.1:50920 I0708 11:42:48.544884 352055296 http.cpp:484] Processing call SUBSCRIBE I0708 11:42:48.545382 352055296 master.cpp:7599] Added subscriber: a85e7341-ac15-4f18-9021-1a2efa326442 to the list of active subscribers I0708 11:42:48.550048 348835840 http.cpp:381] HTTP POST for /master/api/v1/scheduler from 127.0.0.1:50919 I0708 11:42:48.550339 348835840 master.cpp:3468] Processing ACCEPT call for offers: [ e055d60c-05ff-487e-82da-d0a43e52605c-O0 ] on agent e055d60c-05ff-487e-82da-d0a43e52605c-S0 at slave(119)@127.0.0.1:50325 (localhost) for framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 (default) I0708 11:42:48.550390 348835840 master.cpp:3106] Authorizing framework principal 'test-principal' to launch task d94e54c0-8c89-43bd-be2f-adeb8cf70cb1 W0708 11:42:48.551434 348835840 validation.cpp:650] Executor default for task d94e54c0-8c89-43bd-be2f-adeb8cf70cb1 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W0708 11:42:48.551477 348835840 validation.cpp:662] Executor default for task d94e54c0-8c89-43bd-be2f-adeb8cf70cb1 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. 
I0708 11:42:48.551803 348835840 master.cpp:7565] Adding task d94e54c0-8c89-43bd-be2f-adeb8cf70cb1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on agent e055d60c-05ff-487e-82da-d0a43e52605c-S0 (localhost) I0708 11:42:48.551949 348835840 master.cpp:3957] Launching task d94e54c0-8c89-43bd-be2f-adeb8cf70cb1 of framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 (default) with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on agent e055d60c-05ff-487e-82da-d0a43e52605c-S0 at slave(119)@127.0.0.1:50325 (localhost) I0708 11:42:48.552151 352055296 slave.cpp:1569] Got assigned task d94e54c0-8c89-43bd-be2f-adeb8cf70cb1 for framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 I0708 11:42:48.552592 352055296 slave.cpp:1688] Launching task d94e54c0-8c89-43bd-be2f-adeb8cf70cb1 for framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 I0708 11:42:48.553282 352055296 paths.cpp:528] Trying to chown '/var/folders/ny/tcvyblqj43s2gdh2_895v9nw0000gp/T/ContentType_MasterAPITest_Subscribe_0_VaPndX/slaves/e055d60c-05ff-487e-82da-d0a43e52605c-S0/frameworks/e055d60c-05ff-487e-82da-d0a43e52605c-0000/executors/default/runs/62add906-b60f-43ec-ab06-0514a798de26' to user 'zhitao' I0708 11:42:48.566201 352055296 slave.cpp:5748] Launching executor default of framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 with resources in work directory '/var/folders/ny/tcvyblqj43s2gdh2_895v9nw0000gp/T/ContentType_MasterAPITest_Subscribe_0_VaPndX/slaves/e055d60c-05ff-487e-82da-d0a43e52605c-S0/frameworks/e055d60c-05ff-487e-82da-d0a43e52605c-0000/executors/default/runs/62add906-b60f-43ec-ab06-0514a798de26' I0708 11:42:48.567876 352055296 executor.cpp:188] Version: 1.0.0 I0708 11:42:48.568428 352055296 slave.cpp:1914] Queuing task 'd94e54c0-8c89-43bd-be2f-adeb8cf70cb1' for executor 'default' of framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 E0708 11:42:48.571115 352591872 process.cpp:2104] Failed to shutdown socket with fd 254: Socket is not connected W0708 11:42:48.570768 352055296 executor.cpp:739] Dropping SUBSCRIBE: Executor is in state DISCONNECTED GMOCK WARNING: Uninteresting mock function call - returning directly. 
Function call: disconnected(0x7fad21fcebf0) Stack trace: ../../src/tests/api_tests.cpp:1537: Failure Failed to wait 15secs for event E0708 11:43:03.556205 352591872 process.cpp:2104] Failed to shutdown socket with fd 235: Socket is not connected E0708 11:43:03.556584 352591872 process.cpp:2104] Failed to shutdown socket with fd 223: Socket is not connected I0708 11:43:03.557134 349908992 master.cpp:1410] Framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 (default) disconnected I0708 11:43:03.557176 349908992 master.cpp:2851] Disconnecting framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 (default) I0708 11:43:03.557209 349908992 master.cpp:2875] Deactivating framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 (default) I0708 11:43:03.557415 349908992 master.cpp:1423] Giving framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 (default) 0ns to failover I0708 11:43:03.557456 348835840 hierarchical.cpp:382] Deactivated framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 I0708 11:43:03.557878 350445568 master.cpp:5687] Framework failover timeout, removing framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 (default) I0708 11:43:03.557945 350445568 master.cpp:6422] Removing framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 (default) I0708 11:43:03.558076 352055296 slave.cpp:2292] Asked to shut down framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 by master@127.0.0.1:50325 I0708 11:43:03.558106 352055296 slave.cpp:2317] Shutting down framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 I0708 11:43:03.558131 352055296 slave.cpp:4481] Shutting down executor 'default' of framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 W0708 11:43:03.558147 352055296 slave.hpp:768] Unable to send event to executor 'default' of framework e055d60c-05ff-487e-82da-d0a43e52605c-0000: unknown connection type I0708 11:43:03.558188 350445568 master.cpp:6959] Updating the state of task d94e54c0-8c89-43bd-be2f-adeb8cf70cb1 of framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 (latest state: TASK_KILLED, status update state: TASK_KILLED) I0708 11:43:03.558507 350445568 master.cpp:7025] Removing task d94e54c0-8c89-43bd-be2f-adeb8cf70cb1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 on agent e055d60c-05ff-487e-82da-d0a43e52605c-S0 at slave(119)@127.0.0.1:50325 (localhost) I0708 11:43:03.558709 350445568 master.cpp:7054] Removing executor 'default' with resources of framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 on agent e055d60c-05ff-487e-82da-d0a43e52605c-S0 at slave(119)@127.0.0.1:50325 (localhost) I0708 11:43:03.559051 349372416 hierarchical.cpp:333] Removed framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 I0708 11:43:03.567955 350982144 slave.cpp:4163] Executor 'default' of framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 exited with status 0 I0708 11:43:03.568176 350982144 slave.cpp:4267] Cleaning up executor 'default' of framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 W0708 11:43:03.568258 348299264 master.cpp:5369] Ignoring unknown exited executor 'default' of framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 on agent e055d60c-05ff-487e-82da-d0a43e52605c-S0 at slave(119)@127.0.0.1:50325 (localhost) I0708 11:43:03.568584 348299264 gc.cpp:55] Scheduling '/var/folders/ny/tcvyblqj43s2gdh2_895v9nw0000gp/T/ContentType_MasterAPITest_Subscribe_0_VaPndX/slaves/e055d60c-05ff-487e-82da-d0a43e52605c-S0/frameworks/e055d60c-05ff-487e-82da-d0a43e52605c-0000/executors/default/runs/62add906-b60f-43ec-ab06-0514a798de26' for gc 
6.99999342143407days in the future I0708 11:43:03.568864 350982144 slave.cpp:4355] Cleaning up framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 I0708 11:43:03.568879 352055296 gc.cpp:55] Scheduling '/var/folders/ny/tcvyblqj43s2gdh2_895v9nw0000gp/T/ContentType_MasterAPITest_Subscribe_0_VaPndX/slaves/e055d60c-05ff-487e-82da-d0a43e52605c-S0/frameworks/e055d60c-05ff-487e-82da-d0a43e52605c-0000/executors/default' for gc 6.99999341739556days in the future I0708 11:43:03.569056 350445568 status_update_manager.cpp:282] Closing status update streams for framework e055d60c-05ff-487e-82da-d0a43e52605c-0000 I0708 11:43:03.569247 350982144 slave.cpp:841] Agent terminating I0708 11:43:03.569239 348835840 gc.cpp:55] Scheduling '/var/folders/ny/tcvyblqj43s2gdh2_895v9nw0000gp/T/ContentType_MasterAPITest_Subscribe_0_VaPndX/slaves/e055d60c-05ff-487e-82da-d0a43e52605c-S0/frameworks/e055d60c-05ff-487e-82da-d0a43e52605c-0000' for gc 6.99999341315852days in the future I0708 11:43:03.569524 350982144 master.cpp:1371] Agent e055d60c-05ff-487e-82da-d0a43e52605c-S0 at slave(119)@127.0.0.1:50325 (localhost) disconnected I0708 11:43:03.569577 350982144 master.cpp:2910] Disconnecting agent e055d60c-05ff-487e-82da-d0a43e52605c-S0 at slave(119)@127.0.0.1:50325 (localhost) I0708 11:43:03.569767 350982144 master.cpp:2929] Deactivating agent e055d60c-05ff-487e-82da-d0a43e52605c-S0 at slave(119)@127.0.0.1:50325 (localhost) I0708 11:43:03.570020 349372416 hierarchical.cpp:571] Agent e055d60c-05ff-487e-82da-d0a43e52605c-S0 deactivated ../../src/tests/api_tests.cpp:1509: Failure Actual function call count doesn't match EXPECT_CALL(*executor, acknowledged(_, _))... Expected: to be called once Actual: never called - unsatisfied and active ../../src/tests/api_tests.cpp:1505: Failure Actual function call count doesn't match EXPECT_CALL(*executor, launch(_, _))... Expected: to be called once Actual: never called - unsatisfied and active ../../src/tests/api_tests.cpp:1503: Failure Actual function call count doesn't match EXPECT_CALL(*executor, subscribed(_, _))... Expected: to be called once Actual: never called - unsatisfied and active ../../src/tests/api_tests.cpp:1496: Failure Actual function call count doesn't match EXPECT_CALL(*scheduler, update(_, _))... 
Expected: to be called twice Actual: never called - unsatisfied and active I0708 11:43:03.572598 1927435008 master.cpp:1218] Master terminating I0708 11:43:03.572844 352055296 hierarchical.cpp:510] Removed agent e055d60c-05ff-487e-82da-d0a43e52605c-S0 [ FAILED ] ContentType/MasterAPITest.Subscribe/0, where GetParam() = application/x-protobuf (15105 ms) [ RUN ] ContentType/MasterAPITest.Subscribe/0 I0708 15:42:16.042171 29138 cluster.cpp:155] Creating default 'local' authorizer I0708 15:42:16.154358 29138 leveldb.cpp:174] Opened db in 111.818825ms I0708 15:42:16.197175 29138 leveldb.cpp:181] Compacted db in 42.714984ms I0708 15:42:16.197293 29138 leveldb.cpp:196] Created db iterator in 32582ns I0708 15:42:16.197324 29138 leveldb.cpp:202] Seeked to beginning of db in 4050ns I0708 15:42:16.197343 29138 leveldb.cpp:271] Iterated through 0 keys in the db in 538ns I0708 15:42:16.197417 29138 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I0708 15:42:16.198655 29157 recover.cpp:451] Starting replica recovery I0708 15:42:16.199364 29161 recover.cpp:477] Replica is in EMPTY status I0708 15:42:16.200865 29161 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (16431)@172.17.0.3:34502 I0708 15:42:16.201282 29158 recover.cpp:197] Received a recover response from a replica in EMPTY status I0708 15:42:16.203222 29160 recover.cpp:568] Updating replica status to STARTING I0708 15:42:16.204633 29158 master.cpp:382] Master 2aea5b7f-ec9f-4fda-8f34-877d8adf064f (0382d073a49a) started on 172.17.0.3:34502 I0708 15:42:16.204675 29158 master.cpp:384] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/Lu916I/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/mesos/mesos-1.1.0/_inst/share/mesos/webui"""" --work_dir=""""/tmp/Lu916I/master"""" --zk_session_timeout=""""10secs"""" I0708 15:42:16.205265 29158 master.cpp:434] Master only allowing authenticated frameworks to register I0708 15:42:16.205283 29158 master.cpp:448] Master only allowing authenticated agents to register I0708 15:42:16.205294 29158 master.cpp:461] Master only allowing authenticated HTTP frameworks to register I0708 15:42:16.205307 29158 credentials.hpp:37] Loading credentials for authentication from '/tmp/Lu916I/credentials' I0708 15:42:16.205705 29158 master.cpp:506] Using default 'crammd5' authenticator I0708 15:42:16.205940 29158 master.cpp:578] Using default 'basic' HTTP authenticator I0708 15:42:16.206192 29158 master.cpp:658] 
Using default 'basic' HTTP framework authenticator I0708 15:42:16.206374 29158 master.cpp:705] Authorization enabled I0708 15:42:16.206866 29172 hierarchical.cpp:151] Initialized hierarchical allocator process I0708 15:42:16.207018 29172 whitelist_watcher.cpp:77] No whitelist given I0708 15:42:16.210026 29165 master.cpp:1973] The newly elected leader is master@172.17.0.3:34502 with id 2aea5b7f-ec9f-4fda-8f34-877d8adf064f I0708 15:42:16.210187 29165 master.cpp:1986] Elected as the leading master! I0708 15:42:16.210330 29165 master.cpp:1673] Recovering from registrar I0708 15:42:16.210577 29171 registrar.cpp:332] Recovering registrar I0708 15:42:16.239378 29160 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 35.540287ms I0708 15:42:16.239485 29160 replica.cpp:320] Persisted replica status to STARTING I0708 15:42:16.239938 29161 recover.cpp:477] Replica is in STARTING status I0708 15:42:16.242017 29165 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (16434)@172.17.0.3:34502 I0708 15:42:16.242527 29167 recover.cpp:197] Received a recover response from a replica in STARTING status I0708 15:42:16.243140 29167 recover.cpp:568] Updating replica status to VOTING I0708 15:42:16.281746 29167 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 38.318978ms I0708 15:42:16.281828 29167 replica.cpp:320] Persisted replica status to VOTING I0708 15:42:16.282094 29170 recover.cpp:582] Successfully joined the Paxos group I0708 15:42:16.282440 29170 recover.cpp:466] Recover process terminated I0708 15:42:16.283365 29170 log.cpp:553] Attempting to start the writer I0708 15:42:16.285605 29167 replica.cpp:493] Replica received implicit promise request from (16435)@172.17.0.3:34502 with proposal 1 I0708 15:42:16.315435 29167 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 29.761608ms I0708 15:42:16.315528 29167 replica.cpp:342] Persisted promised to 1 I0708 15:42:16.317147 29159 coordinator.cpp:238] Coordinator attempting to fill missing positions I0708 15:42:16.318914 29160 replica.cpp:388] Replica received explicit promise request from (16436)@172.17.0.3:34502 for position 0 with proposal 2 I0708 15:42:16.348886 29160 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 29.896283ms I0708 15:42:16.349161 29160 replica.cpp:712] Persisted action at 0 I0708 15:42:16.350939 29170 replica.cpp:537] Replica received write request for position 0 from (16437)@172.17.0.3:34502 I0708 15:42:16.351029 29170 leveldb.cpp:436] Reading position from leveldb took 42967ns I0708 15:42:16.382378 29170 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 31.28917ms I0708 15:42:16.382464 29170 replica.cpp:712] Persisted action at 0 I0708 15:42:16.383646 29169 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 I0708 15:42:16.415894 29169 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 32.189511ms I0708 15:42:16.416015 29169 replica.cpp:712] Persisted action at 0 I0708 15:42:16.416056 29169 replica.cpp:697] Replica learned NOP action at position 0 I0708 15:42:16.417312 29168 log.cpp:569] Writer started with ending position 0 I0708 15:42:16.418628 29167 leveldb.cpp:436] Reading position from leveldb took 56748ns I0708 15:42:16.420019 29165 registrar.cpp:365] Successfully fetched the registry (0B) in 209.31712ms I0708 15:42:16.420155 29165 registrar.cpp:464] Applied 1 operations in 30566ns; attempting to update the 'registry' I0708 15:42:16.420994 29172 log.cpp:577] Attempting to append 168 bytes to 
the log I0708 15:42:16.421149 29157 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 I0708 15:42:16.422169 29162 replica.cpp:537] Replica received write request for position 1 from (16438)@172.17.0.3:34502 I0708 15:42:16.457743 29162 leveldb.cpp:341] Persisting action (187 bytes) to leveldb took 35.505294ms I0708 15:42:16.457844 29162 replica.cpp:712] Persisted action at 1 I0708 15:42:16.459228 29172 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 I0708 15:42:16.495947 29172 leveldb.cpp:341] Persisting action (189 bytes) to leveldb took 36.653391ms I0708 15:42:16.496048 29172 replica.cpp:712] Persisted action at 1 I0708 15:42:16.496091 29172 replica.cpp:697] Replica learned APPEND action at position 1 I0708 15:42:16.497947 29172 registrar.cpp:509] Successfully updated the 'registry' in 77.703936ms I0708 15:42:16.498132 29172 registrar.cpp:395] Successfully recovered registrar I0708 15:42:16.498169 29171 log.cpp:596] Attempting to truncate the log to 1 I0708 15:42:16.498294 29162 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 I0708 15:42:16.498668 29171 master.cpp:1781] Recovered 0 agents from the Registry (129B) ; allowing 10mins for agents to re-register I0708 15:42:16.498919 29162 hierarchical.cpp:178] Skipping recovery of hierarchical allocator: nothing to recover I0708 15:42:16.499577 29171 replica.cpp:537] Replica received write request for position 2 from (16439)@172.17.0.3:34502 I0708 15:42:16.521065 29171 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 21.423468ms I0708 15:42:16.521160 29171 replica.cpp:712] Persisted action at 2 I0708 15:42:16.522766 29171 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 I0708 15:42:16.546223 29171 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 23.402601ms I0708 15:42:16.546380 29171 leveldb.cpp:399] Deleting ~1 keys from leveldb took 70830ns I0708 15:42:16.546411 29171 replica.cpp:712] Persisted action at 2 I0708 15:42:16.546445 29171 replica.cpp:697] Replica learned TRUNCATE action at position 2 I0708 15:42:16.560467 29138 cluster.cpp:432] Creating default 'local' authorizer I0708 15:42:16.565003 29162 slave.cpp:205] Agent started on 449)@172.17.0.3:34502 I0708 15:42:16.565520 29138 scheduler.cpp:172] Version: 1.1.0 I0708 15:42:16.565150 29162 slave.cpp:206] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/ContentType_MasterAPITest_Subscribe_0_be660M/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" 
--executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/ContentType_MasterAPITest_Subscribe_0_be660M/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""true"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --http_credentials=""""/tmp/ContentType_MasterAPITest_Subscribe_0_be660M/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher_dir=""""/mesos/mesos-1.1.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/ContentType_MasterAPITest_Subscribe_0_be660M"""" I0708 15:42:16.566128 29162 credentials.hpp:86] Loading credential for authentication from '/tmp/ContentType_MasterAPITest_Subscribe_0_be660M/credential' I0708 15:42:16.566423 29162 slave.cpp:343] Agent using credential for: test-principal I0708 15:42:16.566520 29171 scheduler.cpp:461] New master detected at master@172.17.0.3:34502 I0708 15:42:16.566543 29162 credentials.hpp:37] Loading credentials for authentication from '/tmp/ContentType_MasterAPITest_Subscribe_0_be660M/http_credentials' I0708 15:42:16.566557 29171 scheduler.cpp:470] Waiting for 0ns before initiating a re-(connection) attempt with the master I0708 15:42:16.566838 29162 slave.cpp:395] Using default 'basic' HTTP authenticator I0708 15:42:16.568023 29162 resources.cpp:572] Parsing resources as JSON failed: cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000] Trying semicolon-delimited string format instead I0708 15:42:16.568527 29162 resources.cpp:572] Parsing resources as JSON failed: cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000] Trying semicolon-delimited string format instead I0708 15:42:16.569443 29162 slave.cpp:594] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0708 15:42:16.569535 29162 slave.cpp:602] Agent attributes: [ ] I0708 15:42:16.569552 29162 slave.cpp:607] Agent hostname: 0382d073a49a I0708 15:42:16.571897 29165 state.cpp:57] Recovering state from '/tmp/ContentType_MasterAPITest_Subscribe_0_be660M/meta' I0708 15:42:16.572376 29165 status_update_manager.cpp:200] Recovering status update manager I0708 15:42:16.572638 29165 slave.cpp:4856] Finished recovery I0708 15:42:16.573194 29165 slave.cpp:5028] Querying resource estimator for oversubscribable resources I0708 15:42:16.574082 29165 slave.cpp:969] New master detected at master@172.17.0.3:34502 I0708 15:42:16.574111 29165 slave.cpp:1028] Authenticating with master master@172.17.0.3:34502 I0708 15:42:16.574174 29165 slave.cpp:1039] Using default CRAM-MD5 authenticatee I0708 15:42:16.574213 29162 status_update_manager.cpp:174] Pausing sending status updates I0708 15:42:16.574323 29165 slave.cpp:1001] Detecting new master I0708 15:42:16.574525 29165 
authenticatee.cpp:121] Creating new client SASL connection I0708 15:42:16.574851 29160 slave.cpp:5042] Received oversubscribable resources from the resource estimator I0708 15:42:16.575621 29164 scheduler.cpp:349] Connected with the master at http://172.17.0.3:34502/master/api/v1/scheduler I0708 15:42:16.577546 29164 scheduler.cpp:231] Sending SUBSCRIBE call to http://172.17.0.3:34502/master/api/v1/scheduler I0708 15:42:16.579020 29168 master.cpp:6006] Authenticating slave(449)@172.17.0.3:34502 I0708 15:42:16.579133 29165 authenticator.cpp:414] Starting authentication session for crammd5_authenticatee(926)@172.17.0.3:34502 I0708 15:42:16.579236 29168 process.cpp:3322] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler' I0708 15:42:16.579448 29157 authenticator.cpp:98] Creating new server SASL connection I0708 15:42:16.579684 29165 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I0708 15:42:16.579722 29165 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I0708 15:42:16.579831 29165 authenticator.cpp:204] Received SASL authentication start I0708 15:42:16.579910 29165 authenticator.cpp:326] Authentication requires more steps I0708 15:42:16.580013 29165 authenticatee.cpp:259] Received SASL authentication step I0708 15:42:16.580111 29165 authenticator.cpp:232] Received SASL authentication step I0708 15:42:16.580143 29165 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '0382d073a49a' server FQDN: '0382d073a49a' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0708 15:42:16.580157 29165 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I0708 15:42:16.580196 29165 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0708 15:42:16.580227 29165 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '0382d073a49a' server FQDN: '0382d073a49a' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0708 15:42:16.580240 29165 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0708 15:42:16.580251 29165 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0708 15:42:16.580271 29165 authenticator.cpp:318] Authentication success I0708 15:42:16.580420 29165 authenticatee.cpp:299] Authentication success I0708 15:42:16.580525 29165 authenticator.cpp:432] Authentication session cleanup for crammd5_authenticatee(926)@172.17.0.3:34502 I0708 15:42:16.580840 29165 slave.cpp:1123] Successfully authenticated with master master@172.17.0.3:34502 I0708 15:42:16.581131 29165 slave.cpp:1529] Will retry registration in 814473ns if necessary I0708 15:42:16.581560 29168 master.cpp:6036] Successfully authenticated principal 'test-principal' at slave(449)@172.17.0.3:34502 I0708 15:42:16.581795 29168 master.cpp:4676] Registering agent at slave(449)@172.17.0.3:34502 (0382d073a49a) with id 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 I0708 15:42:16.583050 29162 registrar.cpp:464] Applied 1 operations in 131284ns; attempting to update the 'registry' I0708 15:42:16.584233 29170 slave.cpp:1529] Will retry registration in 27.411836ms if necessary I0708 15:42:16.584384 29158 master.cpp:4664] Ignoring register agent message from slave(449)@172.17.0.3:34502 (0382d073a49a) as admission is already in progress I0708 15:42:16.585019 29168 log.cpp:577] Attempting to append 337 bytes to the log 
I0708 15:42:16.585113 29162 http.cpp:381] HTTP POST for /master/api/v1/scheduler from 172.17.0.3:33142 I0708 15:42:16.585156 29159 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 I0708 15:42:16.585417 29162 master.cpp:2272] Received subscription request for HTTP framework 'default' I0708 15:42:16.585486 29162 master.cpp:2012] Authorizing framework principal 'test-principal' to receive offers for role '*' I0708 15:42:16.586509 29159 replica.cpp:537] Replica received write request for position 3 from (16448)@172.17.0.3:34502 I0708 15:42:16.587302 29168 master.cpp:2370] Subscribing framework 'default' with checkpointing disabled and capabilities [ ] I0708 15:42:16.588059 29170 master.hpp:2010] Sending heartbeat to 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.588745 29168 hierarchical.cpp:271] Added framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.588819 29168 hierarchical.cpp:1537] No allocations performed I0708 15:42:16.588851 29168 hierarchical.cpp:1632] No inverse offers to send out! I0708 15:42:16.588910 29168 hierarchical.cpp:1172] Performed allocation for 0 agents in 138375ns I0708 15:42:16.593391 29162 scheduler.cpp:662] Enqueuing event SUBSCRIBED received from http://172.17.0.3:34502/master/api/v1/scheduler I0708 15:42:16.594115 29162 scheduler.cpp:662] Enqueuing event HEARTBEAT received from http://172.17.0.3:34502/master/api/v1/scheduler I0708 15:42:16.612622 29162 slave.cpp:1529] Will retry registration in 35.186867ms if necessary I0708 15:42:16.613113 29169 master.cpp:4664] Ignoring register agent message from slave(449)@172.17.0.3:34502 (0382d073a49a) as admission is already in progress I0708 15:42:16.621047 29159 leveldb.cpp:341] Persisting action (356 bytes) to leveldb took 34.409256ms I0708 15:42:16.621134 29159 replica.cpp:712] Persisted action at 3 I0708 15:42:16.622661 29159 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 I0708 15:42:16.646806 29159 leveldb.cpp:341] Persisting action (358 bytes) to leveldb took 24.085822ms I0708 15:42:16.646906 29159 replica.cpp:712] Persisted action at 3 I0708 15:42:16.646986 29159 replica.cpp:697] Replica learned APPEND action at position 3 I0708 15:42:16.649273 29157 registrar.cpp:509] Successfully updated the 'registry' in 66.121984ms I0708 15:42:16.649538 29167 slave.cpp:1529] Will retry registration in 111.475397ms if necessary I0708 15:42:16.649603 29158 log.cpp:596] Attempting to truncate the log to 3 I0708 15:42:16.649811 29157 master.cpp:4664] Ignoring register agent message from slave(449)@172.17.0.3:34502 (0382d073a49a) as admission is already in progress I0708 15:42:16.650069 29160 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 I0708 15:42:16.650713 29160 slave.cpp:3760] Received ping from slave-observer(404)@172.17.0.3:34502 I0708 15:42:16.650879 29160 slave.cpp:1169] Registered with master master@172.17.0.3:34502; given agent ID 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 I0708 15:42:16.651007 29160 fetcher.cpp:86] Clearing fetcher cache I0708 15:42:16.651065 29158 hierarchical.cpp:478] Added agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 (0382d073a49a) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) I0708 15:42:16.651480 29160 slave.cpp:1192] Checkpointing SlaveInfo to '/tmp/ContentType_MasterAPITest_Subscribe_0_be660M/meta/slaves/2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0/slave.info' I0708 15:42:16.651499 29166 status_update_manager.cpp:181] Resuming sending status 
updates I0708 15:42:16.650825 29157 master.cpp:4745] Registered agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 at slave(449)@172.17.0.3:34502 (0382d073a49a) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0708 15:42:16.651847 29158 hierarchical.cpp:1632] No inverse offers to send out! I0708 15:42:16.652433 29158 hierarchical.cpp:1195] Performed allocation for agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 in 1.317746ms I0708 15:42:16.651897 29172 replica.cpp:537] Replica received write request for position 4 from (16450)@172.17.0.3:34502 I0708 15:42:16.653264 29157 master.cpp:5835] Sending 1 offers to framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 (default) I0708 15:42:16.654682 29160 slave.cpp:1229] Forwarding total oversubscribed resources I0708 15:42:16.656188 29165 master.cpp:5128] Received update of agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 at slave(449)@172.17.0.3:34502 (0382d073a49a) with total oversubscribed resources I0708 15:42:16.656200 29164 scheduler.cpp:662] Enqueuing event OFFERS received from http://172.17.0.3:34502/master/api/v1/scheduler I0708 15:42:16.656708 29159 hierarchical.cpp:542] Agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 (0382d073a49a) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I0708 15:42:16.657385 29159 hierarchical.cpp:1537] No allocations performed I0708 15:42:16.657438 29159 hierarchical.cpp:1632] No inverse offers to send out! I0708 15:42:16.657519 29159 hierarchical.cpp:1195] Performed allocation for agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 in 516462ns I0708 15:42:16.660909 29163 process.cpp:3322] Handling HTTP event for process 'master' with path: '/master/api/v1' I0708 15:42:16.661958 29163 http.cpp:381] HTTP POST for /master/api/v1 from 172.17.0.3:33143 I0708 15:42:16.662125 29163 http.cpp:484] Processing call SUBSCRIBE I0708 15:42:16.663280 29164 master.cpp:7599] Added subscriber: 726edf8d-ad3d-4d08-9243-de3dc2df5f4a to the list of active subscribers I0708 15:42:16.671409 29161 scheduler.cpp:231] Sending ACCEPT call to http://172.17.0.3:34502/master/api/v1/scheduler I0708 15:42:16.672615 29165 process.cpp:3322] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler' I0708 15:42:16.676375 29169 http.cpp:381] HTTP POST for /master/api/v1/scheduler from 172.17.0.3:33141 I0708 15:42:16.677199 29169 master.cpp:3468] Processing ACCEPT call for offers: [ 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-O0 ] on agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 at slave(449)@172.17.0.3:34502 (0382d073a49a) for framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 (default) I0708 15:42:16.677291 29169 master.cpp:3106] Authorizing framework principal 'test-principal' to launch task d8bd1ba3-055a-4420-820c-8e85fdde7c08 W0708 15:42:16.679435 29169 validation.cpp:650] Executor default for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W0708 15:42:16.679492 29169 validation.cpp:662] Executor default for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. 
I0708 15:42:16.680003 29169 master.cpp:7573] Notifying all active subscribers about TASK_ADDED event I0708 15:42:16.680454 29172 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 27.707387ms I0708 15:42:16.680685 29172 replica.cpp:712] Persisted action at 4 I0708 15:42:16.681685 29168 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 I0708 15:42:16.680449 29169 master.cpp:7565] Adding task d8bd1ba3-055a-4420-820c-8e85fdde7c08 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 (0382d073a49a) I0708 15:42:16.682688 29169 master.cpp:3957] Launching task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 (default) with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 at slave(449)@172.17.0.3:34502 (0382d073a49a) I0708 15:42:16.683289 29171 slave.cpp:1569] Got assigned task d8bd1ba3-055a-4420-820c-8e85fdde7c08 for framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.683903 29171 slave.cpp:1688] Launching task d8bd1ba3-055a-4420-820c-8e85fdde7c08 for framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.684563 29171 paths.cpp:528] Trying to chown '/tmp/ContentType_MasterAPITest_Subscribe_0_be660M/slaves/2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0/frameworks/2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000/executors/default/runs/d8931456-e1d5-4875-9fb2-cf66e66d7fa3' to user 'mesos' I0708 15:42:16.699834 29171 slave.cpp:5748] Launching executor default of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 with resources in work directory '/tmp/ContentType_MasterAPITest_Subscribe_0_be660M/slaves/2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0/frameworks/2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000/executors/default/runs/d8931456-e1d5-4875-9fb2-cf66e66d7fa3' I0708 15:42:16.702018 29171 executor.cpp:188] Version: 1.1.0 I0708 15:42:16.702541 29171 slave.cpp:1914] Queuing task 'd8bd1ba3-055a-4420-820c-8e85fdde7c08' for executor 'default' of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.702777 29171 slave.cpp:922] Successfully attached file '/tmp/ContentType_MasterAPITest_Subscribe_0_be660M/slaves/2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0/frameworks/2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000/executors/default/runs/d8931456-e1d5-4875-9fb2-cf66e66d7fa3' I0708 15:42:16.704201 29169 executor.cpp:389] Connected with the agent I0708 15:42:16.704911 29159 executor.cpp:290] Sending SUBSCRIBE call to http://172.17.0.3:34502/slave(449)/api/v1/executor I0708 15:42:16.706003 29170 process.cpp:3322] Handling HTTP event for process 'slave(449)' with path: '/slave(449)/api/v1/executor' I0708 15:42:16.706897 29157 http.cpp:270] HTTP POST for /slave(449)/api/v1/executor from 172.17.0.3:33144 I0708 15:42:16.707108 29157 slave.cpp:2735] Received Subscribe request for HTTP executor 'default' of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.707819 29168 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 26.083403ms I0708 15:42:16.707986 29168 leveldb.cpp:399] Deleting ~2 keys from leveldb took 117548ns I0708 15:42:16.708031 29168 replica.cpp:712] Persisted action at 4 I0708 15:42:16.708076 29168 replica.cpp:697] Replica learned TRUNCATE action at position 4 I0708 15:42:16.708082 29157 slave.cpp:2079] Sending queued task 'd8bd1ba3-055a-4420-820c-8e85fdde7c08' to executor 'default' of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 (via HTTP) 
I0708 15:42:16.710268 29163 executor.cpp:707] Enqueuing event SUBSCRIBED received from http://172.17.0.3:34502/slave(449)/api/v1/executor I0708 15:42:16.712131 29169 executor.cpp:707] Enqueuing event LAUNCH received from http://172.17.0.3:34502/slave(449)/api/v1/executor I0708 15:42:16.713174 29172 executor.cpp:290] Sending UPDATE call to http://172.17.0.3:34502/slave(449)/api/v1/executor I0708 15:42:16.713984 29170 process.cpp:3322] Handling HTTP event for process 'slave(449)' with path: '/slave(449)/api/v1/executor' I0708 15:42:16.714614 29170 http.cpp:270] HTTP POST for /slave(449)/api/v1/executor from 172.17.0.3:33145 I0708 15:42:16.714753 29170 slave.cpp:3285] Handling status update TASK_RUNNING (UUID: a93b73fb-5289-4aec-80e9-1cad44bec619) for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.715451 29172 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: a93b73fb-5289-4aec-80e9-1cad44bec619) for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.715498 29172 status_update_manager.cpp:497] Creating StatusUpdate stream for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.715996 29172 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: a93b73fb-5289-4aec-80e9-1cad44bec619) for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 to the agent I0708 15:42:16.716584 29172 slave.cpp:3678] Forwarding the update TASK_RUNNING (UUID: a93b73fb-5289-4aec-80e9-1cad44bec619) for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 to master@172.17.0.3:34502 I0708 15:42:16.716956 29172 slave.cpp:3572] Status update manager successfully handled status update TASK_RUNNING (UUID: a93b73fb-5289-4aec-80e9-1cad44bec619) for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.717159 29171 master.cpp:5273] Status update TASK_RUNNING (UUID: a93b73fb-5289-4aec-80e9-1cad44bec619) for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 from agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 at slave(449)@172.17.0.3:34502 (0382d073a49a) I0708 15:42:16.717265 29171 master.cpp:5321] Forwarding status update TASK_RUNNING (UUID: a93b73fb-5289-4aec-80e9-1cad44bec619) for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.718080 29168 executor.cpp:707] Enqueuing event ACKNOWLEDGED received from http://172.17.0.3:34502/slave(449)/api/v1/executor I0708 15:42:16.718299 29171 master.cpp:7573] Notifying all active subscribers about TASK_UPDATED event I0708 15:42:16.719683 29171 master.cpp:6959] Updating the state of task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0708 15:42:16.720386 29171 scheduler.cpp:662] Enqueuing event UPDATE received from http://172.17.0.3:34502/master/api/v1/scheduler I0708 15:42:16.726471 29164 hierarchical.cpp:1537] No allocations performed W0708 15:42:16.726742 29163 status_update_manager.cpp:475] Resending status update TASK_RUNNING (UUID: a93b73fb-5289-4aec-80e9-1cad44bec619) for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.726840 29163 
status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: a93b73fb-5289-4aec-80e9-1cad44bec619) for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 to the agent I0708 15:42:16.727401 29163 slave.cpp:3678] Forwarding the update TASK_RUNNING (UUID: a93b73fb-5289-4aec-80e9-1cad44bec619) for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 to master@172.17.0.3:34502 I0708 15:42:16.727880 29163 master.cpp:5273] Status update TASK_RUNNING (UUID: a93b73fb-5289-4aec-80e9-1cad44bec619) for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 from agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 at slave(449)@172.17.0.3:34502 (0382d073a49a) I0708 15:42:16.728035 29163 master.cpp:5321] Forwarding status update TASK_RUNNING (UUID: a93b73fb-5289-4aec-80e9-1cad44bec619) for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.728570 29163 master.cpp:6959] Updating the state of task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0708 15:42:16.728003 29164 hierarchical.cpp:1632] No inverse offers to send out! I0708 15:42:16.730080 29164 hierarchical.cpp:1172] Performed allocation for 1 agents in 3.858387ms I0708 15:42:16.751055 29160 master.cpp:1410] Framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 (default) disconnected I0708 15:42:16.751116 29160 master.cpp:2851] Disconnecting framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 (default) I0708 15:42:16.751149 29160 master.cpp:2875] Deactivating framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 (default) I0708 15:42:16.751242 29160 master.cpp:1423] Giving framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 (default) 0ns to failover I0708 15:42:16.751602 29160 hierarchical.cpp:382] Deactivated framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.755091 29157 master.cpp:5687] Framework failover timeout, removing framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 (default) I0708 15:42:16.755164 29157 master.cpp:6422] Removing framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 (default) I0708 15:42:16.755425 29157 master.cpp:7573] Notifying all active subscribers about TASK_UPDATED event I0708 15:42:16.755795 29157 master.cpp:6959] Updating the state of task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 (latest state: TASK_KILLED, status update state: TASK_KILLED) I0708 15:42:16.756032 29166 slave.cpp:2292] Asked to shut down framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 by master@172.17.0.3:34502 I0708 15:42:16.756093 29166 slave.cpp:2317] Shutting down framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.756172 29166 slave.cpp:4481] Shutting down executor 'default' of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 (via HTTP) I0708 15:42:16.757699 29157 master.cpp:7025] Removing task d8bd1ba3-055a-4420-820c-8e85fdde7c08 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 on agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 at slave(449)@172.17.0.3:34502 (0382d073a49a) I0708 15:42:16.758779 29161 hierarchical.cpp:924] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on agent 
2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 from framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.759784 29161 executor.cpp:707] Enqueuing event SHUTDOWN received from http://172.17.0.3:34502/slave(449)/api/v1/executor I0708 15:42:16.761289 29157 master.cpp:7054] Removing executor 'default' with resources of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 on agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 at slave(449)@172.17.0.3:34502 (0382d073a49a) I0708 15:42:16.763124 29157 hierarchical.cpp:333] Removed framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.777029 29163 slave.cpp:4163] Executor 'default' of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 exited with status 0 I0708 15:42:16.777218 29163 slave.cpp:4267] Cleaning up executor 'default' of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 (via HTTP) I0708 15:42:16.777710 29163 slave.cpp:4355] Cleaning up framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.778026 29163 gc.cpp:55] Scheduling '/tmp/ContentType_MasterAPITest_Subscribe_0_be660M/slaves/2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0/frameworks/2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000/executors/default/runs/d8931456-e1d5-4875-9fb2-cf66e66d7fa3' for gc 6.99999163714074days in the future W0708 15:42:16.778028 29167 master.cpp:5369] Ignoring unknown exited executor 'default' of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 on agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 at slave(449)@172.17.0.3:34502 (0382d073a49a) I0708 15:42:16.778195 29157 status_update_manager.cpp:282] Closing status update streams for framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.778239 29163 gc.cpp:55] Scheduling '/tmp/ContentType_MasterAPITest_Subscribe_0_be660M/slaves/2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0/frameworks/2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000/executors/default' for gc 6.99999163714074days in the future I0708 15:42:16.778257 29157 status_update_manager.cpp:528] Cleaning up status update stream for task d8bd1ba3-055a-4420-820c-8e85fdde7c08 of framework 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000 I0708 15:42:16.778327 29163 gc.cpp:55] Scheduling '/tmp/ContentType_MasterAPITest_Subscribe_0_be660M/slaves/2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0/frameworks/2aea5b7f-ec9f-4fda-8f34-877d8adf064f-0000' for gc 6.99999163714074days in the future I0708 15:42:16.797328 29138 slave.cpp:841] Agent terminating I0708 15:42:16.799114 29165 master.cpp:1371] Agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 at slave(449)@172.17.0.3:34502 (0382d073a49a) disconnected I0708 15:42:16.800149 29165 master.cpp:2910] Disconnecting agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 at slave(449)@172.17.0.3:34502 (0382d073a49a) I0708 15:42:16.800727 29165 master.cpp:2929] Deactivating agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 at slave(449)@172.17.0.3:34502 (0382d073a49a) I0708 15:42:16.801389 29165 hierarchical.cpp:571] Agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 deactivated ../../src/tests/api_tests.cpp:1496: Failure Actual function call count doesn't match EXPECT_CALL(*scheduler, update(_, _))... 
Expected: to be called twice Actual: called once - unsatisfied and active I0708 15:42:16.806820 29138 master.cpp:1218] Master terminating I0708 15:42:16.807718 29160 hierarchical.cpp:510] Removed agent 2aea5b7f-ec9f-4fda-8f34-877d8adf064f-S0 [ FAILED ] ContentType/MasterAPITest.Subscribe/0, where GetParam() = application/x-protobuf (780 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5822","07/08/2016 23:52:29",3,"Add a build script for the Windows CI ""The ASF CI for Mesos runs a script that lives inside the Mesos codebase: https://github.com/apache/mesos/blob/1cbfdc3c1e4b8498a67f8531ab264003c8c19fb1/support/docker_build.sh ASF Infrastructure have set up a machine that we can use for building Mesos on Windows. Considering the environment, we will need a separate script to build here.""","",0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5824","07/08/2016 23:58:46",3,"Include disk source information in stringification ""Some frameworks (like kafka_mesos) ignore the Source field when trying to reserve an offered mount or path persistent volume; the resulting error message is bewildering: {code:none} Task uses more resources cpus(*):4; mem(*):4096; ports(*):[31000-31000]; disk(kafka, kafka)[kafka_0:data]:960679 than available cpus(*):32; mem(*):256819; ports(*):[31000-32000]; disk(kafka, kafka)[kafka_0:data]:960679; disk(*):240169; {code} The stringification of disk resources should include source information. """," Task uses more resources cpus(*):4; mem(*):4096; ports(*):[31000-31000]; disk(kafka, kafka)[kafka_0:data]:960679 than available cpus(*):32; mem(*):256819; ports(*):[31000-32000]; disk(kafka, kafka)[kafka_0:data]:960679; disk(*):240169; ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5825","07/09/2016 00:04:00",5,"Support mounting image volume in mesos containerizer. ""Mesos containerizer should be able to support mounting image volume type. Specifically, both image rootfs and default manifest should be reachable inside container's mount namespace.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5845","07/14/2016 01:39:08",3,"The fetcher can access any local file as root ""The Mesos fetcher currently runs as root and does a blind cp+chown of any file:// URI into the task's sandbox, to be owned by the task user. Even if frameworks are restricted from running tasks as root, it seems they can still access root-protected files in this way. We should secure the fetcher so that it has the filesystem permissions of the user its associated task is being run as. One option would be to run the fetcher as the same user that the task will run as.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5852","07/15/2016 19:14:58",2,"CMake build needs to generate protobufs before building libmesos ""The existing CMake lists place protobufs at the same level as other Mesos sources: https://github.com/apache/mesos/blob/c4cecf9c279c5206faaf996fef0b1810b490b329/src/CMakeLists.txt#L415 This is incorrect, as protobuf changes need to be regenerated before we can build against them. 
Note: in the autotools build, this is done by compiling protobufs into {{libmesos}}, which then builds {{libmesos_no_3rdparty}}: https://github.com/apache/mesos/blob/c4cecf9c279c5206faaf996fef0b1810b490b329/src/Makefile.am#L1304-L1305""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5864","07/19/2016 22:53:25",2,"Document MESOS_SANDBOX executor env variable. ""And we should document the difference with MESOS_DIRECTORY.""","",0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5878","07/21/2016 13:14:34",3,"Strict/RegistrarTest.UpdateQuota/0 is flaky ""Observed on ASF CI (https://builds.apache.org/job/Mesos/BUILDTOOL=autotools,COMPILER=clang,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu:14.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-6)/2539/consoleFull). Log file is attached. Note that this might have been uncovered due to the recent removal of {{os::sleep}} from {{Clock::settle}}.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5879","07/21/2016 17:32:32",1,"cgroups/net_cls isolator causing agent recovery issues ""We run with 'cgroups/net_cls' in our isolator list, and when we restart any agent process in a cluster running an experimental custom isolator as well, the agents are unable to recover from checkpoint, because net_cls reports that unknown orphan containers have duplicate net_cls handles. While this is a problem that needs to be solved (probably by fixing our custom isolator), it's also a problem that the net_cls isolator fails recovery just for duplicate handles in cgroups that it is literally about to unconditionally destroy during recovery. Can this be fixed?""","",0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5900","07/25/2016 14:20:20",5,"Support Unix domain socket connections in libprocess ""We should consider allowing two programs on the same host using libprocess to communicate via Unix domain sockets rather than TCP. This has a few advantages: * Security: remote hosts cannot connect to the Unix socket. Domain sockets also offer additional support for [authentication|https://docs.fedoraproject.org/en-US/Fedora_Security_Team/1/html/Defensive_Coding/sect-Defensive_Coding-Authentication-UNIX_Domain.html]. * Performance: domain sockets are marginally faster than localhost TCP.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5907","07/26/2016 17:29:49",1,"ExamplesTest.DiskFullFramework fails on Arch ""This test fails consistently on recent Arch linux, running in a VM.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5909","07/26/2016 21:19:58",2,"Stout ""OsTest.User"" test can fail on some systems ""Libc call {{getgrouplist}} doesn't return the {{gid}} list in a sorted manner (in my case, it's returning """"471 100"""") ... whereas {{id -G}} return a sorted list (""""100 471"""" in my case) causing the validation inside the loop to fail. We should sort both lists before comparing the values.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5927","07/29/2016 08:34:11",3,"Unable to run ""scratch"" Dockerfiles with Unified Containerizer. 
""It is not possible to run Docker containers that are based upon the """"scratch"""" container. Setup: Mesos 1.0.0 with the following Mesos settings: {code:none} echo 'docker' | sudo tee /etc/mesos-slave/image_providers echo 'filesystem/linux,docker/runtime' | sudo tee /etc/mesos-slave/isolation Effect: The container will crash with messages from Mesos reporting it can't mount folder x/y/z. E.g. can't mount /tmp. This means you can't run any container that is not a """"fat"""" container (i.e. one with a full OS). E.g. error: bq. Failed to enter chroot '/var/lib/mesos/provisioner/containers/fed6add8-0126-40e6-ae81-5859a0c1a2d4/backends/copy/rootfses/4feefc8b-fd5a-4835-95db-165e675f11cd': /tmp in chroot does not existI0729 07:49:56.753474 4362 exec.cpp:413] Executor asked to shutdown Expected: Run without issues. Use case: We use scratch based containers with static binaries to keep the image size down. This is a common practice."""," echo 'docker' | sudo tee /etc/mesos-slave/image_providers echo 'filesystem/linux,docker/runtime' | sudo tee /etc/mesos-slave/isolation mesos-execute --command='echo ok' --docker_image=hello-seattle --master=localhost:5050 --name=test ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5930","07/29/2016 18:30:02",3,"Orphan tasks can show up as running after they have finished. ""On my cluster I have 111 Orphan Tasks of which some are RUNNING some are FINISHED and some are FAILED. When I open the task details for a FINISHED tasks the following page shows a state of TASK_FINISHED and likewise when I open a FAILED task the details page shows TASK_FAILED. However when I open the details for the RUNNING tasks they all have a task state of TASK_FINISHED. None of them is in state TASK_RUNNING. ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5931","07/29/2016 19:10:32",8,"Support auto backend in Unified Containerizer. ""Currently in Unified Containerizer, copy backend will be selected by default. This is not ideal, especially for production environment. It would take a long time to prepare an huge container image to copy it from the store to provisioner. Ideally, we should support `auto backend`, which would automatically/intelligently select the best/optimal backend for image provisioner if user does not specify one from the agent flag. We should have a logic design first in this ticket, to determine how we want to choose the right backend (e.g., overlayfs or aufs should be preferred if available from the kernel).""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5943","07/31/2016 00:28:34",3,"Incremental http parsing of URLs leads to decoder error ""When requests arrive to the decoder in pieces (e.g. {{mes}} followed by a separate chunk of {{os.apache.org}}) the http parser is not able to handle this case if the split is within the URL component. This causes the decoder to error out, and can lead to connection invalidation. The scheduler driver is susceptible to this.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5944","07/31/2016 01:43:34",1,"Remove `O_SYNC` from StatusUpdateManager logs ""Currently the {{StatusUpdateManager}} uses {{O_SYNC}} to flush status updates to disk. We don't need to use {{O_SYNC}} because we only read this file if the host did not crash. 
{{os::write}} success implies the kernel will have flushed our data to the page cache. This is sufficient for the recovery scenarios we use this data for.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5958","08/01/2016 19:58:43",2,"Reviewbot failing due to python files not being cleaned up after distclean ""This is on ASF CI. https://builds.apache.org/job/mesos-reviewbot/14573/consoleFull """," find python -name """"build"""" -o -name """"dist"""" -o -name """"*.pyc"""" \ -o -name """"*.egg-info"""" -exec rm -rf '{}' \+ test -z """"libmesos_no_3rdparty.la libbuild.la liblog.la libstate.la libjava.la libexamplemodule.la libtestallocator.la libtestanonymous.la libtestauthentication.la libtestauthorizer.la libtestcontainer_logger.la libtesthook.la libtesthttpauthenticator.la libtestisolator.la libtestmastercontender.la libtestmasterdetector.la libtestqos_controller.la libtestresource_estimator.la"""" || rm -f libmesos_no_3rdparty.la libbuild.la liblog.la libstate.la libjava.la libexamplemodule.la libtestallocator.la libtestanonymous.la libtestauthentication.la libtestauthorizer.la libtestcontainer_logger.la libtesthook.la libtesthttpauthenticator.la libtestisolator.la libtestmastercontender.la libtestmasterdetector.la libtestqos_controller.la libtestresource_estimator.la test -z """"liblogrotate_container_logger.la libfixed_resource_estimator.la libload_qos_controller.la """" || rm -f liblogrotate_container_logger.la libfixed_resource_estimator.la libload_qos_controller.la rm -f mesos-fetcher mesos-executor mesos-containerizer mesos-logrotate-logger mesos-health-check mesos-usage mesos-docker-executor rm -f mesos-agent mesos-master mesos-slave rm -f ./so_locations rm -f *.o rm -f *.lo rm -f ../include/mesos/*.o rm -f ./so_locations rm -f *.tab.c test -z """""""" || rm -f rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags rm -f ./so_locations rm -f ../include/mesos/*.lo test . 
= """"../../src"""" || test -z """""""" || rm -f rm -f ../include/mesos/.deps/.dirstamp rm -f ../include/mesos/agent/*.o rm -f ../include/mesos/agent/*.lo rm -f ../include/mesos/.dirstamp rm -f ../include/mesos/allocator/*.o rm -f ../include/mesos/agent/.deps/.dirstamp rm -f ../include/mesos/allocator/*.lo rm -f ../include/mesos/agent/.dirstamp rm -f ../include/mesos/appc/*.o rm -f ../include/mesos/allocator/.deps/.dirstamp rm -f ../include/mesos/appc/*.lo rm -f ../include/mesos/allocator/.dirstamp rm -f ../include/mesos/authentication/*.o rm -f ../include/mesos/appc/.deps/.dirstamp rm -f ../include/mesos/authentication/*.lo rm -f ../include/mesos/appc/.dirstamp rm -f ../include/mesos/authorizer/*.o rm -f ../include/mesos/authentication/.deps/.dirstamp rm -f ../include/mesos/authorizer/*.lo rm -f ../include/mesos/authentication/.dirstamp rm -f ../include/mesos/containerizer/*.o rm -f ../include/mesos/authorizer/.deps/.dirstamp rm -f ../include/mesos/containerizer/*.lo rm -f ../include/mesos/authorizer/.dirstamp rm -f ../include/mesos/docker/*.o rm -f ../include/mesos/containerizer/.deps/.dirstamp rm -f ../include/mesos/docker/*.lo rm: cannot remove 'python/cli/build': Is a directory rm: cannot remove 'python/executor/build': Is a directory rm: cannot remove 'python/interface/build': Is a directory rm: cannot remove 'python/native/build': Is a directory rm: cannot remove 'python/scheduler/build': Is a directory rm -f ../include/mesos/containerizer/.dirstamp make[2]: [clean-generic] Error 1 (ignored) rm -f ../include/mesos/executor/*.o rm -f ../include/mesos/docker/.deps/.dirstamp rm -f ../include/mesos/executor/*.lo rm -f ../include/mesos/docker/.dirstamp rm -f ../include/mesos/fetcher/*.o rm -f ../include/mesos/executor/.deps/.dirstamp rm -f ../include/mesos/fetcher/*.lo rm -f ../include/mesos/executor/.dirstamp rm -f ../include/mesos/maintenance/*.o rm -f ../include/mesos/fetcher/.deps/.dirstamp rm -f ../include/mesos/maintenance/*.lo rm -f ../include/mesos/fetcher/.dirstamp rm -f ../include/mesos/master/*.o rm -f ../include/mesos/maintenance/.deps/.dirstamp rm -f ../include/mesos/master/*.lo rm -f ../include/mesos/module/*.o rm -f ../include/mesos/maintenance/.dirstamp rm -f ../include/mesos/module/*.lo rm -f ../include/mesos/master/.deps/.dirstamp rm -f ../include/mesos/master/.dirstamp rm -f ../include/mesos/quota/*.o rm -f ../include/mesos/module/.deps/.dirstamp rm -f ../include/mesos/quota/*.lo rm -f ../include/mesos/module/.dirstamp rm -f ../include/mesos/scheduler/*.o rm -f ../include/mesos/quota/.deps/.dirstamp rm -f ../include/mesos/scheduler/*.lo rm -f ../include/mesos/quota/.dirstamp rm -f ../include/mesos/slave/*.o rm -f ../include/mesos/scheduler/.deps/.dirstamp rm -f ../include/mesos/scheduler/.dirstamp rm -f ../include/mesos/slave/*.lo rm -f ../include/mesos/slave/.deps/.dirstamp rm -f ../include/mesos/state/*.o rm -f ../include/mesos/slave/.dirstamp rm -f ../include/mesos/state/*.lo rm -f ../include/mesos/state/.deps/.dirstamp rm -f ../include/mesos/uri/*.o rm -f ../include/mesos/state/.dirstamp rm -f ../include/mesos/uri/*.lo rm -f ../include/mesos/uri/.deps/.dirstamp rm -f ../include/mesos/v1/*.o rm -f ../include/mesos/uri/.dirstamp rm -f ../include/mesos/v1/*.lo rm -f ../include/mesos/v1/.deps/.dirstamp rm -f ../include/mesos/v1/agent/*.o rm -f ../include/mesos/v1/.dirstamp rm -f ../include/mesos/v1/agent/*.lo rm -f ../include/mesos/v1/agent/.deps/.dirstamp rm -f ../include/mesos/v1/allocator/*.o rm -f ../include/mesos/v1/agent/.dirstamp rm -f 
../include/mesos/v1/allocator/*.lo rm -f ../include/mesos/v1/allocator/.deps/.dirstamp rm -f examples/java/*.class rm -f ../include/mesos/v1/executor/*.o rm -f ../include/mesos/v1/allocator/.dirstamp rm -f ../include/mesos/v1/executor/.deps/.dirstamp rm -f ../include/mesos/v1/executor/*.lo rm -f ../include/mesos/v1/executor/.dirstamp rm -f java/jni/org_apache_mesos*.h rm -f ../include/mesos/v1/maintenance/*.o rm -f ../include/mesos/v1/maintenance/*.lo rm -f ../include/mesos/v1/maintenance/.deps/.dirstamp rm -f ../include/mesos/v1/master/*.o rm -f ../include/mesos/v1/maintenance/.dirstamp rm -f ../include/mesos/v1/master/*.lo rm -f ../include/mesos/v1/master/.deps/.dirstamp rm -f ../include/mesos/v1/master/.dirstamp rm -f ../include/mesos/v1/quota/*.o rm -f ../include/mesos/v1/quota/.deps/.dirstamp rm -f ../include/mesos/v1/quota/*.lo rm -f ../include/mesos/v1/quota/.dirstamp rm -f ../include/mesos/v1/scheduler/*.o rm -f ../include/mesos/v1/scheduler/*.lo rm -f ../include/mesos/v1/scheduler/.deps/.dirstamp rm -f appc/*.o rm -f ../include/mesos/v1/scheduler/.dirstamp rm -f appc/*.lo rm -f authentication/cram_md5/*.o rm -f appc/.deps/.dirstamp rm -f authentication/cram_md5/*.lo rm -f appc/.dirstamp rm -f authentication/http/*.o rm -f authentication/cram_md5/.deps/.dirstamp rm -f authentication/http/*.lo rm -f authentication/cram_md5/.dirstamp rm -f authorizer/*.o rm -f authorizer/*.lo rm -f authentication/http/.deps/.dirstamp rm -f authorizer/local/*.o rm -f authentication/http/.dirstamp rm -f authorizer/local/*.lo rm -f authorizer/.deps/.dirstamp rm -f cli/*.o rm -f authorizer/.dirstamp rm -f authorizer/local/.deps/.dirstamp rm -f common/*.o rm -f authorizer/local/.dirstamp rm -f common/*.lo rm -f cli/.deps/.dirstamp rm -f docker/*.o rm -f cli/.dirstamp rm -f common/.deps/.dirstamp rm -f docker/*.lo rm -f common/.dirstamp rm -f examples/*.o rm -f docker/.deps/.dirstamp rm -f docker/.dirstamp rm -f examples/.deps/.dirstamp rm -f examples/.dirstamp rm -f examples/*.lo rm -f exec/.deps/.dirstamp rm -f exec/*.o rm -f exec/.dirstamp rm -f executor/.deps/.dirstamp rm -f exec/*.lo rm -f executor/.dirstamp rm -f executor/*.o rm -f files/.deps/.dirstamp rm -f executor/*.lo rm -f files/.dirstamp rm -f files/*.o rm -f hdfs/.deps/.dirstamp rm -f files/*.lo rm -f hdfs/.dirstamp rm -f health-check/.deps/.dirstamp rm -f health-check/.dirstamp rm -f hdfs/*.o rm -f hook/.deps/.dirstamp rm -f hdfs/*.lo rm -f hook/.dirstamp rm -f health-check/*.o rm -f internal/.deps/.dirstamp rm -f health-check/*.lo rm -f internal/.dirstamp rm -f hook/*.o rm -f java/jni/.deps/.dirstamp rm -f hook/*.lo rm -f java/jni/.dirstamp rm -f jvm/.deps/.dirstamp rm -f internal/*.o rm -f jvm/.dirstamp rm -f internal/*.lo rm -f jvm/org/apache/.deps/.dirstamp rm -f jvm/org/apache/.dirstamp rm -f java/jni/*.o rm -f launcher/.deps/.dirstamp rm -f java/jni/*.lo rm -f launcher/.dirstamp rm -f jvm/*.o rm -f launcher/posix/.deps/.dirstamp rm -f jvm/*.lo rm -f launcher/posix/.dirstamp rm -f jvm/org/apache/*.o rm -f linux/.deps/.dirstamp rm -f jvm/org/apache/*.lo rm -f linux/.dirstamp rm -f launcher/*.o rm -f linux/routing/.deps/.dirstamp rm -f linux/routing/.dirstamp rm -f linux/routing/diagnosis/.deps/.dirstamp rm -f launcher/posix/*.o rm -f linux/*.o rm -f linux/routing/diagnosis/.dirstamp rm -f linux/*.lo rm -f linux/routing/filter/.deps/.dirstamp rm -f linux/routing/*.o rm -f linux/routing/filter/.dirstamp rm -f linux/routing/*.lo rm -f linux/routing/link/.deps/.dirstamp rm -f linux/routing/link/.dirstamp rm -f linux/routing/diagnosis/*.o rm 
-f linux/routing/queueing/.deps/.dirstamp rm -f linux/routing/diagnosis/*.lo rm -f linux/routing/queueing/.dirstamp rm -rf ../include/mesos/.libs ../include/mesos/_libs rm -f linux/routing/filter/*.o rm -f local/.deps/.dirstamp rm -f linux/routing/filter/*.lo rm -rf ../include/mesos/agent/.libs ../include/mesos/agent/_libs rm -f local/.dirstamp rm -f linux/routing/link/*.o rm -rf ../include/mesos/allocator/.libs ../include/mesos/allocator/_libs rm -f log/.deps/.dirstamp rm -f linux/routing/link/*.lo rm -rf ../include/mesos/appc/.libs ../include/mesos/appc/_libs rm -f log/.dirstamp rm -f linux/routing/queueing/*.o rm -rf ../include/mesos/authentication/.libs ../include/mesos/authentication/_libs rm -f log/tool/.deps/.dirstamp rm -f linux/routing/queueing/*.lo rm -rf ../include/mesos/authorizer/.libs ../include/mesos/authorizer/_libs rm -f log/tool/.dirstamp rm -f local/*.o rm -rf ../include/mesos/containerizer/.libs ../include/mesos/containerizer/_libs rm -f logging/.deps/.dirstamp rm -rf ../include/mesos/docker/.libs ../include/mesos/docker/_libs rm -f local/*.lo rm -f logging/.dirstamp rm -f log/*.o rm -rf ../include/mesos/executor/.libs ../include/mesos/executor/_libs rm -f master/.deps/.dirstamp rm -f log/*.lo rm -rf ../include/mesos/fetcher/.libs ../include/mesos/fetcher/_libs rm -f master/.dirstamp rm -f log/tool/*.o rm -rf ../include/mesos/maintenance/.libs ../include/mesos/maintenance/_libs rm -f master/allocator/.deps/.dirstamp rm -f log/tool/*.lo rm -rf ../include/mesos/master/.libs ../include/mesos/master/_libs rm -f master/allocator/.dirstamp rm -f logging/*.o rm -f master/allocator/mesos/.deps/.dirstamp rm -rf ../include/mesos/module/.libs ../include/mesos/module/_libs rm -f logging/*.lo rm -f master/allocator/mesos/.dirstamp rm -rf ../include/mesos/quota/.libs ../include/mesos/quota/_libs rm -f master/*.o rm -f master/allocator/sorter/drf/.deps/.dirstamp rm -rf ../include/mesos/scheduler/.libs ../include/mesos/scheduler/_libs rm -f master/*.lo rm -f master/allocator/sorter/drf/.dirstamp rm -f master/contender/.deps/.dirstamp rm -rf ../include/mesos/slave/.libs ../include/mesos/slave/_libs rm -f master/allocator/*.o rm -f master/contender/.dirstamp rm -rf ../include/mesos/state/.libs ../include/mesos/state/_libs rm -f master/allocator/*.lo rm -f master/detector/.deps/.dirstamp rm -rf ../include/mesos/uri/.libs ../include/mesos/uri/_libs rm -f master/allocator/mesos/*.o rm -f master/detector/.dirstamp rm -rf ../include/mesos/v1/.libs ../include/mesos/v1/_libs rm -f master/allocator/mesos/*.lo rm -rf ../include/mesos/v1/agent/.libs ../include/mesos/v1/agent/_libs rm -f messages/.deps/.dirstamp rm -f master/allocator/sorter/drf/*.o rm -f messages/.dirstamp rm -rf ../include/mesos/v1/allocator/.libs ../include/mesos/v1/allocator/_libs rm -f master/allocator/sorter/drf/*.lo rm -f module/.deps/.dirstamp rm -rf ../include/mesos/v1/executor/.libs ../include/mesos/v1/executor/_libs rm -f master/contender/*.o rm -rf ../include/mesos/v1/maintenance/.libs ../include/mesos/v1/maintenance/_libs rm -f module/.dirstamp rm -f master/contender/*.lo rm -f sched/.deps/.dirstamp rm -rf ../include/mesos/v1/master/.libs ../include/mesos/v1/master/_libs rm -f master/detector/*.o rm -rf ../include/mesos/v1/quota/.libs ../include/mesos/v1/quota/_libs rm -f sched/.dirstamp rm -f master/detector/*.lo rm -rf ../include/mesos/v1/scheduler/.libs ../include/mesos/v1/scheduler/_libs rm -f scheduler/.deps/.dirstamp rm -f messages/*.o rm -f scheduler/.dirstamp rm -f messages/*.lo rm -rf appc/.libs appc/_libs rm 
-f slave/.deps/.dirstamp rm -f module/*.o rm -rf authentication/cram_md5/.libs authentication/cram_md5/_libs rm -f slave/.dirstamp rm -f module/*.lo rm -f slave/container_loggers/.deps/.dirstamp rm -f sched/*.o rm -rf authentication/http/.libs authentication/http/_libs rm -f slave/container_loggers/.dirstamp rm -f sched/*.lo rm -f slave/containerizer/.deps/.dirstamp rm -rf authorizer/.libs authorizer/_libs rm -f scheduler/*.o rm -f slave/containerizer/.dirstamp rm -rf authorizer/local/.libs authorizer/local/_libs rm -f scheduler/*.lo rm -f slave/containerizer/mesos/.deps/.dirstamp rm -rf common/.libs common/_libs rm -f slave/*.o rm -f slave/containerizer/mesos/.dirstamp rm -f slave/containerizer/mesos/isolators/appc/.deps/.dirstamp rm -f slave/*.lo rm -rf docker/.libs docker/_libs rm -f slave/containerizer/mesos/isolators/appc/.dirstamp rm -f slave/container_loggers/*.o rm -f slave/containerizer/mesos/isolators/cgroups/.deps/.dirstamp rm -rf examples/.libs examples/_libs rm -f slave/container_loggers/*.lo rm -f slave/containerizer/mesos/isolators/cgroups/.dirstamp rm -f slave/containerizer/*.o rm -f slave/containerizer/mesos/isolators/docker/.deps/.dirstamp rm -rf exec/.libs exec/_libs rm -f slave/containerizer/*.lo rm -f slave/containerizer/mesos/isolators/docker/.dirstamp rm -rf executor/.libs executor/_libs rm -f slave/containerizer/mesos/isolators/docker/volume/.deps/.dirstamp rm -f slave/containerizer/mesos/*.o rm -f slave/containerizer/mesos/isolators/docker/volume/.dirstamp rm -f slave/containerizer/mesos/*.lo rm -f slave/containerizer/mesos/isolators/filesystem/.deps/.dirstamp rm -rf files/.libs files/_libs rm -f slave/containerizer/mesos/isolators/appc/*.o rm -f slave/containerizer/mesos/isolators/filesystem/.dirstamp rm -rf hdfs/.libs hdfs/_libs rm -f slave/containerizer/mesos/isolators/gpu/.deps/.dirstamp rm -f slave/containerizer/mesos/isolators/appc/*.lo rm -f slave/containerizer/mesos/isolators/cgroups/*.o rm -rf health-check/.libs health-check/_libs rm -f slave/containerizer/mesos/isolators/gpu/.dirstamp rm -f slave/containerizer/mesos/isolators/cgroups/*.lo rm -rf hook/.libs hook/_libs rm -f slave/containerizer/mesos/isolators/namespaces/.deps/.dirstamp rm -f slave/containerizer/mesos/isolators/namespaces/.dirstamp rm -f slave/containerizer/mesos/isolators/docker/*.o rm -rf internal/.libs internal/_libs rm -f slave/containerizer/mesos/isolators/network/.deps/.dirstamp rm -f slave/containerizer/mesos/isolators/docker/*.lo rm -rf java/jni/.libs java/jni/_libs rm -f slave/containerizer/mesos/isolators/network/.dirstamp rm -f slave/containerizer/mesos/isolators/docker/volume/*.o rm -f slave/containerizer/mesos/isolators/network/cni/.deps/.dirstamp rm -f slave/containerizer/mesos/isolators/docker/volume/*.lo rm -f slave/containerizer/mesos/isolators/network/cni/.dirstamp rm -rf jvm/.libs jvm/_libs rm -f slave/containerizer/mesos/isolators/filesystem/*.o rm -f slave/containerizer/mesos/isolators/posix/.deps/.dirstamp rm -rf jvm/org/apache/.libs jvm/org/apache/_libs rm -f slave/containerizer/mesos/isolators/filesystem/*.lo rm -f slave/containerizer/mesos/isolators/posix/.dirstamp rm -rf linux/.libs linux/_libs rm -f slave/containerizer/mesos/isolators/gpu/*.o rm -f slave/containerizer/mesos/isolators/xfs/.deps/.dirstamp rm -f slave/containerizer/mesos/isolators/xfs/.dirstamp rm -f slave/containerizer/mesos/isolators/gpu/*.lo rm -rf linux/routing/.libs linux/routing/_libs rm -f slave/containerizer/mesos/provisioner/.deps/.dirstamp rm -f 
slave/containerizer/mesos/isolators/namespaces/*.o rm -rf linux/routing/diagnosis/.libs linux/routing/diagnosis/_libs rm -f slave/containerizer/mesos/provisioner/.dirstamp rm -rf linux/routing/filter/.libs linux/routing/filter/_libs rm -f slave/containerizer/mesos/isolators/namespaces/*.lo rm -f slave/containerizer/mesos/provisioner/appc/.deps/.dirstamp rm -rf linux/routing/link/.libs linux/routing/link/_libs rm -f slave/containerizer/mesos/isolators/network/*.o rm -f slave/containerizer/mesos/provisioner/appc/.dirstamp rm -f slave/containerizer/mesos/isolators/network/*.lo rm -f slave/containerizer/mesos/provisioner/backends/.deps/.dirstamp rm -rf linux/routing/queueing/.libs linux/routing/queueing/_libs rm -f slave/containerizer/mesos/isolators/network/cni/*.o rm -f slave/containerizer/mesos/provisioner/backends/.dirstamp rm -rf local/.libs local/_libs rm -f slave/containerizer/mesos/provisioner/docker/.deps/.dirstamp rm -f slave/containerizer/mesos/isolators/network/cni/*.lo rm -rf log/.libs log/_libs rm -f slave/containerizer/mesos/provisioner/docker/.dirstamp rm -f slave/containerizer/mesos/isolators/posix/*.o rm -f slave/qos_controllers/.deps/.dirstamp rm -f slave/containerizer/mesos/isolators/posix/*.lo rm -f slave/qos_controllers/.dirstamp rm -f slave/containerizer/mesos/isolators/xfs/*.o rm -f slave/resource_estimators/.deps/.dirstamp rm -f slave/containerizer/mesos/isolators/xfs/*.lo rm -f slave/resource_estimators/.dirstamp rm -f slave/containerizer/mesos/provisioner/*.o rm -f state/.deps/.dirstamp rm -rf log/tool/.libs log/tool/_libs rm -f state/.dirstamp rm -f slave/containerizer/mesos/provisioner/*.lo rm -f tests/.deps/.dirstamp rm -rf logging/.libs logging/_libs rm -f slave/containerizer/mesos/provisioner/appc/*.o rm -f tests/.dirstamp rm -rf master/.libs master/_libs rm -f slave/containerizer/mesos/provisioner/appc/*.lo rm -f tests/common/.deps/.dirstamp rm -f slave/containerizer/mesos/provisioner/backends/*.o rm -f tests/common/.dirstamp rm -f slave/containerizer/mesos/provisioner/backends/*.lo rm -f tests/containerizer/.deps/.dirstamp rm -f slave/containerizer/mesos/provisioner/docker/*.o rm -f tests/containerizer/.dirstamp rm -f slave/containerizer/mesos/provisioner/docker/*.lo rm -f uri/.deps/.dirstamp rm -f uri/.dirstamp rm -f slave/qos_controllers/*.o rm -f uri/fetchers/.deps/.dirstamp rm -f uri/fetchers/.dirstamp rm -f slave/qos_controllers/*.lo rm -f usage/.deps/.dirstamp rm -f slave/resource_estimators/*.o rm -rf master/allocator/.libs master/allocator/_libs rm -f usage/.dirstamp rm -f slave/resource_estimators/*.lo rm -rf master/allocator/mesos/.libs master/allocator/mesos/_libs rm -f state/*.o rm -f v1/.deps/.dirstamp rm -rf master/allocator/sorter/drf/.libs master/allocator/sorter/drf/_libs rm -f state/*.lo rm -f v1/.dirstamp rm -f version/.deps/.dirstamp rm -f tests/*.o rm -rf master/contender/.libs master/contender/_libs rm -f version/.dirstamp rm -rf master/detector/.libs master/detector/_libs rm -f watcher/.deps/.dirstamp rm -f watcher/.dirstamp rm -rf messages/.libs messages/_libs rm -f zookeeper/.deps/.dirstamp rm -rf module/.libs module/_libs rm -f zookeeper/.dirstamp rm -rf sched/.libs sched/_libs rm -rf scheduler/.libs scheduler/_libs rm -rf slave/.libs slave/_libs rm -rf slave/container_loggers/.libs slave/container_loggers/_libs rm -rf slave/containerizer/.libs slave/containerizer/_libs rm -rf slave/containerizer/mesos/.libs slave/containerizer/mesos/_libs rm -rf slave/containerizer/mesos/isolators/appc/.libs 
slave/containerizer/mesos/isolators/appc/_libs rm -rf slave/containerizer/mesos/isolators/cgroups/.libs slave/containerizer/mesos/isolators/cgroups/_libs rm -rf slave/containerizer/mesos/isolators/docker/.libs slave/containerizer/mesos/isolators/docker/_libs rm -rf slave/containerizer/mesos/isolators/docker/volume/.libs slave/containerizer/mesos/isolators/docker/volume/_libs rm -rf slave/containerizer/mesos/isolators/filesystem/.libs slave/containerizer/mesos/isolators/filesystem/_libs rm -rf slave/containerizer/mesos/isolators/gpu/.libs slave/containerizer/mesos/isolators/gpu/_libs rm -rf slave/containerizer/mesos/isolators/namespaces/.libs slave/containerizer/mesos/isolators/namespaces/_libs rm -rf slave/containerizer/mesos/isolators/network/.libs slave/containerizer/mesos/isolators/network/_libs rm -rf slave/containerizer/mesos/isolators/network/cni/.libs slave/containerizer/mesos/isolators/network/cni/_libs rm -rf slave/containerizer/mesos/isolators/posix/.libs slave/containerizer/mesos/isolators/posix/_libs rm -rf slave/containerizer/mesos/isolators/xfs/.libs slave/containerizer/mesos/isolators/xfs/_libs rm -rf slave/containerizer/mesos/provisioner/.libs slave/containerizer/mesos/provisioner/_libs rm -rf slave/containerizer/mesos/provisioner/appc/.libs slave/containerizer/mesos/provisioner/appc/_libs rm -f tests/common/*.o rm -rf slave/containerizer/mesos/provisioner/backends/.libs slave/containerizer/mesos/provisioner/backends/_libs rm -f tests/containerizer/*.o rm -rf slave/containerizer/mesos/provisioner/docker/.libs slave/containerizer/mesos/provisioner/docker/_libs rm -rf slave/qos_controllers/.libs slave/qos_controllers/_libs rm -rf slave/resource_estimators/.libs slave/resource_estimators/_libs rm -rf state/.libs state/_libs rm -rf uri/.libs uri/_libs rm -rf uri/fetchers/.libs uri/fetchers/_libs rm -rf usage/.libs usage/_libs rm -f uri/*.o rm -f uri/*.lo rm -rf v1/.libs v1/_libs rm -f uri/fetchers/*.o rm -rf version/.libs version/_libs rm -f uri/fetchers/*.lo rm -rf watcher/.libs watcher/_libs rm -f usage/*.o rm -rf zookeeper/.libs zookeeper/_libs rm -f usage/*.lo rm -f v1/*.o rm -f v1/*.lo rm -f version/*.o rm -f version/*.lo rm -f watcher/*.o rm -f watcher/*.lo rm -f zookeeper/*.o rm -f zookeeper/*.lo rm -rf ../include/mesos/.deps ../include/mesos/agent/.deps ../include/mesos/allocator/.deps ../include/mesos/appc/.deps ../include/mesos/authentication/.deps ../include/mesos/authorizer/.deps ../include/mesos/containerizer/.deps ../include/mesos/docker/.deps ../include/mesos/executor/.deps ../include/mesos/fetcher/.deps ../include/mesos/maintenance/.deps ../include/mesos/master/.deps ../include/mesos/module/.deps ../include/mesos/quota/.deps ../include/mesos/scheduler/.deps ../include/mesos/slave/.deps ../include/mesos/state/.deps ../include/mesos/uri/.deps ../include/mesos/v1/.deps ../include/mesos/v1/agent/.deps ../include/mesos/v1/allocator/.deps ../include/mesos/v1/executor/.deps ../include/mesos/v1/maintenance/.deps ../include/mesos/v1/master/.deps ../include/mesos/v1/quota/.deps ../include/mesos/v1/scheduler/.deps appc/.deps authentication/cram_md5/.deps authentication/http/.deps authorizer/.deps authorizer/local/.deps cli/.deps common/.deps docker/.deps examples/.deps exec/.deps executor/.deps files/.deps hdfs/.deps health-check/.deps hook/.deps internal/.deps java/jni/.deps jvm/.deps jvm/org/apache/.deps launcher/.deps launcher/posix/.deps linux/.deps linux/routing/.deps linux/routing/diagnosis/.deps linux/routing/filter/.deps linux/routing/link/.deps 
linux/routing/queueing/.deps local/.deps log/.deps log/tool/.deps logging/.deps master/.deps master/allocator/.deps master/allocator/mesos/.deps master/allocator/sorter/drf/.deps master/contender/.deps master/detector/.deps messages/.deps module/.deps sched/.deps scheduler/.deps slave/.deps slave/container_loggers/.deps slave/containerizer/.deps slave/containerizer/mesos/.deps slave/containerizer/mesos/isolators/appc/.deps slave/containerizer/mesos/isolators/cgroups/.deps slave/containerizer/mesos/isolators/docker/.deps slave/containerizer/mesos/isolators/docker/volume/.deps slave/containerizer/mesos/isolators/filesystem/.deps slave/containerizer/mesos/isolators/gpu/.deps slave/containerizer/mesos/isolators/namespaces/.deps slave/containerizer/mesos/isolators/network/.deps slave/containerizer/mesos/isolators/network/cni/.deps slave/containerizer/mesos/isolators/posix/.deps slave/containerizer/mesos/isolators/xfs/.deps slave/containerizer/mesos/provisioner/.deps slave/containerizer/mesos/provisioner/appc/.deps slave/containerizer/mesos/provisioner/backends/.deps slave/containerizer/mesos/provisioner/docker/.deps slave/qos_controllers/.deps slave/resource_estimators/.deps state/.deps tests/.deps tests/common/.deps tests/containerizer/.deps uri/.deps uri/fetchers/.deps usage/.deps v1/.deps version/.deps watcher/.deps zookeeper/.deps rm -f Makefile make[2]: Leaving directory `/mesos/mesos-1.1.0/_build/src' rm -f config.status config.cache config.log configure.lineno config.status.lineno rm -f Makefile ERROR: files left in build directory after distclean: ./src/python/executor/build/temp.linux-x86_64-2.7/src/mesos/executor/module.o ./src/python/executor/build/temp.linux-x86_64-2.7/src/mesos/executor/mesos_executor_driver_impl.o ./src/python/executor/build/temp.linux-x86_64-2.7/src/mesos/executor/proxy_executor.o ./src/python/executor/build/lib.linux-x86_64-2.7/mesos/executor/_executor.so ./src/python/executor/build/lib.linux-x86_64-2.7/mesos/executor/__init__.py ./src/python/executor/build/lib.linux-x86_64-2.7/mesos/__init__.py ./src/python/executor/ext_modules.pyc ./src/python/scheduler/build/temp.linux-x86_64-2.7/src/mesos/scheduler/module.o ./src/python/scheduler/build/temp.linux-x86_64-2.7/src/mesos/scheduler/mesos_scheduler_driver_impl.o ./src/python/scheduler/build/temp.linux-x86_64-2.7/src/mesos/scheduler/proxy_scheduler.o ./src/python/scheduler/build/lib.linux-x86_64-2.7/mesos/scheduler/_scheduler.so ./src/python/scheduler/build/lib.linux-x86_64-2.7/mesos/scheduler/__init__.py ./src/python/scheduler/build/lib.linux-x86_64-2.7/mesos/__init__.py ./src/python/scheduler/ext_modules.pyc ./src/python/build/lib.linux-x86_64-2.7/mesos/__init__.py ./src/python/cli/build/lib.linux-x86_64-2.7/mesos/http.py ./src/python/cli/build/lib.linux-x86_64-2.7/mesos/cli.py ./src/python/cli/build/lib.linux-x86_64-2.7/mesos/futures.py ./src/python/cli/build/lib.linux-x86_64-2.7/mesos/__init__.py ./src/python/interface/build/lib.linux-x86_64-2.7/mesos/__init__.py ./src/python/interface/build/lib.linux-x86_64-2.7/mesos/interface/containerizer_pb2.py ./src/python/interface/build/lib.linux-x86_64-2.7/mesos/interface/mesos_pb2.py ./src/python/interface/build/lib.linux-x86_64-2.7/mesos/interface/__init__.py ./src/python/native/build/lib.linux-x86_64-2.7/mesos/__init__.py ./src/python/native/build/lib.linux-x86_64-2.7/mesos/native/__init__.py make[1]: Leaving directory `/mesos/mesos-1.1.0/_build' make[1]: *** [distcleancheck] Error 1 make: *** [distcheck] Error 1 + docker rmi mesos-1470073345-14436 
",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5970","08/02/2016 22:04:44",1,"Remove HTTP_PARSER_VERSION_MAJOR < 2 code in decoder. ""https://reviews.apache.org/r/50683""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5986","08/04/2016 17:30:13",3,"SSL Socket CHECK can fail after socket receives EOF ""While writing a test for MESOS-3753, I encountered a bug where [this check|https://github.com/apache/mesos/blob/853821cafcca3550b9c7bdaba5262d73869e2ee1/3rdparty/libprocess/src/libevent_ssl_socket.cpp#L708] fails at the very end of the test body, while objects in the stack frame are being destroyed. After adding some debug logging output, I produced the following: The {{in send()17}} line indicates the beginning of {{send()}} for the SSL socket using FD 17. {{in shutdown(): 17}} indicates the beginning of {{shutdown()}} for the same socket, while {{sending on socket: 17}} indicates the execution of the lambda from {{send()}} on the event loop. Since {{shutdown()}} was called in between the call to {{send()}} and the execution of its lambda, it looks like the {{Socket}} was destroyed before the lambda could run. It's unclear why this would happen, since {{send()}}'s lambda captures a shared copy of the socket's {{this}} pointer in order to keep it alive."""," I0804 08:32:33.263211 273793024 libevent_ssl_socket.cpp:681] *** in send()17 I0804 08:32:33.263209 273256448 process.cpp:2970] Cleaning up __limiter__(3)@127.0.0.1:55688 I0804 08:32:33.263263 275939328 libevent_ssl_socket.cpp:152] *** in initialize(): 14 I0804 08:32:33.263206 272719872 process.cpp:2865] Resuming (61)@127.0.0.1:55688 at 2016-08-04 15:32:33.263261952+00:00 I0804 08:32:33.263327 275939328 libevent_ssl_socket.cpp:584] *** in recv()14 I0804 08:32:33.263337 272719872 hierarchical.cpp:571] Agent e2a49340-34ec-403f-a5a4-15e29c4a2434-S0 deactivated I0804 08:32:33.263322 275402752 process.cpp:2865] Resuming help@127.0.0.1:55688 at 2016-08-04 15:32:33.263343104+00:00 I0804 08:32:33.263510 275939328 libevent_ssl_socket.cpp:322] *** in event_callback(bev) I0804 08:32:33.263536 275939328 libevent_ssl_socket.cpp:353] *** in event_callback check for EOF/CONNECTED/ERROR: 19 I0804 08:32:33.263592 275939328 libevent_ssl_socket.cpp:159] *** in shutdown(): 19 I0804 08:32:33.263622 1985901312 process.cpp:3170] Donating thread to (87)@127.0.0.1:55688 while waiting I0804 08:32:33.263639 274329600 process.cpp:2865] Resuming __http__(12)@127.0.0.1:55688 at 2016-08-04 15:32:33.263653888+00:00 I0804 08:32:33.263659 1985901312 process.cpp:2865] Resuming (87)@127.0.0.1:55688 at 2016-08-04 15:32:33.263671040+00:00 I0804 08:32:33.263730 1985901312 process.cpp:2970] Cleaning up (87)@127.0.0.1:55688 I0804 08:32:33.263741 275939328 libevent_ssl_socket.cpp:322] *** in event_callback(bev) I0804 08:32:33.263736 274329600 process.cpp:2970] Cleaning up __http__(12)@127.0.0.1:55688 I0804 08:32:33.263778 275939328 libevent_ssl_socket.cpp:353] *** in event_callback check for EOF/CONNECTED/ERROR: 17 I0804 08:32:33.263818 275939328 libevent_ssl_socket.cpp:159] *** in shutdown(): 17 I0804 08:32:33.263839 272183296 process.cpp:2865] Resuming help@127.0.0.1:55688 at 2016-08-04 15:32:33.263857920+00:00 I0804 08:32:33.263933 273793024 process.cpp:2865] Resuming __gc__@127.0.0.1:55688 at 2016-08-04 15:32:33.263951104+00:00 I0804 08:32:33.264034 275939328 libevent_ssl_socket.cpp:681] *** in send()17 I0804 08:32:33.264020 272719872 process.cpp:2865] Resuming 
__http__(11)@127.0.0.1:55688 at 2016-08-04 15:32:33.264041984+00:00 I0804 08:32:33.264036 274329600 process.cpp:2865] Resuming status-update-manager(3)@127.0.0.1:55688 at 2016-08-04 15:32:33.264056064+00:00 I0804 08:32:33.264071 272719872 process.cpp:2970] Cleaning up __http__(11)@127.0.0.1:55688 I0804 08:32:33.264088 274329600 process.cpp:2970] Cleaning up status-update-manager(3)@127.0.0.1:55688 I0804 08:32:33.264086 275939328 libevent_ssl_socket.cpp:721] *** sending on socket: 17, data: 0 I0804 08:32:33.264112 272183296 process.cpp:2865] Resuming (89)@127.0.0.1:55688 at 2016-08-04 15:32:33.264126976+00:00 I0804 08:32:33.264118 275402752 process.cpp:2865] Resuming help@127.0.0.1:55688 at 2016-08-04 15:32:33.264144896+00:00 I0804 08:32:33.264149 272183296 process.cpp:2970] Cleaning up (89)@127.0.0.1:55688 I0804 08:32:33.264202 275939328 libevent_ssl_socket.cpp:281] *** in send_callback(bev) I0804 08:32:33.264400 273793024 process.cpp:3170] Donating thread to (86)@127.0.0.1:55688 while waiting I0804 08:32:33.264413 273256448 process.cpp:2865] Resuming (76)@127.0.0.1:55688 at 2016-08-04 15:32:33.264428032+00:00 I0804 08:32:33.296268 275939328 libevent_ssl_socket.cpp:300] *** in send_callback(): 17 I0804 08:32:33.296419 273256448 process.cpp:2970] Cleaning up (76)@127.0.0.1:55688 I0804 08:32:33.296357 273793024 process.cpp:2865] Resuming (86)@127.0.0.1:55688 at 2016-08-04 15:32:33.296414976+00:00 I0804 08:32:33.296464 273793024 process.cpp:2970] Cleaning up (86)@127.0.0.1:55688 I0804 08:32:33.296497 275939328 libevent_ssl_socket.cpp:104] *** releasing SSL socket I0804 08:32:33.296517 275939328 libevent_ssl_socket.cpp:106] *** released SSL socket: 19 I0804 08:32:33.296515 274329600 process.cpp:2865] Resuming help@127.0.0.1:55688 at 2016-08-04 15:32:33.296532992+00:00 I0804 08:32:33.296550 275939328 libevent_ssl_socket.cpp:721] *** sending on socket: 17, data: 0 I0804 08:32:33.296583 273793024 process.cpp:2865] Resuming (77)@127.0.0.1:55688 at 2016-08-04 15:32:33.296616960+00:00 F0804 08:32:33.296623 275939328 libevent_ssl_socket.cpp:723] Check failed: 'self->send_request.get()' Must be non NULL *** Check failure stack trace: *** ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5988","08/04/2016 19:16:42",3,"PollSocketImpl can write to a stale fd. ""While tracking down MESOS-5986 with [~greggomann] and [~anandmazumdar], we were curious why PollSocketImpl avoids the same issue. It seems that PollSocketImpl has a similar race; however, in the case of PollSocketImpl we will simply write to a stale file descriptor. One example is {{PollSocketImpl::send(const char*, size_t)}}: https://github.com/apache/mesos/blob/1.0.0/3rdparty/libprocess/src/poll_socket.cpp#L241-L245 If the last reference to the {{Socket}} goes away before the {{socket_send_data}} loop completes, then we will write to a stale fd! It turns out that we have avoided this issue because in libprocess we happen to keep a reference to the {{Socket}} around when sending: https://github.com/apache/mesos/blob/1.0.0/3rdparty/libprocess/src/process.cpp#L1678-L1707 However, this may not be true in all call-sites going forward. 
Currently, it appears that http::Connection can trigger this bug."""," Future PollSocketImpl::send(const char* data, size_t size) { return io::poll(get(), io::WRITE) .then(lambda::bind(&internal::socket_send_data, get(), data, size)); } Future socket_send_data(int s, const char* data, size_t size) { CHECK(size > 0); while (true) { ssize_t length = send(s, data, size, MSG_NOSIGNAL); #ifdef __WINDOWS__ int error = WSAGetLastError(); #else int error = errno; #endif // __WINDOWS__ if (length < 0 && net::is_restartable_error(error)) { // Interrupted, try again now. continue; } else if (length < 0 && net::is_retryable_error(error)) { // Might block, try again later. return io::poll(s, io::WRITE) .then(lambda::bind(&internal::socket_send_data, s, data, size)); } else if (length <= 0) { // Socket error or closed. if (length < 0) { const string error = os::strerror(errno); VLOG(1) << """"Socket error while sending: """" << error; } else { VLOG(1) << """"Socket closed while sending""""; } if (length == 0) { return length; } else { return Failure(ErrnoError(""""Socket send failed"""")); } } else { CHECK(length > 0); return length; } } } void send(Encoder* encoder, Socket socket) { switch (encoder->kind()) { case Encoder::DATA: { size_t size; const char* data = static_cast(encoder)->next(&size); socket.send(data, size) .onAny(lambda::bind( &internal::_send, lambda::_1, socket, encoder, size)); break; } case Encoder::FILE: { off_t offset; size_t size; int fd = static_cast(encoder)->next(&offset, &size); socket.sendfile(fd, offset, size) .onAny(lambda::bind( &internal::_send, lambda::_1, socket, encoder, size)); break; } } } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-5995","08/05/2016 08:48:39",1,"Protobuf JSON deserialisation does not accept numbers formatted as strings ""Proto2 does not specify JSON mappings but [Proto3|https://developers.google.com/protocol-buffers/docs/proto3#json] does, and it recommends mapping 64bit numbers to strings. Unfortunately Mesos does not accept strings in place of uint64 and returns 400 Bad {quote} Request error Failed to convert JSON into Call protobuf: Not expecting a JSON string for field 'value'. {quote} Is this on purpose or is this a bug?""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6001","08/05/2016 23:59:45",3,"Aufs backend cannot support the image with numerous layers. 
"""," [20:13:07] : [Step 10/10] [ RUN ] DockerRuntimeIsolatorTest.ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller [20:13:07]W: [Step 10/10] I0805 20:13:07.615844 23416 cluster.cpp:155] Creating default 'local' authorizer [20:13:07]W: [Step 10/10] I0805 20:13:07.624106 23416 leveldb.cpp:174] Opened db in 8.148813ms [20:13:07]W: [Step 10/10] I0805 20:13:07.627252 23416 leveldb.cpp:181] Compacted db in 3.126629ms [20:13:07]W: [Step 10/10] I0805 20:13:07.627275 23416 leveldb.cpp:196] Created db iterator in 4410ns [20:13:07]W: [Step 10/10] I0805 20:13:07.627282 23416 leveldb.cpp:202] Seeked to beginning of db in 763ns [20:13:07]W: [Step 10/10] I0805 20:13:07.627287 23416 leveldb.cpp:271] Iterated through 0 keys in the db in 491ns [20:13:07]W: [Step 10/10] I0805 20:13:07.627301 23416 replica.cpp:776] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned [20:13:07]W: [Step 10/10] I0805 20:13:07.627563 23434 recover.cpp:451] Starting replica recovery [20:13:07]W: [Step 10/10] I0805 20:13:07.627800 23437 recover.cpp:477] Replica is in EMPTY status [20:13:07]W: [Step 10/10] I0805 20:13:07.628113 23431 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from __req_res__(5852)@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.628243 23430 recover.cpp:197] Received a recover response from a replica in EMPTY status [20:13:07]W: [Step 10/10] I0805 20:13:07.628365 23437 recover.cpp:568] Updating replica status to STARTING [20:13:07]W: [Step 10/10] I0805 20:13:07.628744 23432 master.cpp:375] Master dd755a55-0dd1-4d2d-9a49-812a666015cb (ip-172-30-2-138.mesosphere.io) started on 172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.628758 23432 master.cpp:377] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/OZHDIQ/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""true"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/OZHDIQ/master"""" --zk_session_timeout=""""10secs"""" [20:13:07]W: [Step 10/10] I0805 20:13:07.628893 23432 master.cpp:427] Master only allowing authenticated frameworks to register [20:13:07]W: [Step 10/10] I0805 20:13:07.628900 23432 master.cpp:441] Master only allowing authenticated agents to register [20:13:07]W: [Step 10/10] I0805 20:13:07.628902 23432 master.cpp:454] Master only allowing authenticated HTTP frameworks to register [20:13:07]W: [Step 10/10] I0805 20:13:07.628906 23432 credentials.hpp:37] Loading credentials for 
authentication from '/tmp/OZHDIQ/credentials' [20:13:07]W: [Step 10/10] I0805 20:13:07.628999 23432 master.cpp:499] Using default 'crammd5' authenticator [20:13:07]W: [Step 10/10] I0805 20:13:07.629041 23432 http.cpp:883] Using default 'basic' HTTP authenticator for realm 'mesos-master-readonly' [20:13:07]W: [Step 10/10] I0805 20:13:07.629114 23432 http.cpp:883] Using default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' [20:13:07]W: [Step 10/10] I0805 20:13:07.629166 23432 http.cpp:883] Using default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' [20:13:07]W: [Step 10/10] I0805 20:13:07.629231 23432 master.cpp:579] Authorization enabled [20:13:07]W: [Step 10/10] I0805 20:13:07.629290 23434 whitelist_watcher.cpp:77] No whitelist given [20:13:07]W: [Step 10/10] I0805 20:13:07.629302 23430 hierarchical.cpp:151] Initialized hierarchical allocator process [20:13:07]W: [Step 10/10] I0805 20:13:07.629921 23433 master.cpp:1851] Elected as the leading master! [20:13:07]W: [Step 10/10] I0805 20:13:07.629933 23433 master.cpp:1547] Recovering from registrar [20:13:07]W: [Step 10/10] I0805 20:13:07.629992 23436 registrar.cpp:332] Recovering registrar [20:13:07]W: [Step 10/10] I0805 20:13:07.630861 23435 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 2.358536ms [20:13:07]W: [Step 10/10] I0805 20:13:07.630877 23435 replica.cpp:320] Persisted replica status to STARTING [20:13:07]W: [Step 10/10] I0805 20:13:07.630924 23435 recover.cpp:477] Replica is in STARTING status [20:13:07]W: [Step 10/10] I0805 20:13:07.631178 23435 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from __req_res__(5853)@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.631285 23435 recover.cpp:197] Received a recover response from a replica in STARTING status [20:13:07]W: [Step 10/10] I0805 20:13:07.631433 23436 recover.cpp:568] Updating replica status to VOTING [20:13:07]W: [Step 10/10] I0805 20:13:07.633391 23433 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.912156ms [20:13:07]W: [Step 10/10] I0805 20:13:07.633409 23433 replica.cpp:320] Persisted replica status to VOTING [20:13:07]W: [Step 10/10] I0805 20:13:07.633438 23433 recover.cpp:582] Successfully joined the Paxos group [20:13:07]W: [Step 10/10] I0805 20:13:07.633479 23433 recover.cpp:466] Recover process terminated [20:13:07]W: [Step 10/10] I0805 20:13:07.633635 23435 log.cpp:553] Attempting to start the writer [20:13:07]W: [Step 10/10] I0805 20:13:07.634021 23432 replica.cpp:493] Replica received implicit promise request from __req_res__(5854)@172.30.2.138:44256 with proposal 1 [20:13:07]W: [Step 10/10] I0805 20:13:07.636034 23432 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.995908ms [20:13:07]W: [Step 10/10] I0805 20:13:07.636049 23432 replica.cpp:342] Persisted promised to 1 [20:13:07]W: [Step 10/10] I0805 20:13:07.636239 23432 coordinator.cpp:238] Coordinator attempting to fill missing positions [20:13:07]W: [Step 10/10] I0805 20:13:07.636672 23432 replica.cpp:388] Replica received explicit promise request from __req_res__(5855)@172.30.2.138:44256 for position 0 with proposal 2 [20:13:07]W: [Step 10/10] I0805 20:13:07.637307 23432 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 614745ns [20:13:07]W: [Step 10/10] I0805 20:13:07.637318 23432 replica.cpp:708] Persisted action NOP at position 0 [20:13:07]W: [Step 10/10] I0805 20:13:07.637668 23432 replica.cpp:537] Replica received write request for position 0 from 
__req_res__(5856)@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.637692 23432 leveldb.cpp:436] Reading position from leveldb took 10680ns [20:13:07]W: [Step 10/10] I0805 20:13:07.638314 23432 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 610038ns [20:13:07]W: [Step 10/10] I0805 20:13:07.638325 23432 replica.cpp:708] Persisted action NOP at position 0 [20:13:07]W: [Step 10/10] I0805 20:13:07.638569 23436 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 [20:13:07]W: [Step 10/10] I0805 20:13:07.640446 23436 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.856131ms [20:13:07]W: [Step 10/10] I0805 20:13:07.640461 23436 replica.cpp:708] Persisted action NOP at position 0 [20:13:07]W: [Step 10/10] I0805 20:13:07.640645 23437 log.cpp:569] Writer started with ending position 0 [20:13:07]W: [Step 10/10] I0805 20:13:07.640940 23430 leveldb.cpp:436] Reading position from leveldb took 11341ns [20:13:07]W: [Step 10/10] I0805 20:13:07.641152 23430 registrar.cpp:365] Successfully fetched the registry (0B) in 11.14496ms [20:13:07]W: [Step 10/10] I0805 20:13:07.641185 23430 registrar.cpp:464] Applied 1 operations in 5010ns; attempting to update the registry [20:13:07]W: [Step 10/10] I0805 20:13:07.641381 23434 log.cpp:577] Attempting to append 209 bytes to the log [20:13:07]W: [Step 10/10] I0805 20:13:07.641425 23430 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 [20:13:07]W: [Step 10/10] I0805 20:13:07.641706 23434 replica.cpp:537] Replica received write request for position 1 from __req_res__(5857)@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.642320 23434 leveldb.cpp:341] Persisting action (228 bytes) to leveldb took 596016ns [20:13:07]W: [Step 10/10] I0805 20:13:07.642333 23434 replica.cpp:708] Persisted action APPEND at position 1 [20:13:07]W: [Step 10/10] I0805 20:13:07.642608 23435 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 [20:13:07]W: [Step 10/10] I0805 20:13:07.644492 23435 leveldb.cpp:341] Persisting action (230 bytes) to leveldb took 1.868216ms [20:13:07]W: [Step 10/10] I0805 20:13:07.644507 23435 replica.cpp:708] Persisted action APPEND at position 1 [20:13:07]W: [Step 10/10] I0805 20:13:07.644716 23432 registrar.cpp:509] Successfully updated the registry in 3.512064ms [20:13:07]W: [Step 10/10] I0805 20:13:07.644759 23432 registrar.cpp:395] Successfully recovered registrar [20:13:07]W: [Step 10/10] I0805 20:13:07.644811 23431 log.cpp:596] Attempting to truncate the log to 1 [20:13:07]W: [Step 10/10] I0805 20:13:07.644879 23433 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 [20:13:07]W: [Step 10/10] I0805 20:13:07.644949 23430 master.cpp:1655] Recovered 0 agents from the registry (170B); allowing 10mins for agents to re-register [20:13:07]W: [Step 10/10] I0805 20:13:07.644959 23437 hierarchical.cpp:178] Skipping recovery of hierarchical allocator: nothing to recover [20:13:07]W: [Step 10/10] I0805 20:13:07.645247 23431 replica.cpp:537] Replica received write request for position 2 from __req_res__(5858)@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.645884 23431 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 618643ns [20:13:07]W: [Step 10/10] I0805 20:13:07.645896 23431 replica.cpp:708] Persisted action TRUNCATE at position 2 [20:13:07]W: [Step 10/10] I0805 20:13:07.646080 23437 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 [20:13:07]W: [Step 
10/10] I0805 20:13:07.648093 23437 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 1.995217ms [20:13:07]W: [Step 10/10] I0805 20:13:07.648118 23437 leveldb.cpp:399] Deleting ~1 keys from leveldb took 10026ns [20:13:07]W: [Step 10/10] I0805 20:13:07.648125 23437 replica.cpp:708] Persisted action TRUNCATE at position 2 [20:13:07]W: [Step 10/10] I0805 20:13:07.649564 23416 containerizer.cpp:200] Using isolation: docker/runtime,filesystem/linux,network/cni [20:13:07]W: [Step 10/10] I0805 20:13:07.652878 23416 linux_launcher.cpp:101] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher [20:13:07]W: [Step 10/10] E0805 20:13:07.656265 23416 shell.hpp:106] Command 'hadoop version 2>&1' failed; this is the output: [20:13:07]W: [Step 10/10] sh: 1: hadoop: not found [20:13:07]W: [Step 10/10] I0805 20:13:07.656286 23416 fetcher.cpp:62] Skipping URI fetcher plugin 'hadoop' as it could not be created: Failed to create HDFS client: Failed to execute 'hadoop version 2>&1'; the command was either not found or exited with a non-zero exit status: 127 [20:13:07]W: [Step 10/10] I0805 20:13:07.656338 23416 registry_puller.cpp:111] Creating registry puller with docker registry 'https://registry-1.docker.io' [20:13:07]W: [Step 10/10] I0805 20:13:07.657330 23416 linux.cpp:148] Bind mounting '/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn' and making it a shared mount [20:13:07]W: [Step 10/10] I0805 20:13:07.663147 23416 cluster.cpp:434] Creating default 'local' authorizer [20:13:07]W: [Step 10/10] I0805 20:13:07.663566 23436 slave.cpp:198] Mesos agent started on (506)@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.663583 23436 slave.cpp:199] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/OZHDIQ/store"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" 
--http_credentials=""""/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/http_credentials"""" --image_providers=""""docker"""" --initialize_driver_logging=""""true"""" --isolation=""""docker/runtime,filesystem/linux"""" --launcher_dir=""""/mnt/teamcity/work/4240ba9ddd0997c3/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn"""" [20:13:07]W: [Step 10/10] I0805 20:13:07.663796 23436 credentials.hpp:86] Loading credential for authentication from '/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/credential' [20:13:07]W: [Step 10/10] I0805 20:13:07.663868 23436 slave.cpp:336] Agent using credential for: test-principal [20:13:07]W: [Step 10/10] I0805 20:13:07.663882 23436 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/http_credentials' [20:13:07]W: [Step 10/10] I0805 20:13:07.663969 23436 http.cpp:883] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' [20:13:07]W: [Step 10/10] I0805 20:13:07.664010 23436 http.cpp:883] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' [20:13:07]W: [Step 10/10] I0805 20:13:07.664225 23416 sched.cpp:226] Version: 1.1.0 [20:13:07]W: [Step 10/10] I0805 20:13:07.664423 23435 sched.cpp:330] New master detected at master@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.664451 23435 sched.cpp:396] Authenticating with master master@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.664428 23436 slave.cpp:519] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [20:13:07]W: [Step 10/10] I0805 20:13:07.664458 23435 sched.cpp:403] Using default CRAM-MD5 authenticatee [20:13:07]W: [Step 10/10] I0805 20:13:07.664463 23436 slave.cpp:527] Agent attributes: [ ] [20:13:07]W: [Step 10/10] I0805 20:13:07.664470 23436 slave.cpp:532] Agent hostname: ip-172-30-2-138.mesosphere.io [20:13:07]W: [Step 10/10] I0805 20:13:07.664588 23437 authenticatee.cpp:121] Creating new client SASL connection [20:13:07]W: [Step 10/10] I0805 20:13:07.664810 23437 master.cpp:5900] Authenticating scheduler-893e3efc-6e25-48f3-a487-d2ef50ffd5ba@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.664873 23437 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(1028)@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.664939 23432 state.cpp:57] Recovering state from '/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/meta' [20:13:07]W: [Step 10/10] I0805 20:13:07.665006 23431 authenticator.cpp:98] Creating new 
server SASL connection [20:13:07]W: [Step 10/10] I0805 20:13:07.665024 23435 status_update_manager.cpp:203] Recovering status update manager [20:13:07]W: [Step 10/10] I0805 20:13:07.665174 23434 containerizer.cpp:527] Recovering containerizer [20:13:07]W: [Step 10/10] I0805 20:13:07.665201 23431 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 [20:13:07]W: [Step 10/10] I0805 20:13:07.665221 23431 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' [20:13:07]W: [Step 10/10] I0805 20:13:07.665266 23431 authenticator.cpp:204] Received SASL authentication start [20:13:07]W: [Step 10/10] I0805 20:13:07.665303 23431 authenticator.cpp:326] Authentication requires more steps [20:13:07]W: [Step 10/10] I0805 20:13:07.665347 23431 authenticatee.cpp:259] Received SASL authentication step [20:13:07]W: [Step 10/10] I0805 20:13:07.665436 23431 authenticator.cpp:232] Received SASL authentication step [20:13:07]W: [Step 10/10] I0805 20:13:07.665457 23431 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-138.mesosphere.io' server FQDN: 'ip-172-30-2-138.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false [20:13:07]W: [Step 10/10] I0805 20:13:07.665465 23431 auxprop.cpp:181] Looking up auxiliary property '*userPassword' [20:13:07]W: [Step 10/10] I0805 20:13:07.665482 23431 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' [20:13:07]W: [Step 10/10] I0805 20:13:07.665494 23431 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-138.mesosphere.io' server FQDN: 'ip-172-30-2-138.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true [20:13:07]W: [Step 10/10] I0805 20:13:07.665503 23431 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true [20:13:07]W: [Step 10/10] I0805 20:13:07.665510 23431 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true [20:13:07]W: [Step 10/10] I0805 20:13:07.665524 23431 authenticator.cpp:318] Authentication success [20:13:07]W: [Step 10/10] I0805 20:13:07.665575 23436 authenticatee.cpp:299] Authentication success [20:13:07]W: [Step 10/10] I0805 20:13:07.665596 23435 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(1028)@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.665624 23431 master.cpp:5930] Successfully authenticated principal 'test-principal' at scheduler-893e3efc-6e25-48f3-a487-d2ef50ffd5ba@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.665705 23436 sched.cpp:502] Successfully authenticated with master master@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.665715 23436 sched.cpp:820] Sending SUBSCRIBE call to master@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.665751 23436 sched.cpp:853] Will retry registration in 188.601026ms if necessary [20:13:07]W: [Step 10/10] I0805 20:13:07.665796 23437 master.cpp:2425] Received SUBSCRIBE call for framework 'default' at scheduler-893e3efc-6e25-48f3-a487-d2ef50ffd5ba@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.665817 23437 master.cpp:1887] Authorizing framework principal 'test-principal' to receive offers for role '*' [20:13:07]W: [Step 10/10] I0805 20:13:07.665998 23430 master.cpp:2501] Subscribing framework default with checkpointing disabled and capabilities [ ] [20:13:07]W: [Step 
10/10] I0805 20:13:07.666132 23432 hierarchical.cpp:271] Added framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:07]W: [Step 10/10] I0805 20:13:07.666148 23434 sched.cpp:743] Framework registered with dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:07]W: [Step 10/10] I0805 20:13:07.666154 23432 hierarchical.cpp:1548] No allocations performed [20:13:07]W: [Step 10/10] I0805 20:13:07.666173 23432 hierarchical.cpp:1643] No inverse offers to send out! [20:13:07]W: [Step 10/10] I0805 20:13:07.666177 23434 sched.cpp:757] Scheduler::registered took 11084ns [20:13:07]W: [Step 10/10] I0805 20:13:07.666189 23432 hierarchical.cpp:1192] Performed allocation for 0 agents in 43102ns [20:13:07]W: [Step 10/10] I0805 20:13:07.666486 23431 metadata_manager.cpp:205] No images to load from disk. Docker provisioner image storage path '/tmp/OZHDIQ/store/storedImages' does not exist [20:13:07]W: [Step 10/10] I0805 20:13:07.666558 23436 provisioner.cpp:255] Provisioner recovery complete [20:13:07]W: [Step 10/10] I0805 20:13:07.666677 23435 slave.cpp:4872] Finished recovery [20:13:07]W: [Step 10/10] I0805 20:13:07.666831 23435 slave.cpp:5044] Querying resource estimator for oversubscribable resources [20:13:07]W: [Step 10/10] I0805 20:13:07.666919 23435 slave.cpp:895] New master detected at master@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.666929 23435 slave.cpp:954] Authenticating with master master@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.666931 23436 status_update_manager.cpp:177] Pausing sending status updates [20:13:07]W: [Step 10/10] I0805 20:13:07.666944 23435 slave.cpp:965] Using default CRAM-MD5 authenticatee [20:13:07]W: [Step 10/10] I0805 20:13:07.666982 23435 slave.cpp:927] Detecting new master [20:13:07]W: [Step 10/10] I0805 20:13:07.667006 23431 authenticatee.cpp:121] Creating new client SASL connection [20:13:07]W: [Step 10/10] I0805 20:13:07.667014 23435 slave.cpp:5058] Received oversubscribable resources from the resource estimator [20:13:07]W: [Step 10/10] I0805 20:13:07.667162 23431 master.cpp:5900] Authenticating slave(506)@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.667225 23434 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(1029)@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.667275 23434 authenticator.cpp:98] Creating new server SASL connection [20:13:07]W: [Step 10/10] I0805 20:13:07.667418 23434 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 [20:13:07]W: [Step 10/10] I0805 20:13:07.667436 23434 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' [20:13:07]W: [Step 10/10] I0805 20:13:07.667492 23436 authenticator.cpp:204] Received SASL authentication start [20:13:07]W: [Step 10/10] I0805 20:13:07.667515 23436 authenticator.cpp:326] Authentication requires more steps [20:13:07]W: [Step 10/10] I0805 20:13:07.667546 23436 authenticatee.cpp:259] Received SASL authentication step [20:13:07]W: [Step 10/10] I0805 20:13:07.667592 23436 authenticator.cpp:232] Received SASL authentication step [20:13:07]W: [Step 10/10] I0805 20:13:07.667603 23436 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-138.mesosphere.io' server FQDN: 'ip-172-30-2-138.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false [20:13:07]W: [Step 10/10] I0805 20:13:07.667610 23436 auxprop.cpp:181] Looking up auxiliary property '*userPassword' [20:13:07]W: [Step 10/10] I0805 
20:13:07.667619 23436 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' [20:13:07]W: [Step 10/10] I0805 20:13:07.667630 23436 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-138.mesosphere.io' server FQDN: 'ip-172-30-2-138.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true [20:13:07]W: [Step 10/10] I0805 20:13:07.667639 23436 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true [20:13:07]W: [Step 10/10] I0805 20:13:07.667642 23436 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true [20:13:07]W: [Step 10/10] I0805 20:13:07.667652 23436 authenticator.cpp:318] Authentication success [20:13:07]W: [Step 10/10] I0805 20:13:07.667688 23436 authenticatee.cpp:299] Authentication success [20:13:07]W: [Step 10/10] I0805 20:13:07.667713 23432 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(1029)@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.667733 23434 master.cpp:5930] Successfully authenticated principal 'test-principal' at slave(506)@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.667783 23437 slave.cpp:1049] Successfully authenticated with master master@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.667836 23437 slave.cpp:1455] Will retry registration in 4.197236ms if necessary [20:13:07]W: [Step 10/10] I0805 20:13:07.667901 23436 master.cpp:4554] Registering agent at slave(506)@172.30.2.138:44256 (ip-172-30-2-138.mesosphere.io) with id dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 [20:13:07]W: [Step 10/10] I0805 20:13:07.668021 23430 registrar.cpp:464] Applied 1 operations in 13306ns; attempting to update the registry [20:13:07]W: [Step 10/10] I0805 20:13:07.668269 23433 log.cpp:577] Attempting to append 395 bytes to the log [20:13:07]W: [Step 10/10] I0805 20:13:07.668329 23434 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 [20:13:07]W: [Step 10/10] I0805 20:13:07.668622 23433 replica.cpp:537] Replica received write request for position 3 from __req_res__(5859)@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.669297 23433 leveldb.cpp:341] Persisting action (414 bytes) to leveldb took 658552ns [20:13:07]W: [Step 10/10] I0805 20:13:07.669309 23433 replica.cpp:708] Persisted action APPEND at position 3 [20:13:07]W: [Step 10/10] I0805 20:13:07.669589 23432 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 [20:13:07]W: [Step 10/10] I0805 20:13:07.672566 23432 leveldb.cpp:341] Persisting action (416 bytes) to leveldb took 2.962622ms [20:13:07]W: [Step 10/10] I0805 20:13:07.672580 23432 replica.cpp:708] Persisted action APPEND at position 3 [20:13:07]W: [Step 10/10] I0805 20:13:07.672866 23435 registrar.cpp:509] Successfully updated the registry in 4.822784ms [20:13:07]W: [Step 10/10] I0805 20:13:07.672936 23434 log.cpp:596] Attempting to truncate the log to 3 [20:13:07]W: [Step 10/10] I0805 20:13:07.673001 23437 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 [20:13:07]W: [Step 10/10] I0805 20:13:07.673110 23436 master.cpp:4623] Registered agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 at slave(506)@172.30.2.138:44256 (ip-172-30-2-138.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] [20:13:07]W: [Step 10/10] I0805 20:13:07.673152 23432 hierarchical.cpp:478] Added agent 
dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 (ip-172-30-2-138.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: ) [20:13:07]W: [Step 10/10] I0805 20:13:07.673174 23430 slave.cpp:3739] Received ping from slave-observer(465)@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.673254 23430 slave.cpp:1095] Registered with master master@172.30.2.138:44256; given agent ID dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 [20:13:07]W: [Step 10/10] I0805 20:13:07.673266 23430 fetcher.cpp:86] Clearing fetcher cache [20:13:07]W: [Step 10/10] I0805 20:13:07.673288 23433 replica.cpp:537] Replica received write request for position 4 from __req_res__(5860)@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.673317 23432 hierarchical.cpp:1643] No inverse offers to send out! [20:13:07]W: [Step 10/10] I0805 20:13:07.673333 23432 hierarchical.cpp:1215] Performed allocation for agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 in 160981ns [20:13:07]W: [Step 10/10] I0805 20:13:07.673358 23432 status_update_manager.cpp:184] Resuming sending status updates [20:13:07]W: [Step 10/10] I0805 20:13:07.673435 23437 master.cpp:5729] Sending 1 offers to framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 (default) at scheduler-893e3efc-6e25-48f3-a487-d2ef50ffd5ba@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.673467 23430 slave.cpp:1118] Checkpointing SlaveInfo to '/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/meta/slaves/dd755a55-0dd1-4d2d-9a49-812a666015cb-S0/slave.info' [20:13:07]W: [Step 10/10] I0805 20:13:07.673566 23437 sched.cpp:917] Scheduler::resourceOffers took 40919ns [20:13:07]W: [Step 10/10] I0805 20:13:07.673607 23430 slave.cpp:1155] Forwarding total oversubscribed resources [20:13:07]W: [Step 10/10] I0805 20:13:07.673710 23437 master.cpp:5006] Received update of agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 at slave(506)@172.30.2.138:44256 (ip-172-30-2-138.mesosphere.io) with total oversubscribed resources [20:13:07]W: [Step 10/10] I0805 20:13:07.673781 23437 hierarchical.cpp:542] Agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 (ip-172-30-2-138.mesosphere.io) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) [20:13:07]W: [Step 10/10] I0805 20:13:07.673823 23437 hierarchical.cpp:1548] No allocations performed [20:13:07]W: [Step 10/10] I0805 20:13:07.673830 23437 hierarchical.cpp:1643] No inverse offers to send out! 
[20:13:07]W: [Step 10/10] I0805 20:13:07.673838 23437 hierarchical.cpp:1215] Performed allocation for agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 in 31940ns [20:13:07]W: [Step 10/10] I0805 20:13:07.674163 23435 master.cpp:3346] Processing ACCEPT call for offers: [ dd755a55-0dd1-4d2d-9a49-812a666015cb-O0 ] on agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 at slave(506)@172.30.2.138:44256 (ip-172-30-2-138.mesosphere.io) for framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 (default) at scheduler-893e3efc-6e25-48f3-a487-d2ef50ffd5ba@172.30.2.138:44256 [20:13:07]W: [Step 10/10] I0805 20:13:07.674186 23435 master.cpp:2981] Authorizing framework principal 'test-principal' to launch task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 [20:13:07]W: [Step 10/10] I0805 20:13:07.674538 23437 master.cpp:7451] Adding task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 with resources cpus(*):1; mem(*):128 on agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 (ip-172-30-2-138.mesosphere.io) [20:13:07]W: [Step 10/10] I0805 20:13:07.674564 23437 master.cpp:3835] Launching task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 (default) at scheduler-893e3efc-6e25-48f3-a487-d2ef50ffd5ba@172.30.2.138:44256 with resources cpus(*):1; mem(*):128 on agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 at slave(506)@172.30.2.138:44256 (ip-172-30-2-138.mesosphere.io) [20:13:07]W: [Step 10/10] I0805 20:13:07.674665 23430 slave.cpp:1495] Got assigned task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 for framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:07]W: [Step 10/10] I0805 20:13:07.674713 23436 hierarchical.cpp:924] Recovered cpus(*):1; mem(*):896; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):1; mem(*):128) on agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 from framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:07]W: [Step 10/10] I0805 20:13:07.674736 23436 hierarchical.cpp:961] Framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 filtered agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 for 5secs [20:13:07]W: [Step 10/10] I0805 20:13:07.674866 23430 slave.cpp:1614] Launching task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 for framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:07]W: [Step 10/10] I0805 20:13:07.675107 23430 paths.cpp:536] Trying to chown '/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/slaves/dd755a55-0dd1-4d2d-9a49-812a666015cb-S0/frameworks/dd755a55-0dd1-4d2d-9a49-812a666015cb-0000/executors/ecd0633f-2f1e-4cfa-819f-590bfb95fa12/runs/f2c1fd6d-4d11-45cd-a916-e4d73d226451' to user 'root' [20:13:07]W: [Step 10/10] I0805 20:13:07.678246 23433 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 4.916164ms [20:13:07]W: [Step 10/10] I0805 20:13:07.678267 23433 replica.cpp:708] Persisted action TRUNCATE at position 4 [20:13:07]W: [Step 10/10] I0805 20:13:07.678629 23436 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 [20:13:07]W: [Step 10/10] I0805 20:13:07.679050 23430 slave.cpp:5764] Launching executor 'ecd0633f-2f1e-4cfa-819f-590bfb95fa12' of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 with resources cpus(*):0.1; mem(*):32 in work directory 
'/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/slaves/dd755a55-0dd1-4d2d-9a49-812a666015cb-S0/frameworks/dd755a55-0dd1-4d2d-9a49-812a666015cb-0000/executors/ecd0633f-2f1e-4cfa-819f-590bfb95fa12/runs/f2c1fd6d-4d11-45cd-a916-e4d73d226451' [20:13:07]W: [Step 10/10] I0805 20:13:07.679200 23430 slave.cpp:1840] Queuing task 'ecd0633f-2f1e-4cfa-819f-590bfb95fa12' for executor 'ecd0633f-2f1e-4cfa-819f-590bfb95fa12' of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:07]W: [Step 10/10] I0805 20:13:07.679219 23437 containerizer.cpp:786] Starting container 'f2c1fd6d-4d11-45cd-a916-e4d73d226451' for executor 'ecd0633f-2f1e-4cfa-819f-590bfb95fa12' of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:07]W: [Step 10/10] I0805 20:13:07.679234 23430 slave.cpp:848] Successfully attached file '/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/slaves/dd755a55-0dd1-4d2d-9a49-812a666015cb-S0/frameworks/dd755a55-0dd1-4d2d-9a49-812a666015cb-0000/executors/ecd0633f-2f1e-4cfa-819f-590bfb95fa12/runs/f2c1fd6d-4d11-45cd-a916-e4d73d226451' [20:13:07]W: [Step 10/10] I0805 20:13:07.679435 23430 metadata_manager.cpp:167] Looking for image 'mesosphere/inky' [20:13:07]W: [Step 10/10] I0805 20:13:07.679572 23430 registry_puller.cpp:236] Pulling image 'mesosphere/inky' from 'docker-manifest://registry-1.docker.io:443mesosphere/inky?latest#https' to '/tmp/OZHDIQ/store/staging/HbsybX' [20:13:07]W: [Step 10/10] I0805 20:13:07.680943 23436 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 2.14361ms [20:13:07]W: [Step 10/10] I0805 20:13:07.681073 23436 leveldb.cpp:399] Deleting ~2 keys from leveldb took 60273ns [20:13:07]W: [Step 10/10] I0805 20:13:07.681112 23436 replica.cpp:708] Persisted action TRUNCATE at position 4 [20:13:08]W: [Step 10/10] I0805 20:13:08.104004 23431 registry_puller.cpp:259] The manifest for image 'mesosphere/inky' is '{ [20:13:08]W: [Step 10/10] """"name"""": """"mesosphere/inky"""", [20:13:08]W: [Step 10/10] """"tag"""": """"latest"""", [20:13:08]W: [Step 10/10] """"architecture"""": """"amd64"""", [20:13:08]W: [Step 10/10] """"fsLayers"""": [ [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"blobSum"""": """"sha256:1db09adb5ddd7f1a07b6d585a7db747a51c7bd17418d47e91f901bdf420abd66"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] 
""""blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" [20:13:08]W: [Step 10/10] } [20:13:08]W: [Step 10/10] ], [20:13:08]W: [Step 10/10] """"history"""": [ [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": """"{\""""id\"""":\""""e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6\"""",\""""parent\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""created\"""":\""""2014-08-15T00:31:36.407713553Z\"""",\""""container\"""":\""""5d55401ff99c7508c9d546926b711c78e3ccb36e39a848024b623b2aef4c2c06\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) ENTRYPOINT [echo]\""""],\""""Image\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":[\""""echo\""""],\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""1.1.2\"""",\""""author\"""":\""""support@mesosphere.io\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""inky\""""],\""""Image\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":[\""""echo\""""],\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": """"{\""""id\"""":\""""e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6\"""",\""""parent\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""created\"""":\""""2014-08-15T00:31:36.407713553Z\"""",\""""container\"""":\""""5d55401ff99c7508c9d546926b711c78e3ccb36e39a848024b623b2aef4c2c06\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) ENTRYPOINT 
[echo]\""""],\""""Image\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":[\""""echo\""""],\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""1.1.2\"""",\""""author\"""":\""""support@mesosphere.io\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""inky\""""],\""""Image\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":[\""""echo\""""],\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": """"{\""""id\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""parent\"""":\""""53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f\"""",\""""created\"""":\""""2014-08-15T00:31:36.247988044Z\"""",\""""container\"""":\""""ff756d99367825677c3c18cc5054bfbb3674a7f52a9f916282fb46b8feaddfb7\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) CMD 
[inky]\""""],\""""Image\"""":\""""53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""1.1.2\"""",\""""author\"""":\""""support@mesosphere.io\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""inky\""""],\""""Image\"""":\""""53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": """"{\""""id\"""":\""""53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f\"""",\""""parent\"""":\""""a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721\"""",\""""created\"""":\""""2014-08-15T00:31:36.068514721Z\"""",\""""container\"""":\""""696c3d66c8575dfff3ba71267bf194ae97f0478231042449c98aa0d9164d3c8c\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) MAINTAINER 
support@mesosphere.io\""""],\""""Image\"""":\""""a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""1.1.2\"""",\""""author\"""":\""""support@mesosphere.io\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\""""],\""""Image\"""":\""""a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": """"{\""""id\"""":\""""a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721\"""",\""""parent\"""":\""""120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16\"""",\""""created\"""":\""""2014-06-05T00:05:35.990887725Z\"""",\""""container\"""":\""""bb3475b3130b6a47104549a0291a6569d24e41fa57a7f094591f0d4611fd15bc\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) CMD [/bin/sh]\""""],\""""Image\"""":\""""120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""0.10.0\"""",\""""author\"""":\""""Jrme Petazzoni 
\\u003cjerome@docker.com\\u003e\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\""""],\""""Image\"""":\""""120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": """"{\""""id\"""":\""""120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16\"""",\""""parent\"""":\""""42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229\"""",\""""created\"""":\""""2014-06-05T00:05:35.692528634Z\"""",\""""container\"""":\""""fc203791c4d5024b1a976223daa1cc7b1ceeb5b3abf25a2fb73034eba6398026\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) ADD file:88f36b32456f849299e5df807a1e3514cf1da798af9692a0004598e500be5901 in /\""""],\""""Image\"""":\""""42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""0.10.0\"""",\""""author\"""":\""""Jrme Petazzoni \\u003cjerome@docker.com\\u003e\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":null,\""""Image\"""":\""""42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":2433303}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": 
""""{\""""id\"""":\""""42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229\"""",\""""parent\"""":\""""511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158\"""",\""""created\"""":\""""2014-06-05T00:05:35.589531476Z\"""",\""""container\"""":\""""f7d939e68b5afdd74637d9204c40fe00295e658923be395c761da3278b98e446\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) MAINTAINER Jrme Petazzoni \\u003cjerome@docker.com\\u003e\""""],\""""Image\"""":\""""511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""0.10.0\"""",\""""author\"""":\""""Jrme Petazzoni \\u003cjerome@docker.com\\u003e\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":null,\""""Image\"""":\""""511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": """"{\""""id\"""":\""""511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158\"""",\""""comment\"""":\""""Imported from -\"""",\""""created\"""":\""""2013-06-13T14:03:50.821769-07:00\"""",\""""container_config\"""":{\""""Hostname\"""":\""""\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":null,\""""Cmd\"""":null,\""""Image\"""":\""""\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":null,\""""Labels\"""":null},\""""docker_version\"""":\""""0.4.0\"""",\""""architecture\"""":\""""x86_64\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] } [20:13:08]W: [Step 10/10] ], [20:13:08]W: [Step 10/10] """"schemaVersion"""": 1, [20:13:08]W: [Step 10/10] """"signatures"""": [ [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"header"""": { [20:13:08]W: [Step 10/10] """"jwk"""": { [20:13:08]W: [Step 10/10] """"crv"""": """"P-256"""", 
[20:13:08]W: [Step 10/10] """"kid"""": """"4AYN:KH32:GJJD:I6BX:SJAZ:A3EC:P7IC:7O7C:22ZQ:3Z5O:75VQ:3QOT"""", [20:13:08]W: [Step 10/10] """"kty"""": """"EC"""", [20:13:08]W: [Step 10/10] """"x"""": """"o8bvrUwNpXKZdgoo2wQ7EHQzCVYhVuoOvjqGEXtRylU"""", [20:13:08]W: [Step 10/10] """"y"""": """"DCHyGr0Cbi-fZzqypQm16qKfefUMqCTk0rQME-q5GmA"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] """"alg"""": """"ES256"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] """"signature"""": """"f3fAob4XPT0pUW9TiPtxAE_zPAe0PdM2imxAeaCmJbBf6Lb-SuFPVGE4iqz1CO0VOijeYVuB1G1lv_a5Nnj5zg"""", [20:13:08]W: [Step 10/10] """"protected"""": """"eyJmb3JtYXRMZW5ndGgiOjEzNzA3LCJmb3JtYXRUYWlsIjoiQ24wIiwidGltZSI6IjIwMTYtMDgtMDVUMjA6MTM6MDdaIn0"""" [20:13:08]W: [Step 10/10] } [20:13:08]W: [Step 10/10] ] [20:13:08]W: [Step 10/10] }' [20:13:08]W: [Step 10/10] I0805 20:13:08.104116 23431 registry_puller.cpp:369] Fetching blob 'sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4' for layer 'e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6' of image 'mesosphere/inky' [20:13:08]W: [Step 10/10] I0805 20:13:08.104130 23431 registry_puller.cpp:369] Fetching blob 'sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4' for layer 'e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6' of image 'mesosphere/inky' [20:13:08]W: [Step 10/10] I0805 20:13:08.104138 23431 registry_puller.cpp:369] Fetching blob 'sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4' for layer 'be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e' of image 'mesosphere/inky' [20:13:08]W: [Step 10/10] I0805 20:13:08.104146 23431 registry_puller.cpp:369] Fetching blob 'sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4' for layer '53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f' of image 'mesosphere/inky' [20:13:08]W: [Step 10/10] I0805 20:13:08.104151 23431 registry_puller.cpp:369] Fetching blob 'sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4' for layer 'a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721' of image 'mesosphere/inky' [20:13:08]W: [Step 10/10] I0805 20:13:08.104158 23431 registry_puller.cpp:369] Fetching blob 'sha256:1db09adb5ddd7f1a07b6d585a7db747a51c7bd17418d47e91f901bdf420abd66' for layer '120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16' of image 'mesosphere/inky' [20:13:08]W: [Step 10/10] I0805 20:13:08.104164 23431 registry_puller.cpp:369] Fetching blob 'sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4' for layer '42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229' of image 'mesosphere/inky' [20:13:08]W: [Step 10/10] I0805 20:13:08.104171 23431 registry_puller.cpp:369] Fetching blob 'sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4' for layer '511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158' of image 'mesosphere/inky' [20:13:08]W: [Step 10/10] I0805 20:13:08.504564 23436 registry_puller.cpp:306] Extracting layer tar ball '/tmp/OZHDIQ/store/staging/HbsybX/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 to rootfs '/tmp/OZHDIQ/store/staging/HbsybX/e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6/rootfs' [20:13:08]W: [Step 10/10] I0805 20:13:08.507129 23436 registry_puller.cpp:306] Extracting layer tar ball '/tmp/OZHDIQ/store/staging/HbsybX/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 to rootfs 
'/tmp/OZHDIQ/store/staging/HbsybX/e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6/rootfs' [20:13:08]W: [Step 10/10] I0805 20:13:08.508962 23436 registry_puller.cpp:306] Extracting layer tar ball '/tmp/OZHDIQ/store/staging/HbsybX/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 to rootfs '/tmp/OZHDIQ/store/staging/HbsybX/be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e/rootfs' [20:13:08]W: [Step 10/10] I0805 20:13:08.510915 23436 registry_puller.cpp:306] Extracting layer tar ball '/tmp/OZHDIQ/store/staging/HbsybX/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 to rootfs '/tmp/OZHDIQ/store/staging/HbsybX/53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f/rootfs' [20:13:08]W: [Step 10/10] I0805 20:13:08.512848 23436 registry_puller.cpp:306] Extracting layer tar ball '/tmp/OZHDIQ/store/staging/HbsybX/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 to rootfs '/tmp/OZHDIQ/store/staging/HbsybX/a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721/rootfs' [20:13:08]W: [Step 10/10] I0805 20:13:08.515400 23436 registry_puller.cpp:306] Extracting layer tar ball '/tmp/OZHDIQ/store/staging/HbsybX/sha256:1db09adb5ddd7f1a07b6d585a7db747a51c7bd17418d47e91f901bdf420abd66 to rootfs '/tmp/OZHDIQ/store/staging/HbsybX/120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16/rootfs' [20:13:08]W: [Step 10/10] I0805 20:13:08.517390 23436 registry_puller.cpp:306] Extracting layer tar ball '/tmp/OZHDIQ/store/staging/HbsybX/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 to rootfs '/tmp/OZHDIQ/store/staging/HbsybX/42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229/rootfs' [20:13:08]W: [Step 10/10] I0805 20:13:08.519486 23436 registry_puller.cpp:306] Extracting layer tar ball '/tmp/OZHDIQ/store/staging/HbsybX/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 to rootfs '/tmp/OZHDIQ/store/staging/HbsybX/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/rootfs' [20:13:08]W: [Step 10/10] I0805 20:13:08.606955 23434 metadata_manager.cpp:155] Successfully cached image 'mesosphere/inky' [20:13:08]W: [Step 10/10] I0805 20:13:08.607501 23436 provisioner.cpp:312] Provisioning image rootfs '/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/provisioner/containers/f2c1fd6d-4d11-45cd-a916-e4d73d226451/backends/aufs/rootfses/427b7851-bf82-4553-80f3-da2d42cede77' for container f2c1fd6d-4d11-45cd-a916-e4d73d226451 using the 'aufs' backend [20:13:08]W: [Step 10/10] I0805 20:13:08.607787 23434 aufs.cpp:152] Provisioning image rootfs with aufs: 
'dirs=/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/provisioner/containers/f2c1fd6d-4d11-45cd-a916-e4d73d226451/backends/aufs/scratch/427b7851-bf82-4553-80f3-da2d42cede77/workdir:/tmp/OZHDIQ/store/layers/e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6/rootfs:/tmp/OZHDIQ/store/layers/e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6/rootfs:/tmp/OZHDIQ/store/layers/be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e/rootfs:/tmp/OZHDIQ/store/layers/53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f/rootfs:/tmp/OZHDIQ/store/layers/a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721/rootfs:/tmp/OZHDIQ/store/layers/120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16/rootfs:/tmp/OZHDIQ/store/layers/42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229/rootfs:/tmp/OZHDIQ/store/layers/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/rootfs' [20:13:08]W: [Step 10/10] E0805 20:13:08.614994 23432 slave.cpp:4029] Container 'f2c1fd6d-4d11-45cd-a916-e4d73d226451' for executor 'ecd0633f-2f1e-4cfa-819f-590bfb95fa12' of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 failed to start: Failed to mount rootfs '/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/provisioner/containers/f2c1fd6d-4d11-45cd-a916-e4d73d226451/backends/aufs/rootfses/427b7851-bf82-4553-80f3-da2d42cede77' with aufs: Invalid argument [20:13:08]W: [Step 10/10] I0805 20:13:08.615058 23436 containerizer.cpp:1637] Destroying container 'f2c1fd6d-4d11-45cd-a916-e4d73d226451' [20:13:08]W: [Step 10/10] I0805 20:13:08.615072 23436 containerizer.cpp:1640] Waiting for the provisioner to complete for container 'f2c1fd6d-4d11-45cd-a916-e4d73d226451' [20:13:08]W: [Step 10/10] I0805 20:13:08.615279 23435 provisioner.cpp:455] Destroying container rootfs at '/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/provisioner/containers/f2c1fd6d-4d11-45cd-a916-e4d73d226451/backends/aufs/rootfses/427b7851-bf82-4553-80f3-da2d42cede77' for container f2c1fd6d-4d11-45cd-a916-e4d73d226451 [20:13:08]W: [Step 10/10] I0805 20:13:08.616097 23430 slave.cpp:4135] Executor 'ecd0633f-2f1e-4cfa-819f-590bfb95fa12' of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 has terminated with unknown status [20:13:08]W: [Step 10/10] I0805 20:13:08.616173 23430 slave.cpp:3264] Handling status update TASK_FAILED (UUID: 4a37d8ce-6c60-4f3e-97bd-ea9148be4ce9) for task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 from @0.0.0.0:0 [20:13:08]W: [Step 10/10] I0805 20:13:08.616320 23435 slave.cpp:6104] Terminating task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 [20:13:08]W: [Step 10/10] W0805 20:13:08.616402 23432 containerizer.cpp:1466] Ignoring update for unknown container: f2c1fd6d-4d11-45cd-a916-e4d73d226451 [20:13:08]W: [Step 10/10] I0805 20:13:08.616528 23433 status_update_manager.cpp:323] Received status update TASK_FAILED (UUID: 4a37d8ce-6c60-4f3e-97bd-ea9148be4ce9) for task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:08]W: [Step 10/10] I0805 20:13:08.616545 23433 status_update_manager.cpp:500] Creating StatusUpdate stream for task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:08]W: [Step 10/10] I0805 20:13:08.616750 23433 
status_update_manager.cpp:377] Forwarding update TASK_FAILED (UUID: 4a37d8ce-6c60-4f3e-97bd-ea9148be4ce9) for task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 to the agent [20:13:08]W: [Step 10/10] I0805 20:13:08.616827 23431 slave.cpp:3657] Forwarding the update TASK_FAILED (UUID: 4a37d8ce-6c60-4f3e-97bd-ea9148be4ce9) for task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 to master@172.30.2.138:44256 [20:13:08]W: [Step 10/10] I0805 20:13:08.616936 23431 slave.cpp:3551] Status update manager successfully handled status update TASK_FAILED (UUID: 4a37d8ce-6c60-4f3e-97bd-ea9148be4ce9) for task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:08]W: [Step 10/10] I0805 20:13:08.617010 23433 master.cpp:5141] Status update TASK_FAILED (UUID: 4a37d8ce-6c60-4f3e-97bd-ea9148be4ce9) for task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 from agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 at slave(506)@172.30.2.138:44256 (ip-172-30-2-138.mesosphere.io) [20:13:08]W: [Step 10/10] I0805 20:13:08.617032 23433 master.cpp:5203] Forwarding status update TASK_FAILED (UUID: 4a37d8ce-6c60-4f3e-97bd-ea9148be4ce9) for task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:08]W: [Step 10/10] I0805 20:13:08.617079 23433 master.cpp:6845] Updating the state of task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 (latest state: TASK_FAILED, status update state: TASK_FAILED) [20:13:08]W: [Step 10/10] I0805 20:13:08.617187 23435 sched.cpp:1025] Scheduler::statusUpdate took 57204ns [20:13:08] : [Step 10/10] ../../src/tests/containerizer/runtime_isolator_tests.cpp:309: Failure [20:13:08] : [Step 10/10] Value of: statusRunning->state() [20:13:08]W: [Step 10/10] I0805 20:13:08.617234 23436 hierarchical.cpp:924] Recovered cpus(*):1; mem(*):128 (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 from framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:08] : [Step 10/10] Actual: TASK_FAILED [20:13:08] : [Step 10/10] Expected: TASK_RUNNING [20:13:08]W: [Step 10/10] I0805 20:13:08.617281 23432 master.cpp:4266] Processing ACKNOWLEDGE call 4a37d8ce-6c60-4f3e-97bd-ea9148be4ce9 for task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 (default) at scheduler-893e3efc-6e25-48f3-a487-d2ef50ffd5ba@172.30.2.138:44256 on agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 [20:13:08]W: [Step 10/10] I0805 20:13:08.617311 23432 master.cpp:6911] Removing task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 with resources cpus(*):1; mem(*):128 of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 on agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 at slave(506)@172.30.2.138:44256 (ip-172-30-2-138.mesosphere.io) [20:13:08]W: [Step 10/10] I0805 20:13:08.617450 23430 status_update_manager.cpp:395] Received status update acknowledgement (UUID: 4a37d8ce-6c60-4f3e-97bd-ea9148be4ce9) for task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:08]W: [Step 10/10] I0805 20:13:08.617480 23430 status_update_manager.cpp:531] Cleaning up status update stream for task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:08]W: [Step 10/10] I0805 20:13:08.617545 23430 slave.cpp:2650] 
Status update manager successfully handled status update acknowledgement (UUID: 4a37d8ce-6c60-4f3e-97bd-ea9148be4ce9) for task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:08]W: [Step 10/10] I0805 20:13:08.617561 23430 slave.cpp:6145] Completing task ecd0633f-2f1e-4cfa-819f-590bfb95fa12 [20:13:08]W: [Step 10/10] I0805 20:13:08.617575 23430 slave.cpp:4246] Cleaning up executor 'ecd0633f-2f1e-4cfa-819f-590bfb95fa12' of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:08]W: [Step 10/10] I0805 20:13:08.617660 23435 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/slaves/dd755a55-0dd1-4d2d-9a49-812a666015cb-S0/frameworks/dd755a55-0dd1-4d2d-9a49-812a666015cb-0000/executors/ecd0633f-2f1e-4cfa-819f-590bfb95fa12/runs/f2c1fd6d-4d11-45cd-a916-e4d73d226451' for gc 6.99999285160889days in the future [20:13:08]W: [Step 10/10] I0805 20:13:08.617688 23430 slave.cpp:4334] Cleaning up framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:08]W: [Step 10/10] I0805 20:13:08.617708 23435 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/slaves/dd755a55-0dd1-4d2d-9a49-812a666015cb-S0/frameworks/dd755a55-0dd1-4d2d-9a49-812a666015cb-0000/executors/ecd0633f-2f1e-4cfa-819f-590bfb95fa12' for gc 6.9999928509363days in the future [20:13:08]W: [Step 10/10] I0805 20:13:08.617748 23434 status_update_manager.cpp:285] Closing status update streams for framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:08]W: [Step 10/10] I0805 20:13:08.617772 23434 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/slaves/dd755a55-0dd1-4d2d-9a49-812a666015cb-S0/frameworks/dd755a55-0dd1-4d2d-9a49-812a666015cb-0000' for gc 6.99999285021926days in the future [20:13:08]W: [Step 10/10] I0805 20:13:08.630481 23432 hierarchical.cpp:1643] No inverse offers to send out! [20:13:08]W: [Step 10/10] I0805 20:13:08.630504 23432 hierarchical.cpp:1192] Performed allocation for 1 agents in 155186ns [20:13:08]W: [Step 10/10] I0805 20:13:08.630609 23430 master.cpp:5729] Sending 1 offers to framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 (default) at scheduler-893e3efc-6e25-48f3-a487-d2ef50ffd5ba@172.30.2.138:44256 [20:13:08]W: [Step 10/10] I0805 20:13:08.630728 23430 sched.cpp:917] Scheduler::resourceOffers took 13371ns [20:13:09]W: [Step 10/10] I0805 20:13:09.631413 23437 hierarchical.cpp:1548] No allocations performed [20:13:09]W: [Step 10/10] I0805 20:13:09.631450 23437 hierarchical.cpp:1643] No inverse offers to send out! [20:13:09]W: [Step 10/10] I0805 20:13:09.631465 23437 hierarchical.cpp:1192] Performed allocation for 1 agents in 202676ns [20:13:10]W: [Step 10/10] I0805 20:13:10.631609 23435 hierarchical.cpp:1548] No allocations performed [20:13:10]W: [Step 10/10] I0805 20:13:10.631640 23435 hierarchical.cpp:1643] No inverse offers to send out! [20:13:10]W: [Step 10/10] I0805 20:13:10.631655 23435 hierarchical.cpp:1192] Performed allocation for 1 agents in 102058ns [20:13:11]W: [Step 10/10] I0805 20:13:11.632261 23431 hierarchical.cpp:1548] No allocations performed [20:13:11]W: [Step 10/10] I0805 20:13:11.632294 23431 hierarchical.cpp:1643] No inverse offers to send out! 
[20:13:11]W: [Step 10/10] I0805 20:13:11.632308 23431 hierarchical.cpp:1192] Performed allocation for 1 agents in 112653ns [20:13:12]W: [Step 10/10] I0805 20:13:12.632477 23433 hierarchical.cpp:1548] No allocations performed [20:13:12]W: [Step 10/10] I0805 20:13:12.632510 23433 hierarchical.cpp:1643] No inverse offers to send out! [20:13:12]W: [Step 10/10] I0805 20:13:12.632525 23433 hierarchical.cpp:1192] Performed allocation for 1 agents in 144467ns [20:13:13]W: [Step 10/10] I0805 20:13:13.633517 23430 hierarchical.cpp:1548] No allocations performed [20:13:13]W: [Step 10/10] I0805 20:13:13.633549 23430 hierarchical.cpp:1643] No inverse offers to send out! [20:13:13]W: [Step 10/10] I0805 20:13:13.633563 23430 hierarchical.cpp:1192] Performed allocation for 1 agents in 111395ns [20:13:14]W: [Step 10/10] I0805 20:13:14.633985 23436 hierarchical.cpp:1548] No allocations performed [20:13:14]W: [Step 10/10] I0805 20:13:14.634018 23436 hierarchical.cpp:1643] No inverse offers to send out! [20:13:14]W: [Step 10/10] I0805 20:13:14.634048 23436 hierarchical.cpp:1192] Performed allocation for 1 agents in 132707ns [20:13:15]W: [Step 10/10] I0805 20:13:15.634266 23430 hierarchical.cpp:1548] No allocations performed [20:13:15]W: [Step 10/10] I0805 20:13:15.634299 23430 hierarchical.cpp:1643] No inverse offers to send out! [20:13:15]W: [Step 10/10] I0805 20:13:15.634313 23430 hierarchical.cpp:1192] Performed allocation for 1 agents in 103933ns [20:13:16]W: [Step 10/10] I0805 20:13:16.635295 23431 hierarchical.cpp:1548] No allocations performed [20:13:16]W: [Step 10/10] I0805 20:13:16.635330 23431 hierarchical.cpp:1643] No inverse offers to send out! [20:13:16]W: [Step 10/10] I0805 20:13:16.635346 23431 hierarchical.cpp:1192] Performed allocation for 1 agents in 115517ns [20:13:17]W: [Step 10/10] I0805 20:13:17.635922 23436 hierarchical.cpp:1548] No allocations performed [20:13:17]W: [Step 10/10] I0805 20:13:17.635958 23436 hierarchical.cpp:1643] No inverse offers to send out! [20:13:17]W: [Step 10/10] I0805 20:13:17.635973 23436 hierarchical.cpp:1192] Performed allocation for 1 agents in 109700ns [20:13:18]W: [Step 10/10] I0805 20:13:18.636693 23437 hierarchical.cpp:1548] No allocations performed [20:13:18]W: [Step 10/10] I0805 20:13:18.636728 23437 hierarchical.cpp:1643] No inverse offers to send out! [20:13:18]W: [Step 10/10] I0805 20:13:18.636744 23437 hierarchical.cpp:1192] Performed allocation for 1 agents in 123133ns [20:13:19]W: [Step 10/10] I0805 20:13:19.637589 23432 hierarchical.cpp:1548] No allocations performed [20:13:19]W: [Step 10/10] I0805 20:13:19.637624 23432 hierarchical.cpp:1643] No inverse offers to send out! [20:13:19]W: [Step 10/10] I0805 20:13:19.637639 23432 hierarchical.cpp:1192] Performed allocation for 1 agents in 118581ns [20:13:20]W: [Step 10/10] I0805 20:13:20.638517 23431 hierarchical.cpp:1548] No allocations performed [20:13:20]W: [Step 10/10] I0805 20:13:20.638550 23431 hierarchical.cpp:1643] No inverse offers to send out! [20:13:20]W: [Step 10/10] I0805 20:13:20.638566 23431 hierarchical.cpp:1192] Performed allocation for 1 agents in 107979ns [20:13:21]W: [Step 10/10] I0805 20:13:21.639577 23435 hierarchical.cpp:1548] No allocations performed [20:13:21]W: [Step 10/10] I0805 20:13:21.639612 23435 hierarchical.cpp:1643] No inverse offers to send out! 
[20:13:21]W: [Step 10/10] I0805 20:13:21.639628 23435 hierarchical.cpp:1192] Performed allocation for 1 agents in 126299ns [20:13:22]W: [Step 10/10] I0805 20:13:22.640533 23430 hierarchical.cpp:1548] No allocations performed [20:13:22]W: [Step 10/10] I0805 20:13:22.640566 23430 hierarchical.cpp:1643] No inverse offers to send out! [20:13:22]W: [Step 10/10] I0805 20:13:22.640581 23430 hierarchical.cpp:1192] Performed allocation for 1 agents in 106384ns [20:13:22]W: [Step 10/10] I0805 20:13:22.667985 23437 slave.cpp:5044] Querying resource estimator for oversubscribable resources [20:13:22]W: [Step 10/10] I0805 20:13:22.668124 23434 slave.cpp:5058] Received oversubscribable resources from the resource estimator [20:13:22]W: [Step 10/10] I0805 20:13:22.674278 23433 slave.cpp:3739] Received ping from slave-observer(465)@172.30.2.138:44256 [20:13:23] : [Step 10/10] ../../src/tests/containerizer/runtime_isolator_tests.cpp:311: Failure [20:13:23] : [Step 10/10] Failed to wait 15secs for statusFinished [20:13:23] : [Step 10/10] ../../src/tests/containerizer/runtime_isolator_tests.cpp:301: Failure [20:13:23] : [Step 10/10] Actual function call count doesn't match EXPECT_CALL(sched, statusUpdate(&driver, _))... [20:13:23] : [Step 10/10] Expected: to be called twice [20:13:23] : [Step 10/10] Actual: called once - unsatisfied and active [20:13:23]W: [Step 10/10] I0805 20:13:23.618680 23433 master.cpp:1284] Framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 (default) at scheduler-893e3efc-6e25-48f3-a487-d2ef50ffd5ba@172.30.2.138:44256 disconnected [20:13:23]W: [Step 10/10] I0805 20:13:23.618721 23433 master.cpp:2726] Disconnecting framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 (default) at scheduler-893e3efc-6e25-48f3-a487-d2ef50ffd5ba@172.30.2.138:44256 [20:13:23]W: [Step 10/10] I0805 20:13:23.618737 23433 master.cpp:2750] Deactivating framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 (default) at scheduler-893e3efc-6e25-48f3-a487-d2ef50ffd5ba@172.30.2.138:44256 [20:13:23]W: [Step 10/10] I0805 20:13:23.618883 23434 hierarchical.cpp:382] Deactivated framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:23]W: [Step 10/10] W0805 20:13:23.618918 23433 master.hpp:2131] Master attempted to send message to disconnected framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 (default) at scheduler-893e3efc-6e25-48f3-a487-d2ef50ffd5ba@172.30.2.138:44256 [20:13:23]W: [Step 10/10] I0805 20:13:23.618963 23433 master.cpp:1297] Giving framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 (default) at scheduler-893e3efc-6e25-48f3-a487-d2ef50ffd5ba@172.30.2.138:44256 0ns to failover [20:13:23]W: [Step 10/10] I0805 20:13:23.619046 23434 hierarchical.cpp:924] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 from framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:23]W: [Step 10/10] I0805 20:13:23.619258 23416 slave.cpp:767] Agent terminating [20:13:23]W: [Step 10/10] I0805 20:13:23.619321 23432 master.cpp:1245] Agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 at slave(506)@172.30.2.138:44256 (ip-172-30-2-138.mesosphere.io) disconnected [20:13:23]W: [Step 10/10] I0805 20:13:23.619336 23432 master.cpp:2785] Disconnecting agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 at slave(506)@172.30.2.138:44256 (ip-172-30-2-138.mesosphere.io) [20:13:23]W: [Step 10/10] I0805 20:13:23.619371 23432 master.cpp:2804] Deactivating agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 
at slave(506)@172.30.2.138:44256 (ip-172-30-2-138.mesosphere.io) [20:13:23]W: [Step 10/10] I0805 20:13:23.619431 23432 hierarchical.cpp:571] Agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 deactivated [20:13:23]W: [Step 10/10] I0805 20:13:23.620216 23435 master.cpp:5581] Framework failover timeout, removing framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 (default) at scheduler-893e3efc-6e25-48f3-a487-d2ef50ffd5ba@172.30.2.138:44256 [20:13:23]W: [Step 10/10] I0805 20:13:23.620232 23435 master.cpp:6316] Removing framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 (default) at scheduler-893e3efc-6e25-48f3-a487-d2ef50ffd5ba@172.30.2.138:44256 [20:13:23]W: [Step 10/10] I0805 20:13:23.620357 23433 hierarchical.cpp:333] Removed framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 [20:13:23]W: [Step 10/10] I0805 20:13:23.621464 23416 master.cpp:1092] Master terminating [20:13:23]W: [Step 10/10] I0805 20:13:23.621561 23433 hierarchical.cpp:510] Removed agent dd755a55-0dd1-4d2d-9a49-812a666015cb-S0 [20:13:23] : [Step 10/10] [ FAILED ] DockerRuntimeIsolatorTest.ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller (16012 ms) ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6013","08/09/2016 18:51:00",3,"Use readdir instead of readdir_r. ""{{readdir_r}} is deprecated in recent versions of glibc (https://sourceware.org/ml/libc-alpha/2016-02/msg00093.html). As a result, Mesos doesn't build on recent Arch Linux: Seems like {{readdir_r}} is deprecated; manpage suggests using {{readdir}} instead."""," /bin/sh ../libtool --tag=CXX --mode=compile ccache g++ -DPACKAGE_NAME=\""""mesos\"""" -DPACKAGE_TARNAME=\""""mesos\"""" -DPACKAGE_VERSION=\""""1.1.0\"""" -DPACKAGE_STRING=\""""mesos\ 1.1.0\"""" -DPACKAGE_BUGREPORT=\""""\"""" -DPACKAGE_URL=\""""\"""" -DPACKAGE=\""""mesos\"""" -DVERSION=\""""1.1.0\"""" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\"""".libs/\"""" -DHAVE_CXX11=1 -DHAVE_PTHREAD_PRIO_INHERIT=1 -DHAVE_PTHREAD=1 -DHAVE_LIBZ=1 -DHAVE_FTS_H=1 -DHAVE_APR_POOLS_H=1 -DHAVE_LIBAPR_1=1 -DHAVE_LIBCURL=1 -DMESOS_HAS_JAVA=1 -DHAVE_LIBSASL2=1 -DHAVE_SVN_VERSION_H=1 -DHAVE_LIBSVN_SUBR_1=1 -DHAVE_SVN_DELTA_H=1 -DHAVE_LIBSVN_DELTA_1=1 -DHAVE_LIBZ=1 -I. 
-I../../mesos/src -Wall -Werror -Wsign-compare -DLIBDIR=\""""/usr/local/lib\"""" -DPKGLIBEXECDIR=\""""/usr/local/libexec/mesos\"""" -DPKGDATADIR=\""""/usr/local/share/mesos\"""" -DPKGMODULEDIR=\""""/usr/local/lib/mesos/modules\"""" -I../../mesos/include -I../include -I../include/mesos -DPICOJSON_USE_INT64 -D__STDC_FORMAT_MACROS -isystem ../3rdparty/boost-1.53.0 -I../3rdparty/elfio-3.1 -I../3rdparty/glog-0.3.3/src -I../3rdparty/leveldb-1.4/include -I../../mesos/3rdparty/libprocess/include -I../3rdparty/nvml-352.79 -I../3rdparty/picojson-1.3.0 -I../3rdparty/protobuf-2.6.1/src -I../../mesos/3rdparty/stout/include -I../3rdparty/zookeeper-3.4.8/src/c/include -I../3rdparty/zookeeper-3.4.8/src/c/generated -DHAS_AUTHENTICATION=1 -I/usr/include/subversion-1 -I/usr/include/apr-1 -I/usr/include/apr-1.0 -pthread -g1 -O0 -Wno-unused-local-typedefs -std=c++11 -MT appc/libmesos_no_3rdparty_la-spec.lo -MD -MP -MF appc/.deps/libmesos_no_3rdparty_la-spec.Tpo -c -o appc/libmesos_no_3rdparty_la-spec.lo `test -f 'appc/spec.cpp' || echo '../../mesos/src/'`appc/spec.cpp libtool: compile: ccache g++ -DPACKAGE_NAME=\""""mesos\"""" -DPACKAGE_TARNAME=\""""mesos\"""" -DPACKAGE_VERSION=\""""1.1.0\"""" """"-DPACKAGE_STRING=\""""mesos 1.1.0\"""""""" -DPACKAGE_BUGREPORT=\""""\"""" -DPACKAGE_URL=\""""\"""" -DPACKAGE=\""""mesos\"""" -DVERSION=\""""1.1.0\"""" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\"""".libs/\"""" -DHAVE_CXX11=1 -DHAVE_PTHREAD_PRIO_INHERIT=1 -DHAVE_PTHREAD=1 -DHAVE_LIBZ=1 -DHAVE_FTS_H=1 -DHAVE_APR_POOLS_H=1 -DHAVE_LIBAPR_1=1 -DHAVE_LIBCURL=1 -DMESOS_HAS_JAVA=1 -DHAVE_LIBSASL2=1 -DHAVE_SVN_VERSION_H=1 -DHAVE_LIBSVN_SUBR_1=1 -DHAVE_SVN_DELTA_H=1 -DHAVE_LIBSVN_DELTA_1=1 -DHAVE_LIBZ=1 -I. 
-I../../mesos/src -Wall -Werror -Wsign-compare -DLIBDIR=\""""/usr/local/lib\"""" -DPKGLIBEXECDIR=\""""/usr/local/libexec/mesos\"""" -DPKGDATADIR=\""""/usr/local/share/mesos\"""" -DPKGMODULEDIR=\""""/usr/local/lib/mesos/modules\"""" -I../../mesos/include -I../include -I../include/mesos -DPICOJSON_USE_INT64 -D__STDC_FORMAT_MACROS -isystem ../3rdparty/boost-1.53.0 -I../3rdparty/elfio-3.1 -I../3rdparty/glog-0.3.3/src -I../3rdparty/leveldb-1.4/include -I../../mesos/3rdparty/libprocess/include -I../3rdparty/nvml-352.79 -I../3rdparty/picojson-1.3.0 -I../3rdparty/protobuf-2.6.1/src -I../../mesos/3rdparty/stout/include -I../3rdparty/zookeeper-3.4.8/src/c/include -I../3rdparty/zookeeper-3.4.8/src/c/generated -DHAS_AUTHENTICATION=1 -I/usr/include/subversion-1 -I/usr/include/apr-1 -I/usr/include/apr-1.0 -pthread -g1 -O0 -Wno-unused-local-typedefs -std=c++11 -MT appc/libmesos_no_3rdparty_la-spec.lo -MD -MP -MF appc/.deps/libmesos_no_3rdparty_la-spec.Tpo -c ../../mesos/src/appc/spec.cpp -fPIC -DPIC -o appc/.libs/libmesos_no_3rdparty_la-spec.o In file included from ../../mesos/3rdparty/stout/include/stout/os.hpp:52:0, from ../../mesos/src/appc/spec.cpp:17: ../../mesos/3rdparty/stout/include/stout/os/ls.hpp: In function ‘Try > > os::ls(const string&)’: ../../mesos/3rdparty/stout/include/stout/os/ls.hpp:56:19: error: ‘int readdir_r(DIR*, dirent*, dirent**)’ is deprecated [-Werror=deprecated-declarations] while ((error = readdir_r(dir, temp, &entry)) == 0 && entry != nullptr) { ^~~~~~~~~ In file included from ../../mesos/3rdparty/stout/include/stout/os/ls.hpp:19:0, from ../../mesos/3rdparty/stout/include/stout/os.hpp:52, from ../../mesos/src/appc/spec.cpp:17: /usr/include/dirent.h:183:12: note: declared here extern int readdir_r (DIR *__restrict __dirp, ^~~~~~~~~ In file included from ../../mesos/3rdparty/stout/include/stout/os.hpp:52:0, from ../../mesos/src/appc/spec.cpp:17: ../../mesos/3rdparty/stout/include/stout/os/ls.hpp:56:46: error: ‘int readdir_r(DIR*, dirent*, dirent**)’ is deprecated [-Werror=deprecated-declarations] while ((error = readdir_r(dir, temp, &entry)) == 0 && entry != nullptr) { ^ In file included from ../../mesos/3rdparty/stout/include/stout/os/ls.hpp:19:0, from ../../mesos/3rdparty/stout/include/stout/os.hpp:52, from ../../mesos/src/appc/spec.cpp:17: /usr/include/dirent.h:183:12: note: declared here extern int readdir_r (DIR *__restrict __dirp, ^~~~~~~~~ cc1plus: all warnings being treated as errors ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6014","08/09/2016 22:28:44",5,"Added port mapping CNI plugin. ""Currently there is no CNI plugin that supports port mapping. Given that the unified containerizer is starting to become the de-facto container run time, having a CNI plugin that provides port mapping is a must have. This is primarily required for support BRIDGE networking mode, similar to docker bridge networking that users expect to have when using docker containers. While the most obvious use case is that of using the port-mapper plugin with the bridge plugin, the port-mapping functionality itself is generic and should be usable with any CNI plugin that needs it. 
Keeping port-mapping as a CNI plugin gives operators the ability to use the default port-mapper (CNI plugin) that Mesos provides, or use their own plugin.""","",1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6015","08/09/2016 22:33:58",1,"Design for port-mapper CNI plugin ""Create a design doc for port-mapper CNI plugin.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6017","08/10/2016 00:08:51",1,"Introduce `PortMapping` protobuf. ""Currently we have a `PortMapping` message defined for `DockerInfo`. This can be used only by the `DockerContainerizer`. We need to introduce a new Protobuf message in `NetworkInfo` which will allow frameworks to specify port mapping when using CNI with the `MesosContainerizer`.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6022","08/10/2016 19:31:20",3,"unit-test for port-mapper CNI plugin ""Write unit-tests for the port mapper plugin.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6023","08/10/2016 19:39:23",8,"Create a binary for the port-mapper plugin ""The CNI port mapper plugin needs to be a separate binary that will be invoked by the `network/cni` isolator as a CNI plugin.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6052","08/17/2016 18:03:49",1,"Unable to launch containers on CNI networks on CoreOS ""CoreOS does not have an `/etc/hosts`. Currently, in the `network/cni` isolator, if we don't see a `/etc/hosts` on the host filesystem we don't bind mount the containers `hosts` file to this target for the `command executor`. On distros such as CoreOS this fails the container launch since the `libprocess` initialization of the `command executor` fails cause it can't resolve its `hostname`. We should be creating the `/etc/hosts` and `/etc/hostname` files when they are absent on the host filesystem since creating these files should not affect name resolution on the host network namespace, and it will allow the `/etc/hosts` file to be bind mounted correctly and allow name resolution in the containers network namespace as well. ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6065","08/22/2016 21:14:13",5,"Support provisioning image volumes in an isolator. ""Currently the image volumes are provisioned in mesos containerizer. This makes the containerzer logic complicated, and hard to make containerizer launch to be nest aware. We should implement a 'volume/image' isolator to move these part of logic away from the mesos containerizer.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6067","08/22/2016 22:58:11",8,"Support provisioner to be nested aware for Mesos Pods. ""The provisioner has to be nested aware for sub-container provisioning, as well as recovery and nested container destroy. Better to support multi-level hierarchy. ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6104","08/30/2016 01:51:07",3,"Potential FD double close in libevent's implementation of `sendfile`. 
""Repro copied from: https://reviews.apache.org/r/51509/ It is possible to make the master CHECK fail by repeatedly hitting the web UI and reloading the static assets: 1) Paste lots of text (16KB or more) of text into `src/webui/master/static/home.html`. The more text, the more reliable the repro. 2) Start the master with SSL enabled: 3) Run two instances of this python script repeatedly: i.e. """," LIBPROCESS_SSL_ENABLED=true LIBPROCESS_SSL_KEY_FILE=key.pem LIBPROCESS_SSL_CERT_FILE=cert.pem bin/mesos-master.sh --work_dir=/tmp/master import socket import ssl s = ssl.wrap_socket(socket.socket()) s.connect((""""localhost"""", 5050)) s.sendall(""""""""""""GET /static/home.html HTTP/1.1 User-Agent: foobar Host: localhost:5050 Accept: */* Connection: Keep-Alive """""""""""") # The HTTP part of the response print s.recv(1000) while python test.py; do :; done & while python test.py; do :; done ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6110","08/30/2016 23:17:41",3,"Deprecate using health checks without setting the type ""When sending a task launch using the 1.0.x protos and the legacy (non-http) API, tasks with a healthcheck defined are rejected (TASK_ERROR) because the 'type' field is not set. This field is marked optional in the proto and is not available before 1.1.0, so it should not be required in order to keep the mesos v1 api compatibility promise. For backwards compatibility temporarily allow the use case when command health check is set without a type.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6115","09/01/2016 08:32:34",3,"Source tree contains compiled protobuf source ""Stout's {{protobuf_tests.cpp}} uses checked in, generated protobuf files {{protobuf_tests.pb.h}} and {{protobuf_tests.pb.cc}}. These files are * not meant to be edited, * might require updates whenever protobuf is updated, and * likely do not follow Mesos coding standards. We should try to remove them from the source tree.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6130","09/06/2016 23:07:12",3,"Make the disk usage isolator nesting-aware ""With the addition of task groups, the disk usage isolator must be updated. Since sub-container sandboxes are nested within the parent container's sandbox, the isolator must exclude these folders from its usage calculation when examining the parent container's disk usage.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6140","09/08/2016 10:17:34",5,"Add a parallel test runner ""In order to allow parallelization of the test execution we should add a parallel test executor to Mesos, and subsequently activate it in the build setup.""","",0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6142","09/08/2016 18:00:07",3,"Frameworks may RESERVE for an arbitrary role. ""The master does not validate that resources from a reservation request have the same role the framework is registered with. As a result, frameworks may reserve resources for arbitrary roles. 
I've modified the role in [the {{ReserveThenUnreserve}} test|https://github.com/apache/mesos/blob/bca600cf5602ed8227d91af9f73d689da14ad786/src/tests/reservation_tests.cpp#L117] to """"yoyo"""" and observed the following in the test's log: """," I0908 18:35:43.379122 2138112 master.cpp:3362] Processing ACCEPT call for offers: [ dfaf67e6-7c1c-4988-b427-c49842cb7bb7-O0 ] on agent dfaf67e6-7c1c-4988-b427-c49842cb7bb7-S0 at slave(1)@10.200.181.237:60116 (alexr.railnet.train) for framework dfaf67e6-7c1c-4988-b427-c49842cb7bb7-0000 (default) at scheduler-ca12a660-9f08-49de-be4e-d452aa3aa6da@10.200.181.237:60116 I0908 18:35:43.379170 2138112 master.cpp:3022] Authorizing principal 'test-principal' to reserve resources 'cpus(yoyo, test-principal):1; mem(yoyo, test-principal):512' I0908 18:35:43.379678 2138112 master.cpp:3642] Applying RESERVE operation for resources cpus(yoyo, test-principal):1; mem(yoyo, test-principal):512 from framework dfaf67e6-7c1c-4988-b427-c49842cb7bb7-0000 (default) at scheduler-ca12a660-9f08-49de-be4e-d452aa3aa6da@10.200.181.237:60116 to agent dfaf67e6-7c1c-4988-b427-c49842cb7bb7-S0 at slave(1)@10.200.181.237:60116 (alexr.railnet.train) I0908 18:35:43.379767 2138112 master.cpp:7341] Sending checkpointed resources cpus(yoyo, test-principal):1; mem(yoyo, test-principal):512 to agent dfaf67e6-7c1c-4988-b427-c49842cb7bb7-S0 at slave(1)@10.200.181.237:60116 (alexr.railnet.train) I0908 18:35:43.380273 3211264 slave.cpp:2497] Updated checkpointed resources from to cpus(yoyo, test-principal):1; mem(yoyo, test-principal):512 I0908 18:35:43.380574 2674688 hierarchical.cpp:760] Updated allocation of framework dfaf67e6-7c1c-4988-b427-c49842cb7bb7-0000 on agent dfaf67e6-7c1c-4988-b427-c49842cb7bb7-S0 from cpus(*):1; mem(*):512; disk(*):470841; ports(*):[31000-32000] to ports(*):[31000-32000]; cpus(yoyo, test-principal):1; disk(*):470841; mem(yoyo, test-principal):512 with RESERVE operation ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6150","09/12/2016 18:50:13",5,"Introduce the new isolator recover interface for nested container support. ""Currently, the isolator::recover include two parameters: 1. The list of ContainerState, which are the checkpointed conttainers. 2. The hashset of orphans, which are returned from the launcher::recover. However, to support nested containers in Mesos Pod, this interface is not sufficient. Because unknown nested containers may exist under either the top level alive container or orphan container. We have to include a full list of unknown containers which includes containers from all hierarchy. We could have added a 3rd parameter to the isolator::recover interface, to guarantee the backward compatibility. However, considering the potential interface changes in the future work and the old orphan hashset should be deprecated, it is the right time to introduce a new protobuf message `ContainerRecoverInfo` for isolator::recover(), which wraps all information for isolators to recover containers.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6156","09/14/2016 00:11:15",3,"Make the `network/cni` isolator nesting aware ""In pods, child containers share the network and UTS namespace with the parent containers. 
This implies that during `prepare` and `isolate` the `network/cni` isolator needs to be aware the parent-child relationship between containers to make the following decisions: a) During `prepare` a container should be allocated a new network namespace and UTS namespace only if the container is a top level container. b) During `isolate` the network files (/etc/hosts, /etc/hostname, /etc/resolv.conf) should be created only for top level containers. The network files for child containers will just be symlinks to the parent containers network files.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6159","09/15/2016 00:06:32",1,"Remove stout's Set type ""stout provides a {{Set}} type which wraps a {{std::set}}. As only addition it provides new constructors, which simplified creation of a {{Set}} from (up to four) known elements. C++11 brought {{std::initializer_list}} which can be used to create a {{std::set}} from an arbitrary number of elements, so it appears that it should be possible to retire {{Set}}."""," Set(const T& t1); Set(const T& t1, const T& t2); Set(const T& t1, const T& t2, const T& t3); Set(const T& t1, const T& t2, const T& t3, const T& t4); ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6162","09/15/2016 03:13:15",8,"Add support for cgroups blkio subsystem blkio statistics. ""Noted that cgroups blkio subsystem may have performance issue, refer to https://github.com/opencontainers/runc/issues/861""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6208","09/19/2016 15:52:47",1,"Containers that use the Mesos containerizer but don't want to provision a container image fail to validate. ""Tasks using features like volumes or CNI in their containers, have to define these in {{TaskInfo.container}}. When these tasks don't want/need to provision a container image, neither {{ContainerInfo.docker}} nor {{ContainerInfo.mesos}} will be set. Nevertheless, the container type in {{ContainerInfo.type}} needs to be set, because it is a required field. In that case, the recently introduced validation rules in {{master/validation.cpp}} ({{validateContainerInfo}} will fail, which isn't expected.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6216","09/21/2016 09:30:02",5,"LibeventSSLSocketImpl::create is not safe to call concurrently with os::getenv ""{{LibeventSSLSocketImpl::create}} is called whenever a potentially ssl-enabled socket is created. It in turn calls {{openssl::initialize}} which calls a function {{reinitialize}} using {{os::setenv}}. Here {{os::setenv}} is used to set up SSL-related libprocess environment variables {{LIBPROCESS_SSL_*}}. Since {{os::setenv}} is not thread-safe just like the {{::setenv}} it wraps, any calling of functions like {{os::getenv}} (or via {{os::environment}}) concurrently with the first invocation of {{LibeventSSLSocketImpl::create}} performs unsynchronized r/w access to the same data structure in the runtime. We usually perform most setup of the environment before we start the libprocess runtime with {{process::initialize}} from a {{main}} function, see e.g., {{src/slave/main.cpp}} or {{src/master/main.cpp}} and others. 
It appears that we should move the setup of libprocess' SSL environment variables to a similar spot.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6233","09/22/2016 22:16:55",2,"Master CHECK fails during recovery while relinking to other masters ""Mesos Version: 1.0.1 OS: CoreOS 1068 """," Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: I0922 20:05:17.948004 104495 manager.cpp:795] overlay-master in `RECOVERING` state . Hence, not sending an update to agentoverlay-agent@10.4.4.1:5051 Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: F0922 20:05:17.948120 104529 process.cpp:2243] Check failed: sockets.count(from_fd) > 0 Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: *** Check failure stack trace: *** Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x7fc1908829fd google::LogMessage::Fail() Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x7fc19088482d google::LogMessage::SendToLog() Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x7fc1908825ec google::LogMessage::Flush() Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x7fc190885129 google::LogMessageFatal::~LogMessageFatal() Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x7fc1908171dd process::SocketManager::swap_implementing_socket() Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x7fc19081aa90 process::SocketManager::link_connect() Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x7fc1908227f9 _ZNSt17_Function_handlerIFvRKN7process6FutureI7NothingEEEZNKS3_5onAnyISt5_BindIFSt7_Mem_fnIMNS0_13SocketManagerEFvS5_NS0_7network6SocketERKNS0_4UPIDEEEPSA_St12_PlaceholderILi1EESC_SD_EEvEES5_OT_NS3_6PreferEEUlS5_E_E9_M_invokeERKSt9_Any_dataS5_ Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x41eb26 _ZN7process8internal3runISt8functionIFvRKNS_6FutureI7NothingEEEEJRS5_EEEvRKSt6vectorIT_SaISC_EEDpOT0_ Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x42a36f process::Future<>::fail() Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x7fc19085283c process::network::LibeventSSLSocketImpl::event_callback() Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x7fc190852f17 process::network::LibeventSSLSocketImpl::event_callback() Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x7fc18d616631 bufferevent_run_deferred_callbacks_locked Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x7fc18d60cc5d event_base_loop Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x7fc190865a1d process::EventLoop::run() Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x7fc18eeabd73 (unknown) Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x7fc18e6a852c (unknown) Sep 22 20:05:17 node-44a84215535c mesos-master[104478]: @ 0x7fc18e3e61dd (unknown) Sep 22 20:05:18 node-44a84215535c systemd[1]: [0;1;39mdcos-mesos-master.service: Main process exited, code=killed, status=6/ABRT ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6234","09/23/2016 00:35:24",3,"Potential socket leak during Zookeeper network changes ""There is a potential leak when using the version of {{link}} with {{RemoteConnection::RECONNECT}}. This was originally implemented to refresh links during master recovery. 
The leak occurs here: https://github.com/apache/mesos/blob/5e23edd513caec51ce3e94b3d785d714052525e8/3rdparty/libprocess/src/process.cpp#L1592-L1597 ^ The comment here is not correct, as that is *not* the last reference to the {{existing}} socket. At this point, the {{existing}} socket may be a perfectly valid link. Valid links will all have a reference inside a callback loop created here: https://github.com/apache/mesos/blob/5e23edd513caec51ce3e94b3d785d714052525e8/3rdparty/libprocess/src/process.cpp#L1503-L1509 ----- We need to stop the callback loop but prevent any resulting {{ExitedEvents}} from being sent due to stopping the callback loop. This means discarding the callback loop's future after we have called {{swap_implementing_socket}}.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6246","09/24/2016 01:11:48",2,"Libprocess links will not generate an ExitedEvent if the socket creation fails ""Noticed this while inspecting nearby code for potential races. Normally, when a libprocess actor (the """"linkee"""") links to a remote process, it does the following: 1) Create a socket. 2) Connect to the remote process (asynchronous). 3) Check the connection succeeded. If (2) or (3) fail, the linkee will receive a {{ExitedEvent}}, which indicates that the link broke. In case (1) fails, there is no {{ExitedEvent}}: https://github.com/apache/mesos/blob/7c833abbec9c9e4eb51d67f7a8e7a8d0870825f8/3rdparty/libprocess/src/process.cpp#L1558-L1562""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6263","09/28/2016 02:12:11",3,"Mesos containerizer should figure out the correct sandbox directory for nested launch. ""Currently the mesos containerizer take the sandbox directory from the agent. Ideally, a nested sandbox dir can be figured out by the containerizer. And there is no need to pass it from the agent. We should remove the `directory` parameter in nested launch interface.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6280","09/29/2016 17:46:01",5,"Task group executor should support command health checks. ""Currently, the default (aka pod) executor supports only HTTP and TCP health checks. We should also support command health checks as well.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-6290","09/30/2016 02:08:09",2,"Support nested containers for logger in Mesos Containerizer. ""Currently, there are two issues in mesos containerizer using logger for nested contaienrs: 1. An empty executorinfo is passed to logger when launching a nested container, it would potentially break some logger modules if any module tries to access the required proto field (e.g., executorId). 2. The logger does not reocver the nested containers yet in MesosContainerizer::recover.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6302","10/01/2016 02:51:35",3,"Agent recovery can fail after nested containers are launched ""After launching a nested container which used a Docker image, I restarted the agent which ran that task group and saw the following in the agent logs during recovery: and the agent continues to restart in this fashion. 
Attached is the Marathon app definition that I used to launch the task group."""," Oct 01 01:45:10 ip-10-0-3-133.us-west-2.compute.internal mesos-agent[4629]: I1001 01:45:10.813596 4640 status_update_manager.cpp:203] Recovering status update manager Oct 01 01:45:10 ip-10-0-3-133.us-west-2.compute.internal mesos-agent[4629]: I1001 01:45:10.813622 4640 status_update_manager.cpp:211] Recovering executor 'instance-testvolume.02c26bce-8778-11e6-9ff3-7a3cd7c1568e' of framework 118ca38d-daee-4b2d-b584-b5581738a3dd-0000 Oct 01 01:45:10 ip-10-0-3-133.us-west-2.compute.internal mesos-agent[4629]: I1001 01:45:10.814249 4639 docker.cpp:745] Recovering Docker containers Oct 01 01:45:10 ip-10-0-3-133.us-west-2.compute.internal mesos-agent[4629]: I1001 01:45:10.815294 4642 containerizer.cpp:581] Recovering containerizer Oct 01 01:45:10 ip-10-0-3-133.us-west-2.compute.internal mesos-agent[4629]: Failed to perform recovery: Collect failed: Unable to list rootfses belonged to container a7d576da-fd0f-4dc1-bd5a-6d0a93ac8a53: Unable to list the container directory: Failed to opendir '/var/lib/mesos/slave/provisioner/containers/a7d576da-fd0f-4dc1-bd5a-6d0a93ac8a53/backends': No such file or directory Oct 01 01:45:10 ip-10-0-3-133.us-west-2.compute.internal mesos-agent[4629]: To remedy this do as follows: Oct 01 01:45:10 ip-10-0-3-133.us-west-2.compute.internal mesos-agent[4629]: Step 1: rm -f /var/lib/mesos/slave/meta/slaves/latest Oct 01 01:45:10 ip-10-0-3-133.us-west-2.compute.internal mesos-agent[4629]: This ensures agent doesn't recover old live executors. Oct 01 01:45:10 ip-10-0-3-133.us-west-2.compute.internal mesos-agent[4629]: Step 2: Restart the agent. ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6304","10/03/2016 19:58:44",2,"Add authentication support to the default executor ""The V1 executors should be updated to authenticate with the agent when HTTP executor authentication is enabled. This will be hard-coded into the executor library for the MVP, and it can be refactored into an {{HttpAuthenticatee}} module later. The executor must: * load a JWT from its environment, if present * decorate its requests with an {{Authorization}} header containing the JWT""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-6305","10/03/2016 20:03:00",3,"Add authorization support for nested container calls ""We need to authorize {LAUNCH, KILL, WAIT}_NESTED_CONTAINER API calls.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6324","10/07/2016 02:07:29",1,"CNI should not use `ifconfig` in executors `pre_exec_command` ""Currently the `network/cni` isolator sets up the `pre_exec_command` for executors when a container needs to be launched on a non-host network. The `pre_exec_command` is `ifconfig lo up`. This is done to primarily bring loopback up in the new network namespace. Setting up the `pre_exec_command` to bring loopback up is problematic since the executors PATH variable is generally very limited (doesn't contain all path that the agents PATH variable has due to security concerns). 
Therefore instead of running `ifconfig lo up` in the `pre_exec_command` we should run it in `NetworkCniIsolatorSetup` subcommand, which runs with the same PATH variable as the agent.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6344","10/08/2016 05:30:12",1,"Allow `network/cni` isolator to take a search path for CNI plugins instead of single directory ""Currently the `network/cni` isolator expects a single directory with the `--network_cni_plugins_dir` . This is very limiting because this forces the operator to put all the CNI plugins in the same directory. With Mesos port-mapper CNI plugin this would also imply that the operator would have to move this plugin from the Mesos installation directory to a directory specified in the `--network_cni_plugins_dir`. To simplify the operators experience it would make sense for the `--network_cni_plugins_dir` flag to take in set of directories instead of single directory. The `network/cni` isolator can then search this set of directories to find the CNI plugin.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6369","10/11/2016 23:38:13",1,"Add a column for FrameworkID when displaying tasks in the WebUI ""The Mesos Web UI home page shows a list of active/completed/orphan tasks tasks like this: || ID || Name || State || Started || Host || || | 1 | My ambiguously named task | RUNNING | 1 minute ago | 10.10.0.1 | Sandbox | | 1 | My ambiguously named task | RUNNING | 1 minute ago | 10.10.0.1 | Sandbox | | 2 | My ambiguously named task | RUNNING | 1 minute ago | 10.10.0.1 | Sandbox | When you start multiple frameworks, the task IDs and names show in the UI may be ambiguous, requiring extra clicks/investigation to disambiguate. In the above case, to disambiguate between the two tasks with ID {{1}}, the user would need to navigate to each sandbox and check the associated frameworkID in the {{/browse}} view. We could add a column showing the {{FrameworkID}} next to each task: || Framework || ID || Name || State || Started || Host || || | 179b5436-30ec-45e9-b324-fa5c5a1dd756-0000 | 1 | My ambiguously named task | RUNNING | 1 minute ago | 10.10.0.1 | Sandbox | | 179b5436-30ec-45e9-b324-fa5c5a1dd756-0001 | 1 | My ambiguously named task | RUNNING | 1 minute ago | 10.10.0.1 | Sandbox | | 179b5436-30ec-45e9-b324-fa5c5a1dd756-0001 | 2 | My ambiguously named task | RUNNING | 1 minute ago | 10.10.0.1 | Sandbox | The {{FrameworkID}} s could be links to the associated framework ----- This involves additions to three tables: https://github.com/apache/mesos/blob/1.0.x/src/webui/master/static/home.html#L152-L157 https://github.com/apache/mesos/blob/1.0.x/src/webui/master/static/home.html#L199-L205 https://github.com/apache/mesos/blob/1.0.x/src/webui/master/static/home.html#L246-L252 """," {{framework.id | truncateMesosID}} ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6371","10/12/2016 00:27:21",3,"Remove the 'recover()' interface in 'ContainerLogger'. ""This issue arises from the nested container support in Mesos. Currently, the container logger interface mainly contains `recover()` and `prepare()` methods. The `prepare` will be called in containerizer::launch() to launch a container, while `recover` will be called in containerizer::recover() to recover containers. Both methods rely on 2 parameters: ExecutorInfo and sandbox directory. 
The sandbox directory for nested containers can still be passed to the logger. However, because of nested container support, ExecutorInfo is no longer available for nested containers. In logger prepare, the ExecutorInfo is used for deliver FrameworkID, ExecutorID, and Label for custom metadata. In containerizer launch, we can still pass the ExecutorInfo of a nested container's top level parent to the logger, so that those information will not be lost. In logger recover, since currently the logger is stateless, and most of the logger modules are doing `noop` in logger::recover(). The recover interface should exist together with `cleanup` method if the logger become stateful in the future. To avoid adding tech debt in containerizer nested container support, we should remove the `recover` in container logger for now (can add it back together with `cleanup` in the future if the container logger become stateful).""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6386","10/13/2016 19:09:19",3,"""Reached unreachable statement"" in LinuxCapabilitiesIsolatorTest "" Observed running the tests as root on CentOS 7.2. Verbose test output attached."""," [ RUN ] TestParam/LinuxCapabilitiesIsolatorTest.ROOT_Ping/2 Failed to execute command: Permission denied Reached unreachable statement at ../../mesos/src/slave/containerizer/mesos/launch.cpp:710 [ OK ] TestParam/LinuxCapabilitiesIsolatorTest.ROOT_Ping/2 (366 ms) ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6388","10/13/2016 21:13:12",1,"Report new PARTITION_AWARE task statuses in HTTP endpoints ""At a minimum, the {{/state-summary}} endpoint needs to be updated.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6411","10/18/2016 18:24:19",1,"Add documentation for CNI port-mapper plugin. ""Need to add the CNI port-mapper plugin to the CNI documentation within Mesos.""","",0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6419","10/19/2016 20:41:53",8,"The 'master/teardown' endpoint should support tearing down 'unregistered_frameworks'. ""This issue is exposed from [MESOS-6400](https://issues.apache.org/jira/browse/MESOS-6400). When a user is trying to tear down an 'unregistered_framework' from the 'master/teardown' endpoint, a bad request will be returned: `No framework found with specified ID`. Ideally, we should support tearing down an unregistered framework, since those frameworks may occur due to network partition, then all the orphan tasks still occupy the resources. It would be a nightmare if a user has to wait until the unregistered framework to get those resources back. This may be the initial implementation: https://github.com/apache/mesos/commit/bb8375975e92ee722befb478ddc3b2541d1ccaa9""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6424","10/20/2016 09:16:34",2,"Possible nullptr dereference in flag loading ""Coverity reports the following: The {{dynamic_cast}} is needed here if the derived {{Flags}} class got intentionally sliced (e.g., to a {{FlagsBase}}). Since the base class of the hierarchy ({{FlagsBase}}) stores the flags they would not be sliced away; the {{dynamic_cast}} here effectively filters out all flags still valid for the {{Flags}} used when the {{Flag}} was {{add}}'ed. 
It seems the intention here was to confirm that the {{dynamic_cast}} to {{Flags*}} succeeded like is done e.g., in {{flags.stringify}} and {{flags.validate}} just below. AFAICT this code has existed since 2013, but was only reported by coverity recently."""," /3rdparty/stout/include/stout/flags/flags.hpp: 375 in flags::FlagsBase::add, std::allocator>, char [10], mesos::internal::logger::rotate::Flags::Flags()::[lambda(const std::basic_string, std::allocator>&) (instance 1)]>(T2 T1::*, const flags::Name &, const Option &, const std::basic_string, std::allocator>&, const T3 *, T4)::[lambda(flags::FlagsBase*, const std::basic_string, std::allocator>&) (instance 1)]::operator ()(flags::FlagsBase*, const std::basic_string, std::allocator>&) const() 369 Flags* flags = dynamic_cast(base); 370 if (base != nullptr) { 371 // NOTE: 'fetch' """"retrieves"""" the value if necessary and then 372 // invokes 'parse'. See 'fetch' for more details. 373 Try t = fetch(value); 374 if (t.isSome()) { CID 1374083: (FORWARD_NULL) Dereferencing null pointer """"flags"""". 375 flags->*t1 = t.get(); 376 } else { 377 return Error(""""Failed to load value '"""" + value + """"': """" + t.error()); 378 } 379 } 380 ** CID 1374082: Null pointer dereferences (FORWARD_NULL) /3rdparty/stout/include/stout/flags/flags.hpp: 375 in flags::FlagsBase::add (*)(const Bytes &)>(T2 T1::*, const flags::Name &, Option&, const std::basic_string, std::allocator>&, const T3 *, T4)::[lambda(flags::FlagsBase*, const std::basic_string, std::allocator>&) (instance 1)]::operator ()(flags::FlagsBase*, const std::basic_string, std::allocator>&) const() ________________________________________________________________________________________________________ *** CID 1374082: Null pointer dereferences (FORWARD_NULL) /3rdparty/stout/include/stout/flags/flags.hpp: 375 in flags::FlagsBase::add (*)(const Bytes &)>(T2 T1::*, const flags::Name &, Option&, const std::basic_string, std::allocator>&, const T3 *, T4)::[lambda(flags::FlagsBase*, const std::basic_string, std::allocator>&) (instance 1)]::operator ()(flags::FlagsBase*, const std::basic_string, std::allocator>&) const() 369 Flags* flags = dynamic_cast(base); 370 if (base != nullptr) { 371 // NOTE: 'fetch' """"retrieves"""" the value if necessary and then 372 // invokes 'parse'. See 'fetch' for more details. 373 Try t = fetch(value); 374 if (t.isSome()) { CID 1374082: Null pointer dereferences (FORWARD_NULL) Dereferencing null pointer """"flags"""". 375 flags->*t1 = t.get(); 376 } else { 377 return Error(""""Failed to load value '"""" + value + """"': """" + t.error()); 378 } 379 } 380 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6426","10/20/2016 13:51:56",8,"Add rlimit support to Mesos containerizer ""Reviews: https://reviews.apache.org/r/53061/ https://reviews.apache.org/r/53062/ https://reviews.apache.org/r/53063/ https://reviews.apache.org/r/53078/""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6428","10/20/2016 15:49:19",3,"Mesos containerizer helper function signalSafeWriteStatus is not AS-Safe ""In {{src/slave/containerizer/mesos/launch.cpp}} a helper function {{signalSafeWriteStatus}} is defined. Its name seems to suggest that this function is safe to call in e.g., signal handlers, and it is used in this file's {{signalHandler}} for exactly that purpose. 
Currently this function is not AS-Safe since it e.g., allocates memory via construction of {{string}} instances, and might destructively modify {{errno}}. We should clean up this function to be in fact AS-Safe.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6430","10/20/2016 19:10:13",2,"The python linter doesn't rebuild the virtual environment before linting when ""pip-requirements.txt"" has changed ""We need to detect if """"pip-requirements.txt"""" changes and rebuild the virtual environment if it has.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6431","10/20/2016 19:28:24",1,"Add support for port-mapping in `mesos-execute` ""Add support to specify port-mappings for a container in mesos-execute.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6432","10/20/2016 22:01:27",5,"Roles with quota assigned can ""game"" the system to receive excessive resources. ""The current implementation of quota allocation attempts to satisfy each resource quota for a role, but in doing so can far exceed the quota assigned to the role. For example, if a role has quota for {{\[30,20,10\]}}, it can consume up to: {{\[∞, ∞, 10\]}} or {{\[∞, 20, ∞\]}} or {{\[30, ∞, ∞\]}} as only once each resource in the quota vector is satisfied do we stop allocating agent's resources to the role! As a first step for preventing gaming, we could consider quota satisfied once any of the resources in the vector has quota satisfied. This approach works reasonably well for resources that are required and are present on every agent (cpus, mem, disk). However, it doesn't work well for resources that are optional / only present on some agents (e.g. gpus) (a.k.a. non-ubiquitous / scarce resources). For this we would need to determine which agents have resources that can satisfy the quota prior to performing the allocation.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6441","10/21/2016 19:21:43",3,"Display reservations in the agent page in the webui. ""We currently do not display the reservations present on an agent in the webui. It would be nice to see this information. It would also be nice to update the resource statistics tables to make the distinction between unreserved and reserved resources. E.g. Reserved: Used, Allocated, Available and Total Unreserved: Used, Allocated, Available and Total""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6454","10/23/2016 21:21:58",2,"PosixRLimitsIsolatorTest.TaskExceedingLimit failed on OSX """""," [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. 
[----------] 1 test from PosixRLimitsIsolatorTest [ RUN ] PosixRLimitsIsolatorTest.TaskExceedingLimit I1023 13:17:22.959827 2138112 exec.cpp:162] Version: 1.2.0 I1023 13:17:22.963068 1601536 exec.cpp:237] Executor registered on agent 98ce29d6-2558-4505-be56-7863b7b319c5-S0 Received SUBSCRIBED event Subscribed executor on 172.17.8.1 Received LAUNCH event Starting task 3af6d42c-b9ee-4373-b794-b702f8d5a9d4 /Users/jie/workspace/dist/mesos/build/src/mesos-containerizer launch --command=""""{""""shell"""":true,""""value"""":""""dd if=\/dev\/zero of=file bs=1024 count=8""""}"""" --help=""""false"""" Forked command at 30708 8+0 records in 8+0 records out 8192 bytes transferred in 0.000035 secs (233739717 bytes/sec) Command exited with status 0 (pid: 30708) /Users/jie/workspace/vagrant/trusty/mesos/src/tests/containerizer/posix_rlimits_isolator_tests.cpp:120: Failure Value of: statusFailed->state() Actual: TASK_FINISHED Expected: TASK_FAILED I1023 13:17:23.079619 528384 exec.cpp:414] Executor asked to shutdown Received SHUTDOWN event Shutting down [ FAILED ] PosixRLimitsIsolatorTest.TaskExceedingLimit (488 ms) [----------] 1 test from PosixRLimitsIsolatorTest (489 ms total) [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. (513 ms total) [ PASSED ] 0 tests. [ FAILED ] 1 test, listed below: [ FAILED ] PosixRLimitsIsolatorTest.TaskExceedingLimit ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6461","10/24/2016 23:27:09",2,"Duplicate framework ids in /master/frameworks endpoint 'unregistered_frameworks'. ""This issue was exposed from MESOS-6400. There are duplicate framework ids presented from the /master/frameworks endpoint due to: https://github.com/apache/mesos/blob/master/src/master/http.cpp#L1338 We should use a `set` or a `hashset` instead of an array, to avoid duplicate ids.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6504","10/28/2016 21:40:51",3,"Use 'geteuid()' for the root privileges check. ""Currently, parts of code in Mesos check the root privileges using os::user() to compare to """"root"""", which is not sufficient, since it compares the real user. When people change the mesos binary by 'setuid root', the process may not have the right permission to execute. We should check the effective user id instead in our code. ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6516","10/31/2016 14:09:19",2,"Parallel test running does not respect GTEST_FILTER ""Normally, you can use {{GTEST_FILTER}} to control which tests will be run by {{make check}}. However, this doesn't currently work if Mesos is configured with {{--enable-parallel-test-execution}}.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6519","10/31/2016 21:10:53",1,"MasterTest.OrphanTasksMultipleAgents ""Observed this on ASF CI. 
"""," [ RUN ] MasterTest.OrphanTasksMultipleAgents I1031 14:54:18.459671 31623 cluster.cpp:158] Creating default 'local' authorizer I1031 14:54:18.462911 31623 leveldb.cpp:174] Opened db in 2.965951ms I1031 14:54:18.464269 31623 leveldb.cpp:181] Compacted db in 1.31548ms I1031 14:54:18.464326 31623 leveldb.cpp:196] Created db iterator in 13188ns I1031 14:54:18.464341 31623 leveldb.cpp:202] Seeked to beginning of db in 1625ns I1031 14:54:18.464349 31623 leveldb.cpp:271] Iterated through 0 keys in the db in 150ns I1031 14:54:18.464380 31623 replica.cpp:776] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I1031 14:54:18.465016 31646 recover.cpp:451] Starting replica recovery I1031 14:54:18.465349 31646 recover.cpp:477] Replica is in EMPTY status I1031 14:54:18.466303 31647 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from __req_res__(2991)@172.17.0.3:46956 I1031 14:54:18.466681 31653 recover.cpp:197] Received a recover response from a replica in EMPTY status I1031 14:54:18.467151 31649 recover.cpp:568] Updating replica status to STARTING I1031 14:54:18.467964 31655 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 578744ns I1031 14:54:18.467994 31655 replica.cpp:320] Persisted replica status to STARTING I1031 14:54:18.468024 31651 master.cpp:380] Master 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27 (173b586c223f) started on 172.17.0.3:46956 I1031 14:54:18.468204 31655 recover.cpp:477] Replica is in STARTING status I1031 14:54:18.468040 31651 master.cpp:382] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/ClEWj4/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/mesos/mesos-1.2.0/_inst/share/mesos/webui"""" --work_dir=""""/tmp/ClEWj4/master"""" --zk_session_timeout=""""10secs"""" I1031 14:54:18.468483 31651 master.cpp:432] Master only allowing authenticated frameworks to register I1031 14:54:18.468504 31651 master.cpp:446] Master only allowing authenticated agents to register I1031 14:54:18.468520 31651 master.cpp:459] Master only allowing authenticated HTTP frameworks to register I1031 14:54:18.468533 31651 credentials.hpp:37] Loading credentials for authentication from '/tmp/ClEWj4/credentials' I1031 14:54:18.468837 31651 master.cpp:504] Using default 'crammd5' authenticator I1031 14:54:18.469027 31651 http.cpp:887] 
Using default 'basic' HTTP authenticator for realm 'mesos-master-readonly' I1031 14:54:18.469029 31646 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from __req_res__(2992)@172.17.0.3:46956 I1031 14:54:18.469156 31651 http.cpp:887] Using default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' I1031 14:54:18.469245 31651 http.cpp:887] Using default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' I1031 14:54:18.469458 31651 master.cpp:584] Authorization enabled I1031 14:54:18.469560 31643 recover.cpp:197] Received a recover response from a replica in STARTING status I1031 14:54:18.469862 31653 hierarchical.cpp:149] Initialized hierarchical allocator process I1031 14:54:18.469888 31646 whitelist_watcher.cpp:77] No whitelist given I1031 14:54:18.470257 31647 recover.cpp:568] Updating replica status to VOTING I1031 14:54:18.470875 31649 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 400113ns I1031 14:54:18.470906 31649 replica.cpp:320] Persisted replica status to VOTING I1031 14:54:18.471055 31650 recover.cpp:582] Successfully joined the Paxos group I1031 14:54:18.471352 31650 recover.cpp:466] Recover process terminated I1031 14:54:18.472995 31645 master.cpp:2033] Elected as the leading master! I1031 14:54:18.473028 31645 master.cpp:1560] Recovering from registrar I1031 14:54:18.473158 31653 registrar.cpp:329] Recovering registrar I1031 14:54:18.473791 31650 log.cpp:553] Attempting to start the writer I1031 14:54:18.475165 31650 replica.cpp:493] Replica received implicit promise request from __req_res__(2993)@172.17.0.3:46956 with proposal 1 I1031 14:54:18.475623 31650 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 419160ns I1031 14:54:18.475654 31650 replica.cpp:342] Persisted promised to 1 I1031 14:54:18.476574 31649 coordinator.cpp:238] Coordinator attempting to fill missing positions I1031 14:54:18.477970 31651 replica.cpp:388] Replica received explicit promise request from __req_res__(2994)@172.17.0.3:46956 for position 0 with proposal 2 I1031 14:54:18.478452 31651 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 426215ns I1031 14:54:18.478488 31651 replica.cpp:708] Persisted action NOP at position 0 I1031 14:54:18.479763 31653 replica.cpp:537] Replica received write request for position 0 from __req_res__(2995)@172.17.0.3:46956 I1031 14:54:18.479832 31653 leveldb.cpp:436] Reading position from leveldb took 32539ns I1031 14:54:18.480396 31653 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 507365ns I1031 14:54:18.480428 31653 replica.cpp:708] Persisted action NOP at position 0 I1031 14:54:18.481135 31651 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 I1031 14:54:18.481657 31651 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 479094ns I1031 14:54:18.481688 31651 replica.cpp:708] Persisted action NOP at position 0 I1031 14:54:18.482416 31647 log.cpp:569] Writer started with ending position 0 I1031 14:54:18.483616 31646 leveldb.cpp:436] Reading position from leveldb took 37294ns I1031 14:54:18.484666 31644 registrar.cpp:362] Successfully fetched the registry (0B) in 11.421952ms I1031 14:54:18.484797 31644 registrar.cpp:461] Applied 1 operations in 21474ns; attempting to update the registry I1031 14:54:18.485672 31654 log.cpp:577] Attempting to append 168 bytes to the log I1031 14:54:18.485873 31642 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 I1031 14:54:18.486687 31656 replica.cpp:537] 
Replica received write request for position 1 from __req_res__(2996)@172.17.0.3:46956 I1031 14:54:18.487164 31656 leveldb.cpp:341] Persisting action (187 bytes) to leveldb took 429898ns I1031 14:54:18.487197 31656 replica.cpp:708] Persisted action APPEND at position 1 I1031 14:54:18.488112 31651 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 I1031 14:54:18.488590 31651 leveldb.cpp:341] Persisting action (189 bytes) to leveldb took 411110ns I1031 14:54:18.488622 31651 replica.cpp:708] Persisted action APPEND at position 1 I1031 14:54:18.489761 31646 registrar.cpp:506] Successfully updated the registry in 4.892928ms I1031 14:54:18.489938 31646 registrar.cpp:392] Successfully recovered registrar I1031 14:54:18.490077 31649 log.cpp:596] Attempting to truncate the log to 1 I1031 14:54:18.490211 31648 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 I1031 14:54:18.490653 31645 master.cpp:1676] Recovered 0 agents from the registry (129B); allowing 10mins for agents to re-register I1031 14:54:18.490811 31657 hierarchical.cpp:176] Skipping recovery of hierarchical allocator: nothing to recover I1031 14:54:18.491212 31657 replica.cpp:537] Replica received write request for position 2 from __req_res__(2997)@172.17.0.3:46956 I1031 14:54:18.491719 31657 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 461103ns I1031 14:54:18.491744 31657 replica.cpp:708] Persisted action TRUNCATE at position 2 I1031 14:54:18.492624 31652 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 I1031 14:54:18.492983 31652 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 327074ns I1031 14:54:18.493051 31652 leveldb.cpp:399] Deleting ~1 keys from leveldb took 39143ns I1031 14:54:18.493075 31652 replica.cpp:708] Persisted action TRUNCATE at position 2 I1031 14:54:18.497611 31623 cluster.cpp:435] Creating default 'local' authorizer I1031 14:54:18.499606 31654 slave.cpp:208] Mesos agent started on (198)@172.17.0.3:46956 I1031 14:54:18.499665 31654 slave.cpp:209] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/MasterTest_OrphanTasksMultipleAgents_qckwyX/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/MasterTest_OrphanTasksMultipleAgents_qckwyX/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" 
--hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --http_credentials=""""/tmp/MasterTest_OrphanTasksMultipleAgents_qckwyX/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/mesos/mesos-1.2.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/tmp/MasterTest_OrphanTasksMultipleAgents_qckwyX"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/MasterTest_OrphanTasksMultipleAgents_FlWdW0"""" I1031 14:54:18.500120 31654 credentials.hpp:86] Loading credential for authentication from '/tmp/MasterTest_OrphanTasksMultipleAgents_qckwyX/credential' I1031 14:54:18.500249 31654 slave.cpp:346] Agent using credential for: test-principal I1031 14:54:18.500272 31654 credentials.hpp:37] Loading credentials for authentication from '/tmp/MasterTest_OrphanTasksMultipleAgents_qckwyX/http_credentials' I1031 14:54:18.500517 31654 http.cpp:887] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' I1031 14:54:18.500643 31654 http.cpp:887] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' I1031 14:54:18.500860 31623 sched.cpp:226] Version: 1.2.0 I1031 14:54:18.501534 31650 sched.cpp:330] New master detected at master@172.17.0.3:46956 I1031 14:54:18.501624 31650 sched.cpp:396] Authenticating with master master@172.17.0.3:46956 I1031 14:54:18.501644 31650 sched.cpp:403] Using default CRAM-MD5 authenticatee I1031 14:54:18.501754 31654 slave.cpp:533] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1031 14:54:18.501847 31654 slave.cpp:541] Agent attributes: [ ] I1031 14:54:18.501862 31654 slave.cpp:546] Agent hostname: 173b586c223f I1031 14:54:18.501898 31656 authenticatee.cpp:121] Creating new client SASL connection I1031 14:54:18.502148 31655 master.cpp:6742] Authenticating scheduler-60a2639c-70b4-4d13-83e6-f200a8b308f5@172.17.0.3:46956 I1031 14:54:18.502251 31644 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(450)@172.17.0.3:46956 I1031 14:54:18.502460 31648 authenticator.cpp:98] Creating new server SASL connection I1031 14:54:18.502676 31651 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1031 14:54:18.502709 31651 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1031 14:54:18.502818 31650 authenticator.cpp:204] Received SASL authentication start I1031 14:54:18.502918 31650 authenticator.cpp:326] Authentication requires more steps I1031 14:54:18.503060 31656 authenticatee.cpp:259] Received SASL authentication step I1031 14:54:18.503196 31656 authenticator.cpp:232] Received SASL authentication step I1031 14:54:18.503232 31656 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 
'173b586c223f' server FQDN: '173b586c223f' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1031 14:54:18.503258 31656 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I1031 14:54:18.503303 31656 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1031 14:54:18.503343 31656 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '173b586c223f' server FQDN: '173b586c223f' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1031 14:54:18.503368 31656 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1031 14:54:18.503384 31656 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1031 14:54:18.503410 31656 authenticator.cpp:318] Authentication success I1031 14:54:18.503456 31646 state.cpp:57] Recovering state from '/tmp/MasterTest_OrphanTasksMultipleAgents_FlWdW0/meta' I1031 14:54:18.503504 31650 authenticatee.cpp:299] Authentication success I1031 14:54:18.503566 31648 master.cpp:6772] Successfully authenticated principal 'test-principal' at scheduler-60a2639c-70b4-4d13-83e6-f200a8b308f5@172.17.0.3:46956 I1031 14:54:18.503651 31656 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(450)@172.17.0.3:46956 I1031 14:54:18.503790 31645 status_update_manager.cpp:203] Recovering status update manager I1031 14:54:18.504070 31649 sched.cpp:502] Successfully authenticated with master master@172.17.0.3:46956 I1031 14:54:18.504091 31649 sched.cpp:820] Sending SUBSCRIBE call to master@172.17.0.3:46956 I1031 14:54:18.504111 31652 slave.cpp:5399] Finished recovery I1031 14:54:18.504182 31649 sched.cpp:853] Will retry registration in 1.851789769secs if necessary I1031 14:54:18.504303 31647 master.cpp:2612] Received SUBSCRIBE call for framework 'default' at scheduler-60a2639c-70b4-4d13-83e6-f200a8b308f5@172.17.0.3:46956 I1031 14:54:18.504362 31647 master.cpp:2069] Authorizing framework principal 'test-principal' to receive offers for role '*' I1031 14:54:18.504582 31652 slave.cpp:5573] Querying resource estimator for oversubscribable resources I1031 14:54:18.504797 31654 master.cpp:2688] Subscribing framework default with checkpointing disabled and capabilities [ ] I1031 14:54:18.505110 31657 status_update_manager.cpp:177] Pausing sending status updates I1031 14:54:18.505115 31652 slave.cpp:915] New master detected at master@172.17.0.3:46956 I1031 14:54:18.505151 31652 slave.cpp:974] Authenticating with master master@172.17.0.3:46956 I1031 14:54:18.505264 31652 slave.cpp:985] Using default CRAM-MD5 authenticatee I1031 14:54:18.505445 31647 sched.cpp:743] Framework registered with 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.505472 31643 hierarchical.cpp:275] Added framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.505555 31656 authenticatee.cpp:121] Creating new client SASL connection I1031 14:54:18.505571 31647 sched.cpp:757] Scheduler::registered took 54451ns I1031 14:54:18.505614 31643 hierarchical.cpp:1694] No allocations performed I1031 14:54:18.505658 31643 hierarchical.cpp:1789] No inverse offers to send out! 
I1031 14:54:18.505724 31643 hierarchical.cpp:1286] Performed allocation for 0 agents in 169452ns I1031 14:54:18.505501 31652 slave.cpp:947] Detecting new master I1031 14:54:18.505832 31654 master.cpp:6742] Authenticating slave(198)@172.17.0.3:46956 I1031 14:54:18.505985 31646 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(451)@172.17.0.3:46956 I1031 14:54:18.506060 31652 slave.cpp:5587] Received oversubscribable resources {} from the resource estimator I1031 14:54:18.506244 31649 authenticator.cpp:98] Creating new server SASL connection I1031 14:54:18.506481 31645 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1031 14:54:18.506525 31645 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1031 14:54:18.506630 31645 authenticator.cpp:204] Received SASL authentication start I1031 14:54:18.506705 31645 authenticator.cpp:326] Authentication requires more steps I1031 14:54:18.506834 31643 authenticatee.cpp:259] Received SASL authentication step I1031 14:54:18.507019 31652 authenticator.cpp:232] Received SASL authentication step I1031 14:54:18.507063 31652 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '173b586c223f' server FQDN: '173b586c223f' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1031 14:54:18.507091 31652 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I1031 14:54:18.507139 31652 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1031 14:54:18.507179 31652 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '173b586c223f' server FQDN: '173b586c223f' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1031 14:54:18.507203 31652 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1031 14:54:18.507226 31652 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1031 14:54:18.507256 31652 authenticator.cpp:318] Authentication success I1031 14:54:18.507359 31643 authenticatee.cpp:299] Authentication success I1031 14:54:18.507447 31642 master.cpp:6772] Successfully authenticated principal 'test-principal' at slave(198)@172.17.0.3:46956 I1031 14:54:18.507532 31647 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(451)@172.17.0.3:46956 I1031 14:54:18.507719 31651 slave.cpp:1069] Successfully authenticated with master master@172.17.0.3:46956 I1031 14:54:18.507947 31651 slave.cpp:1483] Will retry registration in 11.127033ms if necessary I1031 14:54:18.508164 31642 master.cpp:5151] Registering agent at slave(198)@172.17.0.3:46956 (173b586c223f) with id 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 I1031 14:54:18.508657 31657 registrar.cpp:461] Applied 1 operations in 57631ns; attempting to update the registry I1031 14:54:18.509627 31643 log.cpp:577] Attempting to append 337 bytes to the log I1031 14:54:18.509760 31653 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 I1031 14:54:18.510596 31656 replica.cpp:537] Replica received write request for position 3 from __req_res__(2998)@172.17.0.3:46956 I1031 14:54:18.511073 31656 leveldb.cpp:341] Persisting action (356 bytes) to leveldb took 426870ns I1031 14:54:18.511107 31656 replica.cpp:708] Persisted action APPEND at position 3 I1031 14:54:18.511775 31650 replica.cpp:691] Replica received learned notice for position 3 from 
@0.0.0.0:0 I1031 14:54:18.512259 31650 leveldb.cpp:341] Persisting action (358 bytes) to leveldb took 411830ns I1031 14:54:18.512291 31650 replica.cpp:708] Persisted action APPEND at position 3 I1031 14:54:18.513923 31649 registrar.cpp:506] Successfully updated the registry in 5.201152ms I1031 14:54:18.514209 31648 log.cpp:596] Attempting to truncate the log to 3 I1031 14:54:18.514351 31656 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 I1031 14:54:18.514930 31655 slave.cpp:4251] Received ping from slave-observer(200)@172.17.0.3:46956 I1031 14:54:18.515231 31643 replica.cpp:537] Replica received write request for position 4 from __req_res__(2999)@172.17.0.3:46956 I1031 14:54:18.515349 31657 slave.cpp:1115] Registered with master master@172.17.0.3:46956; given agent ID 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 I1031 14:54:18.515388 31657 fetcher.cpp:86] Clearing fetcher cache I1031 14:54:18.515269 31650 master.cpp:5222] Registered agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 at slave(198)@172.17.0.3:46956 (173b586c223f) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1031 14:54:18.515606 31646 status_update_manager.cpp:184] Resuming sending status updates I1031 14:54:18.515758 31653 hierarchical.cpp:485] Added agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 (173b586c223f) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: {}) I1031 14:54:18.515805 31643 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 522344ns I1031 14:54:18.515846 31643 replica.cpp:708] Persisted action TRUNCATE at position 4 I1031 14:54:18.515928 31657 slave.cpp:1138] Checkpointing SlaveInfo to '/tmp/MasterTest_OrphanTasksMultipleAgents_FlWdW0/meta/slaves/5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0/slave.info' I1031 14:54:18.516572 31645 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 I1031 14:54:18.516623 31657 slave.cpp:1175] Forwarding total oversubscribed resources {} I1031 14:54:18.516993 31651 master.cpp:5621] Received update of agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 at slave(198)@172.17.0.3:46956 (173b586c223f) with total oversubscribed resources {} I1031 14:54:18.517125 31645 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 508592ns I1031 14:54:18.517215 31645 leveldb.cpp:399] Deleting ~2 keys from leveldb took 56423ns I1031 14:54:18.517243 31645 replica.cpp:708] Persisted action TRUNCATE at position 4 I1031 14:54:18.517669 31653 hierarchical.cpp:1789] No inverse offers to send out! I1031 14:54:18.517771 31653 hierarchical.cpp:1309] Performed allocation for agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 in 1.943202ms I1031 14:54:18.517973 31653 hierarchical.cpp:555] Agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 (173b586c223f) updated with oversubscribed resources {} (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I1031 14:54:18.518121 31653 hierarchical.cpp:1694] No allocations performed I1031 14:54:18.518163 31653 hierarchical.cpp:1789] No inverse offers to send out! 
I1031 14:54:18.518216 31653 hierarchical.cpp:1309] Performed allocation for agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 in 190028ns I1031 14:54:18.518220 31643 master.cpp:6571] Sending 1 offers to framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 (default) at scheduler-60a2639c-70b4-4d13-83e6-f200a8b308f5@172.17.0.3:46956 I1031 14:54:18.518725 31654 sched.cpp:917] Scheduler::resourceOffers took 123398ns I1031 14:54:18.520814 31651 master.cpp:3581] Processing ACCEPT call for offers: [ 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-O0 ] on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 at slave(198)@172.17.0.3:46956 (173b586c223f) for framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 (default) at scheduler-60a2639c-70b4-4d13-83e6-f200a8b308f5@172.17.0.3:46956 I1031 14:54:18.520928 31651 master.cpp:3173] Authorizing framework principal 'test-principal' to launch task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a W1031 14:54:18.523193 31655 validation.cpp:920] Executor 'default' for task '3d00a1fe-59e4-42cf-a231-df3a06aecc3a' uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W1031 14:54:18.523232 31655 validation.cpp:932] Executor 'default' for task '3d00a1fe-59e4-42cf-a231-df3a06aecc3a' uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. I1031 14:54:18.523774 31655 master.cpp:8334] Adding task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 (173b586c223f) I1031 14:54:18.524137 31655 master.cpp:4230] Launching task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 (default) at scheduler-60a2639c-70b4-4d13-83e6-f200a8b308f5@172.17.0.3:46956 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 at slave(198)@172.17.0.3:46956 (173b586c223f) I1031 14:54:18.524610 31653 slave.cpp:1547] Got assigned task '3d00a1fe-59e4-42cf-a231-df3a06aecc3a' for framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.525429 31653 slave.cpp:1709] Launching task '3d00a1fe-59e4-42cf-a231-df3a06aecc3a' for framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.526227 31653 paths.cpp:536] Trying to chown '/tmp/MasterTest_OrphanTasksMultipleAgents_FlWdW0/slaves/5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0/frameworks/5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000/executors/default/runs/b47eb823-9ff3-4386-bbd1-dc5f09b2d5a2' to user 'mesos' I1031 14:54:18.534395 31653 slave.cpp:6307] Launching executor 'default' of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 with resources {} in work directory '/tmp/MasterTest_OrphanTasksMultipleAgents_FlWdW0/slaves/5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0/frameworks/5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000/executors/default/runs/b47eb823-9ff3-4386-bbd1-dc5f09b2d5a2' I1031 14:54:18.537529 31653 exec.cpp:162] Version: 1.2.0 I1031 14:54:18.537901 31654 exec.cpp:212] Executor started at: executor(78)@172.17.0.3:46956 with pid 31623 I1031 14:54:18.538316 31653 slave.cpp:2031] Queued task '3d00a1fe-59e4-42cf-a231-df3a06aecc3a' for executor 'default' of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.538415 31653 slave.cpp:868] Successfully attached file 
'/tmp/MasterTest_OrphanTasksMultipleAgents_FlWdW0/slaves/5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0/frameworks/5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000/executors/default/runs/b47eb823-9ff3-4386-bbd1-dc5f09b2d5a2' I1031 14:54:18.538676 31653 slave.cpp:3305] Got registration for executor 'default' of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 from executor(78)@172.17.0.3:46956 I1031 14:54:18.539242 31655 exec.cpp:237] Executor registered on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 I1031 14:54:18.539305 31655 exec.cpp:249] Executor::registered took 31294ns I1031 14:54:18.539798 31653 slave.cpp:2247] Sending queued task '3d00a1fe-59e4-42cf-a231-df3a06aecc3a' to executor 'default' of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 at executor(78)@172.17.0.3:46956 I1031 14:54:18.540233 31644 exec.cpp:324] Executor asked to run task '3d00a1fe-59e4-42cf-a231-df3a06aecc3a' I1031 14:54:18.540338 31644 exec.cpp:333] Executor::launchTask took 70083ns I1031 14:54:18.540446 31644 exec.cpp:550] Executor sending status update TASK_RUNNING (UUID: 5a86be7a-e823-4eb1-90e1-823aabd57613) for task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.540742 31644 slave.cpp:3740] Handling status update TASK_RUNNING (UUID: 5a86be7a-e823-4eb1-90e1-823aabd57613) for task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 from executor(78)@172.17.0.3:46956 I1031 14:54:18.541482 31653 status_update_manager.cpp:323] Received status update TASK_RUNNING (UUID: 5a86be7a-e823-4eb1-90e1-823aabd57613) for task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.541540 31653 status_update_manager.cpp:500] Creating StatusUpdate stream for task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.542067 31653 status_update_manager.cpp:377] Forwarding update TASK_RUNNING (UUID: 5a86be7a-e823-4eb1-90e1-823aabd57613) for task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 to the agent I1031 14:54:18.542498 31645 slave.cpp:4169] Forwarding the update TASK_RUNNING (UUID: 5a86be7a-e823-4eb1-90e1-823aabd57613) for task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 to master@172.17.0.3:46956 I1031 14:54:18.542769 31645 slave.cpp:4063] Status update manager successfully handled status update TASK_RUNNING (UUID: 5a86be7a-e823-4eb1-90e1-823aabd57613) for task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.542840 31645 slave.cpp:4079] Sending acknowledgement for status update TASK_RUNNING (UUID: 5a86be7a-e823-4eb1-90e1-823aabd57613) for task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 to executor(78)@172.17.0.3:46956 I1031 14:54:18.542984 31643 master.cpp:5757] Status update TASK_RUNNING (UUID: 5a86be7a-e823-4eb1-90e1-823aabd57613) for task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 from agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 at slave(198)@172.17.0.3:46956 (173b586c223f) I1031 14:54:18.543071 31644 exec.cpp:373] Executor received status update acknowledgement 5a86be7a-e823-4eb1-90e1-823aabd57613 for task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.543071 31643 master.cpp:5819] Forwarding status update 
TASK_RUNNING (UUID: 5a86be7a-e823-4eb1-90e1-823aabd57613) for task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.543319 31643 master.cpp:7712] Updating the state of task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I1031 14:54:18.543658 31657 sched.cpp:1025] Scheduler::statusUpdate took 138896ns I1031 14:54:18.544138 31652 master.cpp:4867] Processing ACKNOWLEDGE call 5a86be7a-e823-4eb1-90e1-823aabd57613 for task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 (default) at scheduler-60a2639c-70b4-4d13-83e6-f200a8b308f5@172.17.0.3:46956 on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 I1031 14:54:18.544566 31646 status_update_manager.cpp:395] Received status update acknowledgement (UUID: 5a86be7a-e823-4eb1-90e1-823aabd57613) for task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.544916 31652 slave.cpp:3022] Status update manager successfully handled status update acknowledgement (UUID: 5a86be7a-e823-4eb1-90e1-823aabd57613) for task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.547865 31623 cluster.cpp:435] Creating default 'local' authorizer I1031 14:54:18.549728 31645 slave.cpp:208] Mesos agent started on (199)@172.17.0.3:46956 I1031 14:54:18.549870 31645 slave.cpp:209] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/MasterTest_OrphanTasksMultipleAgents_EVkMmK/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/MasterTest_OrphanTasksMultipleAgents_EVkMmK/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --http_credentials=""""/tmp/MasterTest_OrphanTasksMultipleAgents_EVkMmK/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/mesos/mesos-1.2.0/_build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" 
--oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/tmp/MasterTest_OrphanTasksMultipleAgents_EVkMmK"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/MasterTest_OrphanTasksMultipleAgents_7SCiCN"""" I1031 14:54:18.550590 31645 credentials.hpp:86] Loading credential for authentication from '/tmp/MasterTest_OrphanTasksMultipleAgents_EVkMmK/credential' I1031 14:54:18.550806 31645 slave.cpp:346] Agent using credential for: test-principal I1031 14:54:18.550833 31645 credentials.hpp:37] Loading credentials for authentication from '/tmp/MasterTest_OrphanTasksMultipleAgents_EVkMmK/http_credentials' I1031 14:54:18.551142 31645 http.cpp:887] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' I1031 14:54:18.551317 31645 http.cpp:887] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' I1031 14:54:18.552794 31645 slave.cpp:533] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1031 14:54:18.552898 31645 slave.cpp:541] Agent attributes: [ ] I1031 14:54:18.552917 31645 slave.cpp:546] Agent hostname: 173b586c223f I1031 14:54:18.554693 31650 state.cpp:57] Recovering state from '/tmp/MasterTest_OrphanTasksMultipleAgents_7SCiCN/meta' I1031 14:54:18.555044 31646 status_update_manager.cpp:203] Recovering status update manager I1031 14:54:18.555426 31643 slave.cpp:5399] Finished recovery I1031 14:54:18.555929 31643 slave.cpp:5573] Querying resource estimator for oversubscribable resources I1031 14:54:18.556341 31656 status_update_manager.cpp:177] Pausing sending status updates I1031 14:54:18.556361 31642 slave.cpp:915] New master detected at master@172.17.0.3:46956 I1031 14:54:18.556391 31642 slave.cpp:974] Authenticating with master master@172.17.0.3:46956 I1031 14:54:18.556450 31642 slave.cpp:985] Using default CRAM-MD5 authenticatee I1031 14:54:18.556589 31642 slave.cpp:947] Detecting new master I1031 14:54:18.556648 31654 authenticatee.cpp:121] Creating new client SASL connection I1031 14:54:18.556732 31642 slave.cpp:5587] Received oversubscribable resources {} from the resource estimator I1031 14:54:18.556872 31656 master.cpp:6742] Authenticating slave(199)@172.17.0.3:46956 I1031 14:54:18.556994 31655 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(452)@172.17.0.3:46956 I1031 14:54:18.557211 31649 authenticator.cpp:98] Creating new server SASL connection I1031 14:54:18.557438 31645 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1031 14:54:18.557507 31645 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1031 14:54:18.557646 31647 authenticator.cpp:204] Received SASL authentication start I1031 14:54:18.557719 31647 authenticator.cpp:326] Authentication requires more steps I1031 14:54:18.557847 31645 authenticatee.cpp:259] Received SASL authentication step I1031 14:54:18.558033 31652 authenticator.cpp:232] Received SASL authentication step I1031 14:54:18.558080 31652 auxprop.cpp:109] Request to lookup 
properties for user: 'test-principal' realm: '173b586c223f' server FQDN: '173b586c223f' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1031 14:54:18.558109 31652 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I1031 14:54:18.558156 31652 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1031 14:54:18.558184 31652 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '173b586c223f' server FQDN: '173b586c223f' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1031 14:54:18.558202 31652 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1031 14:54:18.558212 31652 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1031 14:54:18.558226 31652 authenticator.cpp:318] Authentication success I1031 14:54:18.558296 31642 authenticatee.cpp:299] Authentication success I1031 14:54:18.558343 31656 master.cpp:6772] Successfully authenticated principal 'test-principal' at slave(199)@172.17.0.3:46956 I1031 14:54:18.558392 31657 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(452)@172.17.0.3:46956 I1031 14:54:18.558599 31650 slave.cpp:1069] Successfully authenticated with master master@172.17.0.3:46956 I1031 14:54:18.558734 31650 slave.cpp:1483] Will retry registration in 10.674123ms if necessary I1031 14:54:18.558900 31652 master.cpp:5151] Registering agent at slave(199)@172.17.0.3:46956 (173b586c223f) with id 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 I1031 14:54:18.559381 31642 registrar.cpp:461] Applied 1 operations in 73404ns; attempting to update the registry I1031 14:54:18.560248 31645 log.cpp:577] Attempting to append 503 bytes to the log I1031 14:54:18.560426 31644 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 5 I1031 14:54:18.561122 31650 replica.cpp:537] Replica received write request for position 5 from __req_res__(3000)@172.17.0.3:46956 I1031 14:54:18.561779 31650 leveldb.cpp:341] Persisting action (522 bytes) to leveldb took 611463ns I1031 14:54:18.561803 31650 replica.cpp:708] Persisted action APPEND at position 5 I1031 14:54:18.562427 31645 replica.cpp:691] Replica received learned notice for position 5 from @0.0.0.0:0 I1031 14:54:18.562875 31645 leveldb.cpp:341] Persisting action (524 bytes) to leveldb took 415244ns I1031 14:54:18.562902 31645 replica.cpp:708] Persisted action APPEND at position 5 I1031 14:54:18.564662 31645 registrar.cpp:506] Successfully updated the registry in 5.217024ms I1031 14:54:18.564867 31644 log.cpp:596] Attempting to truncate the log to 5 I1031 14:54:18.565037 31645 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 6 I1031 14:54:18.565430 31652 slave.cpp:4251] Received ping from slave-observer(201)@172.17.0.3:46956 I1031 14:54:18.565668 31649 slave.cpp:1115] Registered with master master@172.17.0.3:46956; given agent ID 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 I1031 14:54:18.565706 31649 fetcher.cpp:86] Clearing fetcher cache I1031 14:54:18.565615 31651 master.cpp:5222] Registered agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 at slave(199)@172.17.0.3:46956 (173b586c223f) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1031 14:54:18.565855 31654 status_update_manager.cpp:184] Resuming sending status updates I1031 14:54:18.565848 31655 replica.cpp:537] Replica received write request for position 6 
from __req_res__(3001)@172.17.0.3:46956 I1031 14:54:18.565982 31645 hierarchical.cpp:485] Added agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 (173b586c223f) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: {}) I1031 14:54:18.566078 31649 slave.cpp:1138] Checkpointing SlaveInfo to '/tmp/MasterTest_OrphanTasksMultipleAgents_7SCiCN/meta/slaves/5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1/slave.info' I1031 14:54:18.566566 31655 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 622375ns I1031 14:54:18.566586 31649 slave.cpp:1175] Forwarding total oversubscribed resources {} I1031 14:54:18.566591 31655 replica.cpp:708] Persisted action TRUNCATE at position 6 I1031 14:54:18.566735 31649 master.cpp:5621] Received update of agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 at slave(199)@172.17.0.3:46956 (173b586c223f) with total oversubscribed resources {} I1031 14:54:18.567116 31645 hierarchical.cpp:1789] No inverse offers to send out! I1031 14:54:18.567229 31645 hierarchical.cpp:1309] Performed allocation for agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 in 1.19084ms I1031 14:54:18.567384 31648 replica.cpp:691] Replica received learned notice for position 6 from @0.0.0.0:0 I1031 14:54:18.567620 31645 hierarchical.cpp:555] Agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 (173b586c223f) updated with oversubscribed resources {} (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I1031 14:54:18.567729 31642 master.cpp:6571] Sending 1 offers to framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 (default) at scheduler-60a2639c-70b4-4d13-83e6-f200a8b308f5@172.17.0.3:46956 I1031 14:54:18.567814 31645 hierarchical.cpp:1694] No allocations performed I1031 14:54:18.567865 31645 hierarchical.cpp:1789] No inverse offers to send out! I1031 14:54:18.567997 31645 hierarchical.cpp:1309] Performed allocation for agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 in 290412ns I1031 14:54:18.568034 31648 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 580942ns I1031 14:54:18.568120 31648 leveldb.cpp:399] Deleting ~2 keys from leveldb took 49833ns I1031 14:54:18.568147 31648 replica.cpp:708] Persisted action TRUNCATE at position 6 I1031 14:54:18.568269 31655 sched.cpp:917] Scheduler::resourceOffers took 118768ns I1031 14:54:18.570488 31646 master.cpp:3581] Processing ACCEPT call for offers: [ 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-O1 ] on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 at slave(199)@172.17.0.3:46956 (173b586c223f) for framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 (default) at scheduler-60a2639c-70b4-4d13-83e6-f200a8b308f5@172.17.0.3:46956 I1031 14:54:18.570608 31646 master.cpp:3173] Authorizing framework principal 'test-principal' to launch task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 W1031 14:54:18.573030 31657 validation.cpp:920] Executor 'default' for task '11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0' uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases. W1031 14:54:18.573073 31657 validation.cpp:932] Executor 'default' for task '11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0' uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases. 
I1031 14:54:18.573660 31657 master.cpp:8334] Adding task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 (173b586c223f) I1031 14:54:18.574072 31657 master.cpp:4230] Launching task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 (default) at scheduler-60a2639c-70b4-4d13-83e6-f200a8b308f5@172.17.0.3:46956 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 at slave(199)@172.17.0.3:46956 (173b586c223f) I1031 14:54:18.574569 31651 slave.cpp:1547] Got assigned task '11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0' for framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.575388 31651 slave.cpp:1709] Launching task '11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0' for framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.576159 31651 paths.cpp:536] Trying to chown '/tmp/MasterTest_OrphanTasksMultipleAgents_7SCiCN/slaves/5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1/frameworks/5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000/executors/default/runs/fb01a7c6-1cf6-4ff9-b768-61ce927127bc' to user 'mesos' I1031 14:54:18.584102 31651 slave.cpp:6307] Launching executor 'default' of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 with resources {} in work directory '/tmp/MasterTest_OrphanTasksMultipleAgents_7SCiCN/slaves/5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1/frameworks/5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000/executors/default/runs/fb01a7c6-1cf6-4ff9-b768-61ce927127bc' I1031 14:54:18.587211 31651 exec.cpp:162] Version: 1.2.0 I1031 14:54:18.587581 31650 exec.cpp:212] Executor started at: executor(79)@172.17.0.3:46956 with pid 31623 I1031 14:54:18.588001 31651 slave.cpp:2031] Queued task '11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0' for executor 'default' of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.588085 31651 slave.cpp:868] Successfully attached file '/tmp/MasterTest_OrphanTasksMultipleAgents_7SCiCN/slaves/5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1/frameworks/5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000/executors/default/runs/fb01a7c6-1cf6-4ff9-b768-61ce927127bc' I1031 14:54:18.588179 31651 slave.cpp:3305] Got registration for executor 'default' of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 from executor(79)@172.17.0.3:46956 I1031 14:54:18.588629 31643 exec.cpp:237] Executor registered on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 I1031 14:54:18.588697 31643 exec.cpp:249] Executor::registered took 34326ns I1031 14:54:18.589377 31651 slave.cpp:2247] Sending queued task '11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0' to executor 'default' of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 at executor(79)@172.17.0.3:46956 I1031 14:54:18.589785 31652 exec.cpp:324] Executor asked to run task '11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0' I1031 14:54:18.589892 31652 exec.cpp:333] Executor::launchTask took 74900ns I1031 14:54:18.590060 31652 exec.cpp:550] Executor sending status update TASK_RUNNING (UUID: 4d1f38ad-684a-462f-ba03-4398f01694db) for task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.590425 31649 slave.cpp:3740] Handling status update TASK_RUNNING (UUID: 4d1f38ad-684a-462f-ba03-4398f01694db) for task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 from executor(79)@172.17.0.3:46956 I1031 14:54:18.591174 31652 status_update_manager.cpp:323] Received status 
update TASK_RUNNING (UUID: 4d1f38ad-684a-462f-ba03-4398f01694db) for task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.591233 31652 status_update_manager.cpp:500] Creating StatusUpdate stream for task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.591780 31652 status_update_manager.cpp:377] Forwarding update TASK_RUNNING (UUID: 4d1f38ad-684a-462f-ba03-4398f01694db) for task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 to the agent I1031 14:54:18.592180 31651 slave.cpp:4169] Forwarding the update TASK_RUNNING (UUID: 4d1f38ad-684a-462f-ba03-4398f01694db) for task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 to master@172.17.0.3:46956 I1031 14:54:18.592463 31651 slave.cpp:4063] Status update manager successfully handled status update TASK_RUNNING (UUID: 4d1f38ad-684a-462f-ba03-4398f01694db) for task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.592525 31651 slave.cpp:4079] Sending acknowledgement for status update TASK_RUNNING (UUID: 4d1f38ad-684a-462f-ba03-4398f01694db) for task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 to executor(79)@172.17.0.3:46956 I1031 14:54:18.592603 31647 master.cpp:5757] Status update TASK_RUNNING (UUID: 4d1f38ad-684a-462f-ba03-4398f01694db) for task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 from agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 at slave(199)@172.17.0.3:46956 (173b586c223f) I1031 14:54:18.592679 31647 master.cpp:5819] Forwarding status update TASK_RUNNING (UUID: 4d1f38ad-684a-462f-ba03-4398f01694db) for task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.592748 31649 exec.cpp:373] Executor received status update acknowledgement 4d1f38ad-684a-462f-ba03-4398f01694db for task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.592947 31647 master.cpp:7712] Updating the state of task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I1031 14:54:18.593291 31655 sched.cpp:1025] Scheduler::statusUpdate took 135341ns I1031 14:54:18.594218 31644 master.cpp:4867] Processing ACKNOWLEDGE call 4d1f38ad-684a-462f-ba03-4398f01694db for task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 (default) at scheduler-60a2639c-70b4-4d13-83e6-f200a8b308f5@172.17.0.3:46956 on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 I1031 14:54:18.594410 31644 master.cpp:1097] Master terminating I1031 14:54:18.594658 31655 status_update_manager.cpp:395] Received status update acknowledgement (UUID: 4d1f38ad-684a-462f-ba03-4398f01694db) for task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 W1031 14:54:18.594576 31644 master.cpp:7794] Removing task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 at slave(199)@172.17.0.3:46956 (173b586c223f) in non-terminal state TASK_RUNNING I1031 14:54:18.594992 31650 slave.cpp:3022] Status update manager 
successfully handled status update acknowledgement (UUID: 4d1f38ad-684a-462f-ba03-4398f01694db) for task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.595250 31649 hierarchical.cpp:517] Removed agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 I1031 14:54:18.595525 31644 master.cpp:7837] Removing executor 'default' with resources {} of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 at slave(199)@172.17.0.3:46956 (173b586c223f) W1031 14:54:18.596063 31644 master.cpp:7794] Removing task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 at slave(198)@172.17.0.3:46956 (173b586c223f) in non-terminal state TASK_RUNNING I1031 14:54:18.596593 31649 hierarchical.cpp:517] Removed agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 I1031 14:54:18.596689 31644 master.cpp:7837] Removing executor 'default' with resources {} of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 at slave(198)@172.17.0.3:46956 (173b586c223f) I1031 14:54:18.597661 31649 hierarchical.cpp:337] Removed framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 I1031 14:54:18.598047 31657 slave.cpp:4297] Got exited event for master@172.17.0.3:46956 I1031 14:54:18.598114 31653 slave.cpp:4297] Got exited event for master@172.17.0.3:46956 W1031 14:54:18.598136 31653 slave.cpp:4302] Master disconnected! Waiting for a new master to be elected W1031 14:54:18.598136 31657 slave.cpp:4302] Master disconnected! Waiting for a new master to be elected I1031 14:54:18.604696 31623 cluster.cpp:158] Creating default 'local' authorizer I1031 14:54:18.608245 31623 leveldb.cpp:174] Opened db in 3.227023ms I1031 14:54:18.611130 31623 leveldb.cpp:181] Compacted db in 2.848105ms I1031 14:54:18.611196 31623 leveldb.cpp:196] Created db iterator in 23453ns I1031 14:54:18.611227 31623 leveldb.cpp:202] Seeked to beginning of db in 16697ns I1031 14:54:18.611335 31623 leveldb.cpp:271] Iterated through 3 keys in the db in 90684ns I1031 14:54:18.611428 31623 replica.cpp:776] Replica recovered with log positions 5 -> 6 with 0 holes and 0 unlearned I1031 14:54:18.612046 31646 recover.cpp:451] Starting replica recovery I1031 14:54:18.612493 31651 recover.cpp:477] Replica is in VOTING status I1031 14:54:18.612758 31651 recover.cpp:466] Recover process terminated I1031 14:54:18.614444 31654 master.cpp:380] Master 5a14d27f-26b6-4697-9beb-0f36b3dcda4c (173b586c223f) started on 172.17.0.3:46956 I1031 14:54:18.614476 31654 master.cpp:382] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/ClEWj4/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" 
--max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/mesos/mesos-1.2.0/_inst/share/mesos/webui"""" --work_dir=""""/tmp/ClEWj4/master"""" --zk_session_timeout=""""10secs"""" I1031 14:54:18.615159 31654 master.cpp:432] Master only allowing authenticated frameworks to register I1031 14:54:18.615186 31654 master.cpp:446] Master only allowing authenticated agents to register I1031 14:54:18.615202 31654 master.cpp:459] Master only allowing authenticated HTTP frameworks to register I1031 14:54:18.615221 31654 credentials.hpp:37] Loading credentials for authentication from '/tmp/ClEWj4/credentials' I1031 14:54:18.615653 31654 master.cpp:504] Using default 'crammd5' authenticator I1031 14:54:18.615836 31654 http.cpp:887] Using default 'basic' HTTP authenticator for realm 'mesos-master-readonly' I1031 14:54:18.616070 31654 http.cpp:887] Using default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' I1031 14:54:18.616211 31654 http.cpp:887] Using default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' I1031 14:54:18.616320 31654 master.cpp:584] Authorization enabled I1031 14:54:18.616480 31649 whitelist_watcher.cpp:77] No whitelist given I1031 14:54:18.616485 31651 hierarchical.cpp:149] Initialized hierarchical allocator process I1031 14:54:18.619087 31652 master.cpp:2033] Elected as the leading master! 
I1031 14:54:18.619113 31652 master.cpp:1560] Recovering from registrar I1031 14:54:18.619220 31646 registrar.cpp:329] Recovering registrar I1031 14:54:18.619686 31651 log.cpp:553] Attempting to start the writer I1031 14:54:18.620842 31647 replica.cpp:493] Replica received implicit promise request from __req_res__(3002)@172.17.0.3:46956 with proposal 2 I1031 14:54:18.621428 31647 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 551908ns I1031 14:54:18.621451 31647 replica.cpp:342] Persisted promised to 2 I1031 14:54:18.622010 31650 coordinator.cpp:238] Coordinator attempting to fill missing positions I1031 14:54:18.622259 31655 log.cpp:569] Writer started with ending position 6 I1031 14:54:18.623313 31645 leveldb.cpp:436] Reading position from leveldb took 59676ns I1031 14:54:18.623406 31645 leveldb.cpp:436] Reading position from leveldb took 33374ns I1031 14:54:18.625212 31650 registrar.cpp:362] Successfully fetched the registry (464B) in 5.854208ms I1031 14:54:18.625520 31650 registrar.cpp:461] Applied 1 operations in 67195ns; attempting to update the registry I1031 14:54:18.626541 31649 log.cpp:577] Attempting to append 503 bytes to the log I1031 14:54:18.626691 31645 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 7 I1031 14:54:18.627703 31656 replica.cpp:537] Replica received write request for position 7 from __req_res__(3003)@172.17.0.3:46956 I1031 14:54:18.628237 31656 leveldb.cpp:341] Persisting action (522 bytes) to leveldb took 484788ns I1031 14:54:18.628268 31656 replica.cpp:708] Persisted action APPEND at position 7 I1031 14:54:18.629057 31648 replica.cpp:691] Replica received learned notice for position 7 from @0.0.0.0:0 I1031 14:54:18.629573 31648 leveldb.cpp:341] Persisting action (524 bytes) to leveldb took 468946ns I1031 14:54:18.629606 31648 replica.cpp:708] Persisted action APPEND at position 7 I1031 14:54:18.631556 31652 registrar.cpp:506] Successfully updated the registry in 5.9648ms I1031 14:54:18.631827 31651 log.cpp:596] Attempting to truncate the log to 7 I1031 14:54:18.631916 31652 registrar.cpp:392] Successfully recovered registrar I1031 14:54:18.632025 31649 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 8 I1031 14:54:18.632654 31656 replica.cpp:537] Replica received write request for position 8 from __req_res__(3004)@172.17.0.3:46956 I1031 14:54:18.633050 31656 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 359655ns I1031 14:54:18.633080 31656 replica.cpp:708] Persisted action TRUNCATE at position 8 I1031 14:54:18.633361 31644 master.cpp:1676] Recovered 2 agents from the registry (464B); allowing 10mins for agents to re-register I1031 14:54:18.633430 31650 hierarchical.cpp:176] Skipping recovery of hierarchical allocator: nothing to recover I1031 14:54:18.633929 31655 replica.cpp:691] Replica received learned notice for position 8 from @0.0.0.0:0 I1031 14:54:18.634366 31655 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 371057ns I1031 14:54:18.634430 31655 leveldb.cpp:399] Deleting ~2 keys from leveldb took 34917ns I1031 14:54:18.634454 31655 replica.cpp:708] Persisted action TRUNCATE at position 8 I1031 14:54:18.635414 31657 slave.cpp:915] New master detected at master@172.17.0.3:46956 I1031 14:54:18.635432 31646 status_update_manager.cpp:177] Pausing sending status updates I1031 14:54:18.635443 31657 slave.cpp:974] Authenticating with master master@172.17.0.3:46956 I1031 14:54:18.635502 31646 slave.cpp:915] New master detected at master@172.17.0.3:46956 
I1031 14:54:18.635525 31646 slave.cpp:974] Authenticating with master master@172.17.0.3:46956 I1031 14:54:18.635560 31657 slave.cpp:985] Using default CRAM-MD5 authenticatee I1031 14:54:18.635598 31645 status_update_manager.cpp:177] Pausing sending status updates I1031 14:54:18.635598 31646 slave.cpp:985] Using default CRAM-MD5 authenticatee I1031 14:54:18.635813 31657 slave.cpp:947] Detecting new master I1031 14:54:18.635885 31656 authenticatee.cpp:121] Creating new client SASL connection I1031 14:54:18.636018 31649 authenticatee.cpp:121] Creating new client SASL connection I1031 14:54:18.636176 31646 slave.cpp:947] Detecting new master I1031 14:54:18.636258 31654 master.cpp:6742] Authenticating slave(199)@172.17.0.3:46956 I1031 14:54:18.636364 31650 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(454)@172.17.0.3:46956 I1031 14:54:18.636493 31654 master.cpp:6742] Authenticating slave(198)@172.17.0.3:46956 I1031 14:54:18.636574 31646 authenticator.cpp:98] Creating new server SASL connection I1031 14:54:18.636601 31650 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(453)@172.17.0.3:46956 I1031 14:54:18.636772 31649 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1031 14:54:18.636802 31648 authenticator.cpp:98] Creating new server SASL connection I1031 14:54:18.636842 31649 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1031 14:54:18.637020 31652 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1031 14:54:18.637034 31648 authenticator.cpp:204] Received SASL authentication start I1031 14:54:18.637053 31652 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1031 14:54:18.637095 31648 authenticator.cpp:326] Authentication requires more steps I1031 14:54:18.637167 31652 authenticator.cpp:204] Received SASL authentication start I1031 14:54:18.637181 31648 authenticatee.cpp:259] Received SASL authentication step I1031 14:54:18.637217 31652 authenticator.cpp:326] Authentication requires more steps I1031 14:54:18.637265 31648 authenticator.cpp:232] Received SASL authentication step I1031 14:54:18.637292 31648 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '173b586c223f' server FQDN: '173b586c223f' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1031 14:54:18.637308 31648 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I1031 14:54:18.637310 31652 authenticatee.cpp:259] Received SASL authentication step I1031 14:54:18.637362 31648 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1031 14:54:18.637399 31648 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '173b586c223f' server FQDN: '173b586c223f' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1031 14:54:18.637419 31648 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1031 14:54:18.637437 31648 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1031 14:54:18.637454 31655 authenticator.cpp:232] Received SASL authentication step I1031 14:54:18.637461 31648 authenticator.cpp:318] Authentication success I1031 14:54:18.637483 31655 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '173b586c223f' server FQDN: '173b586c223f' SASL_AUXPROP_VERIFY_AGAINST_HASH: 
false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1031 14:54:18.637501 31655 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I1031 14:54:18.637522 31655 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1031 14:54:18.637549 31655 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '173b586c223f' server FQDN: '173b586c223f' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1031 14:54:18.637575 31655 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1031 14:54:18.637585 31655 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1031 14:54:18.637603 31655 authenticator.cpp:318] Authentication success I1031 14:54:18.637689 31643 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(454)@172.17.0.3:46956 I1031 14:54:18.637704 31649 authenticatee.cpp:299] Authentication success I1031 14:54:18.637552 31652 authenticatee.cpp:299] Authentication success I1031 14:54:18.637948 31643 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(453)@172.17.0.3:46956 I1031 14:54:18.638047 31645 slave.cpp:1069] Successfully authenticated with master master@172.17.0.3:46956 I1031 14:54:18.638206 31656 slave.cpp:1069] Successfully authenticated with master master@172.17.0.3:46956 I1031 14:54:18.638486 31656 slave.cpp:1483] Will retry registration in 7.380044ms if necessary I1031 14:54:18.638504 31645 slave.cpp:1483] Will retry registration in 2.286536ms if necessary I1031 14:54:18.637601 31654 master.cpp:6772] Successfully authenticated principal 'test-principal' at slave(199)@172.17.0.3:46956 I1031 14:54:18.638775 31654 master.cpp:6772] Successfully authenticated principal 'test-principal' at slave(198)@172.17.0.3:46956 I1031 14:54:18.639214 31654 master.cpp:5370] Re-registering agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 at slave(199)@172.17.0.3:46956 (173b586c223f) I1031 14:54:18.639966 31644 registrar.cpp:461] Applied 1 operations in 54895ns; attempting to update the registry I1031 14:54:18.640280 31654 master.cpp:5370] Re-registering agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 at slave(198)@172.17.0.3:46956 (173b586c223f) I1031 14:54:18.640707 31644 log.cpp:577] Attempting to append 503 bytes to the log I1031 14:54:18.640811 31657 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 9 I1031 14:54:18.641525 31652 replica.cpp:537] Replica received write request for position 9 from __req_res__(3005)@172.17.0.3:46956 I1031 14:54:18.641760 31652 leveldb.cpp:341] Persisting action (522 bytes) to leveldb took 194719ns I1031 14:54:18.641788 31652 replica.cpp:708] Persisted action APPEND at position 9 I1031 14:54:18.642148 31652 slave.cpp:1483] Will retry registration in 31.9918ms if necessary I1031 14:54:18.642640 31646 master.cpp:5363] Ignoring re-register agent message from agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 at slave(198)@172.17.0.3:46956 (173b586c223f) as readmission is already in progress I1031 14:54:18.642675 31642 replica.cpp:691] Replica received learned notice for position 9 from @0.0.0.0:0 I1031 14:54:18.643157 31642 leveldb.cpp:341] Persisting action (524 bytes) to leveldb took 444580ns I1031 14:54:18.643191 31642 replica.cpp:708] Persisted action APPEND at position 9 I1031 14:54:18.645174 31644 registrar.cpp:506] Successfully updated the registry in 5.126912ms I1031 14:54:18.645500 31652 
log.cpp:596] Attempting to truncate the log to 9 I1031 14:54:18.645648 31654 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 10 I1031 14:54:18.645874 31644 registrar.cpp:461] Applied 1 operations in 66647ns; attempting to update the registry I1031 14:54:18.646440 31657 master.cpp:8334] Adding task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 (173b586c223f) I1031 14:54:18.646606 31651 replica.cpp:537] Replica received write request for position 10 from __req_res__(3006)@172.17.0.3:46956 W1031 14:54:18.646946 31657 master.cpp:7414] Possibly orphaned task 11a3e383-b8f4-4ee4-a2c6-a78b8b7507a0 of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 running on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 at slave(199)@172.17.0.3:46956 (173b586c223f) I1031 14:54:18.647025 31651 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 369772ns I1031 14:54:18.647066 31651 replica.cpp:708] Persisted action TRUNCATE at position 10 I1031 14:54:18.647284 31657 master.cpp:5466] Re-registered agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 at slave(199)@172.17.0.3:46956 (173b586c223f) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1031 14:54:18.647390 31650 slave.cpp:1483] Will retry registration in 12.331401ms if necessary I1031 14:54:18.647438 31657 master.cpp:5560] Sending updated checkpointed resources {} to agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 at slave(199)@172.17.0.3:46956 (173b586c223f) I1031 14:54:18.647495 31650 slave.cpp:4251] Received ping from slave-observer(202)@172.17.0.3:46956 I1031 14:54:18.647753 31650 slave.cpp:1217] Re-registered with master master@172.17.0.3:46956 I1031 14:54:18.647744 31643 hierarchical.cpp:485] Added agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 (173b586c223f) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I1031 14:54:18.647841 31650 slave.cpp:1253] Forwarding total oversubscribed resources {} I1031 14:54:18.647847 31647 status_update_manager.cpp:184] Resuming sending status updates I1031 14:54:18.647883 31657 master.cpp:5301] Re-registering agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 at slave(199)@172.17.0.3:46956 (173b586c223f) I1031 14:54:18.647933 31643 hierarchical.cpp:1694] No allocations performed I1031 14:54:18.647969 31650 slave.cpp:2807] Ignoring new checkpointed resources identical to the current version: {} I1031 14:54:18.648167 31643 hierarchical.cpp:1309] Performed allocation for agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 in 364483ns I1031 14:54:18.648221 31657 master.cpp:5560] Sending updated checkpointed resources {} to agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 at slave(199)@172.17.0.3:46956 (173b586c223f) W1031 14:54:18.648299 31651 slave.cpp:1234] Already re-registered with master master@172.17.0.3:46956 I1031 14:54:18.648331 31651 slave.cpp:1253] Forwarding total oversubscribed resources {} I1031 14:54:18.648439 31657 master.cpp:5621] Received update of agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 at slave(199)@172.17.0.3:46956 (173b586c223f) with total oversubscribed resources {} I1031 14:54:18.648448 31656 replica.cpp:691] Replica received learned notice for position 10 from @0.0.0.0:0 I1031 14:54:18.648525 31651 slave.cpp:2807] Ignoring new checkpointed resources identical to the current version: {} I1031 14:54:18.648795 31657 master.cpp:5621] Received update of 
agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 at slave(199)@172.17.0.3:46956 (173b586c223f) with total oversubscribed resources {} I1031 14:54:18.648811 31651 hierarchical.cpp:555] Agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 (173b586c223f) updated with oversubscribed resources {} (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I1031 14:54:18.648872 31656 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 381171ns I1031 14:54:18.648941 31656 leveldb.cpp:399] Deleting ~2 keys from leveldb took 40884ns I1031 14:54:18.648985 31656 replica.cpp:708] Persisted action TRUNCATE at position 10 I1031 14:54:18.649085 31651 hierarchical.cpp:1694] No allocations performed I1031 14:54:18.649155 31651 hierarchical.cpp:1309] Performed allocation for agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 in 267700ns I1031 14:54:18.649305 31651 hierarchical.cpp:555] Agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 (173b586c223f) updated with oversubscribed resources {} (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I1031 14:54:18.649469 31651 hierarchical.cpp:1694] No allocations performed I1031 14:54:18.649554 31651 hierarchical.cpp:1309] Performed allocation for agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S1 in 195355ns I1031 14:54:18.650046 31643 log.cpp:577] Attempting to append 503 bytes to the log I1031 14:54:18.650197 31653 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 11 I1031 14:54:18.650496 31648 process.cpp:3570] Handling HTTP event for process 'master' with path: '/master/state' I1031 14:54:18.651095 31657 replica.cpp:537] Replica received write request for position 11 from __req_res__(3007)@172.17.0.3:46956 I1031 14:54:18.651302 31657 leveldb.cpp:341] Persisting action (522 bytes) to leveldb took 160039ns I1031 14:54:18.651340 31657 replica.cpp:708] Persisted action APPEND at position 11 I1031 14:54:18.651604 31651 http.cpp:391] HTTP GET for /master/state from 172.17.0.3:35755 I1031 14:54:18.652230 31656 replica.cpp:691] Replica received learned notice for position 11 from @0.0.0.0:0 I1031 14:54:18.652693 31656 leveldb.cpp:341] Persisting action (524 bytes) to leveldb took 420455ns I1031 14:54:18.652725 31656 replica.cpp:708] Persisted action APPEND at position 11 I1031 14:54:18.654693 31648 registrar.cpp:506] Successfully updated the registry in 8.75392ms I1031 14:54:18.655004 31650 log.cpp:596] Attempting to truncate the log to 11 I1031 14:54:18.655267 31656 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 12 I1031 14:54:18.655925 31645 master.cpp:8334] Adding task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 (173b586c223f) I1031 14:54:18.656440 31647 replica.cpp:537] Replica received write request for position 12 from __req_res__(3008)@172.17.0.3:46956 W1031 14:54:18.656488 31645 master.cpp:7414] Possibly orphaned task 3d00a1fe-59e4-42cf-a231-df3a06aecc3a of framework 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-0000 running on agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 at slave(198)@172.17.0.3:46956 (173b586c223f) I1031 14:54:18.656636 31649 slave.cpp:4251] Received ping from slave-observer(203)@172.17.0.3:46956 I1031 14:54:18.656883 31647 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 388466ns I1031 14:54:18.656944 
31651 slave.cpp:1217] Re-registered with master master@172.17.0.3:46956 I1031 14:54:18.656944 31647 replica.cpp:708] Persisted action TRUNCATE at position 12 I1031 14:54:18.656857 31645 master.cpp:5466] Re-registered agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 at slave(198)@172.17.0.3:46956 (173b586c223f) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1031 14:54:18.657088 31645 master.cpp:5560] Sending updated checkpointed resources {} to agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 at slave(198)@172.17.0.3:46956 (173b586c223f) I1031 14:54:18.657121 31657 status_update_manager.cpp:184] Resuming sending status updates I1031 14:54:18.657127 31651 slave.cpp:1253] Forwarding total oversubscribed resources {} I1031 14:54:18.657199 31644 hierarchical.cpp:485] Added agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 (173b586c223f) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I1031 14:54:18.657316 31651 slave.cpp:2807] Ignoring new checkpointed resources identical to the current version: {} I1031 14:54:18.657330 31644 hierarchical.cpp:1694] No allocations performed I1031 14:54:18.657398 31644 hierarchical.cpp:1309] Performed allocation for agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 in 153893ns I1031 14:54:18.657395 31645 master.cpp:5621] Received update of agent 5098f7d2-2044-4dcd-b97f-9bd7c3dd1e27-S0 at slave(198)@172.17.0.3:46956 (173b586c223f) with total oversubscribed resources {} ../../src/tests/master_tests.cpp:3074: Failure Value of: orphanTasks.values.size() Actual: 1 Expected: 2u Which is: 2 ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6527","11/01/2016 23:13:40",2,"Memory leak in the libprocess request decoder. ""The libprocess decoder can leak a {{Request}} object in cases when a client disconnects while the request is in progress. In such cases, the decoder's destructor won't delete the active {{Request}} object that it had allocated on the heap. https://github.com/apache/mesos/blob/master/3rdparty/libprocess/src/decoder.hpp#L271""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6528","11/02/2016 00:12:01",3,"Container status of a task in a pod is not correct. ""Currently, the container status is for the top level executor container. This is not ideal. Ideally, we should get the container status for the corresponding nested container and report that with the task status update.""","",0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6530","11/02/2016 02:59:27",3,"Add support for incremental gzip decompression. ""We currently only support compressing and decompressing based on the entire input being available at once. 
We can add a {{gzip::Decompressor}} to support incremental decompression.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6551","11/04/2016 20:36:20",5,"Add attach/exec commands to the Mesos CLI ""After all of this support has landed, we need to update the Mesos CLI to implement {{attach}} and {{exec}} functionality as outlined in the Design Doc""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6560","11/08/2016 09:42:15",2,"The default stout stringify always copies its argument ""The default implementation of the template {{stringify}} in stout always copies its argument, For most types implementing a dedicated {{stringify}} we restrict {{T}} to some {{const}} ref with the exception of the specialization for {{bool}}, Copying by default is bad since it requires {{T}} to be copyable without {{stringify}} actually requiring this. It also likely leads to bad performance. It appears switching to e.g., and adjusting the {{bool}} specialization would be a general improvement. This issue was first detected by Coverity in CID 727974 way back on 2012-09-21."""," template std::string stringify(T t) template <> std::string stringify(const bool& b) template std::string stringify(const T& t) ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6606","11/18/2016 08:34:40",2,"Reject optimized builds with libcxx before 3.9 ""Recent clang versions optimize more aggressively which leads to runtime errors using valid code, see e.g., MESOS-5745, due to code exposing undefined behavior in libcxx-3.8 and earlier. This was fixed with upstream libcxx-3.9. See https://reviews.llvm.org/D20786 for the patch and https://llvm.org/bugs/show_bug.cgi?id=28469 for the code example extracted from our code base. We should consider rejecting builds if libcxx-3.8 or older is detected since not all users compiling Mesos might run the test suite. In our decision to reject we could possibly also take the used clang versions into account (which would just ensure we don't run into the known problems from the UB in libcxx).""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6618","11/21/2016 12:31:55",3,"Some tests use hardcoded port numbers. ""DockerContainerizerTest.ROOT_DOCKER_NoTransitionFromKillingToRunning and many HealthCheckTests use hardcoded port numbers. This can create false failures if these tests are run in parallel on the same machine. It appears instead we should use random port numbers.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6619","11/21/2016 17:47:07",8,"Improve task management for unreachable tasks ""Scenario: # Framework starts non-partition-aware task T on agent A # Agent A is partitioned. Task T is marked as a """"completed task"""" in the {{Framework}} struct of the master, as part of {{Framework::removeTask}}. # Agent A re-registers with the master. The tasks running on A are re-added to their respective frameworks on the master as running tasks. # In {{Master::\_reregisterSlave}}, the master sends a {{ShutdownFrameworkMessage}} for all non-partition-aware frameworks running on the agent. The master then does {{removeTask}} for each task managed by one of these frameworks, which results in calling {{Framework::removeTask}}, which adds _another_ task to {{completed_tasks}}. 
Note that {{completed_tasks}} does not attempt to detect/suppress duplicates, so this results in two elements in the {{completed_tasks}} collection. Similar problems occur when a partition-aware task is running on a partitioned agent that re-registers: the result is a task in the {{tasks}} list _and_ a task in the {{completed_tasks}} list. Possible fixes/changes: * Adding a task to the {{completed_tasks}} list when an agent becomes partitioned is debatable; certainly for partition-aware tasks, the task is not """"completed"""". We might consider adding an """"{{unreachable_tasks}}"""" list to the HTTP endpoints. * Regardless of whether we continue to use {{completed_tasks}} or add a new collection, we should ensure the consistency of that data structure after agent re-registration.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6621","11/22/2016 02:30:17",3,"SSL downgrade path will CHECK-fail when using both temporary and persistent sockets ""The code path for downgrading sockets from SSL to non-SSL includes this code: https://github.com/apache/mesos/blob/1.1.x/3rdparty/libprocess/src/process.cpp#L2311-L2321 It is possible for libprocess to hold both temporary and persistent sockets to the same address. This can happen when a message is first sent ({{ProcessBase::send}}), and then a link is established ({{ProcessBase::link}}). When the target of the message/link is a non-SSL socket, both temporary and persistent sockets go through the downgrade path. If a temporary socket is present while a persistent socket is being created, the above code will remap both temporary and persistent sockets to the same address (it should only remap the persistent socket). This leads to some CHECK failures if those sockets are used or closed later: * https://github.com/apache/mesos/blob/1.1.x/3rdparty/libprocess/src/process.cpp#L1942 * https://github.com/apache/mesos/blob/1.1.x/3rdparty/libprocess/src/process.cpp#L2044"""," // If this address is a temporary link. if (temps.count(addresses[to_fd]) > 0) { temps[addresses[to_fd]] = to_fd; // No need to erase as we're changing the value, not the key. } // If this address is a persistent link. if (persists.count(addresses[to_fd]) > 0) { persists[addresses[to_fd]] = to_fd; // No need to erase as we're changing the value, not the key. } bool persist = persists.count(address) > 0; bool temp = temps.count(address) > 0; if (persist || temp) { int s = persist ? persists[address] : temps[address]; CHECK(sockets.count(s) > 0); socket = sockets.at(s); if (dispose.count(s) > 0) { // This is either a temporary socket we created or it's a // socket that we were receiving data from and possibly // sending HTTP responses back on. Clean up either way. 
if (addresses.count(s) > 0) { const Address& address = addresses[s]; CHECK(temps.count(address) > 0 && temps[address] == s); temps.erase(address); ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6622","11/22/2016 02:48:21",2,"NvidiaGpuTest.ROOT_INTERNET_CURL_CGROUPS_NVIDIA_GPU_NvidiaDockerImage is flaky ""This test occasionally times out after one minute: The test itself has a future that waits for 2 minutes for the executor to start up."""," I1122 02:07:25.721348 2328 slave.cpp:4263] Received ping from slave-observer(563)@172.16.10.39:45772 I1122 02:07:25.728559 2324 slave.cpp:5122] Terminating executor ''b5a3a115-27da-4b81-902e-b99602f902a6' of framework 42a4cb0e-aea9-4b9d-8bab-3279ee5a7b8b-0000' because it did not register within 1mins I1122 02:07:25.728667 2330 containerizer.cpp:2038] Destroying container b4711187-157c-421e-a6d9-9fa32a6e263c in PROVISIONING state I1122 02:07:25.728734 2330 containerizer.cpp:2093] Waiting for the provisioner to complete provisioning before destroying container b4711187-157c-421e-a6d9-9fa32a6e263c ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6626","11/22/2016 22:51:42",3,"Support `foreachpair` for LinkedHashMap ""{{LinkedHashMap}} does not support iteration via {{foreachpair}}; it should.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6630","11/23/2016 02:18:45",3,"Add some benchmark test for quota allocation ""Comparing to non-quota allocation, current quota allocation involves a separate allocation stage and additional tracking such as headroom and role consumed quota. Thus quota allocation performance could be drastically different (probably slower) than non-quota allocation. A dedicated benchmark for quota allocation is necessary.""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6631","11/23/2016 02:27:52",3,"Disallow frameworks from modifying FrameworkInfo.roles. ""In """"phase 1"""" of the multi-role framework support, we want to preserve the existing behavior of single-role framework support in that we disallow frameworks from modifying their role. With multi-role framework support, we will initially disallow frameworks from modifying the roles field. Note that in the case that the master has failed over but the framework hasn't re-registered yet, we will use the framework info from the agents to disallow changes to the roles field. We will treat {{FrameworkInfo.roles}} as a set rather than a list, so ordering does not matter for equality. One difference between {{role}} and {{roles}} is that for {{role}} modification, we ignore it. But, with {{roles}} modification, since this is a new feature, we can disallow it by rejecting the framework subscription. Later, in phase 2, we will allow frameworks to modify their roles, see MESOS-6627.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6646","11/25/2016 11:36:46",1,"StreamingRequestDecoder incompletely initializes its http_parser_settings ""Coverity reports in CID1394703 at {{3rdparty/libprocess/src/decoder.hpp:767}}: It seems like {{StreamingRequestDecoder}} should properly initialize its member {{settings}}, e.g., with {{http_parser_settings_init}}."""," CID 1394703 (#1 of 1): Uninitialized pointer field (UNINIT_CTOR) 2. 
uninit_member: Non-static class member field settings.on_status is not initialized in this constructor nor in any functions that it calls. ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6653","11/30/2016 17:54:15",3,"Overlayfs backend may fail to mount the rootfs if both container image and image volume are specified. ""Depending on MESOS-6000, we use symlink to shorten the overlayfs mounting arguments. However, if more than one image need to be provisioned (e.g., a container image is specified while image volumes are specified for the same container), the symlink .../backends/overlay/links would fail to be created since it exists already. Here is a simple log when we hard code overlayfs as our default backend: We should differenciate the links for different provisioned images."""," [07:02:45] : [Step 10/10] [ RUN ] Nesting/VolumeImageIsolatorTest.ROOT_ImageInVolumeWithRootFilesystem/0 [07:02:46] : [Step 10/10] I1127 07:02:46.416021 2919 containerizer.cpp:207] Using isolation: filesystem/linux,volume/image,docker/runtime,network/cni [07:02:46] : [Step 10/10] I1127 07:02:46.419312 2919 linux_launcher.cpp:150] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher [07:02:46] : [Step 10/10] E1127 07:02:46.425336 2919 shell.hpp:107] Command 'hadoop version 2>&1' failed; this is the output: [07:02:46] : [Step 10/10] sh: 1: hadoop: not found [07:02:46] : [Step 10/10] I1127 07:02:46.425379 2919 fetcher.cpp:69] Skipping URI fetcher plugin 'hadoop' as it could not be created: Failed to create HDFS client: Failed to execute 'hadoop version 2>&1'; the command was either not found or exited with a non-zero exit status: 127 [07:02:46] : [Step 10/10] I1127 07:02:46.425452 2919 local_puller.cpp:94] Creating local puller with docker registry '/tmp/R6OUei/registry' [07:02:46] : [Step 10/10] I1127 07:02:46.427258 2934 containerizer.cpp:956] Starting container 9af6c98a-d9f7-4c89-a5ed-fc7ae2fa1330 for executor 'test_executor' of framework [07:02:46] : [Step 10/10] I1127 07:02:46.427592 2938 metadata_manager.cpp:167] Looking for image 'test_image_rootfs' [07:02:46] : [Step 10/10] I1127 07:02:46.427774 2936 local_puller.cpp:147] Untarring image 'test_image_rootfs' from '/tmp/R6OUei/registry/test_image_rootfs.tar' to '/tmp/R6OUei/store/staging/9krDz2' [07:02:46] : [Step 10/10] I1127 07:02:46.512070 2933 local_puller.cpp:167] The repositories JSON file for image 'test_image_rootfs' is '{""""test_image_rootfs"""":{""""latest"""":""""815b809d588c80fd6ddf4d6ac244ad1c01ae4cbe0f91cc7480e306671ee9c346""""}}' [07:02:46] : [Step 10/10] I1127 07:02:46.512279 2933 local_puller.cpp:295] Extracting layer tar ball '/tmp/R6OUei/store/staging/9krDz2/815b809d588c80fd6ddf4d6ac244ad1c01ae4cbe0f91cc7480e306671ee9c346/layer.tar to rootfs '/tmp/R6OUei/store/staging/9krDz2/815b809d588c80fd6ddf4d6ac244ad1c01ae4cbe0f91cc7480e306671ee9c346/rootfs' [07:02:46] : [Step 10/10] I1127 07:02:46.617442 2937 metadata_manager.cpp:155] Successfully cached image 'test_image_rootfs' [07:02:46] : [Step 10/10] I1127 07:02:46.617908 2938 provisioner.cpp:286] Image layers: 1 [07:02:46] : [Step 10/10] I1127 07:02:46.617925 2938 provisioner.cpp:296] Should hit here [07:02:46] : [Step 10/10] I1127 07:02:46.617949 2938 provisioner.cpp:315] !!!!: bind [07:02:46] : [Step 10/10] I1127 07:02:46.617959 2938 provisioner.cpp:315] !!!!: overlay [07:02:46] : [Step 10/10] I1127 07:02:46.617967 2938 provisioner.cpp:315] !!!!: copy [07:02:46] : [Step 10/10] I1127 07:02:46.617974 2938 
provisioner.cpp:318] Provisioning image rootfs '/mnt/teamcity/temp/buildTmp/Nesting_VolumeImageIsolatorTest_ROOT_ImageInVolumeWithRootFilesystem_0_1fMo0c/provisioner/containers/9af6c98a-d9f7-4c89-a5ed-fc7ae2fa1330/backends/overlay/rootfses/c71e83d2-5dbe-4eb7-a2fc-b8cc826771f7' for container 9af6c98a-d9f7-4c89-a5ed-fc7ae2fa1330 using overlay backend [07:02:46] : [Step 10/10] I1127 07:02:46.618408 2936 overlay.cpp:175] Created symlink '/mnt/teamcity/temp/buildTmp/Nesting_VolumeImageIsolatorTest_ROOT_ImageInVolumeWithRootFilesystem_0_1fMo0c/provisioner/containers/9af6c98a-d9f7-4c89-a5ed-fc7ae2fa1330/backends/overlay/links' -> '/tmp/DQ3blT' [07:02:46] : [Step 10/10] I1127 07:02:46.618472 2936 overlay.cpp:203] Provisioning image rootfs with overlayfs: 'lowerdir=/tmp/DQ3blT/0,upperdir=/mnt/teamcity/temp/buildTmp/Nesting_VolumeImageIsolatorTest_ROOT_ImageInVolumeWithRootFilesystem_0_1fMo0c/provisioner/containers/9af6c98a-d9f7-4c89-a5ed-fc7ae2fa1330/backends/overlay/scratch/c71e83d2-5dbe-4eb7-a2fc-b8cc826771f7/upperdir,workdir=/mnt/teamcity/temp/buildTmp/Nesting_VolumeImageIsolatorTest_ROOT_ImageInVolumeWithRootFilesystem_0_1fMo0c/provisioner/containers/9af6c98a-d9f7-4c89-a5ed-fc7ae2fa1330/backends/overlay/scratch/c71e83d2-5dbe-4eb7-a2fc-b8cc826771f7/workdir' [07:02:46] : [Step 10/10] I1127 07:02:46.619098 2933 linux.cpp:451] Ignored an image volume for container 9af6c98a-d9f7-4c89-a5ed-fc7ae2fa1330 [07:02:46] : [Step 10/10] I1127 07:02:46.619745 2938 metadata_manager.cpp:167] Looking for image 'test_image_volume' [07:02:46] : [Step 10/10] I1127 07:02:46.619925 2937 local_puller.cpp:147] Untarring image 'test_image_volume' from '/tmp/R6OUei/registry/test_image_volume.tar' to '/tmp/R6OUei/store/staging/2GNlJO' [07:02:46] : [Step 10/10] I1127 07:02:46.713526 2935 local_puller.cpp:167] The repositories JSON file for image 'test_image_volume' is '{""""test_image_volume"""":{""""latest"""":""""815b809d588c80fd6ddf4d6ac244ad1c01ae4cbe0f91cc7480e306671ee9c346""""}}' [07:02:46] : [Step 10/10] I1127 07:02:46.713726 2935 local_puller.cpp:295] Extracting layer tar ball '/tmp/R6OUei/store/staging/2GNlJO/815b809d588c80fd6ddf4d6ac244ad1c01ae4cbe0f91cc7480e306671ee9c346/layer.tar to rootfs '/tmp/R6OUei/store/staging/2GNlJO/815b809d588c80fd6ddf4d6ac244ad1c01ae4cbe0f91cc7480e306671ee9c346/rootfs' [07:02:46] : [Step 10/10] I1127 07:02:46.818696 2937 metadata_manager.cpp:155] Successfully cached image 'test_image_volume' [07:02:46] : [Step 10/10] I1127 07:02:46.819169 2934 provisioner.cpp:286] Image layers: 1 [07:02:46] : [Step 10/10] I1127 07:02:46.819188 2934 provisioner.cpp:296] Should hit here [07:02:46] : [Step 10/10] I1127 07:02:46.819221 2934 provisioner.cpp:315] !!!!: bind [07:02:46] : [Step 10/10] I1127 07:02:46.819232 2934 provisioner.cpp:315] !!!!: overlay [07:02:46] : [Step 10/10] I1127 07:02:46.819236 2934 provisioner.cpp:315] !!!!: copy [07:02:46] : [Step 10/10] I1127 07:02:46.819241 2934 provisioner.cpp:318] Provisioning image rootfs '/mnt/teamcity/temp/buildTmp/Nesting_VolumeImageIsolatorTest_ROOT_ImageInVolumeWithRootFilesystem_0_1fMo0c/provisioner/containers/9af6c98a-d9f7-4c89-a5ed-fc7ae2fa1330/backends/overlay/rootfses/baf632b3-29c5-45e4-9d2e-6f3a2bdd9759' for container 9af6c98a-d9f7-4c89-a5ed-fc7ae2fa1330 using overlay backend [07:02:46] : [Step 10/10] ../../src/tests/containerizer/volume_image_isolator_tests.cpp:214: Failure [07:02:46] : [Step 10/10] (launch).failure(): Failed to create symlink 
'/mnt/teamcity/temp/buildTmp/Nesting_VolumeImageIsolatorTest_ROOT_ImageInVolumeWithRootFilesystem_0_1fMo0c/provisioner/containers/9af6c98a-d9f7-4c89-a5ed-fc7ae2fa1330/backends/overlay/links' -> '/tmp/6dj9IG' [07:02:46] : [Step 10/10] [ FAILED ] Nesting/VolumeImageIsolatorTest.ROOT_ImageInVolumeWithRootFilesystem/0, where GetParam() = false (919 ms) ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6654","11/30/2016 18:09:35",3,"Duplicate image layer ids may make the backend failed to mount rootfs. ""Some images (e.g., 'mesosphere/inky') may contain duplicate layer ids in manifest, which may cause some backends unable to mount the rootfs (e.g., 'aufs' backend). We should make sure that each layer path returned in 'ImageInfo' is unique. Here is an example manifest from 'mesosphere/inky': These two layer ids are totally identical: It would make the backend (e.g., aufs) failed to mount the rootfs due to invalid arguments. We should make sure the vector of layer paths that is passed to the backend contains only unique layer path."""," [20:13:08]W: [Step 10/10] """"name"""": """"mesosphere/inky"""", [20:13:08]W: [Step 10/10] """"tag"""": """"latest"""", [20:13:08]W: [Step 10/10] """"architecture"""": """"amd64"""", [20:13:08]W: [Step 10/10] """"fsLayers"""": [ [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"blobSum"""": """"sha256:1db09adb5ddd7f1a07b6d585a7db747a51c7bd17418d47e91f901bdf420abd66"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" [20:13:08]W: [Step 10/10] } [20:13:08]W: [Step 10/10] ], [20:13:08]W: [Step 10/10] """"history"""": [ [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": 
""""{\""""id\"""":\""""e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6\"""",\""""parent\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""created\"""":\""""2014-08-15T00:31:36.407713553Z\"""",\""""container\"""":\""""5d55401ff99c7508c9d546926b711c78e3ccb36e39a848024b623b2aef4c2c06\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) ENTRYPOINT [echo]\""""],\""""Image\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":[\""""echo\""""],\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""1.1.2\"""",\""""author\"""":\""""support@mesosphere.io\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""inky\""""],\""""Image\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":[\""""echo\""""],\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": """"{\""""id\"""":\""""e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6\"""",\""""parent\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""created\"""":\""""2014-08-15T00:31:36.407713553Z\"""",\""""container\"""":\""""5d55401ff99c7508c9d546926b711c78e3ccb36e39a848024b623b2aef4c2c06\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) ENTRYPOINT 
[echo]\""""],\""""Image\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":[\""""echo\""""],\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""1.1.2\"""",\""""author\"""":\""""support@mesosphere.io\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""inky\""""],\""""Image\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":[\""""echo\""""],\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": """"{\""""id\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""parent\"""":\""""53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f\"""",\""""created\"""":\""""2014-08-15T00:31:36.247988044Z\"""",\""""container\"""":\""""ff756d99367825677c3c18cc5054bfbb3674a7f52a9f916282fb46b8feaddfb7\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) CMD 
[inky]\""""],\""""Image\"""":\""""53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""1.1.2\"""",\""""author\"""":\""""support@mesosphere.io\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""inky\""""],\""""Image\"""":\""""53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": """"{\""""id\"""":\""""53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f\"""",\""""parent\"""":\""""a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721\"""",\""""created\"""":\""""2014-08-15T00:31:36.068514721Z\"""",\""""container\"""":\""""696c3d66c8575dfff3ba71267bf194ae97f0478231042449c98aa0d9164d3c8c\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) MAINTAINER 
support@mesosphere.io\""""],\""""Image\"""":\""""a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""1.1.2\"""",\""""author\"""":\""""support@mesosphere.io\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\""""],\""""Image\"""":\""""a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": """"{\""""id\"""":\""""a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721\"""",\""""parent\"""":\""""120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16\"""",\""""created\"""":\""""2014-06-05T00:05:35.990887725Z\"""",\""""container\"""":\""""bb3475b3130b6a47104549a0291a6569d24e41fa57a7f094591f0d4611fd15bc\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) CMD [/bin/sh]\""""],\""""Image\"""":\""""120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""0.10.0\"""",\""""author\"""":\""""Jrme Petazzoni 
\\u003cjerome@docker.com\\u003e\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\""""],\""""Image\"""":\""""120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": """"{\""""id\"""":\""""120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16\"""",\""""parent\"""":\""""42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229\"""",\""""created\"""":\""""2014-06-05T00:05:35.692528634Z\"""",\""""container\"""":\""""fc203791c4d5024b1a976223daa1cc7b1ceeb5b3abf25a2fb73034eba6398026\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) ADD file:88f36b32456f849299e5df807a1e3514cf1da798af9692a0004598e500be5901 in /\""""],\""""Image\"""":\""""42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""0.10.0\"""",\""""author\"""":\""""Jrme Petazzoni \\u003cjerome@docker.com\\u003e\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":null,\""""Image\"""":\""""42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":2433303}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": 
""""{\""""id\"""":\""""42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229\"""",\""""parent\"""":\""""511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158\"""",\""""created\"""":\""""2014-06-05T00:05:35.589531476Z\"""",\""""container\"""":\""""f7d939e68b5afdd74637d9204c40fe00295e658923be395c761da3278b98e446\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) MAINTAINER Jrme Petazzoni \\u003cjerome@docker.com\\u003e\""""],\""""Image\"""":\""""511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""0.10.0\"""",\""""author\"""":\""""Jrme Petazzoni \\u003cjerome@docker.com\\u003e\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":null,\""""Image\"""":\""""511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": """"{\""""id\"""":\""""511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158\"""",\""""comment\"""":\""""Imported from -\"""",\""""created\"""":\""""2013-06-13T14:03:50.821769-07:00\"""",\""""container_config\"""":{\""""Hostname\"""":\""""\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":null,\""""Cmd\"""":null,\""""Image\"""":\""""\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":null,\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":null,\""""Labels\"""":null},\""""docker_version\"""":\""""0.4.0\"""",\""""architecture\"""":\""""x86_64\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] } [20:13:08]W: [Step 10/10] ], [20:13:08]W: [Step 10/10] """"schemaVersion"""": 1, [20:13:08]W: [Step 10/10] """"signatures"""": [ [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"header"""": { [20:13:08]W: [Step 10/10] """"jwk"""": { [20:13:08]W: [Step 10/10] """"crv"""": """"P-256"""", 
[20:13:08]W: [Step 10/10] """"kid"""": """"4AYN:KH32:GJJD:I6BX:SJAZ:A3EC:P7IC:7O7C:22ZQ:3Z5O:75VQ:3QOT"""", [20:13:08]W: [Step 10/10] """"kty"""": """"EC"""", [20:13:08]W: [Step 10/10] """"x"""": """"o8bvrUwNpXKZdgoo2wQ7EHQzCVYhVuoOvjqGEXtRylU"""", [20:13:08]W: [Step 10/10] """"y"""": """"DCHyGr0Cbi-fZzqypQm16qKfefUMqCTk0rQME-q5GmA"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] """"alg"""": """"ES256"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] """"signature"""": """"f3fAob4XPT0pUW9TiPtxAE_zPAe0PdM2imxAeaCmJbBf6Lb-SuFPVGE4iqz1CO0VOijeYVuB1G1lv_a5Nnj5zg"""", [20:13:08]W: [Step 10/10] """"protected"""": """"eyJmb3JtYXRMZW5ndGgiOjEzNzA3LCJmb3JtYXRUYWlsIjoiQ24wIiwidGltZSI6IjIwMTYtMDgtMDVUMjA6MTM6MDdaIn0"""" [20:13:08]W: [Step 10/10] } [20:13:08]W: [Step 10/10] ] [20:13:08]W: [Step 10/10] }' [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": """"{\""""id\"""":\""""e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6\"""",\""""parent\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""created\"""":\""""2014-08-15T00:31:36.407713553Z\"""",\""""container\"""":\""""5d55401ff99c7508c9d546926b711c78e3ccb36e39a848024b623b2aef4c2c06\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) ENTRYPOINT [echo]\""""],\""""Image\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":[\""""echo\""""],\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""1.1.2\"""",\""""author\"""":\""""support@mesosphere.io\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""inky\""""],\""""Image\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":[\""""echo\""""],\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] { [20:13:08]W: [Step 10/10] """"v1Compatibility"""": 
""""{\""""id\"""":\""""e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6\"""",\""""parent\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""created\"""":\""""2014-08-15T00:31:36.407713553Z\"""",\""""container\"""":\""""5d55401ff99c7508c9d546926b711c78e3ccb36e39a848024b623b2aef4c2c06\"""",\""""container_config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) ENTRYPOINT [echo]\""""],\""""Image\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":[\""""echo\""""],\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""docker_version\"""":\""""1.1.2\"""",\""""author\"""":\""""support@mesosphere.io\"""",\""""config\"""":{\""""Hostname\"""":\""""f7d939e68b5a\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""PortSpecs\"""":null,\""""ExposedPorts\"""":null,\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""HOME=/\"""",\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\""""],\""""Cmd\"""":[\""""inky\""""],\""""Image\"""":\""""be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e\"""",\""""Volumes\"""":null,\""""VolumeDriver\"""":\""""\"""",\""""WorkingDir\"""":\""""\"""",\""""Entrypoint\"""":[\""""echo\""""],\""""NetworkDisabled\"""":false,\""""MacAddress\"""":\""""\"""",\""""OnBuild\"""":[],\""""Labels\"""":null},\""""architecture\"""":\""""amd64\"""",\""""os\"""":\""""linux\"""",\""""Size\"""":0}\n"""" [20:13:08]W: [Step 10/10] }, [20:13:08]W: [Step 10/10] E0805 20:13:08.614994 23432 slave.cpp:4029] Container 'f2c1fd6d-4d11-45cd-a916-e4d73d226451' for executor 'ecd0633f-2f1e-4cfa-819f-590bfb95fa12' of framework dd755a55-0dd1-4d2d-9a49-812a666015cb-0000 failed to start: Failed to mount rootfs '/mnt/teamcity/temp/buildTmp/DockerRuntimeIsolatorTest_ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller_CL0mhn/provisioner/containers/f2c1fd6d-4d11-45cd-a916-e4d73d226451/backends/aufs/rootfses/427b7851-bf82-4553-80f3-da2d42cede77' with aufs: Invalid argument ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6656","11/30/2016 23:40:36",0,"Nested containers can become unkillable ""An incident occurred recently in a cluster running a build of Mesos based on commit {{757319357471227c0a1e906076eae8f9aa2fdbd6}} from master. A task group of five tasks was launched via Marathon. After the tasks were launched, one of the containers quickly exited and was successfully destroyed. A couple minutes later, the task group was killed manually via Marathon, and the agent can then be seen repeatedly attempting to kill the tasks for hours. No calls to {{WAIT_NESTED_CONTAINER}} are visible in the agent logs, and the executor logs do not indicate at any point that the nested containers were launched successfully. 
Agent logs: Executor log: Meanwhile, the tasks show up as {{STAGING}} in the Mesos web UI, and their sandboxes are empty."""," Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.890911 6406 slave.cpp:1539] Got assigned task group containing tasks [ dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server1, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server2, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server3, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server4, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server5 ] for framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.892299 6406 gc.cpp:83] Unscheduling '/var/lib/mesos/slave/slaves/ce4bd8be-1198-4819-81d4-9a8439439741-S1/frameworks/ce4bd8be-1198-4819-81d4-9a8439439741-0000' from gc Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.892379 6406 gc.cpp:83] Unscheduling '/var/lib/mesos/slave/meta/slaves/ce4bd8be-1198-4819-81d4-9a8439439741-S1/frameworks/ce4bd8be-1198-4819-81d4-9a8439439741-0000' from gc Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.893131 6405 slave.cpp:1701] Launching task group containing tasks [ dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server1, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server2, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server3, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server4, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server5 ] for framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.893435 6405 paths.cpp:536] Trying to chown '/var/lib/mesos/slave/slaves/ce4bd8be-1198-4819-81d4-9a8439439741-S1/frameworks/ce4bd8be-1198-4819-81d4-9a8439439741-0000/executors/instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001/runs/8750c2a7-8bef-4a69-8ef2-b873f884bf91' to user 'root' Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.898026 6405 slave.cpp:6179] Launching executor 'instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001' of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 with resources cpus(*):0.1; mem(*):32; disk(*):10; ports(*):[21421-21425] in work directory '/var/lib/mesos/slave/slaves/ce4bd8be-1198-4819-81d4-9a8439439741-S1/frameworks/ce4bd8be-1198-4819-81d4-9a8439439741-0000/executors/instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001/runs/8750c2a7-8bef-4a69-8ef2-b873f884bf91' Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.898731 6407 docker.cpp:1000] Skipping non-docker container Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.899050 6407 containerizer.cpp:938] Starting container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 for executor 'instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001' of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.899909 6405 slave.cpp:1987] Queued task group containing tasks [ dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server1, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server2, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server3, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server4, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server5 ] for executor 
'instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001' of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.907033 6405 memory.cpp:451] Started listening for OOM events for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.908336 6405 memory.cpp:562] Started listening on 'low' memory pressure events for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.909638 6405 memory.cpp:562] Started listening on 'medium' memory pressure events for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.910923 6405 memory.cpp:562] Started listening on 'critical' memory pressure events for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.912464 6405 memory.cpp:199] Updated 'memory.soft_limit_in_bytes' to 32MB for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.914640 6405 memory.cpp:251] Updated 'memory.limit_in_bytes' to 32MB for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.915769 6405 cpu.cpp:101] Updated 'cpu.shares' to 102 (cpus 0.1) for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.917944 6405 cpu.cpp:121] Updated 'cpu.cfs_period_us' to 100ms and 'cpu.cfs_quota_us' to 10ms (cpus 0.1) for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.918515 6404 isolator_module.cpp:55] Container prepare: container_id[value: """"8750c2a7-8bef-4a69-8ef2-b873f884bf91""""] container_config[directory: """"/var/lib/mesos/slave/slaves/ce4bd8be-1198-4819-81d4-9a8439439741-S1/frameworks/ce4bd8be-1198-4819-81d4-9a8439439741-0000/executors/instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001/runs/8750c2a7-8bef-4a69-8ef2-b873f884bf91"""" user: """"root"""" executor_info { executor_id { value: """"instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001"""" } resources { name: """"cpus"""" type: SCALAR scalar { value: 0.1 } } resources { name: """"mem"""" type: SCALAR scalar { value: 32 } } resources { name: """"disk"""" type: SCALAR scalar { value: 10 } } resources { name: """"ports"""" type: RANGES ranges { range { begin: 21421 end: 21425 } } role: """"*"""" } command { value: """"/opt/mesosphere/packages/mesos--a00089c894f82b6455d1db7c4267a742458020b3/libexec/mesos/mesos-default-executor"""" user: """"root"""" shell: false arguments: """"mesos-default-executor"""" } framework_id { value: """"ce4bd8be-1198-4819-81d4-9a8439439741-0000"""" } container { type: MESOS network_infos { labels { labels { key: """"rcluster.location"""" value: """"globalqa.las2.test0"""" } labels { key: """"rcluster.group"""" value: """"dat"""" } labels { key: """"rcluster.application"""" value: """"scout"""" } } name: """"rcluster-cni"""" port_mappings { host_port: 21421 container_port: 8080 protocol: """"tcp"""" } port_mappings { host_port: 21422 container_port: 8090 protocol: """"tcp"""" } port_mappings { host_port: 21423 container_port: 8093 protocol: """"tcp"""" } port_mappings { host_port: 21424 container_port: 8090 protocol: """"tcp"""" } port_mappings { host_port: 21425 container_port: 8090 protocol: """"tcp"""" } } } labels { } type: DEFAULT } 
command_info { value: """"/opt/mesosphere/packages/mesos--a00089c894f82b6455d1db7c4267a742458020b3/libexec/mesos/mesos-default-executor"""" user: """"root"""" shell: false arguments: """"mesos-default-executor"""" } container_info { type: MESOS network_infos { labels { labels { key: """"rcluster.location"""" value: """"globalqa.las2.test0"""" } labels { key: """"rcluster.group"""" value: """"dat"""" } labels { key: """"rc Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: luster.application"""" value: """"scout"""" } } name: """"rcluster-cni"""" port_mappings { host_port: 21421 container_port: 8080 protocol: """"tcp"""" } port_mappings { host_port: 21422 container_port: 8090 protocol: """"tcp"""" } port_mappings { host_port: 21423 container_port: 8093 protocol: """"tcp"""" } port_mappings { host_port: 21424 container_port: 8090 protocol: """"tcp"""" } port_mappings { host_port: 21425 container_port: 8090 protocol: """"tcp"""" } } } resources { name: """"cpus"""" type: SCALAR scalar { value: 0.1 } } resources { name: """"mem"""" type: SCALAR scalar { value: 32 } } resources { name: """"disk"""" type: SCALAR scalar { value: 10 } } resources { name: """"ports"""" type: RANGES ranges { range { begin: 21421 end: 21425 } } role: """"*"""" }] Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.918647 6404 container_assigner.cpp:64] Registering and retrieving endpoint for container_id[value: """"8750c2a7-8bef-4a69-8ef2-b873f884bf91""""] executor_info[executor_id { value: """"instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001"""" } resources { name: """"cpus"""" type: SCALAR scalar { value: 0.1 } } resources { name: """"mem"""" type: SCALAR scalar { value: 32 } } resources { name: """"disk"""" type: SCALAR scalar { value: 10 } } resources { name: """"ports"""" type: RANGES ranges { range { begin: 21421 end: 21425 } } role: """"*"""" } command { value: """"/opt/mesosphere/packages/mesos--a00089c894f82b6455d1db7c4267a742458020b3/libexec/mesos/mesos-default-executor"""" user: """"root"""" shell: false arguments: """"mesos-default-executor"""" } framework_id { value: """"ce4bd8be-1198-4819-81d4-9a8439439741-0000"""" } container { type: MESOS network_infos { labels { labels { key: """"rcluster.location"""" value: """"globalqa.las2.test0"""" } labels { key: """"rcluster.group"""" value: """"dat"""" } labels { key: """"rcluster.application"""" value: """"scout"""" } } name: """"rcluster-cni"""" port_mappings { host_port: 21421 container_port: 8080 protocol: """"tcp"""" } port_mappings { host_port: 21422 container_port: 8090 protocol: """"tcp"""" } port_mappings { host_port: 21423 container_port: 8093 protocol: """"tcp"""" } port_mappings { host_port: 21424 container_port: 8090 protocol: """"tcp"""" } port_mappings { host_port: 21425 container_port: 8090 protocol: """"tcp"""" } } } labels { } type: DEFAULT]. 
Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.918715 6404 sync_util.hpp:39] Dispatching and waiting <=5s for ticket 431: register_and_update_cache Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.918865 6488 container_reader_impl.cpp:34] Reader constructed for 198.51.100.1:0 Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.919052 6488 container_reader_impl.cpp:103] Reader listening on 198.51.100.1:51243 Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.919108 6488 container_assigner_strategy.cpp:156] New ephemeral-port reader for container[value: """"8750c2a7-8bef-4a69-8ef2-b873f884bf91""""] created at endpoint[198.51.100.1:51243]. Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.919144 6488 container_state_cache_impl.cpp:134] Writing container file[/var/run/mesos/isolators/com_mesosphere_MetricsIsolatorModule/containers/8750c2a7-8bef-4a69-8ef2-b873f884bf91] with endpoint[198.51.100.1:51243] Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.919220 6488 sync_util.hpp:136] Result for ticket 431 complete, returning value. Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.919275 6404 sync_util.hpp:83] Dispatch result obtained for ticket 431 after waiting <=5s: register_and_update_cache Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.924211 6404 systemd.cpp:96] Assigned child process '2701' to 'mesos_executors.slice' Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.927724 6404 systemd.cpp:96] Assigned child process '2702' to 'mesos_executors.slice' Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.929244 6403 linux_launcher.cpp:421] Launching container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 and cloning with namespaces CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWNET Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.935976 6403 systemd.cpp:96] Assigned child process '2705' to 'mesos_executors.slice' Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.943640 6409 containerizer.cpp:1489] Checkpointing container's forked pid 2705 to '/var/lib/mesos/slave/meta/slaves/ce4bd8be-1198-4819-81d4-9a8439439741-S1/frameworks/ce4bd8be-1198-4819-81d4-9a8439439741-0000/executors/instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001/runs/8750c2a7-8bef-4a69-8ef2-b873f884bf91/pids/forked.pid' Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:16.945247 6410 cni.cpp:814] Bind mounted '/proc/2705/ns/net' to '/run/mesos/isolators/network/cni/8750c2a7-8bef-4a69-8ef2-b873f884bf91/ns' for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: WARNING: Logging before InitGoogleLogging() is written to STDERR Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:16.961654 2701 process.cpp:882] Failed SSL connections will be downgraded to a non-SSL socket Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: WARNING: Logging before InitGoogleLogging() is written to STDERR Nov 29 04:04:16 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:16.963691 2702 process.cpp:882] Failed SSL connections will be downgraded to a non-SSL socket Nov 29 04:04:17 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:17.250937 6410 cni.cpp:1230] Got assigned IPv4 address '10.47.11.252/32' from CNI network 'rcluster-cni' for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 Nov 29 04:04:17 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:17.251171 6410 cni.cpp:939] DNS 
nameservers for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 are: Nov 29 04:04:17 ip-10-190-112-199 mesos-agent[6397]: nameserver 10.190.4.22 Nov 29 04:04:17 ip-10-190-112-199 mesos-agent[6397]: nameserver 10.190.4.23 Nov 29 04:04:17 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:17.942303 6404 http.cpp:277] HTTP POST for /slave(1)/api/v1/executor from 10.47.11.252:38416 Nov 29 04:04:17 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:17.942446 6404 slave.cpp:3022] Received Subscribe request for HTTP executor 'instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001' of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 Nov 29 04:04:17 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:17.942533 6404 slave.cpp:3085] Creating a marker file for HTTP based executor 'instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001' of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 (via HTTP) at path '/var/lib/mesos/slave/meta/slaves/ce4bd8be-1198-4819-81d4-9a8439439741-S1/frameworks/ce4bd8be-1198-4819-81d4-9a8439439741-0000/executors/instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001/runs/8750c2a7-8bef-4a69-8ef2-b873f884bf91/http.marker' Nov 29 04:04:17 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:17.944046 6404 disk.cpp:207] Updating the disk resources for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 to cpus(*):0.6; mem(*):192; disk(*):170; ports(*):[21421-21425] Nov 29 04:04:17 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:17.945669 6407 memory.cpp:199] Updated 'memory.soft_limit_in_bytes' to 192MB for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 Nov 29 04:04:17 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:17.947779 6407 memory.cpp:251] Updated 'memory.limit_in_bytes' to 192MB for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 Nov 29 04:04:17 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:17.948951 6407 cpu.cpp:101] Updated 'cpu.shares' to 614 (cpus 0.6) for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 Nov 29 04:04:17 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:17.951117 6407 cpu.cpp:121] Updated 'cpu.cfs_period_us' to 100ms and 'cpu.cfs_quota_us' to 60ms (cpus 0.6) for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91 Nov 29 04:04:17 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:17.952219 6404 slave.cpp:2220] Sending queued task group task group containing tasks [ dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server1, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server2, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server3, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server4, dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server5 ] to executor 'instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001' of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 (via HTTP) Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.002063 6408 http.cpp:277] HTTP POST for /slave(1)/api/v1 from 10.47.11.252:38420 Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.002223 6408 http.cpp:353] Processing call LAUNCH_NESTED_CONTAINER Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.003235 6409 containerizer.cpp:1657] Starting nested container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.063dbcd6-1a0a-4058-a615-7e6deaf17707 Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.003327 6407 http.cpp:277] HTTP POST for /slave(1)/api/v1 from 10.47.11.252:38420 Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.003432 6409 
containerizer.cpp:1681] Trying to chown '/var/lib/mesos/slave/slaves/ce4bd8be-1198-4819-81d4-9a8439439741-S1/frameworks/ce4bd8be-1198-4819-81d4-9a8439439741-0000/executors/instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001/runs/8750c2a7-8bef-4a69-8ef2-b873f884bf91/containers/063dbcd6-1a0a-4058-a615-7e6deaf17707' to user 'nobody' Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.003465 6407 http.cpp:353] Processing call LAUNCH_NESTED_CONTAINER Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.005939 6407 http.cpp:277] HTTP POST for /slave(1)/api/v1 from 10.47.11.252:38420 Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.006057 6407 http.cpp:353] Processing call LAUNCH_NESTED_CONTAINER Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.006265 6407 http.cpp:277] HTTP POST for /slave(1)/api/v1 from 10.47.11.252:38420 Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.006384 6407 http.cpp:353] Processing call LAUNCH_NESTED_CONTAINER Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.008982 6409 containerizer.cpp:1657] Starting nested container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.a3b19108-d56c-44ea-b20a-912d539dde7b Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.009096 6409 containerizer.cpp:1681] Trying to chown '/var/lib/mesos/slave/slaves/ce4bd8be-1198-4819-81d4-9a8439439741-S1/frameworks/ce4bd8be-1198-4819-81d4-9a8439439741-0000/executors/instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001/runs/8750c2a7-8bef-4a69-8ef2-b873f884bf91/containers/a3b19108-d56c-44ea-b20a-912d539dde7b' to user 'nobody' Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.013618 6409 containerizer.cpp:1657] Starting nested container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.02c64ae4-ccdd-415d-946a-3f665acbb668 Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.013708 6409 containerizer.cpp:1681] Trying to chown '/var/lib/mesos/slave/slaves/ce4bd8be-1198-4819-81d4-9a8439439741-S1/frameworks/ce4bd8be-1198-4819-81d4-9a8439439741-0000/executors/instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001/runs/8750c2a7-8bef-4a69-8ef2-b873f884bf91/containers/02c64ae4-ccdd-415d-946a-3f665acbb668' to user 'nobody' Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.018230 6409 containerizer.cpp:1657] Starting nested container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.c5c10cff-a6af-4e51-911e-61d04e5c1d47 Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.018399 6409 containerizer.cpp:1681] Trying to chown '/var/lib/mesos/slave/slaves/ce4bd8be-1198-4819-81d4-9a8439439741-S1/frameworks/ce4bd8be-1198-4819-81d4-9a8439439741-0000/executors/instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001/runs/8750c2a7-8bef-4a69-8ef2-b873f884bf91/containers/c5c10cff-a6af-4e51-911e-61d04e5c1d47' to user 'nobody' Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.022992 6403 http.cpp:277] HTTP POST for /slave(1)/api/v1 from 10.47.11.252:38420 Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.023133 6403 http.cpp:353] Processing call LAUNCH_NESTED_CONTAINER Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.023598 6406 containerizer.cpp:1657] Starting nested container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.c8f6dc94-baba-44be-ab65-49538645a31c Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.023705 6406 containerizer.cpp:1681] Trying to chown 
'/var/lib/mesos/slave/slaves/ce4bd8be-1198-4819-81d4-9a8439439741-S1/frameworks/ce4bd8be-1198-4819-81d4-9a8439439741-0000/executors/instance-dat_scout.e57be1fe-b5e8-11e6-995b-70b3d5800001/runs/8750c2a7-8bef-4a69-8ef2-b873f884bf91/containers/c8f6dc94-baba-44be-ab65-49538645a31c' to user 'nobody' Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.462172 6405 provisioner.cpp:294] Provisioning image rootfs '/var/lib/mesos/slave/provisioner/containers/8750c2a7-8bef-4a69-8ef2-b873f884bf91/containers/063dbcd6-1a0a-4058-a615-7e6deaf17707/backends/copy/rootfses/97f1a1df-1eec-46e3-8a93-8a2b80d48dd2' for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.063dbcd6-1a0a-4058-a615-7e6deaf17707 Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.462401 6405 provisioner.cpp:294] Provisioning image rootfs '/var/lib/mesos/slave/provisioner/containers/8750c2a7-8bef-4a69-8ef2-b873f884bf91/containers/a3b19108-d56c-44ea-b20a-912d539dde7b/backends/copy/rootfses/b93ffe07-6f64-42e1-b8c4-77bb224fc9e0' for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.a3b19108-d56c-44ea-b20a-912d539dde7b Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.465819 6410 provisioner.cpp:294] Provisioning image rootfs '/var/lib/mesos/slave/provisioner/containers/8750c2a7-8bef-4a69-8ef2-b873f884bf91/containers/02c64ae4-ccdd-415d-946a-3f665acbb668/backends/copy/rootfses/5d388325-8b4d-496f-b520-4a96584a5971' for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.02c64ae4-ccdd-415d-946a-3f665acbb668 Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.470854 6409 provisioner.cpp:294] Provisioning image rootfs '/var/lib/mesos/slave/provisioner/containers/8750c2a7-8bef-4a69-8ef2-b873f884bf91/containers/c5c10cff-a6af-4e51-911e-61d04e5c1d47/backends/copy/rootfses/5b4d32da-7d82-45d2-ad28-85a48456a18b' for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.c5c10cff-a6af-4e51-911e-61d04e5c1d47 Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.471132 6409 provisioner.cpp:294] Provisioning image rootfs '/var/lib/mesos/slave/provisioner/containers/8750c2a7-8bef-4a69-8ef2-b873f884bf91/containers/c8f6dc94-baba-44be-ab65-49538645a31c/backends/copy/rootfses/244e36f5-ecca-444f-9bf6-05aa77cb7263' for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.c8f6dc94-baba-44be-ab65-49538645a31c Nov 29 04:04:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:18.893782 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:20.164958 6405 runtime.cpp:109] Container user 'nobody' is not supported yet for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.063dbcd6-1a0a-4058-a615-7e6deaf17707 Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.171018 6405 systemd.cpp:96] Assigned child process '2880' to 'mesos_executors.slice' Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.176822 6405 systemd.cpp:96] Assigned child process '2881' to 'mesos_executors.slice' Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.179283 6405 linux_launcher.cpp:421] Launching nested container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.063dbcd6-1a0a-4058-a615-7e6deaf17707 and cloning with namespaces CLONE_NEWNS Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:20.185796 6409 runtime.cpp:109] Container user 'nobody' is not supported yet for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.c5c10cff-a6af-4e51-911e-61d04e5c1d47 Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 
04:04:20.200559 6405 systemd.cpp:96] Assigned child process '2884' to 'mesos_executors.slice' Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: WARNING: Logging before InitGoogleLogging() is written to STDERR Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:20.209010 2881 process.cpp:882] Failed SSL connections will be downgraded to a non-SSL socket Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: WARNING: Logging before InitGoogleLogging() is written to STDERR Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:20.211642 2880 process.cpp:882] Failed SSL connections will be downgraded to a non-SSL socket Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:20.215493 6405 runtime.cpp:109] Container user 'nobody' is not supported yet for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.a3b19108-d56c-44ea-b20a-912d539dde7b Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.228160 6407 systemd.cpp:96] Assigned child process '2896' to 'mesos_executors.slice' Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.234012 6407 systemd.cpp:96] Assigned child process '2897' to 'mesos_executors.slice' Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.239496 6409 linux_launcher.cpp:421] Launching nested container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.c5c10cff-a6af-4e51-911e-61d04e5c1d47 and cloning with namespaces CLONE_NEWNS Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:20.242473 6405 runtime.cpp:109] Container user 'nobody' is not supported yet for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.02c64ae4-ccdd-415d-946a-3f665acbb668 Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:20.243351 6405 runtime.cpp:109] Container user 'nobody' is not supported yet for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.c8f6dc94-baba-44be-ab65-49538645a31c Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.254703 6409 systemd.cpp:96] Assigned child process '2900' to 'mesos_executors.slice' Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: WARNING: Logging before InitGoogleLogging() is written to STDERR Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:20.268317 2896 process.cpp:882] Failed SSL connections will be downgraded to a non-SSL socket Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.274961 6405 systemd.cpp:96] Assigned child process '2902' to 'mesos_executors.slice' Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.279268 6405 systemd.cpp:96] Assigned child process '2911' to 'mesos_executors.slice' Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.284328 6405 systemd.cpp:96] Assigned child process '2912' to 'mesos_executors.slice' Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.286672 6404 linux_launcher.cpp:421] Launching nested container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.a3b19108-d56c-44ea-b20a-912d539dde7b and cloning with namespaces CLONE_NEWNS Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.299939 6405 systemd.cpp:96] Assigned child process '2914' to 'mesos_executors.slice' Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.304103 6405 systemd.cpp:96] Assigned child process '2917' to 'mesos_executors.slice' Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.307452 6405 systemd.cpp:96] Assigned child process '2918' to 'mesos_executors.slice' Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: WARNING: Logging 
before InitGoogleLogging() is written to STDERR Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:20.309020 2897 process.cpp:882] Failed SSL connections will be downgraded to a non-SSL socket Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.311029 6404 systemd.cpp:96] Assigned child process '2916' to 'mesos_executors.slice' Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: WARNING: Logging before InitGoogleLogging() is written to STDERR Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:20.331065 2911 process.cpp:882] Failed SSL connections will be downgraded to a non-SSL socket Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.333595 6405 linux_launcher.cpp:421] Launching nested container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.02c64ae4-ccdd-415d-946a-3f665acbb668 and cloning with namespaces CLONE_NEWNS Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: WARNING: Logging before InitGoogleLogging() is written to STDERR Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:20.342017 2902 process.cpp:882] Failed SSL connections will be downgraded to a non-SSL socket Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.372247 6405 systemd.cpp:96] Assigned child process '2937' to 'mesos_executors.slice' Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: WARNING: Logging before InitGoogleLogging() is written to STDERR Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:20.372742 2912 process.cpp:882] Failed SSL connections will be downgraded to a non-SSL socket Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: WARNING: Logging before InitGoogleLogging() is written to STDERR Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:20.378247 2914 process.cpp:882] Failed SSL connections will be downgraded to a non-SSL socket Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: WARNING: Logging before InitGoogleLogging() is written to STDERR Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:20.388391 2917 process.cpp:882] Failed SSL connections will be downgraded to a non-SSL socket Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.388517 6410 linux_launcher.cpp:421] Launching nested container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.c8f6dc94-baba-44be-ab65-49538645a31c and cloning with namespaces CLONE_NEWNS Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: WARNING: Logging before InitGoogleLogging() is written to STDERR Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: W1129 04:04:20.396700 2918 process.cpp:882] Failed SSL connections will be downgraded to a non-SSL socket Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.400269 6410 systemd.cpp:96] Assigned child process '2969' to 'mesos_executors.slice' Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.772186 6403 containerizer.cpp:2313] Container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.c8f6dc94-baba-44be-ab65-49538645a31c has exited Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.772228 6403 containerizer.cpp:1950] Destroying container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.c8f6dc94-baba-44be-ab65-49538645a31c in RUNNING state Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.772330 6403 linux_launcher.cpp:498] Asked to destroy container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.c8f6dc94-baba-44be-ab65-49538645a31c Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.773825 6403 linux_launcher.cpp:541] Using 
freezer to destroy cgroup mesos/8750c2a7-8bef-4a69-8ef2-b873f884bf91/mesos/c8f6dc94-baba-44be-ab65-49538645a31c Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.776217 6407 cgroups.cpp:2705] Freezing cgroup /sys/fs/cgroup/freezer/mesos/8750c2a7-8bef-4a69-8ef2-b873f884bf91/mesos/c8f6dc94-baba-44be-ab65-49538645a31c Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.780053 6410 cgroups.cpp:1439] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/8750c2a7-8bef-4a69-8ef2-b873f884bf91/mesos/c8f6dc94-baba-44be-ab65-49538645a31c after 3.798016ms Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.783502 6407 cgroups.cpp:2723] Thawing cgroup /sys/fs/cgroup/freezer/mesos/8750c2a7-8bef-4a69-8ef2-b873f884bf91/mesos/c8f6dc94-baba-44be-ab65-49538645a31c Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.786717 6407 cgroups.cpp:1468] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/8750c2a7-8bef-4a69-8ef2-b873f884bf91/mesos/c8f6dc94-baba-44be-ab65-49538645a31c after 3.186176ms Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.791746 6404 provisioner.cpp:488] Destroying container rootfs at '/var/lib/mesos/slave/provisioner/containers/8750c2a7-8bef-4a69-8ef2-b873f884bf91/containers/c8f6dc94-baba-44be-ab65-49538645a31c/backends/copy/rootfses/244e36f5-ecca-444f-9bf6-05aa77cb7263' for container 8750c2a7-8bef-4a69-8ef2-b873f884bf91.c8f6dc94-baba-44be-ab65-49538645a31c Nov 29 04:04:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:20.873241 6405 containerizer.cpp:2229] Checkpointing termination state to nested container's runtime directory '/var/run/mesos/containers/8750c2a7-8bef-4a69-8ef2-b873f884bf91/containers/c8f6dc94-baba-44be-ab65-49538645a31c/termination' Nov 29 04:04:21 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:21.153594 6408 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:23 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:23.410384 6404 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:25 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:25.667546 6410 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:27 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:27.925597 6408 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:30 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:30.198536 6410 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:32 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:32.456645 6407 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:34 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:34.716507 6405 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:36 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:36.974459 6409 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:39 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:39.229652 6406 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:41 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:41.482465 6404 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:43 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:43.736668 6405 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:45 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:45.994588 6407 http.cpp:277] HTTP GET for /slave(1)/state from 
10.190.112.199:36156 Nov 29 04:04:45 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:45.999001 6488 metrics_tcp_sender.cpp:252] TCP Throughput (bytes): sent=0, dropped=0, failed=0, pending=0 (state CONNECT_PENDING) Nov 29 04:04:46 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:46.187460 6408 slave.cpp:5044] Current disk usage 10.89%. Max allowed age: 1.582110178819861days Nov 29 04:04:48 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:48.161314 6405 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:57362 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:04:48 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:48.250435 6409 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:49 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:49.033789 6488 metrics_tcp_sender.cpp:127] Timed out when opening metrics connection to 127.0.0.1:8124. This is expected if no Metrics Collector is running on this agent. (state CONNECT_PENDING) Nov 29 04:04:49 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:49.033946 6488 metrics_tcp_sender.cpp:112] Attempting to open connection to 127.0.0.1:8124 Nov 29 04:04:49 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:49.034059 6488 metrics_tcp_sender.cpp:140] Got error 'Connection refused'(system:111) when connecting to metrics service at 127.0.0.1:8124. This is expected if no Metrics Collector is running on this agent. (state CONNECT_IN_PROGRESS) Nov 29 04:04:49 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:49.034081 6488 metrics_tcp_sender.cpp:94] Scheduling reconnect to 127.0.0.1:8124 in 60s... Nov 29 04:04:50 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:50.161448 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:57426 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:04:50 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:50.503459 6404 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:52 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:52.161912 6406 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:57486 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:04:52 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:52.765455 6407 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:54 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:54.161412 6405 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:57546 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:04:55 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:55.022529 6410 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:56 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:56.163820 6410 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:57606 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:04:57 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:57.279559 6405 http.cpp:277] HTTP GET for 
/slave(1)/state from 10.190.112.199:36156 Nov 29 04:04:58 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:58.162578 6405 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:57666 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:04:59 ip-10-190-112-199 mesos-agent[6397]: I1129 04:04:59.538532 6408 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:00 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:00.161715 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:57726 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:05:01 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:01.801764 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:02 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:02.160838 6408 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:57790 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:05:04 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:04.060504 6409 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:04 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:04.162878 6409 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:57908 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:05:06 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:06.161875 6407 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:57962 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:05:06 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:06.318395 6409 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:08 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:08.160797 6408 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:58016 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:05:08 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:08.572476 6404 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:10 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:10.162160 6408 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:58070 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:05:10 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:10.828313 6408 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:12 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:12.161572 6407 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:58124 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:05:13 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:13.084312 
6406 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:14 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:14.164269 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:58182 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:05:15 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:15.339344 6406 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:16.165223 6405 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:58238 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:05:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:16.919145 6488 container_reader_impl.cpp:146] Throughput from container at port 51243 (bytes): received=0, throttled=0 Nov 29 04:05:17 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:17.476977 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.74:58298 with User-Agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36' with X-Forwarded-For='10.192.128.26' Nov 29 04:05:17 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:17.594532 6407 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:19 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:19.856494 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:22 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:22.112598 6404 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:24 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:24.371556 6406 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:26 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:26.628479 6410 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:28 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:28.885504 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:31 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:31.143674 6408 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:33 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:33.434650 6410 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:35 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:35.694718 6408 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:37 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:37.952450 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:40 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:40.209468 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:42 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:42.466473 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:44 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:44.724468 6406 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:45 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:45.999135 6488 metrics_tcp_sender.cpp:252] TCP Throughput (bytes): sent=0, dropped=373, failed=0, pending=0 (state CONNECT_PENDING) Nov 29 04:05:46 ip-10-190-112-199 mesos-agent[6397]: I1129 
04:05:46.188647 6408 slave.cpp:5044] Current disk usage 10.83%. Max allowed age: 1.583336138877014days Nov 29 04:05:46 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:46.986548 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:49 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:49.034080 6488 metrics_tcp_sender.cpp:127] Timed out when opening metrics connection to 127.0.0.1:8124. This is expected if no Metrics Collector is running on this agent. (state CONNECT_PENDING) Nov 29 04:05:49 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:49.034231 6488 metrics_tcp_sender.cpp:112] Attempting to open connection to 127.0.0.1:8124 Nov 29 04:05:49 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:49.034332 6488 metrics_tcp_sender.cpp:140] Got error 'Connection refused'(system:111) when connecting to metrics service at 127.0.0.1:8124. This is expected if no Metrics Collector is running on this agent. (state CONNECT_IN_PROGRESS) Nov 29 04:05:49 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:49.034358 6488 metrics_tcp_sender.cpp:94] Scheduling reconnect to 127.0.0.1:8124 in 60s... Nov 29 04:05:49 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:49.243544 6406 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:51 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:51.500578 6410 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:53 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:53.755667 6408 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:56 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:56.010568 6409 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:05:58 ip-10-190-112-199 mesos-agent[6397]: I1129 04:05:58.269441 6408 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:00 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:00.526623 6408 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:02 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:02.784615 6406 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:05 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:05.043455 6407 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:07 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:07.299536 6410 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:09 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:09.556404 6405 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:11 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:11.813442 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:14 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:14.068482 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:16.325479 6410 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:16 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:16.919404 6488 container_reader_impl.cpp:146] Throughput from container at port 51243 (bytes): received=0, throttled=0 Nov 29 04:06:18 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:18.581571 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:20 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:20.841563 6407 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 
04:06:23 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:23.098562 6403 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:25 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:25.358496 6404 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:26 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:26.886557 6406 slave.cpp:2288] Asked to kill task dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server5 of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 Nov 29 04:06:26 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:26.886762 6405 slave.cpp:2288] Asked to kill task dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server1 of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 Nov 29 04:06:26 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:26.886831 6405 slave.cpp:2288] Asked to kill task dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server3 of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 Nov 29 04:06:26 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:26.886884 6405 slave.cpp:2288] Asked to kill task dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server4 of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 Nov 29 04:06:26 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:26.886924 6405 slave.cpp:2288] Asked to kill task dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server2 of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 Nov 29 04:06:27 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:27.615567 6404 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:29 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:29.877619 6405 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:32 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:32.135524 6408 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:34 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:34.402644 6409 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:36 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:36.664479 6407 http.cpp:277] HTTP GET for /slave(1)/state from 10.190.112.199:36156 Nov 29 04:06:36 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:36.903115 6405 slave.cpp:2288] Asked to kill task dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server5 of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 Nov 29 04:06:36 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:36.903259 6405 slave.cpp:2288] Asked to kill task dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server1 of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 Nov 29 04:06:36 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:36.903317 6405 slave.cpp:2288] Asked to kill task dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server3 of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 Nov 29 04:06:36 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:36.903436 6408 slave.cpp:2288] Asked to kill task dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server4 of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 Nov 29 04:06:36 ip-10-190-112-199 mesos-agent[6397]: I1129 04:06:36.903501 6408 slave.cpp:2288] Asked to kill task dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server2 of framework ce4bd8be-1198-4819-81d4-9a8439439741-0000 I1129 04:04:17.832391 2734 executor.cpp:189] Version: 1.1.0 I1129 04:04:17.956600 2738 default_executor.cpp:123] 
Received SUBSCRIBED event I1129 04:04:17.956832 2738 default_executor.cpp:127] Subscribed executor on 10.190.112.199 I1129 04:04:17.968544 2735 default_executor.cpp:123] Received LAUNCH_GROUP event I1129 04:06:26.899070 2735 default_executor.cpp:123] Received KILL event I1129 04:06:26.899101 2735 default_executor.cpp:802] Received kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server5' W1129 04:06:26.899122 2735 default_executor.cpp:813] Ignoring kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server5' as it is no longer active I1129 04:06:26.899235 2742 default_executor.cpp:123] Received KILL event I1129 04:06:26.899251 2742 default_executor.cpp:802] Received kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server1' W1129 04:06:26.899260 2742 default_executor.cpp:813] Ignoring kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server1' as it is no longer active I1129 04:06:26.899425 2741 default_executor.cpp:123] Received KILL event I1129 04:06:26.899443 2741 default_executor.cpp:802] Received kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server3' W1129 04:06:26.899448 2741 default_executor.cpp:813] Ignoring kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server3' as it is no longer active I1129 04:06:26.899591 2740 default_executor.cpp:123] Received KILL event I1129 04:06:26.899616 2740 default_executor.cpp:802] Received kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server4' W1129 04:06:26.899627 2740 default_executor.cpp:813] Ignoring kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server4' as it is no longer active I1129 04:06:26.899699 2735 default_executor.cpp:123] Received KILL event I1129 04:06:26.899713 2735 default_executor.cpp:802] Received kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server2' W1129 04:06:26.899721 2735 default_executor.cpp:813] Ignoring kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server2' as it is no longer active I1129 04:06:36.915755 2740 default_executor.cpp:123] Received KILL event I1129 04:06:36.915786 2740 default_executor.cpp:802] Received kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server5' W1129 04:06:36.915796 2740 default_executor.cpp:813] Ignoring kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server5' as it is no longer active I1129 04:06:36.915915 2736 default_executor.cpp:123] Received KILL event I1129 04:06:36.915931 2736 default_executor.cpp:802] Received kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server1' W1129 04:06:36.915940 2736 default_executor.cpp:813] Ignoring kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server1' as it is no longer active I1129 04:06:36.915962 2736 default_executor.cpp:123] Received KILL event I1129 04:06:36.915972 2736 default_executor.cpp:802] Received kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server3' W1129 04:06:36.915979 2736 default_executor.cpp:813] Ignoring kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server3' as it is no longer active I1129 04:06:36.915997 2736 default_executor.cpp:123] Received KILL event I1129 04:06:36.916007 2736 default_executor.cpp:802] Received kill for task 
'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server4' W1129 04:06:36.916013 2736 default_executor.cpp:813] Ignoring kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server4' as it is no longer active I1129 04:06:36.916031 2736 default_executor.cpp:123] Received KILL event I1129 04:06:36.916040 2736 default_executor.cpp:802] Received kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server2' W1129 04:06:36.916048 2736 default_executor.cpp:813] Ignoring kill for task 'dat_scout.instance-e57be1fe-b5e8-11e6-995b-70b3d5800001.scout-server2' as it is no longer active continuing for hours ... ",0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6672","12/02/2016 15:10:31",1,"Class DynamicLibrary's default copy constructor can lead to inconsistent state ""The class {{DynamicLibrary}} provides a RAII wrapper around a low-level handle to a loaded library. Currently it supports copy- and move-construction which would lead to two libraries holding handles to the same library. This can e.g., lead to libraries being unloaded while other wrappers still hold handles.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6713","12/05/2016 07:50:06",3,"Port `slave_recovery_tests.cpp` ""https://reviews.apache.org/r/65408/""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6719","12/05/2016 16:54:40",2,"Unify ""active"" and ""state""/""connected"" fields in Master::Framework ""Rather than tracking whether a framework is """"active"""" separately from whether it is """"connected"""", we should consider using a single """"state"""" variable to track the current state of the framework (connected-and-active, connected-and-inactive, disconnected, etc.)""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6720","12/05/2016 17:26:24",2,"Check that `PreferredToolArchitecture` is set to `x64` on Windows before building ""If this variable is not set before we build, it will cause the linker to occasionally hang forever, due to a MSVC toolchain bug in the linker. We should make this easy on developers and check for them. If the variable is not set, we should display an error message explaining.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6743","12/07/2016 15:13:16",5,"Docker executor hangs forever if `docker stop` fails. ""If {{docker stop}} finishes with an error status, the executor should catch this and react instead of indefinitely waiting for {{reaped}} to return. An interesting question is _how_ to react. Here are possible solutions. 1. Retry {{docker stop}}. In this case it is unclear how many times to retry and what to do if {{docker stop}} continues to fail. 2. Unmark task as {{killed}}. This will allow frameworks to retry the kill. However, in this case it is unclear what status updates we should send: {{TASK_KILLING}} for every kill retry? an extra update when we failed to kill a task? or set a specific reason in {{TASK_KILLING}}? 3. Clean up and exit. 
In this case we should make sure the task container is killed or notify the framework and the operator that the container may still be running.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6744","12/07/2016 16:30:55",2,"DefaultExecutorTest.KillTaskGroupOnTaskFailure is flaky ""This repros consistently for me (~10 test iterations or fewer). Test log: """," [ RUN ] DefaultExecutorTest.KillTaskGroupOnTaskFailure I1208 03:26:47.461477 28632 cluster.cpp:160] Creating default 'local' authorizer I1208 03:26:47.462673 28632 replica.cpp:776] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I1208 03:26:47.463248 28650 recover.cpp:451] Starting replica recovery I1208 03:26:47.463537 28650 recover.cpp:477] Replica is in EMPTY status I1208 03:26:47.476333 28651 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from __req_res__(64)@10.0.2.15:46643 I1208 03:26:47.476618 28650 recover.cpp:197] Received a recover response from a replica in EMPTY status I1208 03:26:47.477242 28649 recover.cpp:568] Updating replica status to STARTING I1208 03:26:47.477496 28649 replica.cpp:320] Persisted replica status to STARTING I1208 03:26:47.477607 28649 recover.cpp:477] Replica is in STARTING status I1208 03:26:47.478910 28653 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from __req_res__(65)@10.0.2.15:46643 I1208 03:26:47.479385 28651 recover.cpp:197] Received a recover response from a replica in STARTING status I1208 03:26:47.479717 28647 recover.cpp:568] Updating replica status to VOTING I1208 03:26:47.479996 28648 replica.cpp:320] Persisted replica status to VOTING I1208 03:26:47.480077 28648 recover.cpp:582] Successfully joined the Paxos group I1208 03:26:47.763380 28651 master.cpp:380] Master 0bcb0250-4cf5-4209-92fe-ce260518b50f (archlinux.vagrant.vm) started on 10.0.2.15:46643 I1208 03:26:47.763463 28651 master.cpp:382] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/7lpy50/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/7lpy50/master"""" --zk_session_timeout=""""10secs"""" I1208 03:26:47.764010 28651 master.cpp:432] Master only allowing authenticated frameworks to register I1208 03:26:47.764070 
28651 master.cpp:446] Master only allowing authenticated agents to register I1208 03:26:47.764076 28651 master.cpp:459] Master only allowing authenticated HTTP frameworks to register I1208 03:26:47.764081 28651 credentials.hpp:37] Loading credentials for authentication from '/tmp/7lpy50/credentials' I1208 03:26:47.764482 28651 master.cpp:504] Using default 'crammd5' authenticator I1208 03:26:47.764659 28651 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-master-readonly' I1208 03:26:47.764981 28651 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' I1208 03:26:47.765136 28651 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' I1208 03:26:47.765231 28651 master.cpp:584] Authorization enabled I1208 03:26:47.768061 28651 master.cpp:2043] Elected as the leading master! I1208 03:26:47.768097 28651 master.cpp:1566] Recovering from registrar I1208 03:26:47.768766 28648 log.cpp:553] Attempting to start the writer I1208 03:26:47.769899 28653 replica.cpp:493] Replica received implicit promise request from __req_res__(66)@10.0.2.15:46643 with proposal 1 I1208 03:26:47.769984 28653 replica.cpp:342] Persisted promised to 1 I1208 03:26:47.770534 28652 coordinator.cpp:238] Coordinator attempting to fill missing positions I1208 03:26:47.771479 28652 replica.cpp:388] Replica received explicit promise request from __req_res__(67)@10.0.2.15:46643 for position 0 with proposal 2 I1208 03:26:47.772897 28650 replica.cpp:537] Replica received write request for position 0 from __req_res__(68)@10.0.2.15:46643 I1208 03:26:47.773437 28650 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 I1208 03:26:47.774327 28650 log.cpp:569] Writer started with ending position 0 I1208 03:26:47.776505 28647 registrar.cpp:362] Successfully fetched the registry (0B) in 8.211712ms I1208 03:26:47.776597 28647 registrar.cpp:461] Applied 1 operations in 11511ns; attempting to update the registry I1208 03:26:47.777253 28653 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 I1208 03:26:47.778172 28648 replica.cpp:537] Replica received write request for position 1 from __req_res__(69)@10.0.2.15:46643 I1208 03:26:47.778695 28646 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 I1208 03:26:47.779631 28652 registrar.cpp:506] Successfully updated the registry in 2.979072ms I1208 03:26:47.779736 28652 registrar.cpp:392] Successfully recovered registrar I1208 03:26:47.779911 28652 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 I1208 03:26:47.780030 28653 master.cpp:1682] Recovered 0 agents from the registry (145B); allowing 10mins for agents to re-register I1208 03:26:47.788097 28648 replica.cpp:537] Replica received write request for position 2 from __req_res__(70)@10.0.2.15:46643 I1208 03:26:47.788931 28651 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 I1208 03:26:47.844846 28632 containerizer.cpp:207] Using isolation: posix/cpu,posix/mem,filesystem/posix,network/cni W1208 03:26:47.845237 28632 backend.cpp:76] Failed to create 'bind' backend: BindBackend requires root privileges I1208 03:26:47.846787 28632 cluster.cpp:446] Creating default 'local' authorizer I1208 03:26:47.848178 28647 slave.cpp:208] Mesos agent started on (8)@10.0.2.15:46643 I1208 03:26:47.848201 28647 slave.cpp:209] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" 
--appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""false"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/DefaultExecutorTest_KillTaskGroupOnTaskFailure_OQh5HL/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/DefaultExecutorTest_KillTaskGroupOnTaskFailure_OQh5HL/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --http_credentials=""""/tmp/DefaultExecutorTest_KillTaskGroupOnTaskFailure_OQh5HL/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --io_switchboard_enable_server=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/home/vagrant/build-mesos/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/tmp/DefaultExecutorTest_KillTaskGroupOnTaskFailure_OQh5HL"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/DefaultExecutorTest_KillTaskGroupOnTaskFailure_M0FOoo"""" I1208 03:26:47.848474 28647 credentials.hpp:86] Loading credential for authentication from '/tmp/DefaultExecutorTest_KillTaskGroupOnTaskFailure_OQh5HL/credential' I1208 03:26:47.848573 28647 slave.cpp:346] Agent using credential for: test-principal I1208 03:26:47.848587 28647 credentials.hpp:37] Loading credentials for authentication from '/tmp/DefaultExecutorTest_KillTaskGroupOnTaskFailure_OQh5HL/http_credentials' I1208 03:26:47.848707 28647 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' I1208 03:26:47.848764 28632 scheduler.cpp:182] Version: 1.2.0 I1208 03:26:47.921869 28647 slave.cpp:533] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1208 03:26:47.921993 28647 slave.cpp:541] Agent attributes: [ ] I1208 03:26:47.922003 28647 
slave.cpp:546] Agent hostname: archlinux.vagrant.vm I1208 03:26:47.923415 28649 state.cpp:57] Recovering state from '/tmp/DefaultExecutorTest_KillTaskGroupOnTaskFailure_M0FOoo/meta' I1208 03:26:47.923770 28651 status_update_manager.cpp:203] Recovering status update manager I1208 03:26:47.924353 28650 containerizer.cpp:581] Recovering containerizer I1208 03:26:47.925879 28651 provisioner.cpp:253] Provisioner recovery complete I1208 03:26:47.926267 28649 slave.cpp:5414] Finished recovery I1208 03:26:47.926981 28646 slave.cpp:918] New master detected at master@10.0.2.15:46643 I1208 03:26:47.927004 28648 status_update_manager.cpp:177] Pausing sending status updates I1208 03:26:47.927008 28646 slave.cpp:977] Authenticating with master master@10.0.2.15:46643 I1208 03:26:47.927127 28646 slave.cpp:988] Using default CRAM-MD5 authenticatee I1208 03:26:47.927259 28646 slave.cpp:950] Detecting new master I1208 03:26:47.927393 28651 authenticatee.cpp:121] Creating new client SASL connection I1208 03:26:47.927543 28651 master.cpp:6793] Authenticating slave(8)@10.0.2.15:46643 I1208 03:26:47.927907 28649 authenticator.cpp:98] Creating new server SASL connection I1208 03:26:47.928110 28648 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1208 03:26:47.928138 28648 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1208 03:26:47.928225 28648 authenticator.cpp:204] Received SASL authentication start I1208 03:26:47.928256 28648 authenticator.cpp:326] Authentication requires more steps I1208 03:26:47.928315 28648 authenticatee.cpp:259] Received SASL authentication step I1208 03:26:47.928390 28648 authenticator.cpp:232] Received SASL authentication step I1208 03:26:47.928442 28648 authenticator.cpp:318] Authentication success I1208 03:26:47.928549 28651 authenticatee.cpp:299] Authentication success I1208 03:26:47.928560 28648 master.cpp:6823] Successfully authenticated principal 'test-principal' at slave(8)@10.0.2.15:46643 I1208 03:26:47.928917 28648 slave.cpp:1072] Successfully authenticated with master master@10.0.2.15:46643 I1208 03:26:47.929293 28648 master.cpp:5202] Registering agent at slave(8)@10.0.2.15:46643 (archlinux.vagrant.vm) with id 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 I1208 03:26:47.929668 28652 registrar.cpp:461] Applied 1 operations in 37939ns; attempting to update the registry I1208 03:26:47.930258 28647 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 I1208 03:26:47.931262 28650 replica.cpp:537] Replica received write request for position 3 from __req_res__(71)@10.0.2.15:46643 I1208 03:26:47.931882 28650 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 I1208 03:26:47.932858 28650 registrar.cpp:506] Successfully updated the registry in 3.147008ms I1208 03:26:47.933434 28650 master.cpp:5273] Registered agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 at slave(8)@10.0.2.15:46643 (archlinux.vagrant.vm) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1208 03:26:47.933506 28650 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 I1208 03:26:47.933869 28650 hierarchical.cpp:485] Added agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 (archlinux.vagrant.vm) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: {}) I1208 03:26:47.934068 28650 slave.cpp:1118] Registered with master master@10.0.2.15:46643; given agent ID 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 I1208 03:26:47.934249 28650 slave.cpp:1178] Forwarding total 
oversubscribed resources {} I1208 03:26:47.934497 28650 status_update_manager.cpp:184] Resuming sending status updates I1208 03:26:47.934567 28650 master.cpp:5672] Received update of agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 at slave(8)@10.0.2.15:46643 (archlinux.vagrant.vm) with total oversubscribed resources {} I1208 03:26:47.934734 28650 hierarchical.cpp:555] Agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 (archlinux.vagrant.vm) updated with oversubscribed resources {} (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: {}) I1208 03:26:47.935107 28650 replica.cpp:537] Replica received write request for position 4 from __req_res__(72)@10.0.2.15:46643 I1208 03:26:47.935642 28650 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 I1208 03:26:50.427475 28648 scheduler.cpp:475] New master detected at master@10.0.2.15:46643 I1208 03:26:50.435571 28648 http.cpp:391] HTTP POST for /master/api/v1/scheduler from 10.0.2.15:50358 I1208 03:26:50.435753 28648 master.cpp:2340] Received subscription request for HTTP framework 'default' I1208 03:26:50.435832 28648 master.cpp:2079] Authorizing framework principal 'test-principal' to receive offers for role '*' I1208 03:26:50.436213 28647 master.cpp:2454] Subscribing framework 'default' with checkpointing disabled and capabilities [ ] I1208 03:26:50.436691 28651 hierarchical.cpp:275] Added framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:50.437602 28647 master.cpp:6622] Sending 1 offers to framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 (default) I1208 03:26:50.442335 28653 http.cpp:391] HTTP POST for /master/api/v1/scheduler from 10.0.2.15:50356 I1208 03:26:50.442852 28653 master.cpp:3629] Processing ACCEPT call for offers: [ 0bcb0250-4cf5-4209-92fe-ce260518b50f-O0 ] on agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 at slave(8)@10.0.2.15:46643 (archlinux.vagrant.vm) for framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 (default) I1208 03:26:50.442912 28653 master.cpp:3216] Authorizing framework principal 'test-principal' to launch task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 I1208 03:26:50.443050 28653 master.cpp:3216] Authorizing framework principal 'test-principal' to launch task bf21fae2-513e-4ea1-b85c-dfd2546e4249 I1208 03:26:50.445271 28653 master.cpp:8424] Adding task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 with resources cpus(*):0.1; mem(*):32; disk(*):32 on agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 (archlinux.vagrant.vm) I1208 03:26:50.445487 28653 master.cpp:8424] Adding task bf21fae2-513e-4ea1-b85c-dfd2546e4249 with resources cpus(*):0.1; mem(*):32; disk(*):32 on agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 (archlinux.vagrant.vm) I1208 03:26:50.445565 28653 master.cpp:4486] Launching task group { 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5, bf21fae2-513e-4ea1-b85c-dfd2546e4249 } of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 (default) with resources cpus(*):0.2; mem(*):64; disk(*):64 on agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 at slave(8)@10.0.2.15:46643 (archlinux.vagrant.vm) I1208 03:26:50.446413 28653 slave.cpp:1550] Got assigned task group containing tasks [ 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5, bf21fae2-513e-4ea1-b85c-dfd2546e4249 ] for framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:50.446836 28653 slave.cpp:1712] Launching task group containing tasks [ 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5, bf21fae2-513e-4ea1-b85c-dfd2546e4249 ] for framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:50.446993 28653 paths.cpp:530] Trying to 
chown '/tmp/DefaultExecutorTest_KillTaskGroupOnTaskFailure_M0FOoo/slaves/0bcb0250-4cf5-4209-92fe-ce260518b50f-S0/frameworks/0bcb0250-4cf5-4209-92fe-ce260518b50f-0000/executors/default/runs/725fe374-0d1e-4d9f-b1b0-e5ffb16b1101' to user 'vagrant' I1208 03:26:50.451642 28653 slave.cpp:6341] Launching executor 'default' of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 with resources cpus(*):0.1; mem(*):32; disk(*):32 in work directory '/tmp/DefaultExecutorTest_KillTaskGroupOnTaskFailure_M0FOoo/slaves/0bcb0250-4cf5-4209-92fe-ce260518b50f-S0/frameworks/0bcb0250-4cf5-4209-92fe-ce260518b50f-0000/executors/default/runs/725fe374-0d1e-4d9f-b1b0-e5ffb16b1101' I1208 03:26:50.452117 28652 containerizer.cpp:973] Starting container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101 for executor 'default' of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:50.452283 28653 slave.cpp:2034] Queued task group containing tasks [ 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5, bf21fae2-513e-4ea1-b85c-dfd2546e4249 ] for executor 'default' of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:50.456531 28647 launcher.cpp:133] Forked child with pid '29159' for container '725fe374-0d1e-4d9f-b1b0-e5ffb16b1101' I1208 03:26:53.105209 29173 executor.cpp:189] Version: 1.2.0 I1208 03:26:53.112563 28646 http.cpp:288] HTTP POST for /slave(8)/api/v1/executor from 10.0.2.15:50360 I1208 03:26:53.112725 28646 slave.cpp:3089] Received Subscribe request for HTTP executor 'default' of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:53.114614 28651 slave.cpp:2279] Sending queued task group task group containing tasks [ 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5, bf21fae2-513e-4ea1-b85c-dfd2546e4249 ] to executor 'default' of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 (via HTTP) I1208 03:26:53.117642 29191 default_executor.cpp:131] Received SUBSCRIBED event I1208 03:26:53.122820 29191 default_executor.cpp:135] Subscribed executor on archlinux.vagrant.vm I1208 03:26:53.123080 29191 default_executor.cpp:131] Received LAUNCH_GROUP event I1208 03:26:53.126833 28653 http.cpp:288] HTTP POST for /slave(8)/api/v1 from 10.0.2.15:50364 I1208 03:26:53.127091 28653 http.cpp:288] HTTP POST for /slave(8)/api/v1 from 10.0.2.15:50364 I1208 03:26:53.127477 28653 http.cpp:449] Processing call LAUNCH_NESTED_CONTAINER I1208 03:26:53.127671 28653 http.cpp:449] Processing call LAUNCH_NESTED_CONTAINER I1208 03:26:53.128360 28653 containerizer.cpp:1776] Starting nested container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.a532fb7f-fe4c-4588-b1c1-c45dee7fd9c8 I1208 03:26:53.128434 28653 containerizer.cpp:1800] Trying to chown '/tmp/DefaultExecutorTest_KillTaskGroupOnTaskFailure_M0FOoo/slaves/0bcb0250-4cf5-4209-92fe-ce260518b50f-S0/frameworks/0bcb0250-4cf5-4209-92fe-ce260518b50f-0000/executors/default/runs/725fe374-0d1e-4d9f-b1b0-e5ffb16b1101/containers/a532fb7f-fe4c-4588-b1c1-c45dee7fd9c8' to user 'vagrant' I1208 03:26:53.134392 28653 containerizer.cpp:1776] Starting nested container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.855fae95-810b-4e9d-8397-7138bdda91b7 I1208 03:26:53.134567 28653 containerizer.cpp:1800] Trying to chown '/tmp/DefaultExecutorTest_KillTaskGroupOnTaskFailure_M0FOoo/slaves/0bcb0250-4cf5-4209-92fe-ce260518b50f-S0/frameworks/0bcb0250-4cf5-4209-92fe-ce260518b50f-0000/executors/default/runs/725fe374-0d1e-4d9f-b1b0-e5ffb16b1101/containers/855fae95-810b-4e9d-8397-7138bdda91b7' to user 'vagrant' I1208 03:26:53.142004 28653 launcher.cpp:133] Forked child with pid '29198' for container 
'725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.a532fb7f-fe4c-4588-b1c1-c45dee7fd9c8' I1208 03:26:53.144634 28653 launcher.cpp:133] Forked child with pid '29199' for container '725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.855fae95-810b-4e9d-8397-7138bdda91b7' I1208 03:26:53.152432 29187 default_executor.cpp:452] Successfully launched child containers [ 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.a532fb7f-fe4c-4588-b1c1-c45dee7fd9c8, 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.855fae95-810b-4e9d-8397-7138bdda91b7 ] for tasks [ 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5, bf21fae2-513e-4ea1-b85c-dfd2546e4249 ] I1208 03:26:53.154485 29189 default_executor.cpp:528] Waiting for child container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.a532fb7f-fe4c-4588-b1c1-c45dee7fd9c8 of task '3a8a0c1c-c386-409d-a21c-653dc2d3d7d5' I1208 03:26:53.154712 29189 default_executor.cpp:528] Waiting for child container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.855fae95-810b-4e9d-8397-7138bdda91b7 of task 'bf21fae2-513e-4ea1-b85c-dfd2546e4249' I1208 03:26:53.155655 28647 http.cpp:288] HTTP POST for /slave(8)/api/v1/executor from 10.0.2.15:50362 I1208 03:26:53.155807 28647 slave.cpp:3743] Handling status update TASK_RUNNING (UUID: dcdd2cb5-fdea-4556-94e9-ff6246132315) for task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:53.156080 28647 http.cpp:288] HTTP POST for /slave(8)/api/v1/executor from 10.0.2.15:50362 I1208 03:26:53.156163 28647 slave.cpp:3743] Handling status update TASK_RUNNING (UUID: bfb80b10-da9b-44d2-977a-61b88531e809) for task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:53.157243 28647 http.cpp:288] HTTP POST for /slave(8)/api/v1 from 10.0.2.15:50368 I1208 03:26:53.157467 28647 http.cpp:288] HTTP POST for /slave(8)/api/v1 from 10.0.2.15:50366 I1208 03:26:53.157699 28647 http.cpp:449] Processing call WAIT_NESTED_CONTAINER I1208 03:26:53.157966 28647 http.cpp:449] Processing call WAIT_NESTED_CONTAINER I1208 03:26:53.158440 28652 status_update_manager.cpp:323] Received status update TASK_RUNNING (UUID: bfb80b10-da9b-44d2-977a-61b88531e809) for task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:53.159534 28647 slave.cpp:4184] Forwarding the update TASK_RUNNING (UUID: bfb80b10-da9b-44d2-977a-61b88531e809) for task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 to master@10.0.2.15:46643 I1208 03:26:53.159840 28651 master.cpp:5808] Status update TASK_RUNNING (UUID: bfb80b10-da9b-44d2-977a-61b88531e809) for task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 from agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 at slave(8)@10.0.2.15:46643 (archlinux.vagrant.vm) I1208 03:26:53.159881 28651 master.cpp:5870] Forwarding status update TASK_RUNNING (UUID: bfb80b10-da9b-44d2-977a-61b88531e809) for task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:53.159986 28647 status_update_manager.cpp:323] Received status update TASK_RUNNING (UUID: dcdd2cb5-fdea-4556-94e9-ff6246132315) for task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:53.160099 28651 master.cpp:7790] Updating the state of task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I1208 03:26:53.160365 28647 slave.cpp:4184] Forwarding 
the update TASK_RUNNING (UUID: dcdd2cb5-fdea-4556-94e9-ff6246132315) for task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 to master@10.0.2.15:46643 I1208 03:26:53.160670 28647 master.cpp:5808] Status update TASK_RUNNING (UUID: dcdd2cb5-fdea-4556-94e9-ff6246132315) for task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 from agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 at slave(8)@10.0.2.15:46643 (archlinux.vagrant.vm) I1208 03:26:53.160711 28647 master.cpp:5870] Forwarding status update TASK_RUNNING (UUID: dcdd2cb5-fdea-4556-94e9-ff6246132315) for task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:53.160899 28647 master.cpp:7790] Updating the state of task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I1208 03:26:53.162302 29190 default_executor.cpp:131] Received ACKNOWLEDGED event I1208 03:26:53.162602 29188 default_executor.cpp:131] Received ACKNOWLEDGED event I1208 03:26:53.213343 28653 http.cpp:391] HTTP POST for /master/api/v1/scheduler from 10.0.2.15:50356 I1208 03:26:53.213508 28653 master.cpp:4918] Processing ACKNOWLEDGE call bfb80b10-da9b-44d2-977a-61b88531e809 for task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 (default) on agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 I1208 03:26:53.213838 28648 status_update_manager.cpp:395] Received status update acknowledgement (UUID: bfb80b10-da9b-44d2-977a-61b88531e809) for task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 W1208 03:26:53.213982 28648 status_update_manager.cpp:769] Unexpected status update acknowledgement (received bfb80b10-da9b-44d2-977a-61b88531e809, expecting dcdd2cb5-fdea-4556-94e9-ff6246132315) for update TASK_RUNNING (UUID: dcdd2cb5-fdea-4556-94e9-ff6246132315) for task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:53.214042 28647 http.cpp:391] HTTP POST for /master/api/v1/scheduler from 10.0.2.15:50356 E1208 03:26:53.214143 28648 slave.cpp:3018] Failed to handle status update acknowledgement (UUID: bfb80b10-da9b-44d2-977a-61b88531e809) for task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000: Duplicate acknowledgement I1208 03:26:53.214166 28647 master.cpp:4918] Processing ACKNOWLEDGE call dcdd2cb5-fdea-4556-94e9-ff6246132315 for task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 (default) on agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 I1208 03:26:53.214479 28653 status_update_manager.cpp:395] Received status update acknowledgement (UUID: dcdd2cb5-fdea-4556-94e9-ff6246132315) for task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 W1208 03:26:53.214584 28653 status_update_manager.cpp:769] Unexpected status update acknowledgement (received dcdd2cb5-fdea-4556-94e9-ff6246132315, expecting bfb80b10-da9b-44d2-977a-61b88531e809) for update TASK_RUNNING (UUID: bfb80b10-da9b-44d2-977a-61b88531e809) for task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 E1208 03:26:53.214701 28653 slave.cpp:3018] Failed to handle status update acknowledgement (UUID: dcdd2cb5-fdea-4556-94e9-ff6246132315) for task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of 
framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000: Duplicate acknowledgement I1208 03:26:53.220249 28649 containerizer.cpp:2450] Container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.a532fb7f-fe4c-4588-b1c1-c45dee7fd9c8 has exited I1208 03:26:53.220296 28649 containerizer.cpp:2087] Destroying container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.a532fb7f-fe4c-4588-b1c1-c45dee7fd9c8 in RUNNING state I1208 03:26:53.220535 28649 launcher.cpp:149] Asked to destroy container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.a532fb7f-fe4c-4588-b1c1-c45dee7fd9c8 I1208 03:26:53.225530 28650 containerizer.cpp:2366] Checkpointing termination state to nested container's runtime directory '/tmp/DefaultExecutorTest_KillTaskGroupOnTaskFailure_OQh5HL/containers/725fe374-0d1e-4d9f-b1b0-e5ffb16b1101/containers/a532fb7f-fe4c-4588-b1c1-c45dee7fd9c8/termination' I1208 03:26:53.228024 29188 default_executor.cpp:656] Successfully waited for child container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.a532fb7f-fe4c-4588-b1c1-c45dee7fd9c8 of task '3a8a0c1c-c386-409d-a21c-653dc2d3d7d5' in state TASK_FAILED E1208 03:26:53.228068 29188 default_executor.cpp:667] Child container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.a532fb7f-fe4c-4588-b1c1-c45dee7fd9c8 terminated with status exited with status 1 I1208 03:26:53.228073 29188 default_executor.cpp:687] Shutting down I1208 03:26:53.228792 29192 default_executor.cpp:781] Killing child container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.855fae95-810b-4e9d-8397-7138bdda91b7 I1208 03:26:53.230953 28653 http.cpp:288] HTTP POST for /slave(8)/api/v1 from 10.0.2.15:50370 I1208 03:26:53.231276 28653 http.cpp:449] Processing call KILL_NESTED_CONTAINER I1208 03:26:53.231853 28652 containerizer.cpp:2087] Destroying container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.855fae95-810b-4e9d-8397-7138bdda91b7 in RUNNING state I1208 03:26:53.232113 28652 launcher.cpp:149] Asked to destroy container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.855fae95-810b-4e9d-8397-7138bdda91b7 I1208 03:26:53.273080 28648 http.cpp:288] HTTP POST for /slave(8)/api/v1/executor from 10.0.2.15:50362 I1208 03:26:53.273331 28648 slave.cpp:3743] Handling status update TASK_FAILED (UUID: ff8338ce-58e5-4508-a2c1-0eb7580aa8f8) for task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:53.274930 28650 status_update_manager.cpp:323] Received status update TASK_FAILED (UUID: ff8338ce-58e5-4508-a2c1-0eb7580aa8f8) for task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:53.276623 29187 default_executor.cpp:131] Received ACKNOWLEDGED event I1208 03:26:53.321367 28649 containerizer.cpp:2450] Container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.855fae95-810b-4e9d-8397-7138bdda91b7 has exited I1208 03:26:53.322789 28649 containerizer.cpp:2366] Checkpointing termination state to nested container's runtime directory '/tmp/DefaultExecutorTest_KillTaskGroupOnTaskFailure_OQh5HL/containers/725fe374-0d1e-4d9f-b1b0-e5ffb16b1101/containers/855fae95-810b-4e9d-8397-7138bdda91b7/termination' I1208 03:26:53.325847 29194 default_executor.cpp:656] Successfully waited for child container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101.855fae95-810b-4e9d-8397-7138bdda91b7 of task 'bf21fae2-513e-4ea1-b85c-dfd2546e4249' in state TASK_KILLED I1208 03:26:53.325887 29194 default_executor.cpp:767] Terminating after 1secs I1208 03:26:53.369621 28648 http.cpp:288] HTTP POST for /slave(8)/api/v1/executor from 10.0.2.15:50362 I1208 03:26:53.369870 28648 slave.cpp:3743] Handling status update TASK_KILLED (UUID: 
07d27a3a-c58d-4c2e-8a8f-ee2e4900fb91) for task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:53.371409 28648 status_update_manager.cpp:323] Received status update TASK_KILLED (UUID: 07d27a3a-c58d-4c2e-8a8f-ee2e4900fb91) for task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:26:54.335306 28650 containerizer.cpp:2450] Container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101 has exited I1208 03:26:54.335353 28650 containerizer.cpp:2087] Destroying container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101 in RUNNING state I1208 03:26:54.335593 28650 launcher.cpp:149] Asked to destroy container 725fe374-0d1e-4d9f-b1b0-e5ffb16b1101 I1208 03:26:54.341533 28652 slave.cpp:4675] Executor 'default' of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 exited with status 0 I1208 03:26:54.341866 28651 master.cpp:5932] Executor 'default' of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 on agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 at slave(8)@10.0.2.15:46643 (archlinux.vagrant.vm): exited with status 0 I1208 03:26:54.341914 28651 master.cpp:7915] Removing executor 'default' with resources cpus(*):0.1; mem(*):32; disk(*):32 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 on agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 at slave(8)@10.0.2.15:46643 (archlinux.vagrant.vm) GMOCK WARNING: Uninteresting mock function call - returning directly. Function call: failure(0x7ffec532a3f0, @0x7fe234016590 48-byte object <48-2F 78-64 E2-7F 00-00 00-00 00-00 00-00 00-00 07-00 00-00 00-00 00-00 C0-E5 00-34 E2-7F 00-00 90-D9 0A-34 E2-7F 00-00 00-00 00-00 00-00 00-00>) Stack trace: I1208 03:26:54.777720 28651 master.cpp:6622] Sending 1 offers to framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 (default) W1208 03:27:03.160513 28646 status_update_manager.cpp:478] Resending status update TASK_RUNNING (UUID: dcdd2cb5-fdea-4556-94e9-ff6246132315) for task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 W1208 03:27:03.160785 28646 status_update_manager.cpp:478] Resending status update TASK_RUNNING (UUID: bfb80b10-da9b-44d2-977a-61b88531e809) for task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:27:03.160995 28646 slave.cpp:4184] Forwarding the update TASK_RUNNING (UUID: dcdd2cb5-fdea-4556-94e9-ff6246132315) for task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 to master@10.0.2.15:46643 I1208 03:27:03.161177 28646 slave.cpp:4184] Forwarding the update TASK_RUNNING (UUID: bfb80b10-da9b-44d2-977a-61b88531e809) for task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 to master@10.0.2.15:46643 I1208 03:27:03.161424 28646 master.cpp:5808] Status update TASK_RUNNING (UUID: dcdd2cb5-fdea-4556-94e9-ff6246132315) for task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 from agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 at slave(8)@10.0.2.15:46643 (archlinux.vagrant.vm) I1208 03:27:03.161469 28646 master.cpp:5870] Forwarding status update TASK_RUNNING (UUID: dcdd2cb5-fdea-4556-94e9-ff6246132315) for task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:27:03.161887 28646 master.cpp:7790] Updating the state of task 3a8a0c1c-c386-409d-a21c-653dc2d3d7d5 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 (latest state: TASK_FAILED, status update 
state: TASK_RUNNING) I1208 03:27:03.162178 28646 master.cpp:5808] Status update TASK_RUNNING (UUID: bfb80b10-da9b-44d2-977a-61b88531e809) for task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 from agent 0bcb0250-4cf5-4209-92fe-ce260518b50f-S0 at slave(8)@10.0.2.15:46643 (archlinux.vagrant.vm) I1208 03:27:03.162214 28646 master.cpp:5870] Forwarding status update TASK_RUNNING (UUID: bfb80b10-da9b-44d2-977a-61b88531e809) for task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 I1208 03:27:03.162407 28646 master.cpp:7790] Updating the state of task bf21fae2-513e-4ea1-b85c-dfd2546e4249 of framework 0bcb0250-4cf5-4209-92fe-ce260518b50f-0000 (latest state: TASK_KILLED, status update state: TASK_RUNNING) ../../mesos/src/tests/default_executor_tests.cpp:610: Failure Value of: taskStates Actual: { (3a8a0c1c-c386-409d-a21c-653dc2d3d7d5, TASK_FAILED), (bf21fae2-513e-4ea1-b85c-dfd2546e4249, TASK_KILLED) } Expected: expectedTaskStates Which is: { (3a8a0c1c-c386-409d-a21c-653dc2d3d7d5, TASK_RUNNING), (bf21fae2-513e-4ea1-b85c-dfd2546e4249, TASK_RUNNING) } *** Aborted at 1481128023 (unix time) try """"date -d @1481128023"""" if you are using GNU date *** PC: @ 0x1bb3ed4 testing::UnitTest::AddTestPartResult() *** SIGSEGV (@0x0) received by PID 28632 (TID 0x7fe264b7ec40) from PID 0; stack trace: *** @ 0x7fe25df89080 (unknown) @ 0x1bb3ed4 testing::UnitTest::AddTestPartResult() @ 0x1ba86d1 testing::internal::AssertHelper::operator=() @ 0xe3889c mesos::internal::tests::DefaultExecutorTest_KillTaskGroupOnTaskFailure_Test::TestBody() @ 0x1bd1df0 testing::internal::HandleSehExceptionsInMethodIfSupported<>() @ 0x1bcce74 testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x1badb08 testing::Test::Run() @ 0x1bae2c0 testing::TestInfo::Run() @ 0x1bae8fd testing::TestCase::Run() @ 0x1bb53f1 testing::internal::UnitTestImpl::RunAllTests() @ 0x1bd2ab7 testing::internal::HandleSehExceptionsInMethodIfSupported<>() @ 0x1bcd9b4 testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x1bb40f5 testing::UnitTest::Run() @ 0x118c09e RUN_ALL_TESTS() @ 0x118bc54 main @ 0x7fe25bf16291 __libc_start_main @ 0xa842fa _start @ 0x0 (unknown) Install 'notify-send' and try again ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6745","12/07/2016 16:35:46",2,"MesosContainerizer/DefaultExecutorTest.KillTask/0 is flaky ""This repros consistently for me (< 20 test iterations), using {{master}} as of {{ab79d58c9df0ffb8ad35f6662541e7a5c3ea4a80}}. 
Test log: """," [----------] 1 test from MesosContainerizer/DefaultExecutorTest [ RUN ] MesosContainerizer/DefaultExecutorTest.KillTask/0 I1208 03:32:34.943745 29285 cluster.cpp:160] Creating default 'local' authorizer I1208 03:32:34.944695 29285 replica.cpp:776] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned I1208 03:32:34.945287 29306 recover.cpp:451] Starting replica recovery I1208 03:32:34.945431 29306 recover.cpp:477] Replica is in EMPTY status I1208 03:32:34.946542 29300 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from __req_res__(127)@10.0.2.15:36807 I1208 03:32:34.946768 29301 recover.cpp:197] Received a recover response from a replica in EMPTY status I1208 03:32:34.947377 29299 recover.cpp:568] Updating replica status to STARTING I1208 03:32:34.947746 29306 replica.cpp:320] Persisted replica status to STARTING I1208 03:32:34.947887 29306 recover.cpp:477] Replica is in STARTING status I1208 03:32:34.948559 29306 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from __req_res__(128)@10.0.2.15:36807 I1208 03:32:34.948771 29299 recover.cpp:197] Received a recover response from a replica in STARTING status I1208 03:32:34.949097 29302 recover.cpp:568] Updating replica status to VOTING I1208 03:32:34.949385 29306 replica.cpp:320] Persisted replica status to VOTING I1208 03:32:34.949467 29306 recover.cpp:582] Successfully joined the Paxos group I1208 03:32:34.971436 29301 master.cpp:380] Master 67de7bda-9b5b-4fe9-aede-390ec9ca7290 (archlinux.vagrant.vm) started on 10.0.2.15:36807 I1208 03:32:34.971519 29301 master.cpp:382] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/8oMk6W/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/8oMk6W/master"""" --zk_session_timeout=""""10secs"""" I1208 03:32:34.971824 29301 master.cpp:432] Master only allowing authenticated frameworks to register I1208 03:32:34.971832 29301 master.cpp:446] Master only allowing authenticated agents to register I1208 03:32:34.971837 29301 master.cpp:459] Master only allowing authenticated HTTP frameworks to register I1208 03:32:34.971842 29301 credentials.hpp:37] Loading credentials for authentication from '/tmp/8oMk6W/credentials' I1208 03:32:34.972051 29301 
master.cpp:504] Using default 'crammd5' authenticator I1208 03:32:34.972198 29301 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-master-readonly' I1208 03:32:34.972327 29301 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' I1208 03:32:34.972436 29301 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' I1208 03:32:34.972561 29301 master.cpp:584] Authorization enabled I1208 03:32:34.974555 29300 master.cpp:2043] Elected as the leading master! I1208 03:32:34.974586 29300 master.cpp:1566] Recovering from registrar I1208 03:32:34.975244 29306 log.cpp:553] Attempting to start the writer I1208 03:32:34.976706 29304 replica.cpp:493] Replica received implicit promise request from __req_res__(129)@10.0.2.15:36807 with proposal 1 I1208 03:32:34.976793 29304 replica.cpp:342] Persisted promised to 1 I1208 03:32:34.977449 29300 coordinator.cpp:238] Coordinator attempting to fill missing positions I1208 03:32:34.978907 29303 replica.cpp:388] Replica received explicit promise request from __req_res__(130)@10.0.2.15:36807 for position 0 with proposal 2 I1208 03:32:34.980016 29303 replica.cpp:537] Replica received write request for position 0 from __req_res__(131)@10.0.2.15:36807 I1208 03:32:34.980762 29304 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0 I1208 03:32:34.981369 29303 log.cpp:569] Writer started with ending position 0 I1208 03:32:34.982964 29300 registrar.cpp:362] Successfully fetched the registry (0B) in 8.218112ms I1208 03:32:34.983037 29300 registrar.cpp:461] Applied 1 operations in 10890ns; attempting to update the registry I1208 03:32:34.983845 29300 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1 I1208 03:32:34.984659 29300 replica.cpp:537] Replica received write request for position 1 from __req_res__(132)@10.0.2.15:36807 I1208 03:32:34.985409 29300 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0 I1208 03:32:34.986521 29300 registrar.cpp:506] Successfully updated the registry in 3.441152ms I1208 03:32:34.986608 29300 registrar.cpp:392] Successfully recovered registrar I1208 03:32:34.986986 29300 master.cpp:1682] Recovered 0 agents from the registry (145B); allowing 10mins for agents to re-register I1208 03:32:34.987040 29300 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2 I1208 03:32:34.987974 29306 replica.cpp:537] Replica received write request for position 2 from __req_res__(133)@10.0.2.15:36807 I1208 03:32:34.988716 29299 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0 I1208 03:32:35.014484 29285 containerizer.cpp:207] Using isolation: posix/cpu,posix/mem,filesystem/posix,network/cni W1208 03:32:35.014842 29285 backend.cpp:76] Failed to create 'bind' backend: BindBackend requires root privileges I1208 03:32:35.016302 29285 cluster.cpp:446] Creating default 'local' authorizer I1208 03:32:35.017352 29299 slave.cpp:208] Mesos agent started on (15)@10.0.2.15:36807 I1208 03:32:35.017374 29299 slave.cpp:209] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/mesos/store/appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""false"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" 
--cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/MesosContainerizer_DefaultExecutorTest_KillTask_0_MN1ibR/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/MesosContainerizer_DefaultExecutorTest_KillTask_0_MN1ibR/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --http_credentials=""""/tmp/MesosContainerizer_DefaultExecutorTest_KillTask_0_MN1ibR/http_credentials"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --io_switchboard_enable_server=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/home/vagrant/build-mesos/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/tmp/MesosContainerizer_DefaultExecutorTest_KillTask_0_MN1ibR"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/MesosContainerizer_DefaultExecutorTest_KillTask_0_HvRO8o"""" I1208 03:32:35.017683 29299 credentials.hpp:86] Loading credential for authentication from '/tmp/MesosContainerizer_DefaultExecutorTest_KillTask_0_MN1ibR/credential' I1208 03:32:35.017772 29285 scheduler.cpp:182] Version: 1.2.0 I1208 03:32:35.017784 29299 slave.cpp:346] Agent using credential for: test-principal I1208 03:32:35.017797 29299 credentials.hpp:37] Loading credentials for authentication from '/tmp/MesosContainerizer_DefaultExecutorTest_KillTask_0_MN1ibR/http_credentials' I1208 03:32:35.017921 29299 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' I1208 03:32:35.038817 29306 scheduler.cpp:475] New master detected at master@10.0.2.15:36807 I1208 03:32:35.039311 29299 slave.cpp:533] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1208 03:32:35.039384 29299 slave.cpp:541] Agent attributes: [ ] I1208 03:32:35.039393 29299 slave.cpp:546] Agent hostname: archlinux.vagrant.vm I1208 03:32:35.041291 29300 state.cpp:57] Recovering state from '/tmp/MesosContainerizer_DefaultExecutorTest_KillTask_0_HvRO8o/meta' I1208 03:32:35.041695 29305 
status_update_manager.cpp:203] Recovering status update manager I1208 03:32:35.042660 29305 containerizer.cpp:581] Recovering containerizer I1208 03:32:35.043954 29305 provisioner.cpp:253] Provisioner recovery complete I1208 03:32:35.044245 29305 slave.cpp:5414] Finished recovery I1208 03:32:35.044770 29305 slave.cpp:918] New master detected at master@10.0.2.15:36807 I1208 03:32:35.044790 29305 slave.cpp:977] Authenticating with master master@10.0.2.15:36807 I1208 03:32:35.044831 29305 slave.cpp:988] Using default CRAM-MD5 authenticatee I1208 03:32:35.044935 29305 slave.cpp:950] Detecting new master I1208 03:32:35.045037 29305 status_update_manager.cpp:177] Pausing sending status updates I1208 03:32:35.045119 29305 authenticatee.cpp:121] Creating new client SASL connection I1208 03:32:35.045295 29305 master.cpp:6793] Authenticating slave(15)@10.0.2.15:36807 I1208 03:32:35.045591 29305 authenticator.cpp:98] Creating new server SASL connection I1208 03:32:35.045696 29305 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1208 03:32:35.045711 29305 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1208 03:32:35.045755 29305 authenticator.cpp:204] Received SASL authentication start I1208 03:32:35.045783 29305 authenticator.cpp:326] Authentication requires more steps I1208 03:32:35.045820 29305 authenticatee.cpp:259] Received SASL authentication step I1208 03:32:35.045864 29305 authenticator.cpp:232] Received SASL authentication step I1208 03:32:35.045912 29305 authenticator.cpp:318] Authentication success I1208 03:32:35.046001 29305 authenticatee.cpp:299] Authentication success I1208 03:32:35.046047 29305 master.cpp:6823] Successfully authenticated principal 'test-principal' at slave(15)@10.0.2.15:36807 I1208 03:32:35.046273 29305 slave.cpp:1072] Successfully authenticated with master master@10.0.2.15:36807 I1208 03:32:35.046571 29305 master.cpp:5202] Registering agent at slave(15)@10.0.2.15:36807 (archlinux.vagrant.vm) with id 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 I1208 03:32:35.046859 29305 registrar.cpp:461] Applied 1 operations in 36199ns; attempting to update the registry I1208 03:32:35.047488 29305 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3 I1208 03:32:35.048190 29303 replica.cpp:537] Replica received write request for position 3 from __req_res__(134)@10.0.2.15:36807 I1208 03:32:35.048971 29305 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0 I1208 03:32:35.049500 29306 http.cpp:391] HTTP POST for /master/api/v1/scheduler from 10.0.2.15:54722 I1208 03:32:35.049639 29306 master.cpp:2340] Received subscription request for HTTP framework 'default' I1208 03:32:35.049691 29306 master.cpp:2079] Authorizing framework principal 'test-principal' to receive offers for role '*' I1208 03:32:35.050125 29304 master.cpp:2454] Subscribing framework 'default' with checkpointing disabled and capabilities [ ] I1208 03:32:35.050231 29306 registrar.cpp:506] Successfully updated the registry in 3.32416ms I1208 03:32:35.050590 29299 hierarchical.cpp:275] Added framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.050866 29299 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4 I1208 03:32:35.050879 29304 master.cpp:5273] Registered agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 at slave(15)@10.0.2.15:36807 (archlinux.vagrant.vm) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1208 03:32:35.051034 29306 slave.cpp:1118] Registered 
with master master@10.0.2.15:36807; given agent ID 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 I1208 03:32:35.051120 29304 hierarchical.cpp:485] Added agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 (archlinux.vagrant.vm) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: {}) I1208 03:32:35.051209 29299 status_update_manager.cpp:184] Resuming sending status updates I1208 03:32:35.051242 29306 slave.cpp:1178] Forwarding total oversubscribed resources {} I1208 03:32:35.051383 29299 master.cpp:5672] Received update of agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 at slave(15)@10.0.2.15:36807 (archlinux.vagrant.vm) with total oversubscribed resources {} I1208 03:32:35.051553 29299 replica.cpp:537] Replica received write request for position 4 from __req_res__(135)@10.0.2.15:36807 I1208 03:32:35.051617 29304 hierarchical.cpp:555] Agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 (archlinux.vagrant.vm) updated with oversubscribed resources {} (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I1208 03:32:35.051746 29306 master.cpp:6622] Sending 1 offers to framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 (default) I1208 03:32:35.051970 29299 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0 I1208 03:32:35.057456 29300 http.cpp:391] HTTP POST for /master/api/v1/scheduler from 10.0.2.15:54720 I1208 03:32:35.058238 29300 master.cpp:3629] Processing ACCEPT call for offers: [ 67de7bda-9b5b-4fe9-aede-390ec9ca7290-O0 ] on agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 at slave(15)@10.0.2.15:36807 (archlinux.vagrant.vm) for framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 (default) I1208 03:32:35.058313 29300 master.cpp:3216] Authorizing framework principal 'test-principal' to launch task 93d62044-e146-4b70-9648-221b72cfaad7 I1208 03:32:35.058464 29300 master.cpp:3216] Authorizing framework principal 'test-principal' to launch task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b I1208 03:32:35.060178 29302 master.cpp:8424] Adding task 93d62044-e146-4b70-9648-221b72cfaad7 with resources cpus(*):0.1; mem(*):32; disk(*):32 on agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 (archlinux.vagrant.vm) I1208 03:32:35.060353 29302 master.cpp:8424] Adding task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b with resources cpus(*):0.1; mem(*):32; disk(*):32 on agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 (archlinux.vagrant.vm) I1208 03:32:35.060420 29302 master.cpp:4486] Launching task group { 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b, 93d62044-e146-4b70-9648-221b72cfaad7 } of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 (default) with resources cpus(*):0.2; mem(*):64; disk(*):64 on agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 at slave(15)@10.0.2.15:36807 (archlinux.vagrant.vm) I1208 03:32:35.060748 29300 slave.cpp:1550] Got assigned task group containing tasks [ 93d62044-e146-4b70-9648-221b72cfaad7, 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b ] for framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.061226 29300 slave.cpp:1712] Launching task group containing tasks [ 93d62044-e146-4b70-9648-221b72cfaad7, 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b ] for framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.061377 29300 paths.cpp:530] Trying to chown '/tmp/MesosContainerizer_DefaultExecutorTest_KillTask_0_HvRO8o/slaves/67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0/frameworks/67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000/executors/default/runs/935514b3-95ae-450e-b766-084fb5e7734e' to user 
'vagrant' I1208 03:32:35.066329 29300 slave.cpp:6341] Launching executor 'default' of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 with resources cpus(*):0.1; mem(*):32; disk(*):32 in work directory '/tmp/MesosContainerizer_DefaultExecutorTest_KillTask_0_HvRO8o/slaves/67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0/frameworks/67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000/executors/default/runs/935514b3-95ae-450e-b766-084fb5e7734e' I1208 03:32:35.066738 29302 containerizer.cpp:973] Starting container 935514b3-95ae-450e-b766-084fb5e7734e for executor 'default' of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.066845 29300 slave.cpp:2034] Queued task group containing tasks [ 93d62044-e146-4b70-9648-221b72cfaad7, 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b ] for executor 'default' of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.070379 29302 launcher.cpp:133] Forked child with pid '30301' for container '935514b3-95ae-450e-b766-084fb5e7734e' I1208 03:32:35.251523 30315 executor.cpp:189] Version: 1.2.0 I1208 03:32:35.259582 29301 http.cpp:288] HTTP POST for /slave(15)/api/v1/executor from 10.0.2.15:54724 I1208 03:32:35.259738 29301 slave.cpp:3089] Received Subscribe request for HTTP executor 'default' of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.261139 29299 slave.cpp:2279] Sending queued task group task group containing tasks [ 93d62044-e146-4b70-9648-221b72cfaad7, 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b ] to executor 'default' of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 (via HTTP) I1208 03:32:35.264493 30330 default_executor.cpp:131] Received SUBSCRIBED event I1208 03:32:35.271474 30330 default_executor.cpp:135] Subscribed executor on archlinux.vagrant.vm I1208 03:32:35.271862 30330 default_executor.cpp:131] Received LAUNCH_GROUP event I1208 03:32:35.275249 29301 http.cpp:288] HTTP POST for /slave(15)/api/v1 from 10.0.2.15:54728 I1208 03:32:35.275681 29301 http.cpp:449] Processing call LAUNCH_NESTED_CONTAINER I1208 03:32:35.275827 29301 http.cpp:288] HTTP POST for /slave(15)/api/v1 from 10.0.2.15:54728 I1208 03:32:35.276204 29301 http.cpp:449] Processing call LAUNCH_NESTED_CONTAINER I1208 03:32:35.276213 29303 containerizer.cpp:1776] Starting nested container 935514b3-95ae-450e-b766-084fb5e7734e.1df56862-093d-4a93-85e0-f955c67c333c I1208 03:32:35.276324 29303 containerizer.cpp:1800] Trying to chown '/tmp/MesosContainerizer_DefaultExecutorTest_KillTask_0_HvRO8o/slaves/67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0/frameworks/67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000/executors/default/runs/935514b3-95ae-450e-b766-084fb5e7734e/containers/1df56862-093d-4a93-85e0-f955c67c333c' to user 'vagrant' I1208 03:32:35.282511 29303 containerizer.cpp:1776] Starting nested container 935514b3-95ae-450e-b766-084fb5e7734e.97ef5b4e-b4be-4777-b8d8-180333e5a3d3 I1208 03:32:35.282573 29303 containerizer.cpp:1800] Trying to chown '/tmp/MesosContainerizer_DefaultExecutorTest_KillTask_0_HvRO8o/slaves/67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0/frameworks/67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000/executors/default/runs/935514b3-95ae-450e-b766-084fb5e7734e/containers/97ef5b4e-b4be-4777-b8d8-180333e5a3d3' to user 'vagrant' I1208 03:32:35.288836 29303 launcher.cpp:133] Forked child with pid '30340' for container '935514b3-95ae-450e-b766-084fb5e7734e.1df56862-093d-4a93-85e0-f955c67c333c' I1208 03:32:35.291239 29303 launcher.cpp:133] Forked child with pid '30341' for container '935514b3-95ae-450e-b766-084fb5e7734e.97ef5b4e-b4be-4777-b8d8-180333e5a3d3' I1208 03:32:35.301012 
30331 default_executor.cpp:452] Successfully launched child containers [ 935514b3-95ae-450e-b766-084fb5e7734e.1df56862-093d-4a93-85e0-f955c67c333c, 935514b3-95ae-450e-b766-084fb5e7734e.97ef5b4e-b4be-4777-b8d8-180333e5a3d3 ] for tasks [ 93d62044-e146-4b70-9648-221b72cfaad7, 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b ] I1208 03:32:35.304010 30335 default_executor.cpp:528] Waiting for child container 935514b3-95ae-450e-b766-084fb5e7734e.1df56862-093d-4a93-85e0-f955c67c333c of task '93d62044-e146-4b70-9648-221b72cfaad7' I1208 03:32:35.304203 30335 default_executor.cpp:528] Waiting for child container 935514b3-95ae-450e-b766-084fb5e7734e.97ef5b4e-b4be-4777-b8d8-180333e5a3d3 of task '004a6d7e-e6be-4a1c-a23d-b5b83e69a19b' I1208 03:32:35.305105 29299 http.cpp:288] HTTP POST for /slave(15)/api/v1/executor from 10.0.2.15:54726 I1208 03:32:35.305248 29299 slave.cpp:3743] Handling status update TASK_RUNNING (UUID: b3bbc1e3-b15b-4227-94ff-209b0d7aa181) for task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.306941 29299 http.cpp:288] HTTP POST for /slave(15)/api/v1/executor from 10.0.2.15:54726 I1208 03:32:35.307052 29299 slave.cpp:3743] Handling status update TASK_RUNNING (UUID: aed3ed28-1943-44c3-a8b6-40be41ffc20b) for task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.307536 29301 http.cpp:288] HTTP POST for /slave(15)/api/v1 from 10.0.2.15:54730 I1208 03:32:35.307814 29301 http.cpp:288] HTTP POST for /slave(15)/api/v1 from 10.0.2.15:54732 I1208 03:32:35.308068 29301 http.cpp:449] Processing call WAIT_NESTED_CONTAINER I1208 03:32:35.308425 29301 http.cpp:449] Processing call WAIT_NESTED_CONTAINER I1208 03:32:35.308574 29305 status_update_manager.cpp:323] Received status update TASK_RUNNING (UUID: aed3ed28-1943-44c3-a8b6-40be41ffc20b) for task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.309396 29304 status_update_manager.cpp:323] Received status update TASK_RUNNING (UUID: b3bbc1e3-b15b-4227-94ff-209b0d7aa181) for task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.309468 29299 slave.cpp:4184] Forwarding the update TASK_RUNNING (UUID: aed3ed28-1943-44c3-a8b6-40be41ffc20b) for task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 to master@10.0.2.15:36807 I1208 03:32:35.309793 29299 master.cpp:5808] Status update TASK_RUNNING (UUID: aed3ed28-1943-44c3-a8b6-40be41ffc20b) for task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 from agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 at slave(15)@10.0.2.15:36807 (archlinux.vagrant.vm) I1208 03:32:35.309833 29299 master.cpp:5870] Forwarding status update TASK_RUNNING (UUID: aed3ed28-1943-44c3-a8b6-40be41ffc20b) for task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.309991 29304 slave.cpp:4184] Forwarding the update TASK_RUNNING (UUID: b3bbc1e3-b15b-4227-94ff-209b0d7aa181) for task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 to master@10.0.2.15:36807 I1208 03:32:35.310056 29299 master.cpp:7790] Updating the state of task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I1208 03:32:35.310166 29299 master.cpp:5808] Status update 
TASK_RUNNING (UUID: b3bbc1e3-b15b-4227-94ff-209b0d7aa181) for task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 from agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 at slave(15)@10.0.2.15:36807 (archlinux.vagrant.vm) I1208 03:32:35.310199 29299 master.cpp:5870] Forwarding status update TASK_RUNNING (UUID: b3bbc1e3-b15b-4227-94ff-209b0d7aa181) for task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.310392 29299 master.cpp:7790] Updating the state of task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I1208 03:32:35.312916 30333 default_executor.cpp:131] Received ACKNOWLEDGED event I1208 03:32:35.313199 30333 default_executor.cpp:131] Received ACKNOWLEDGED event I1208 03:32:35.357170 29303 http.cpp:391] HTTP POST for /master/api/v1/scheduler from 10.0.2.15:54720 I1208 03:32:35.357322 29303 master.cpp:4918] Processing ACKNOWLEDGE call aed3ed28-1943-44c3-a8b6-40be41ffc20b for task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 (default) on agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 I1208 03:32:35.358458 29303 status_update_manager.cpp:395] Received status update acknowledgement (UUID: aed3ed28-1943-44c3-a8b6-40be41ffc20b) for task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 W1208 03:32:35.358573 29303 status_update_manager.cpp:769] Unexpected status update acknowledgement (received aed3ed28-1943-44c3-a8b6-40be41ffc20b, expecting b3bbc1e3-b15b-4227-94ff-209b0d7aa181) for update TASK_RUNNING (UUID: b3bbc1e3-b15b-4227-94ff-209b0d7aa181) for task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 E1208 03:32:35.358795 29303 slave.cpp:3018] Failed to handle status update acknowledgement (UUID: aed3ed28-1943-44c3-a8b6-40be41ffc20b) for task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000: Duplicate acknowledgement I1208 03:32:35.359001 29303 http.cpp:391] HTTP POST for /master/api/v1/scheduler from 10.0.2.15:54720 I1208 03:32:35.359069 29303 master.cpp:4918] Processing ACKNOWLEDGE call b3bbc1e3-b15b-4227-94ff-209b0d7aa181 for task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 (default) on agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 I1208 03:32:35.359213 29303 http.cpp:391] HTTP POST for /master/api/v1/scheduler from 10.0.2.15:54720 I1208 03:32:35.359316 29303 master.cpp:4810] Telling agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 at slave(15)@10.0.2.15:36807 (archlinux.vagrant.vm) to kill task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 (default) I1208 03:32:35.359495 29303 slave.cpp:2347] Asked to kill task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.359784 29303 status_update_manager.cpp:395] Received status update acknowledgement (UUID: b3bbc1e3-b15b-4227-94ff-209b0d7aa181) for task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 W1208 03:32:35.359866 29303 status_update_manager.cpp:769] Unexpected status update acknowledgement (received b3bbc1e3-b15b-4227-94ff-209b0d7aa181, expecting aed3ed28-1943-44c3-a8b6-40be41ffc20b) for update TASK_RUNNING (UUID: aed3ed28-1943-44c3-a8b6-40be41ffc20b) for task 
004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 E1208 03:32:35.360075 29303 slave.cpp:3018] Failed to handle status update acknowledgement (UUID: b3bbc1e3-b15b-4227-94ff-209b0d7aa181) for task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000: Duplicate acknowledgement I1208 03:32:35.361574 30334 default_executor.cpp:131] Received KILL event I1208 03:32:35.361610 30334 default_executor.cpp:809] Received kill for task '93d62044-e146-4b70-9648-221b72cfaad7' I1208 03:32:35.361631 30334 default_executor.cpp:687] Shutting down I1208 03:32:35.362273 30332 default_executor.cpp:781] Killing child container 935514b3-95ae-450e-b766-084fb5e7734e.1df56862-093d-4a93-85e0-f955c67c333c I1208 03:32:35.362457 30332 default_executor.cpp:781] Killing child container 935514b3-95ae-450e-b766-084fb5e7734e.97ef5b4e-b4be-4777-b8d8-180333e5a3d3 I1208 03:32:35.364953 29304 http.cpp:288] HTTP POST for /slave(15)/api/v1 from 10.0.2.15:54734 I1208 03:32:35.365206 29304 http.cpp:288] HTTP POST for /slave(15)/api/v1 from 10.0.2.15:54734 I1208 03:32:35.365443 29304 http.cpp:449] Processing call KILL_NESTED_CONTAINER I1208 03:32:35.365655 29304 http.cpp:449] Processing call KILL_NESTED_CONTAINER I1208 03:32:35.365854 29301 containerizer.cpp:2087] Destroying container 935514b3-95ae-450e-b766-084fb5e7734e.1df56862-093d-4a93-85e0-f955c67c333c in RUNNING state I1208 03:32:35.366088 29301 containerizer.cpp:2087] Destroying container 935514b3-95ae-450e-b766-084fb5e7734e.97ef5b4e-b4be-4777-b8d8-180333e5a3d3 in RUNNING state I1208 03:32:35.366221 29301 launcher.cpp:149] Asked to destroy container 935514b3-95ae-450e-b766-084fb5e7734e.1df56862-093d-4a93-85e0-f955c67c333c I1208 03:32:35.373039 29301 launcher.cpp:149] Asked to destroy container 935514b3-95ae-450e-b766-084fb5e7734e.97ef5b4e-b4be-4777-b8d8-180333e5a3d3 I1208 03:32:35.435508 29305 containerizer.cpp:2450] Container 935514b3-95ae-450e-b766-084fb5e7734e.1df56862-093d-4a93-85e0-f955c67c333c has exited I1208 03:32:35.436269 29305 containerizer.cpp:2450] Container 935514b3-95ae-450e-b766-084fb5e7734e.97ef5b4e-b4be-4777-b8d8-180333e5a3d3 has exited I1208 03:32:35.438416 29306 containerizer.cpp:2366] Checkpointing termination state to nested container's runtime directory '/tmp/MesosContainerizer_DefaultExecutorTest_KillTask_0_MN1ibR/containers/935514b3-95ae-450e-b766-084fb5e7734e/containers/1df56862-093d-4a93-85e0-f955c67c333c/termination' I1208 03:32:35.440366 29306 containerizer.cpp:2366] Checkpointing termination state to nested container's runtime directory '/tmp/MesosContainerizer_DefaultExecutorTest_KillTask_0_MN1ibR/containers/935514b3-95ae-450e-b766-084fb5e7734e/containers/97ef5b4e-b4be-4777-b8d8-180333e5a3d3/termination' I1208 03:32:35.442040 30332 default_executor.cpp:656] Successfully waited for child container 935514b3-95ae-450e-b766-084fb5e7734e.1df56862-093d-4a93-85e0-f955c67c333c of task '93d62044-e146-4b70-9648-221b72cfaad7' in state TASK_KILLED I1208 03:32:35.443637 30333 default_executor.cpp:656] Successfully waited for child container 935514b3-95ae-450e-b766-084fb5e7734e.97ef5b4e-b4be-4777-b8d8-180333e5a3d3 of task '004a6d7e-e6be-4a1c-a23d-b5b83e69a19b' in state TASK_KILLED I1208 03:32:35.443677 30333 default_executor.cpp:767] Terminating after 1secs I1208 03:32:35.487275 29305 http.cpp:288] HTTP POST for /slave(15)/api/v1/executor from 10.0.2.15:54726 I1208 03:32:35.487534 29305 slave.cpp:3743] Handling status update TASK_KILLED (UUID: f15f490b-a278-406e-abfa-9f1129d7c036) 
for task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.488466 29305 http.cpp:288] HTTP POST for /slave(15)/api/v1/executor from 10.0.2.15:54726 I1208 03:32:35.488623 29305 slave.cpp:3743] Handling status update TASK_KILLED (UUID: 0bed5420-7a04-49fe-9823-2d6a7aff53dc) for task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.489390 29300 status_update_manager.cpp:323] Received status update TASK_KILLED (UUID: f15f490b-a278-406e-abfa-9f1129d7c036) for task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:35.492724 29300 status_update_manager.cpp:323] Received status update TASK_KILLED (UUID: 0bed5420-7a04-49fe-9823-2d6a7aff53dc) for task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:36.451230 29302 containerizer.cpp:2450] Container 935514b3-95ae-450e-b766-084fb5e7734e has exited I1208 03:32:36.451273 29302 containerizer.cpp:2087] Destroying container 935514b3-95ae-450e-b766-084fb5e7734e in RUNNING state I1208 03:32:36.451477 29302 launcher.cpp:149] Asked to destroy container 935514b3-95ae-450e-b766-084fb5e7734e I1208 03:32:36.457315 29301 slave.cpp:4675] Executor 'default' of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 exited with status 0 I1208 03:32:36.457854 29301 master.cpp:5932] Executor 'default' of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 on agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 at slave(15)@10.0.2.15:36807 (archlinux.vagrant.vm): exited with status 0 I1208 03:32:36.457904 29301 master.cpp:7915] Removing executor 'default' with resources cpus(*):0.1; mem(*):32; disk(*):32 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 on agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 at slave(15)@10.0.2.15:36807 (archlinux.vagrant.vm) GMOCK WARNING: Uninteresting mock function call - returning directly. 
Function call: failure(0x7ffdffc62f80, @0x7fb2340a8bd0 48-byte object <48-EF 6C-64 B2-7F 00-00 00-00 00-00 00-00 00-00 07-00 00-00 00-00 00-00 40-83 0A-34 B2-7F 00-00 A0-6F 0A-34 B2-7F 00-00 00-00 00-00 00-00 00-00>) Stack trace: I1208 03:32:36.976519 29304 master.cpp:6622] Sending 1 offers to framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 (default) W1208 03:32:45.311133 29305 status_update_manager.cpp:478] Resending status update TASK_RUNNING (UUID: b3bbc1e3-b15b-4227-94ff-209b0d7aa181) for task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 W1208 03:32:45.311413 29305 status_update_manager.cpp:478] Resending status update TASK_RUNNING (UUID: aed3ed28-1943-44c3-a8b6-40be41ffc20b) for task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:45.311717 29305 slave.cpp:4184] Forwarding the update TASK_RUNNING (UUID: b3bbc1e3-b15b-4227-94ff-209b0d7aa181) for task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 to master@10.0.2.15:36807 I1208 03:32:45.311951 29305 slave.cpp:4184] Forwarding the update TASK_RUNNING (UUID: aed3ed28-1943-44c3-a8b6-40be41ffc20b) for task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 to master@10.0.2.15:36807 I1208 03:32:45.312207 29305 master.cpp:5808] Status update TASK_RUNNING (UUID: b3bbc1e3-b15b-4227-94ff-209b0d7aa181) for task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 from agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 at slave(15)@10.0.2.15:36807 (archlinux.vagrant.vm) I1208 03:32:45.312252 29305 master.cpp:5870] Forwarding status update TASK_RUNNING (UUID: b3bbc1e3-b15b-4227-94ff-209b0d7aa181) for task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:45.312572 29305 master.cpp:7790] Updating the state of task 93d62044-e146-4b70-9648-221b72cfaad7 of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 (latest state: TASK_KILLED, status update state: TASK_RUNNING) I1208 03:32:45.312834 29305 master.cpp:5808] Status update TASK_RUNNING (UUID: aed3ed28-1943-44c3-a8b6-40be41ffc20b) for task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 from agent 67de7bda-9b5b-4fe9-aede-390ec9ca7290-S0 at slave(15)@10.0.2.15:36807 (archlinux.vagrant.vm) I1208 03:32:45.312870 29305 master.cpp:5870] Forwarding status update TASK_RUNNING (UUID: aed3ed28-1943-44c3-a8b6-40be41ffc20b) for task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 I1208 03:32:45.313035 29305 master.cpp:7790] Updating the state of task 004a6d7e-e6be-4a1c-a23d-b5b83e69a19b of framework 67de7bda-9b5b-4fe9-aede-390ec9ca7290-0000 (latest state: TASK_KILLED, status update state: TASK_RUNNING) ../../mesos/src/tests/default_executor_tests.cpp:417: Failure Value of: killedUpdate1->status().state() Actual: TASK_RUNNING Expected: TASK_KILLED *** Aborted at 1481128365 (unix time) try """"date -d @1481128365"""" if you are using GNU date *** PC: @ 0x1bb3ed4 testing::UnitTest::AddTestPartResult() *** SIGSEGV (@0x0) received by PID 29285 (TID 0x7fb264acac40) from PID 0; stack trace: *** @ 0x7fb25ded5080 (unknown) @ 0x1bb3ed4 testing::UnitTest::AddTestPartResult() @ 0x1ba86d1 testing::internal::AssertHelper::operator=() @ 0xe357d9 mesos::internal::tests::DefaultExecutorTest_KillTask_Test::TestBody() @ 0x1bd1df0 
testing::internal::HandleSehExceptionsInMethodIfSupported<>() @ 0x1bcce74 testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x1badb08 testing::Test::Run() @ 0x1bae2c0 testing::TestInfo::Run() @ 0x1bae8fd testing::TestCase::Run() @ 0x1bb53f1 testing::internal::UnitTestImpl::RunAllTests() @ 0x1bd2ab7 testing::internal::HandleSehExceptionsInMethodIfSupported<>() @ 0x1bcd9b4 testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x1bb40f5 testing::UnitTest::Run() @ 0x118c09e RUN_ALL_TESTS() @ 0x118bc54 main @ 0x7fb25be62291 __libc_start_main @ 0xa842fa _start @ 0x0 (unknown) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6749","12/07/2016 19:55:40",3,"Update master and agent endpoints to expose FrameworkInfo.roles. ""With the addition of the FrameworkInfo.roles field, all of the endpoints that expose the framework information need to be updated to expose this additional field. It should be the case that for the v1-style operator calls, the new field will be automatically visible thanks to the direct mapping from protobuf (we should verify this). We can track the updates to metrics separately.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6750","12/07/2016 20:34:59",1,"Metrics on the Agent view of the Mesos web UI flickers between empty and non-empty states ""When viewing a specific agent on the Mesos WebUI, the metrics panel on the left side of the UI will alternate between having values and being empty. This is due to two different callbacks that run: * This one sets the metrics into the {{$scope.state}} variable: https://github.com/apache/mesos/blob/1.1.x/src/webui/master/static/js/controllers.js#L564-L577 * This one blows away the {{$scope.state}} in favor of a new one: https://github.com/apache/mesos/blob/1.1.x/src/webui/master/static/js/controllers.js#L521 The metrics callback should simply assign to a different variable.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6758","12/08/2016 17:20:19",5,"Support 'Basic' auth docker private registry on Unified Containerizer. ""Currently, the Unified Containerizer only supports the private docker registry with 'Bearer' authorization (token is needed from the auth server). 
We should support the 'Basic' auth registry as well.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6784","12/12/2016 21:58:22",1,"IOSwitchboardTest.KillSwitchboardContainerDestroyed is flaky "" """," [ RUN ] IOSwitchboardTest.KillSwitchboardContainerDestroyed I1212 13:57:02.641043 2211 containerizer.cpp:220] Using isolation: posix/cpu,filesystem/posix,network/cni W1212 13:57:02.641438 2211 backend.cpp:76] Failed to create 'overlay' backend: OverlayBackend requires root privileges, but is running as user nrc W1212 13:57:02.641559 2211 backend.cpp:76] Failed to create 'bind' backend: BindBackend requires root privileges I1212 13:57:02.642822 2268 containerizer.cpp:594] Recovering containerizer I1212 13:57:02.643975 2253 provisioner.cpp:253] Provisioner recovery complete I1212 13:57:02.644953 2255 containerizer.cpp:986] Starting container 09e87380-00ab-4987-83c9-fa1c5d86717f for executor 'executor' of framework I1212 13:57:02.647004 2245 switchboard.cpp:430] Allocated pseudo terminal '/dev/pts/54' for container 09e87380-00ab-4987-83c9-fa1c5d86717f I1212 13:57:02.652305 2245 switchboard.cpp:596] Created I/O switchboard server (pid: 2705) listening on socket file '/tmp/mesos-io-switchboard-b4af1c92-6633-44f3-9d35-e0e36edaf70a' for container 09e87380-00ab-4987-83c9-fa1c5d86717f I1212 13:57:02.655513 2267 launcher.cpp:133] Forked child with pid '2706' for container '09e87380-00ab-4987-83c9-fa1c5d86717f' I1212 13:57:02.655732 2267 containerizer.cpp:1621] Checkpointing container's forked pid 2706 to '/tmp/IOSwitchboardTest_KillSwitchboardContainerDestroyed_Me5CRx/meta/slaves/frameworks/executors/executor/runs/09e87380-00ab-4987-83c9-fa1c5d86717f/pids/forked.pid' I1212 13:57:02.726306 2265 containerizer.cpp:2463] Container 09e87380-00ab-4987-83c9-fa1c5d86717f has exited I1212 13:57:02.726352 2265 containerizer.cpp:2100] Destroying container 09e87380-00ab-4987-83c9-fa1c5d86717f in RUNNING state E1212 13:57:02.726495 2243 switchboard.cpp:861] Unexpected termination of I/O switchboard server: 'IOSwitchboard' exited with signal: Killed for container 09e87380-00ab-4987-83c9-fa1c5d86717f I1212 13:57:02.726563 2265 launcher.cpp:149] Asked to destroy container 09e87380-00ab-4987-83c9-fa1c5d86717f E1212 13:57:02.783607 2228 switchboard.cpp:799] Failed to remove unix domain socket file '/tmp/mesos-io-switchboard-b4af1c92-6633-44f3-9d35-e0e36edaf70a' for container '09e87380-00ab-4987-83c9-fa1c5d86717f': No such file or directory ../../mesos/src/tests/containerizer/io_switchboard_tests.cpp:661: Failure Value of: wait.get()->reasons().size() == 1 Actual: false Expected: true *** Aborted at 1481579822 (unix time) try """"date -d @1481579822"""" if you are using GNU date *** PC: @ 0x1bf16d0 testing::UnitTest::AddTestPartResult() *** SIGSEGV (@0x0) received by PID 2211 (TID 0x7faed7d078c0) from PID 0; stack trace: *** @ 0x7faecf855100 (unknown) @ 0x1bf16d0 testing::UnitTest::AddTestPartResult() @ 0x1be6247 testing::internal::AssertHelper::operator=() @ 0x19ed751 mesos::internal::tests::IOSwitchboardTest_KillSwitchboardContainerDestroyed_Test::TestBody() @ 0x1c0ed8c testing::internal::HandleSehExceptionsInMethodIfSupported<>() @ 0x1c09e74 testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x1beb505 testing::Test::Run() @ 0x1bebc88 testing::TestInfo::Run() @ 0x1bec2ce testing::TestCase::Run() @ 0x1bf2ba8 testing::internal::UnitTestImpl::RunAllTests() @ 0x1c0f9b1 testing::internal::HandleSehExceptionsInMethodIfSupported<>() @ 0x1c0a9f2 
testing::internal::HandleExceptionsInMethodIfSupported<>() @ 0x1bf18ee testing::UnitTest::Run() @ 0x11bc9e3 RUN_ALL_TESTS() @ 0x11bc599 main @ 0x7faece663b15 __libc_start_main @ 0xa9c219 (unknown) ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6785","12/12/2016 22:18:46",3,"CHECK failure on duplicate task IDs ""The master crashes with a CHECK failure in the following scenario: # Framework launches task X on agent A1. The framework may or may not be partition-aware; let's assume it is not partition-aware. # A1 becomes partitioned from the master. # Framework launches task X on agent A2. # Master fails over. # Agents A1 and A2 both re-register with the master. Because the master has failed over, the task on A1 is _not_ terminated (""""non-strict registry semantics""""). This results in two running tasks with the same ID, which causes a master {{CHECK}} failure among other badness: """," master.hpp:2299] Check failed: !tasks.contains(task->task_id()) Duplicate task b88153a2-571a-41e7-9e9b-c297fef4f3cd of framework eaef1879-8cc9-412f-928d-86c9925a7abb-0000 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6789","12/14/2016 00:00:53",2,"SSL socket's 'shutdown()' method is broken ""We recently uncovered two issues with the {{LibeventSSLSocketImpl::shutdown}} method: * The introduction of a shutdown method parameter with [this commit|https://reviews.apache.org/r/54113/] means that the implementation's method is no longer overriding the default implementation. In addition to fixing the implementation method's signature, we should add the {{override}} specifier to all of our socket implementations' methods to ensure that this doesn't happen in the future. * The {{LibeventSSLSocketImpl::shutdown}} function does not actually shutdown the SSL socket. The proper function to shutdown an SSL socket is {{SSL_shutdown}}, which is called in the implementation's destructor. We should move this into {{shutdown()}} so that by the time that method returns, the socket has actually been shutdown.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6790","12/14/2016 04:03:47",1,"Wrong task started time in webui ""Reported by [~janisz] {quote} Hi When task has enabled Mesos healthcheck start time in UI can show wrong time. This happens because UI assumes that first status is task started [0]. This is not always true because Mesos keeps only recent tasks statuses [1] so when healthcheck updates tasks status it can override task start time displayed in webui. 
Best Tomek [0] https://github.com/apache/mesos/blob/master/src/webui/master/static/js/controllers.js#L140 [1] https://github.com/apache/mesos/blob/f2adc8a95afda943f6a10e771aad64300da19047/src/common/protobuf_utils.cpp#L263-L265 {quote}""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6793","12/14/2016 11:32:26",2,"CniIsolatorTest.ROOT_EnvironmentLibprocessIP fails on systems using dash as sh ""On systems using {{dash}} as default shell (e.g., Debian, Ubuntu) {{CniIsolatorTest.ROOT_EnvironmentLibprocessIP}} often fails with """," [ RUN ] CniIsolatorTest.ROOT_EnvironmentLibprocessIP I1214 05:04:11.653625 24102 cluster.cpp:160] Creating default 'local' authorizer I1214 05:04:11.654268 24118 master.cpp:380] Master cf36e3a9-60fb-4885-8145-831ca6998263 (ip-172-16-10-182.mesosphere.io) started on 172.16.10.182:60883 I1214 05:04:11.654281 24118 master.cpp:382] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/mnt/teamcity/temp/buildTmp/qEaG50/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""in_memory"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/mnt/teamcity/temp/buildTmp/qEaG50/master"""" --zk_session_timeout=""""10secs"""" I1214 05:04:11.654422 24118 master.cpp:432] Master only allowing authenticated frameworks to register I1214 05:04:11.654428 24118 master.cpp:446] Master only allowing authenticated agents to register I1214 05:04:11.654431 24118 master.cpp:459] Master only allowing authenticated HTTP frameworks to register I1214 05:04:11.654434 24118 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/qEaG50/credentials' I1214 05:04:11.654512 24118 master.cpp:504] Using default 'crammd5' authenticator I1214 05:04:11.654549 24118 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-master-readonly' I1214 05:04:11.654598 24118 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' I1214 05:04:11.654669 24118 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' I1214 05:04:11.654706 24118 master.cpp:584] Authorization enabled I1214 05:04:11.654785 24119 hierarchical.cpp:149] Initialized hierarchical allocator process I1214 05:04:11.654793 24123 whitelist_watcher.cpp:77] No whitelist given I1214 05:04:11.655488 24123 
master.cpp:2045] Elected as the leading master! I1214 05:04:11.655498 24123 master.cpp:1568] Recovering from registrar I1214 05:04:11.655591 24116 registrar.cpp:329] Recovering registrar I1214 05:04:11.655776 24122 registrar.cpp:362] Successfully fetched the registry (0B) in 163072ns I1214 05:04:11.655800 24122 registrar.cpp:461] Applied 1 operations in 2760ns; attempting to update the registry I1214 05:04:11.655972 24118 registrar.cpp:506] Successfully updated the registry in 156928ns I1214 05:04:11.656020 24118 registrar.cpp:392] Successfully recovered registrar I1214 05:04:11.656093 24118 master.cpp:1684] Recovered 0 agents from the registry (174B); allowing 10mins for agents to re-register I1214 05:04:11.656112 24120 hierarchical.cpp:176] Skipping recovery of hierarchical allocator: nothing to recover I1214 05:04:11.657594 24102 containerizer.cpp:220] Using isolation: network/cni,filesystem/posix I1214 05:04:11.660497 24102 linux_launcher.cpp:150] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher I1214 05:04:11.661574 24102 cluster.cpp:446] Creating default 'local' authorizer I1214 05:04:11.661927 24119 slave.cpp:209] Mesos agent started on (576)@172.16.10.182:60883 I1214 05:04:11.661937 24119 slave.cpp:210] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/mnt/teamcity/temp/buildTmp/mesos/store/appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_kPcBAv/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/mnt/teamcity/temp/buildTmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_kPcBAv/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --http_credentials=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_kPcBAv/http_credentials"""" --http_heartbeat_interval=""""30secs"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""network/cni"""" --launcher=""""linux"""" --launcher_dir=""""/mnt/teamcity/work/4240ba9ddd0997c3/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --network_cni_config_dir=""""/mnt/teamcity/temp/buildTmp/qEaG50/configs"""" 
--network_cni_plugins_dir=""""/mnt/teamcity/temp/buildTmp/qEaG50/plugins"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_kPcBAv"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg"""" I1214 05:04:11.662156 24119 credentials.hpp:86] Loading credential for authentication from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_kPcBAv/credential' I1214 05:04:11.662252 24119 slave.cpp:352] Agent using credential for: test-principal I1214 05:04:11.662262 24119 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_kPcBAv/http_credentials' I1214 05:04:11.662319 24119 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' I1214 05:04:11.662350 24119 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' I1214 05:04:11.662586 24102 sched.cpp:232] Version: 1.2.0 I1214 05:04:11.662672 24119 slave.cpp:539] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1214 05:04:11.662709 24119 slave.cpp:547] Agent attributes: [ ] I1214 05:04:11.662716 24121 sched.cpp:336] New master detected at master@172.16.10.182:60883 I1214 05:04:11.662717 24119 slave.cpp:552] Agent hostname: ip-172-16-10-182.mesosphere.io I1214 05:04:11.662739 24121 sched.cpp:402] Authenticating with master master@172.16.10.182:60883 I1214 05:04:11.662744 24121 sched.cpp:409] Using default CRAM-MD5 authenticatee I1214 05:04:11.662809 24120 authenticatee.cpp:121] Creating new client SASL connection I1214 05:04:11.662976 24116 master.cpp:6748] Authenticating scheduler-dfbb45a6-1912-4912-acac-b4fb87c28181@172.16.10.182:60883 I1214 05:04:11.663029 24120 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(1172)@172.16.10.182:60883 I1214 05:04:11.663045 24121 state.cpp:57] Recovering state from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg/meta' I1214 05:04:11.663202 24119 status_update_manager.cpp:203] Recovering status update manager I1214 05:04:11.663219 24120 authenticator.cpp:98] Creating new server SASL connection I1214 05:04:11.663321 24119 containerizer.cpp:594] Recovering containerizer I1214 05:04:11.663372 24119 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1214 05:04:11.663388 24119 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1214 05:04:11.663450 24120 authenticator.cpp:204] Received SASL authentication start I1214 05:04:11.663487 24120 authenticator.cpp:326] Authentication requires more steps I1214 05:04:11.663533 24120 authenticatee.cpp:259] Received SASL authentication step I1214 05:04:11.663609 24122 authenticator.cpp:232] Received SASL authentication step I1214 05:04:11.663625 24122 auxprop.cpp:109] Request to lookup 
properties for user: 'test-principal' realm: 'ip-172-16-10-182.mesosphere.io' server FQDN: 'ip-172-16-10-182.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1214 05:04:11.663630 24122 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I1214 05:04:11.663635 24122 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1214 05:04:11.663642 24122 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-182.mesosphere.io' server FQDN: 'ip-172-16-10-182.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1214 05:04:11.663646 24122 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1214 05:04:11.663650 24122 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1214 05:04:11.663657 24122 authenticator.cpp:318] Authentication success I1214 05:04:11.663708 24122 authenticatee.cpp:299] Authentication success I1214 05:04:11.663739 24120 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(1172)@172.16.10.182:60883 I1214 05:04:11.663797 24118 master.cpp:6778] Successfully authenticated principal 'test-principal' at scheduler-dfbb45a6-1912-4912-acac-b4fb87c28181@172.16.10.182:60883 I1214 05:04:11.663831 24122 sched.cpp:508] Successfully authenticated with master master@172.16.10.182:60883 I1214 05:04:11.663841 24122 sched.cpp:826] Sending SUBSCRIBE call to master@172.16.10.182:60883 I1214 05:04:11.663913 24122 sched.cpp:859] Will retry registration in 1.01884683secs if necessary I1214 05:04:11.663961 24116 master.cpp:2633] Received SUBSCRIBE call for framework 'default' at scheduler-dfbb45a6-1912-4912-acac-b4fb87c28181@172.16.10.182:60883 I1214 05:04:11.663985 24116 master.cpp:2081] Authorizing framework principal 'test-principal' to receive offers for role '*' I1214 05:04:11.664217 24116 master.cpp:2709] Subscribing framework default with checkpointing disabled and capabilities [ ] I1214 05:04:11.664366 24117 sched.cpp:749] Framework registered with cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.664367 24119 hierarchical.cpp:276] Added framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.664393 24117 sched.cpp:763] Scheduler::registered took 9855ns I1214 05:04:11.664402 24119 hierarchical.cpp:1689] No allocations performed I1214 05:04:11.664409 24119 hierarchical.cpp:1784] No inverse offers to send out! 
I1214 05:04:11.664417 24119 hierarchical.cpp:1291] Performed allocation for 0 agents in 23024ns I1214 05:04:11.664438 24116 provisioner.cpp:253] Provisioner recovery complete I1214 05:04:11.664584 24121 slave.cpp:5420] Finished recovery I1214 05:04:11.664744 24121 slave.cpp:5594] Querying resource estimator for oversubscribable resources I1214 05:04:11.664862 24122 slave.cpp:924] New master detected at master@172.16.10.182:60883 I1214 05:04:11.664870 24118 status_update_manager.cpp:177] Pausing sending status updates I1214 05:04:11.664877 24122 slave.cpp:983] Authenticating with master master@172.16.10.182:60883 I1214 05:04:11.664893 24122 slave.cpp:994] Using default CRAM-MD5 authenticatee I1214 05:04:11.664923 24122 slave.cpp:956] Detecting new master I1214 05:04:11.664952 24123 authenticatee.cpp:121] Creating new client SASL connection I1214 05:04:11.664979 24122 slave.cpp:5608] Received oversubscribable resources {} from the resource estimator I1214 05:04:11.665102 24123 master.cpp:6748] Authenticating slave(576)@172.16.10.182:60883 I1214 05:04:11.665148 24116 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(1173)@172.16.10.182:60883 I1214 05:04:11.665217 24116 authenticator.cpp:98] Creating new server SASL connection I1214 05:04:11.665410 24116 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1214 05:04:11.665427 24116 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1214 05:04:11.665475 24116 authenticator.cpp:204] Received SASL authentication start I1214 05:04:11.665504 24116 authenticator.cpp:326] Authentication requires more steps I1214 05:04:11.665547 24116 authenticatee.cpp:259] Received SASL authentication step I1214 05:04:11.665601 24116 authenticator.cpp:232] Received SASL authentication step I1214 05:04:11.665616 24116 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-182.mesosphere.io' server FQDN: 'ip-172-16-10-182.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1214 05:04:11.665622 24116 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I1214 05:04:11.665632 24116 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1214 05:04:11.665643 24116 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-182.mesosphere.io' server FQDN: 'ip-172-16-10-182.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1214 05:04:11.665652 24116 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1214 05:04:11.665657 24116 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1214 05:04:11.665668 24116 authenticator.cpp:318] Authentication success I1214 05:04:11.665733 24119 authenticatee.cpp:299] Authentication success I1214 05:04:11.665763 24116 master.cpp:6778] Successfully authenticated principal 'test-principal' at slave(576)@172.16.10.182:60883 I1214 05:04:11.665784 24121 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(1173)@172.16.10.182:60883 I1214 05:04:11.665923 24120 slave.cpp:1078] Successfully authenticated with master master@172.16.10.182:60883 I1214 05:04:11.665961 24120 slave.cpp:1492] Will retry registration in 14.874897ms if necessary I1214 05:04:11.666002 24118 master.cpp:5161] Registering agent at slave(576)@172.16.10.182:60883 
(ip-172-16-10-182.mesosphere.io) with id cf36e3a9-60fb-4885-8145-831ca6998263-S0 I1214 05:04:11.666106 24122 registrar.cpp:461] Applied 1 operations in 9638ns; attempting to update the registry I1214 05:04:11.666337 24120 registrar.cpp:506] Successfully updated the registry in 217088ns I1214 05:04:11.666514 24122 master.cpp:5232] Registered agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 at slave(576)@172.16.10.182:60883 (ip-172-16-10-182.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1214 05:04:11.666548 24119 slave.cpp:4272] Received ping from slave-observer(545)@172.16.10.182:60883 I1214 05:04:11.666596 24120 hierarchical.cpp:490] Added agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 (ip-172-16-10-182.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: {}) I1214 05:04:11.666616 24119 slave.cpp:1124] Registered with master master@172.16.10.182:60883; given agent ID cf36e3a9-60fb-4885-8145-831ca6998263-S0 I1214 05:04:11.666630 24119 fetcher.cpp:90] Clearing fetcher cache I1214 05:04:11.666707 24116 status_update_manager.cpp:184] Resuming sending status updates I1214 05:04:11.666790 24119 slave.cpp:1147] Checkpointing SlaveInfo to '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg/meta/slaves/cf36e3a9-60fb-4885-8145-831ca6998263-S0/slave.info' I1214 05:04:11.666831 24120 hierarchical.cpp:1784] No inverse offers to send out! I1214 05:04:11.666857 24120 hierarchical.cpp:1314] Performed allocation for agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 in 234910ns I1214 05:04:11.666913 24119 slave.cpp:1184] Forwarding total oversubscribed resources {} I1214 05:04:11.666954 24122 master.cpp:6577] Sending 1 offers to framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 (default) at scheduler-dfbb45a6-1912-4912-acac-b4fb87c28181@172.16.10.182:60883 I1214 05:04:11.667026 24122 master.cpp:5633] Received update of agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 at slave(576)@172.16.10.182:60883 (ip-172-16-10-182.mesosphere.io) with total oversubscribed resources {} I1214 05:04:11.667127 24120 sched.cpp:923] Scheduler::resourceOffers took 38599ns I1214 05:04:11.667131 24118 hierarchical.cpp:560] Agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 (ip-172-16-10-182.mesosphere.io) updated with oversubscribed resources {} (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I1214 05:04:11.667181 24118 hierarchical.cpp:1689] No allocations performed I1214 05:04:11.667192 24118 hierarchical.cpp:1784] No inverse offers to send out! 
I1214 05:04:11.667207 24118 hierarchical.cpp:1314] Performed allocation for agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 in 44745ns I1214 05:04:11.667513 24118 master.cpp:3588] Processing ACCEPT call for offers: [ cf36e3a9-60fb-4885-8145-831ca6998263-O0 ] on agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 at slave(576)@172.16.10.182:60883 (ip-172-16-10-182.mesosphere.io) for framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 (default) at scheduler-dfbb45a6-1912-4912-acac-b4fb87c28181@172.16.10.182:60883 I1214 05:04:11.667537 24118 master.cpp:3175] Authorizing framework principal 'test-principal' to launch task 80ddad17-5693-4883-a79d-1925a7b41661 I1214 05:04:11.667912 24119 master.cpp:8501] Adding task 80ddad17-5693-4883-a79d-1925a7b41661 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 (ip-172-16-10-182.mesosphere.io) I1214 05:04:11.667978 24119 master.cpp:4240] Launching task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 (default) at scheduler-dfbb45a6-1912-4912-acac-b4fb87c28181@172.16.10.182:60883 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 at slave(576)@172.16.10.182:60883 (ip-172-16-10-182.mesosphere.io) I1214 05:04:11.668177 24120 slave.cpp:1556] Got assigned task '80ddad17-5693-4883-a79d-1925a7b41661' for framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.668344 24120 slave.cpp:1718] Launching task '80ddad17-5693-4883-a79d-1925a7b41661' for framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.668531 24120 paths.cpp:530] Trying to chown '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg/slaves/cf36e3a9-60fb-4885-8145-831ca6998263-S0/frameworks/cf36e3a9-60fb-4885-8145-831ca6998263-0000/executors/80ddad17-5693-4883-a79d-1925a7b41661/runs/696f16bc-efdc-4a36-b94c-b0739cc1c095' to user 'root' I1214 05:04:11.672688 24120 slave.cpp:6347] Launching executor '80ddad17-5693-4883-a79d-1925a7b41661' of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg/slaves/cf36e3a9-60fb-4885-8145-831ca6998263-S0/frameworks/cf36e3a9-60fb-4885-8145-831ca6998263-0000/executors/80ddad17-5693-4883-a79d-1925a7b41661/runs/696f16bc-efdc-4a36-b94c-b0739cc1c095' I1214 05:04:11.672895 24123 containerizer.cpp:986] Starting container 696f16bc-efdc-4a36-b94c-b0739cc1c095 for executor '80ddad17-5693-4883-a79d-1925a7b41661' of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.672901 24120 slave.cpp:2040] Queued task '80ddad17-5693-4883-a79d-1925a7b41661' for executor '80ddad17-5693-4883-a79d-1925a7b41661' of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.672932 24120 slave.cpp:877] Successfully attached file '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg/slaves/cf36e3a9-60fb-4885-8145-831ca6998263-S0/frameworks/cf36e3a9-60fb-4885-8145-831ca6998263-0000/executors/80ddad17-5693-4883-a79d-1925a7b41661/runs/696f16bc-efdc-4a36-b94c-b0739cc1c095' I1214 05:04:11.673694 24121 containerizer.cpp:1535] Launching 'mesos-containerizer' with flags '--help=""""false"""" 
--launch_info=""""{""""clone_namespaces"""":[131072,67108864],""""command"""":{""""arguments"""":[""""mesos-executor"""",""""--launcher_dir=\/mnt\/teamcity\/work\/4240ba9ddd0997c3\/build\/src""""],""""shell"""":false,""""value"""":""""\/mnt\/teamcity\/work\/4240ba9ddd0997c3\/build\/src\/mesos-executor""""},""""environment"""":{""""variables"""":[{""""name"""":""""LIBPROCESS_PORT"""",""""value"""":""""0""""},{""""name"""":""""MESOS_AGENT_ENDPOINT"""",""""value"""":""""172.16.10.182:60883""""},{""""name"""":""""MESOS_CHECKPOINT"""",""""value"""":""""0""""},{""""name"""":""""MESOS_DIRECTORY"""",""""value"""":""""\/mnt\/teamcity\/temp\/buildTmp\/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg\/slaves\/cf36e3a9-60fb-4885-8145-831ca6998263-S0\/frameworks\/cf36e3a9-60fb-4885-8145-831ca6998263-0000\/executors\/80ddad17-5693-4883-a79d-1925a7b41661\/runs\/696f16bc-efdc-4a36-b94c-b0739cc1c095""""},{""""name"""":""""MESOS_EXECUTOR_ID"""",""""value"""":""""80ddad17-5693-4883-a79d-1925a7b41661""""},{""""name"""":""""MESOS_EXECUTOR_SHUTDOWN_GRACE_PERIOD"""",""""value"""":""""5secs""""},{""""name"""":""""MESOS_FRAMEWORK_ID"""",""""value"""":""""cf36e3a9-60fb-4885-8145-831ca6998263-0000""""},{""""name"""":""""MESOS_HTTP_COMMAND_EXECUTOR"""",""""value"""":""""0""""},{""""name"""":""""MESOS_SLAVE_ID"""",""""value"""":""""cf36e3a9-60fb-4885-8145-831ca6998263-S0""""},{""""name"""":""""MESOS_SLAVE_PID"""",""""value"""":""""slave(576)@172.16.10.182:60883""""},{""""name"""":""""MESOS_SANDBOX"""",""""value"""":""""\/mnt\/teamcity\/temp\/buildTmp\/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg\/slaves\/cf36e3a9-60fb-4885-8145-831ca6998263-S0\/frameworks\/cf36e3a9-60fb-4885-8145-831ca6998263-0000\/executors\/80ddad17-5693-4883-a79d-1925a7b41661\/runs\/696f16bc-efdc-4a36-b94c-b0739cc1c095""""},{""""name"""":""""LIBPROCESS_IP"""",""""value"""":""""0.0.0.0""""}]},""""user"""":""""root"""",""""working_directory"""":""""\/mnt\/teamcity\/temp\/buildTmp\/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg\/slaves\/cf36e3a9-60fb-4885-8145-831ca6998263-S0\/frameworks\/cf36e3a9-60fb-4885-8145-831ca6998263-0000\/executors\/80ddad17-5693-4883-a79d-1925a7b41661\/runs\/696f16bc-efdc-4a36-b94c-b0739cc1c095""""}"""" --pipe_read=""""28"""" --pipe_write=""""32"""" --runtime_directory=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_kPcBAv/containers/696f16bc-efdc-4a36-b94c-b0739cc1c095"""" --unshare_namespace_mnt=""""false""""' I1214 05:04:11.674006 24117 linux_launcher.cpp:429] Launching container 696f16bc-efdc-4a36-b94c-b0739cc1c095 and cloning with namespaces CLONE_NEWNS | CLONE_NEWUTS I1214 05:04:11.677374 24116 cni.cpp:844] Bind mounted '/proc/11118/ns/net' to '/run/mesos/isolators/network/cni/696f16bc-efdc-4a36-b94c-b0739cc1c095/ns' for container 696f16bc-efdc-4a36-b94c-b0739cc1c095 I1214 05:04:11.677500 24116 cni.cpp:1173] Invoking CNI plugin '/mnt/teamcity/temp/buildTmp/qEaG50/plugins/mockPlugin' with network configuration '{""""args"""":{""""org.apache.mesos"""":{""""network_info"""":{""""name"""":""""__MESOS_TEST__""""}}},""""name"""":""""__MESOS_TEST__"""",""""type"""":""""mockPlugin""""}' to attach container 696f16bc-efdc-4a36-b94c-b0739cc1c095 to network '__MESOS_TEST__' I1214 05:04:11.742821 24122 cni.cpp:1260] Got assigned IPv4 address '172.17.0.1/16' from CNI network '__MESOS_TEST__' for container 696f16bc-efdc-4a36-b94c-b0739cc1c095 I1214 05:04:11.743064 24116 cni.cpp:969] DNS nameservers for container 696f16bc-efdc-4a36-b94c-b0739cc1c095 are: nameserver 172.16.10.2 I1214 
05:04:11.843673 24119 fetcher.cpp:349] Starting to fetch URIs for container: 696f16bc-efdc-4a36-b94c-b0739cc1c095, directory: /mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg/slaves/cf36e3a9-60fb-4885-8145-831ca6998263-S0/frameworks/cf36e3a9-60fb-4885-8145-831ca6998263-0000/executors/80ddad17-5693-4883-a79d-1925a7b41661/runs/696f16bc-efdc-4a36-b94c-b0739cc1c095 I1214 05:04:11.876816 11170 exec.cpp:162] Version: 1.2.0 I1214 05:04:11.879708 24118 slave.cpp:3314] Got registration for executor '80ddad17-5693-4883-a79d-1925a7b41661' of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 from executor(1)@172.17.0.1:34346 I1214 05:04:11.880328 11169 exec.cpp:237] Executor registered on agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 I1214 05:04:11.880398 24121 slave.cpp:2256] Sending queued task '80ddad17-5693-4883-a79d-1925a7b41661' to executor '80ddad17-5693-4883-a79d-1925a7b41661' of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 at executor(1)@172.17.0.1:34346 Received SUBSCRIBED event Subscribed executor on ip-172-16-10-182.mesosphere.io Received LAUNCH event Starting task 80ddad17-5693-4883-a79d-1925a7b41661 /mnt/teamcity/work/4240ba9ddd0997c3/build/src/mesos-containerizer launch --help=""""false"""" --launch_info=""""{""""command"""":{""""shell"""":true,""""value"""":""""\n #!\/bin\/sh\n if [ x\""""$LIBPROCESS_IP\"""" == x\""""0.0.0.0\"""" ]; then\n exit 0\n else\n exit 1\n fi""""}}"""" --unshare_namespace_mnt=""""false"""" Forked command at 11172 I1214 05:04:11.884364 24116 slave.cpp:3749] Handling status update TASK_RUNNING (UUID: 32ed996d-9fc5-4e82-afb1-d7936670af16) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 from executor(1)@172.17.0.1:34346 I1214 05:04:11.884829 24121 status_update_manager.cpp:323] Received status update TASK_RUNNING (UUID: 32ed996d-9fc5-4e82-afb1-d7936670af16) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.884848 24121 status_update_manager.cpp:500] Creating StatusUpdate stream for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.884946 24121 status_update_manager.cpp:377] Forwarding update TASK_RUNNING (UUID: 32ed996d-9fc5-4e82-afb1-d7936670af16) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 to the agent I1214 05:04:11.885074 24120 slave.cpp:4190] Forwarding the update TASK_RUNNING (UUID: 32ed996d-9fc5-4e82-afb1-d7936670af16) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 to master@172.16.10.182:60883 I1214 05:04:11.885150 24120 slave.cpp:4084] Status update manager successfully handled status update TASK_RUNNING (UUID: 32ed996d-9fc5-4e82-afb1-d7936670af16) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.885172 24120 slave.cpp:4100] Sending acknowledgement for status update TASK_RUNNING (UUID: 32ed996d-9fc5-4e82-afb1-d7936670af16) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 to executor(1)@172.17.0.1:34346 I1214 05:04:11.885200 24118 master.cpp:5769] Status update TASK_RUNNING (UUID: 32ed996d-9fc5-4e82-afb1-d7936670af16) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 from agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 at slave(576)@172.16.10.182:60883 
(ip-172-16-10-182.mesosphere.io) I1214 05:04:11.885229 24118 master.cpp:5831] Forwarding status update TASK_RUNNING (UUID: 32ed996d-9fc5-4e82-afb1-d7936670af16) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.885284 24118 master.cpp:7867] Updating the state of task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I1214 05:04:11.885409 24117 sched.cpp:1031] Scheduler::statusUpdate took 63040ns I1214 05:04:11.885541 24122 master.cpp:4877] Processing ACKNOWLEDGE call 32ed996d-9fc5-4e82-afb1-d7936670af16 for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 (default) at scheduler-dfbb45a6-1912-4912-acac-b4fb87c28181@172.16.10.182:60883 on agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 I1214 05:04:11.885753 24119 status_update_manager.cpp:395] Received status update acknowledgement (UUID: 32ed996d-9fc5-4e82-afb1-d7936670af16) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.885821 24119 slave.cpp:3031] Status update manager successfully handled status update acknowledgement (UUID: 32ed996d-9fc5-4e82-afb1-d7936670af16) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 sh: 3: [: x0.0.0.0: unexpected operator Command exited with status 1 (pid: 11172) I1214 05:04:11.978447 24119 slave.cpp:3749] Handling status update TASK_FAILED (UUID: 361b2ed5-7dee-4c96-854a-585c78d25532) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 from executor(1)@172.17.0.1:34346 I1214 05:04:11.979202 24122 status_update_manager.cpp:323] Received status update TASK_FAILED (UUID: 361b2ed5-7dee-4c96-854a-585c78d25532) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.979249 24122 status_update_manager.cpp:377] Forwarding update TASK_FAILED (UUID: 361b2ed5-7dee-4c96-854a-585c78d25532) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 to the agent I1214 05:04:11.979321 24119 slave.cpp:4190] Forwarding the update TASK_FAILED (UUID: 361b2ed5-7dee-4c96-854a-585c78d25532) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 to master@172.16.10.182:60883 I1214 05:04:11.979404 24119 slave.cpp:4084] Status update manager successfully handled status update TASK_FAILED (UUID: 361b2ed5-7dee-4c96-854a-585c78d25532) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.979425 24119 slave.cpp:4100] Sending acknowledgement for status update TASK_FAILED (UUID: 361b2ed5-7dee-4c96-854a-585c78d25532) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 to executor(1)@172.17.0.1:34346 I1214 05:04:11.979462 24121 master.cpp:5769] Status update TASK_FAILED (UUID: 361b2ed5-7dee-4c96-854a-585c78d25532) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 from agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 at slave(576)@172.16.10.182:60883 (ip-172-16-10-182.mesosphere.io) I1214 05:04:11.979482 24121 master.cpp:5831] Forwarding status update TASK_FAILED (UUID: 361b2ed5-7dee-4c96-854a-585c78d25532) for task 80ddad17-5693-4883-a79d-1925a7b41661 of 
framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.979528 24121 master.cpp:7867] Updating the state of task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 (latest state: TASK_FAILED, status update state: TASK_FAILED) I1214 05:04:11.979662 24120 sched.cpp:1031] Scheduler::statusUpdate took 66057ns I1214 05:04:11.979784 24116 hierarchical.cpp:1023] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: {}) on agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 from framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 ../../src/tests/containerizer/cni_isolator_tests.cpp:601: Failure Value of: statusFinished->state() Actual: TASK_FAILED Expected: TASK_FINISHED I1214 05:04:11.979811 24123 master.cpp:4877] Processing ACKNOWLEDGE call 361b2ed5-7dee-4c96-854a-585c78d25532 for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 (default) at scheduler-dfbb45a6-1912-4912-acac-b4fb87c28181@172.16.10.182:60883 on agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 I1214 05:04:11.979845 24102 sched.cpp:2008] Asked to stop the driver I1214 05:04:11.979832 24123 master.cpp:7963] Removing task 80ddad17-5693-4883-a79d-1925a7b41661 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 on agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 at slave(576)@172.16.10.182:60883 (ip-172-16-10-182.mesosphere.io) I1214 05:04:11.979892 24121 sched.cpp:1193] Stopping framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.979966 24122 master.cpp:7287] Processing TEARDOWN call for framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 (default) at scheduler-dfbb45a6-1912-4912-acac-b4fb87c28181@172.16.10.182:60883 I1214 05:04:11.979981 24122 master.cpp:7299] Removing framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 (default) at scheduler-dfbb45a6-1912-4912-acac-b4fb87c28181@172.16.10.182:60883 I1214 05:04:11.979990 24120 status_update_manager.cpp:395] Received status update acknowledgement (UUID: 361b2ed5-7dee-4c96-854a-585c78d25532) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.980049 24120 status_update_manager.cpp:531] Cleaning up status update stream for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.980075 24123 hierarchical.cpp:391] Deactivated framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.980115 24121 slave.cpp:2584] Asked to shut down framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 by master@172.16.10.182:60883 I1214 05:04:11.980144 24121 slave.cpp:2609] Shutting down framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.980160 24121 slave.cpp:4999] Shutting down executor '80ddad17-5693-4883-a79d-1925a7b41661' of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 at executor(1)@172.17.0.1:34346 I1214 05:04:11.980175 24118 hierarchical.cpp:342] Removed framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.980267 24121 slave.cpp:3031] Status update manager successfully handled status update acknowledgement (UUID: 361b2ed5-7dee-4c96-854a-585c78d25532) for task 80ddad17-5693-4883-a79d-1925a7b41661 of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.980285 24121 slave.cpp:6719] Completing task 80ddad17-5693-4883-a79d-1925a7b41661 I1214 
05:04:11.980413 11170 exec.cpp:414] Executor asked to shutdown Received SHUTDOWN event Shutting down I1214 05:04:11.980662 24120 containerizer.cpp:2113] Destroying container 696f16bc-efdc-4a36-b94c-b0739cc1c095 in RUNNING state I1214 05:04:11.980731 24120 linux_launcher.cpp:505] Asked to destroy container 696f16bc-efdc-4a36-b94c-b0739cc1c095 I1214 05:04:11.981130 24120 linux_launcher.cpp:548] Using freezer to destroy cgroup mesos/696f16bc-efdc-4a36-b94c-b0739cc1c095 I1214 05:04:11.981762 24117 cgroups.cpp:2726] Freezing cgroup /sys/fs/cgroup/freezer/mesos/696f16bc-efdc-4a36-b94c-b0739cc1c095 I1214 05:04:11.982657 24123 cgroups.cpp:1439] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/696f16bc-efdc-4a36-b94c-b0739cc1c095 after 869120ns I1214 05:04:11.983755 24117 cgroups.cpp:2744] Thawing cgroup /sys/fs/cgroup/freezer/mesos/696f16bc-efdc-4a36-b94c-b0739cc1c095 I1214 05:04:11.984719 24118 cgroups.cpp:1468] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/696f16bc-efdc-4a36-b94c-b0739cc1c095 after 944128ns I1214 05:04:11.999498 24117 slave.cpp:4318] Got exited event for executor(1)@172.17.0.1:34346 I1214 05:04:12.043917 24120 containerizer.cpp:2476] Container 696f16bc-efdc-4a36-b94c-b0739cc1c095 has exited I1214 05:04:12.044694 24119 cni.cpp:1511] Invoking CNI plugin '/mnt/teamcity/temp/buildTmp/qEaG50/plugins/mockPlugin' with network configuration '/run/mesos/isolators/network/cni/696f16bc-efdc-4a36-b94c-b0739cc1c095/__MESOS_TEST__/network.conf' to detach container 696f16bc-efdc-4a36-b94c-b0739cc1c095 from network '__MESOS_TEST__' I1214 05:04:12.151413 24123 cni.cpp:1438] Unmounted the network namespace handle '/run/mesos/isolators/network/cni/696f16bc-efdc-4a36-b94c-b0739cc1c095/ns' for container 696f16bc-efdc-4a36-b94c-b0739cc1c095 I1214 05:04:12.151480 24123 cni.cpp:1449] Removed the container directory '/run/mesos/isolators/network/cni/696f16bc-efdc-4a36-b94c-b0739cc1c095' I1214 05:04:12.151793 24116 provisioner.cpp:324] Ignoring destroy request for unknown container 696f16bc-efdc-4a36-b94c-b0739cc1c095 I1214 05:04:12.152027 24117 slave.cpp:4681] Executor '80ddad17-5693-4883-a79d-1925a7b41661' of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 terminated with signal Killed I1214 05:04:12.152050 24117 slave.cpp:4785] Cleaning up executor '80ddad17-5693-4883-a79d-1925a7b41661' of framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 at executor(1)@172.17.0.1:34346 I1214 05:04:12.152204 24121 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg/slaves/cf36e3a9-60fb-4885-8145-831ca6998263-S0/frameworks/cf36e3a9-60fb-4885-8145-831ca6998263-0000/executors/80ddad17-5693-4883-a79d-1925a7b41661/runs/696f16bc-efdc-4a36-b94c-b0739cc1c095' for gc 6.99999823931852days in the future I1214 05:04:12.152215 24117 slave.cpp:4873] Cleaning up framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:12.152261 24121 gc.cpp:55] Scheduling '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg/slaves/cf36e3a9-60fb-4885-8145-831ca6998263-S0/frameworks/cf36e3a9-60fb-4885-8145-831ca6998263-0000/executors/80ddad17-5693-4883-a79d-1925a7b41661' for gc 6.99999823841481days in the future I1214 05:04:12.152300 24116 status_update_manager.cpp:285] Closing status update streams for framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:12.152401 24117 slave.cpp:796] Agent terminating I1214 05:04:12.152410 24122 gc.cpp:55] Scheduling 
'/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg/slaves/cf36e3a9-60fb-4885-8145-831ca6998263-S0/frameworks/cf36e3a9-60fb-4885-8145-831ca6998263-0000' for gc 6.99999823711704days in the future I1214 05:04:12.152472 24122 master.cpp:1258] Agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 at slave(576)@172.16.10.182:60883 (ip-172-16-10-182.mesosphere.io) disconnected I1214 05:04:12.152488 24122 master.cpp:2977] Disconnecting agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 at slave(576)@172.16.10.182:60883 (ip-172-16-10-182.mesosphere.io) I1214 05:04:12.152509 24122 master.cpp:2996] Deactivating agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 at slave(576)@172.16.10.182:60883 (ip-172-16-10-182.mesosphere.io) I1214 05:04:12.152565 24122 hierarchical.cpp:589] Agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 deactivated I1214 05:04:12.154162 24102 master.cpp:1097] Master terminating I1214 05:04:12.154306 24118 hierarchical.cpp:522] Removed agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 [ FAILED ] CniIsolatorTest.ROOT_EnvironmentLibprocessIP (503 ms)[05:04:11] : [Step 11/11] [ RUN ] CniIsolatorTest.ROOT_EnvironmentLibprocessIP I1214 05:04:11.653625 24102 cluster.cpp:160] Creating default 'local' authorizer I1214 05:04:11.654268 24118 master.cpp:380] Master cf36e3a9-60fb-4885-8145-831ca6998263 (ip-172-16-10-182.mesosphere.io) started on 172.16.10.182:60883 I1214 05:04:11.654281 24118 master.cpp:382] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/mnt/teamcity/temp/buildTmp/qEaG50/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""in_memory"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/mnt/teamcity/temp/buildTmp/qEaG50/master"""" --zk_session_timeout=""""10secs"""" I1214 05:04:11.654422 24118 master.cpp:432] Master only allowing authenticated frameworks to register I1214 05:04:11.654428 24118 master.cpp:446] Master only allowing authenticated agents to register I1214 05:04:11.654431 24118 master.cpp:459] Master only allowing authenticated HTTP frameworks to register I1214 05:04:11.654434 24118 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/qEaG50/credentials' I1214 05:04:11.654512 24118 master.cpp:504] Using default 'crammd5' authenticator I1214 05:04:11.654549 24118 http.cpp:922] Using default 'basic' HTTP authenticator for realm 
'mesos-master-readonly' I1214 05:04:11.654598 24118 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' I1214 05:04:11.654669 24118 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' I1214 05:04:11.654706 24118 master.cpp:584] Authorization enabled I1214 05:04:11.654785 24119 hierarchical.cpp:149] Initialized hierarchical allocator process I1214 05:04:11.654793 24123 whitelist_watcher.cpp:77] No whitelist given I1214 05:04:11.655488 24123 master.cpp:2045] Elected as the leading master! I1214 05:04:11.655498 24123 master.cpp:1568] Recovering from registrar I1214 05:04:11.655591 24116 registrar.cpp:329] Recovering registrar I1214 05:04:11.655776 24122 registrar.cpp:362] Successfully fetched the registry (0B) in 163072ns I1214 05:04:11.655800 24122 registrar.cpp:461] Applied 1 operations in 2760ns; attempting to update the registry I1214 05:04:11.655972 24118 registrar.cpp:506] Successfully updated the registry in 156928ns I1214 05:04:11.656020 24118 registrar.cpp:392] Successfully recovered registrar I1214 05:04:11.656093 24118 master.cpp:1684] Recovered 0 agents from the registry (174B); allowing 10mins for agents to re-register I1214 05:04:11.656112 24120 hierarchical.cpp:176] Skipping recovery of hierarchical allocator: nothing to recover I1214 05:04:11.657594 24102 containerizer.cpp:220] Using isolation: network/cni,filesystem/posix I1214 05:04:11.660497 24102 linux_launcher.cpp:150] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher I1214 05:04:11.661574 24102 cluster.cpp:446] Creating default 'local' authorizer I1214 05:04:11.661927 24119 slave.cpp:209] Mesos agent started on (576)@172.16.10.182:60883 I1214 05:04:11.661937 24119 slave.cpp:210] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/mnt/teamcity/temp/buildTmp/mesos/store/appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_kPcBAv/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/mnt/teamcity/temp/buildTmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_kPcBAv/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" 
--http_credentials=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_kPcBAv/http_credentials"""" --http_heartbeat_interval=""""30secs"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""network/cni"""" --launcher=""""linux"""" --launcher_dir=""""/mnt/teamcity/work/4240ba9ddd0997c3/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --network_cni_config_dir=""""/mnt/teamcity/temp/buildTmp/qEaG50/configs"""" --network_cni_plugins_dir=""""/mnt/teamcity/temp/buildTmp/qEaG50/plugins"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_kPcBAv"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg"""" I1214 05:04:11.662156 24119 credentials.hpp:86] Loading credential for authentication from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_kPcBAv/credential' I1214 05:04:11.662252 24119 slave.cpp:352] Agent using credential for: test-principal I1214 05:04:11.662262 24119 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_kPcBAv/http_credentials' I1214 05:04:11.662319 24119 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' I1214 05:04:11.662350 24119 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' I1214 05:04:11.662586 24102 sched.cpp:232] Version: 1.2.0 I1214 05:04:11.662672 24119 slave.cpp:539] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1214 05:04:11.662709 24119 slave.cpp:547] Agent attributes: [ ] I1214 05:04:11.662716 24121 sched.cpp:336] New master detected at master@172.16.10.182:60883 I1214 05:04:11.662717 24119 slave.cpp:552] Agent hostname: ip-172-16-10-182.mesosphere.io I1214 05:04:11.662739 24121 sched.cpp:402] Authenticating with master master@172.16.10.182:60883 I1214 05:04:11.662744 24121 sched.cpp:409] Using default CRAM-MD5 authenticatee I1214 05:04:11.662809 24120 authenticatee.cpp:121] Creating new client SASL connection I1214 05:04:11.662976 24116 master.cpp:6748] Authenticating scheduler-dfbb45a6-1912-4912-acac-b4fb87c28181@172.16.10.182:60883 I1214 05:04:11.663029 24120 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(1172)@172.16.10.182:60883 I1214 05:04:11.663045 24121 state.cpp:57] Recovering state from '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg/meta' I1214 05:04:11.663202 24119 status_update_manager.cpp:203] Recovering status update manager I1214 05:04:11.663219 24120 authenticator.cpp:98] Creating new server SASL connection I1214 05:04:11.663321 24119 containerizer.cpp:594] Recovering containerizer I1214 05:04:11.663372 24119 authenticatee.cpp:213] Received 
SASL authentication mechanisms: CRAM-MD5 I1214 05:04:11.663388 24119 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1214 05:04:11.663450 24120 authenticator.cpp:204] Received SASL authentication start I1214 05:04:11.663487 24120 authenticator.cpp:326] Authentication requires more steps I1214 05:04:11.663533 24120 authenticatee.cpp:259] Received SASL authentication step I1214 05:04:11.663609 24122 authenticator.cpp:232] Received SASL authentication step I1214 05:04:11.663625 24122 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-182.mesosphere.io' server FQDN: 'ip-172-16-10-182.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1214 05:04:11.663630 24122 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I1214 05:04:11.663635 24122 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1214 05:04:11.663642 24122 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-182.mesosphere.io' server FQDN: 'ip-172-16-10-182.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1214 05:04:11.663646 24122 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1214 05:04:11.663650 24122 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1214 05:04:11.663657 24122 authenticator.cpp:318] Authentication success I1214 05:04:11.663708 24122 authenticatee.cpp:299] Authentication success I1214 05:04:11.663739 24120 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(1172)@172.16.10.182:60883 I1214 05:04:11.663797 24118 master.cpp:6778] Successfully authenticated principal 'test-principal' at scheduler-dfbb45a6-1912-4912-acac-b4fb87c28181@172.16.10.182:60883 I1214 05:04:11.663831 24122 sched.cpp:508] Successfully authenticated with master master@172.16.10.182:60883 I1214 05:04:11.663841 24122 sched.cpp:826] Sending SUBSCRIBE call to master@172.16.10.182:60883 I1214 05:04:11.663913 24122 sched.cpp:859] Will retry registration in 1.01884683secs if necessary I1214 05:04:11.663961 24116 master.cpp:2633] Received SUBSCRIBE call for framework 'default' at scheduler-dfbb45a6-1912-4912-acac-b4fb87c28181@172.16.10.182:60883 I1214 05:04:11.663985 24116 master.cpp:2081] Authorizing framework principal 'test-principal' to receive offers for role '*' I1214 05:04:11.664217 24116 master.cpp:2709] Subscribing framework default with checkpointing disabled and capabilities [ ] I1214 05:04:11.664366 24117 sched.cpp:749] Framework registered with cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.664367 24119 hierarchical.cpp:276] Added framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 I1214 05:04:11.664393 24117 sched.cpp:763] Scheduler::registered took 9855ns I1214 05:04:11.664402 24119 hierarchical.cpp:1689] No allocations performed I1214 05:04:11.664409 24119 hierarchical.cpp:1784] No inverse offers to send out! 
I1214 05:04:11.664417 24119 hierarchical.cpp:1291] Performed allocation for 0 agents in 23024ns I1214 05:04:11.664438 24116 provisioner.cpp:253] Provisioner recovery complete I1214 05:04:11.664584 24121 slave.cpp:5420] Finished recovery I1214 05:04:11.664744 24121 slave.cpp:5594] Querying resource estimator for oversubscribable resources I1214 05:04:11.664862 24122 slave.cpp:924] New master detected at master@172.16.10.182:60883 I1214 05:04:11.664870 24118 status_update_manager.cpp:177] Pausing sending status updates I1214 05:04:11.664877 24122 slave.cpp:983] Authenticating with master master@172.16.10.182:60883 I1214 05:04:11.664893 24122 slave.cpp:994] Using default CRAM-MD5 authenticatee I1214 05:04:11.664923 24122 slave.cpp:956] Detecting new master I1214 05:04:11.664952 24123 authenticatee.cpp:121] Creating new client SASL connection I1214 05:04:11.664979 24122 slave.cpp:5608] Received oversubscribable resources {} from the resource estimator I1214 05:04:11.665102 24123 master.cpp:6748] Authenticating slave(576)@172.16.10.182:60883 I1214 05:04:11.665148 24116 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(1173)@172.16.10.182:60883 I1214 05:04:11.665217 24116 authenticator.cpp:98] Creating new server SASL connection I1214 05:04:11.665410 24116 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1214 05:04:11.665427 24116 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1214 05:04:11.665475 24116 authenticator.cpp:204] Received SASL authentication start I1214 05:04:11.665504 24116 authenticator.cpp:326] Authentication requires more steps I1214 05:04:11.665547 24116 authenticatee.cpp:259] Received SASL authentication step I1214 05:04:11.665601 24116 authenticator.cpp:232] Received SASL authentication step I1214 05:04:11.665616 24116 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-182.mesosphere.io' server FQDN: 'ip-172-16-10-182.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1214 05:04:11.665622 24116 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I1214 05:04:11.665632 24116 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1214 05:04:11.665643 24116 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-182.mesosphere.io' server FQDN: 'ip-172-16-10-182.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1214 05:04:11.665652 24116 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1214 05:04:11.665657 24116 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1214 05:04:11.665668 24116 authenticator.cpp:318] Authentication success I1214 05:04:11.665733 24119 authenticatee.cpp:299] Authentication success I1214 05:04:11.665763 24116 master.cpp:6778] Successfully authenticated principal 'test-principal' at slave(576)@172.16.10.182:60883 I1214 05:04:11.665784 24121 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(1173)@172.16.10.182:60883 I1214 05:04:11.665923 24120 slave.cpp:1078] Successfully authenticated with master master@172.16.10.182:60883 I1214 05:04:11.665961 24120 slave.cpp:1492] Will retry registration in 14.874897ms if necessary I1214 05:04:11.666002 24118 master.cpp:5161] Registering agent at slave(576)@172.16.10.182:60883 
(ip-172-16-10-182.mesosphere.io) with id cf36e3a9-60fb-4885-8145-831ca6998263-S0 I1214 05:04:11.666106 24122 registrar.cpp:461] Applied 1 operations in 9638ns; attempting to update the registry I1214 05:04:11.666337 24120 registrar.cpp:506] Successfully updated the registry in 217088ns I1214 05:04:11.666514 24122 master.cpp:5232] Registered agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 at slave(576)@172.16.10.182:60883 (ip-172-16-10-182.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1214 05:04:11.666548 24119 slave.cpp:4272] Received ping from slave-observer(545)@172.16.10.182:60883 I1214 05:04:11.666596 24120 hierarchical.cpp:490] Added agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 (ip-172-16-10-182.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: {}) I1214 05:04:11.666616 24119 slave.cpp:1124] Registered with master master@172.16.10.182:60883; given agent ID cf36e3a9-60fb-4885-8145-831ca6998263-S0 I1214 05:04:11.666630 24119 fetcher.cpp:90] Clearing fetcher cache I1214 05:04:11.666707 24116 status_update_manager.cpp:184] Resuming sending status updates I1214 05:04:11.666790 24119 slave.cpp:1147] Checkpointing SlaveInfo to '/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg/meta/slaves/cf36e3a9-60fb-4885-8145-831ca6998263-S0/slave.info' I1214 05:04:11.666831 24120 hierarchical.cpp:1784] No inverse offers to send out! I1214 05:04:11.666857 24120 hierarchical.cpp:1314] Performed allocation for agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 in 234910ns I1214 05:04:11.666913 24119 slave.cpp:1184] Forwarding total oversubscribed resources {} I1214 05:04:11.666954 24122 master.cpp:6577] Sending 1 offers to framework cf36e3a9-60fb-4885-8145-831ca6998263-0000 (default) at scheduler-dfbb45a6-1912-4912-acac-b4fb87c28181@172.16.10.182:60883 I1214 05:04:11.667026 24122 master.cpp:5633] Received update of agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 at slave(576)@172.16.10.182:60883 (ip-172-16-10-182.mesosphere.io) with total oversubscribed resources {} I1214 05:04:11.667127 24120 sched.cpp:923] Scheduler::resourceOffers took 38599ns I1214 05:04:11.667131 24118 hierarchical.cpp:560] Agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 (ip-172-16-10-182.mesosphere.io) updated with oversubscribed resources {} (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I1214 05:04:11.667181 24118 hierarchical.cpp:1689] No allocations performed I1214 05:04:11.667192 24118 hierarchical.cpp:1784] No inverse offers to send out! 
'/mnt/teamcity/temp/buildTmp/CniIsolatorTest_ROOT_EnvironmentLibprocessIP_dqz8kg/slaves/cf36e3a9-60fb-4885-8145-831ca6998263-S0/frameworks/cf36e3a9-60fb-4885-8145-831ca6998263-0000' for gc 6.99999823711704days in the future I1214 05:04:12.152472 24122 master.cpp:1258] Agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 at slave(576)@172.16.10.182:60883 (ip-172-16-10-182.mesosphere.io) disconnected I1214 05:04:12.152488 24122 master.cpp:2977] Disconnecting agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 at slave(576)@172.16.10.182:60883 (ip-172-16-10-182.mesosphere.io) I1214 05:04:12.152509 24122 master.cpp:2996] Deactivating agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 at slave(576)@172.16.10.182:60883 (ip-172-16-10-182.mesosphere.io) I1214 05:04:12.152565 24122 hierarchical.cpp:589] Agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 deactivated I1214 05:04:12.154162 24102 master.cpp:1097] Master terminating I1214 05:04:12.154306 24118 hierarchical.cpp:522] Removed agent cf36e3a9-60fb-4885-8145-831ca6998263-S0 [ FAILED ] CniIsolatorTest.ROOT_EnvironmentLibprocessIP (503 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6795","12/14/2016 18:00:18",2,"Listening socket might get closed while the accept is still in flight. ""This might result in the SocketImpl::accept to invoke network::accept on a wrong fd (or a closed fd). We discovered this while triaging a weird behavior in test (https://issues.apache.org/jira/browse/MESOS-6759).""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6802","12/15/2016 17:52:12",3,"SSL socket can lose bytes in the case of EOF ""During recent work on SSL-enabled tests in libprocess (MESOS-5966), we discovered a bug in {{LibeventSSLSocketImpl}}, wherein the socket can either fail to receive an EOF, or lose data when an EOF is received. The {{LibeventSSLSocketImpl::event_callback(short events)}} method immediately sets any pending {{RecvRequest}}'s promise to zero upon receipt of an EOF. However, at the time the promise is set, there may actually be data waiting to be read by libevent. Upon receipt of an EOF, we should attempt to read the socket's bufferevent first to ensure that we aren't losing any data previously received by the socket.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6805","12/15/2016 20:56:07",2,"Check unreachable task cache for task ID collisions on launch ""As discussed in MESOS-6785, it is possible to crash the master by launching a task that reuses the ID of an unreachable/partitioned task. A complete solution to this problem will be quite involved, but an incremental improvement is easy: when we see a task launch operation, reject the launch attempt if the task ID collides with an ID in the per-framework {{unreachableTasks}} cache. This doesn't catch all situations in which IDs are reused, but it is better than nothing.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6806","12/15/2016 22:02:34",1,"Update the addition, deletion and modification logic of CNI configuration files. 
""We need update the CNI documentation to highlight that we can add/delete and modify CNI networks on the fly without the need for agent restart.""","",0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6811","12/19/2016 10:26:16",3,"IOSwitchboardServerTest.SendHeartbeat and IOSwitchboardServerTest.ReceiveHeartbeat broken on OS X ""The tests IOSwitchboardServerTest.SendHeartbeat and IOSwitchboardServerTest.ReceiveHeartbeat are broken on OS X. The issue is caused by the way the socket paths are constructed in the tests, The lengths of the components are * sandbox path: 55 characters (including directory delimiters), * {{mesos-io-switchboard}}: 20 characters, * UUID: 36 characters which amounts to a total of 113 non-zero characters. Since the socket is already created in the test's sandbox and only a single socket is created in the test, it appears that it might be possible to strip e.g., the UUID from the path to make the path fit."""," [==========] Running 2 tests from 1 test case. [----------] Global test environment set-up. [----------] 2 tests from IOSwitchboardServerTest [ RUN ] IOSwitchboardServerTest.SendHeartbeat ../../src/tests/containerizer/io_switchboard_tests.cpp:392: Failure server: Failed to build address from '/var/folders/6t/yp_xgc8d6k32rpp0bsbfqm9m0000gp/T/04ioBQ/mesos-io-switchboard-0ce96b84-fc47-4a21-b7bc-71eddd8b0f13': Path too long, must be less than 104 bytes [ FAILED ] IOSwitchboardServerTest.SendHeartbeat (5 ms) [ RUN ] IOSwitchboardServerTest.ReceiveHeartbeat ../../src/tests/containerizer/io_switchboard_tests.cpp:630: Failure server: Failed to build address from '/var/folders/6t/yp_xgc8d6k32rpp0bsbfqm9m0000gp/T/FryfgE/mesos-io-switchboard-df1d7004-7ea1-43f3-bec9-bf0c2663b260': Path too long, must be less than 104 bytes [ FAILED ] IOSwitchboardServerTest.ReceiveHeartbeat (0 ms) [----------] 2 tests from IOSwitchboardServerTest (5 ms total) [----------] Global test environment tear-down [==========] 2 tests from 1 test case ran. (29 ms total) [ PASSED ] 0 tests. [ FAILED ] 2 tests, listed below: [ FAILED ] IOSwitchboardServerTest.SendHeartbeat [ FAILED ] IOSwitchboardServerTest.ReceiveHeartbeat string socketPath = path::join( sandbox.get(), """"mesos-io-switchboard-"""" + UUID::random().toString()); ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6815","12/19/2016 19:32:52",5,"Enable glog stack traces when we call things like `ABORT` on Windows ""Currently in the Windows builds, if we call `ABORT` (etc.) we will simply bail out, with no stack traces. This is highly undesirable. Stack traces are important for operating clusters in production. We should work to enable this behavior, including possibly working with glog to add this support if they currently they do not natively support it.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6820","12/20/2016 09:35:36",1,"FaultToleranceTest.FrameworkReregister is flaky. ""I just saw {{FaultToleranceTest.FrameworkReregister}} fail in internal CI on a Debian 8 system. Running the test in repetition on my OS X machine I was able to reproduce the issue on OS X as well. 
"""," [ RUN ] FaultToleranceTest.FrameworkReregister I1219 23:04:12.914769 23530 cluster.cpp:160] Creating default 'local' authorizer I1219 23:04:12.915388 23545 master.cpp:380] Master 4daa3046-9990-49c7-b601-958964306799 (ip-172-16-10-223.mesosphere.io) started on 172.16.10.223:52614 I1219 23:04:12.915400 23545 master.cpp:382] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/mnt/teamcity/temp/buildTmp/4KpUDy/credentials"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""in_memory"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/mnt/teamcity/temp/buildTmp/4KpUDy/master"""" --zk_session_timeout=""""10secs"""" I1219 23:04:12.915504 23545 master.cpp:432] Master only allowing authenticated frameworks to register I1219 23:04:12.915509 23545 master.cpp:446] Master only allowing authenticated agents to register I1219 23:04:12.915511 23545 master.cpp:459] Master only allowing authenticated HTTP frameworks to register I1219 23:04:12.915514 23545 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/4KpUDy/credentials' I1219 23:04:12.915570 23545 master.cpp:504] Using default 'crammd5' authenticator I1219 23:04:12.915597 23545 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-master-readonly' I1219 23:04:12.915617 23545 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' I1219 23:04:12.915658 23545 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' I1219 23:04:12.915688 23545 master.cpp:584] Authorization enabled I1219 23:04:12.915725 23546 whitelist_watcher.cpp:77] No whitelist given I1219 23:04:12.915737 23547 hierarchical.cpp:149] Initialized hierarchical allocator process I1219 23:04:12.916110 23545 master.cpp:2046] Elected as the leading master! 
I1219 23:04:12.916118 23545 master.cpp:1568] Recovering from registrar I1219 23:04:12.916179 23548 registrar.cpp:329] Recovering registrar I1219 23:04:12.916311 23545 registrar.cpp:362] Successfully fetched the registry (0B) in 115968ns I1219 23:04:12.916334 23545 registrar.cpp:461] Applied 1 operations in 1982ns; attempting to update the registry I1219 23:04:12.916554 23547 registrar.cpp:506] Successfully updated the registry in 208896ns I1219 23:04:12.916770 23547 registrar.cpp:392] Successfully recovered registrar I1219 23:04:12.916853 23547 master.cpp:1684] Recovered 0 agents from the registry (174B); allowing 10mins for agents to re-register I1219 23:04:12.916956 23544 hierarchical.cpp:176] Skipping recovery of hierarchical allocator: nothing to recover I1219 23:04:12.918097 23530 containerizer.cpp:220] Using isolation: posix/cpu,posix/mem,filesystem/posix,network/cni I1219 23:04:12.920801 23530 linux_launcher.cpp:150] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher I1219 23:04:12.921478 23530 cluster.cpp:446] Creating default 'local' authorizer I1219 23:04:12.921813 23546 slave.cpp:209] Mesos agent started on (36)@172.16.10.223:52614 I1219 23:04:12.921823 23546 slave.cpp:210] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/mnt/teamcity/temp/buildTmp/mesos/store/appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/mnt/teamcity/temp/buildTmp/FaultToleranceTest_FrameworkReregister_PIW1QY/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/mnt/teamcity/temp/buildTmp/mesos/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/mnt/teamcity/temp/buildTmp/FaultToleranceTest_FrameworkReregister_PIW1QY/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_command_executor=""""false"""" --http_credentials=""""/mnt/teamcity/temp/buildTmp/FaultToleranceTest_FrameworkReregister_PIW1QY/http_credentials"""" --http_heartbeat_interval=""""30secs"""" --image_provisioner_backend=""""copy"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher=""""linux"""" --launcher_dir=""""/mnt/teamcity/work/4240ba9ddd0997c3/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --qos_correction_interval_min=""""0ns"""" 
--quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/mnt/teamcity/temp/buildTmp/FaultToleranceTest_FrameworkReregister_PIW1QY"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/mnt/teamcity/temp/buildTmp/FaultToleranceTest_FrameworkReregister_ytYsKg"""" I1219 23:04:12.922118 23546 credentials.hpp:86] Loading credential for authentication from '/mnt/teamcity/temp/buildTmp/FaultToleranceTest_FrameworkReregister_PIW1QY/credential' I1219 23:04:12.922175 23546 slave.cpp:352] Agent using credential for: test-principal I1219 23:04:12.922183 23546 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/FaultToleranceTest_FrameworkReregister_PIW1QY/http_credentials' I1219 23:04:12.922237 23546 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' I1219 23:04:12.922269 23546 http.cpp:922] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' I1219 23:04:12.922341 23530 sched.cpp:232] Version: 1.2.0 I1219 23:04:12.922451 23551 sched.cpp:336] New master detected at master@172.16.10.223:52614 I1219 23:04:12.922477 23551 sched.cpp:402] Authenticating with master master@172.16.10.223:52614 I1219 23:04:12.922487 23551 sched.cpp:409] Using default CRAM-MD5 authenticatee I1219 23:04:12.922564 23545 authenticatee.cpp:121] Creating new client SASL connection I1219 23:04:12.922552 23546 slave.cpp:539] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1219 23:04:12.922590 23546 slave.cpp:547] Agent attributes: [ ] I1219 23:04:12.922596 23546 slave.cpp:552] Agent hostname: ip-172-16-10-223.mesosphere.io I1219 23:04:12.922739 23545 master.cpp:6751] Authenticating scheduler-09a58709-8c77-40d4-a431-5faba5cbbe7d@172.16.10.223:52614 I1219 23:04:12.922794 23547 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(100)@172.16.10.223:52614 I1219 23:04:12.922869 23545 authenticator.cpp:98] Creating new server SASL connection I1219 23:04:12.922893 23546 state.cpp:57] Recovering state from '/mnt/teamcity/temp/buildTmp/FaultToleranceTest_FrameworkReregister_ytYsKg/meta' I1219 23:04:12.923012 23546 status_update_manager.cpp:203] Recovering status update manager I1219 23:04:12.923072 23545 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1219 23:04:12.923086 23545 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1219 23:04:12.923137 23545 authenticator.cpp:204] Received SASL authentication start I1219 23:04:12.923156 23548 containerizer.cpp:599] Recovering containerizer I1219 23:04:12.923158 23545 authenticator.cpp:326] Authentication requires more steps I1219 23:04:12.923212 23545 authenticatee.cpp:259] Received SASL authentication step I1219 23:04:12.923260 23545 authenticator.cpp:232] Received SASL authentication step I1219 23:04:12.923276 23545 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-223.mesosphere.io' server FQDN: 'ip-172-16-10-223.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1219 23:04:12.923283 23545 auxprop.cpp:181] 
Looking up auxiliary property '*userPassword' I1219 23:04:12.923292 23545 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1219 23:04:12.923300 23545 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-223.mesosphere.io' server FQDN: 'ip-172-16-10-223.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1219 23:04:12.923306 23545 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1219 23:04:12.923312 23545 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1219 23:04:12.923322 23545 authenticator.cpp:318] Authentication success I1219 23:04:12.923377 23548 authenticatee.cpp:299] Authentication success I1219 23:04:12.923403 23550 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(100)@172.16.10.223:52614 I1219 23:04:12.923434 23545 master.cpp:6781] Successfully authenticated principal 'test-principal' at scheduler-09a58709-8c77-40d4-a431-5faba5cbbe7d@172.16.10.223:52614 I1219 23:04:12.923477 23548 sched.cpp:508] Successfully authenticated with master master@172.16.10.223:52614 I1219 23:04:12.923485 23548 sched.cpp:826] Sending SUBSCRIBE call to master@172.16.10.223:52614 I1219 23:04:12.923519 23548 sched.cpp:859] Will retry registration in 1.760040858secs if necessary I1219 23:04:12.923565 23551 master.cpp:2634] Received SUBSCRIBE call for framework 'default' at scheduler-09a58709-8c77-40d4-a431-5faba5cbbe7d@172.16.10.223:52614 I1219 23:04:12.923579 23551 master.cpp:2082] Authorizing framework principal 'test-principal' to receive offers for role '*' I1219 23:04:12.923686 23551 master.cpp:2710] Subscribing framework default with checkpointing disabled and capabilities [ ] I1219 23:04:12.923876 23551 sched.cpp:749] Framework registered with 4daa3046-9990-49c7-b601-958964306799-0000 I1219 23:04:12.923877 23547 hierarchical.cpp:277] Added framework 4daa3046-9990-49c7-b601-958964306799-0000 I1219 23:04:12.923913 23547 hierarchical.cpp:1690] No allocations performed I1219 23:04:12.923918 23551 sched.cpp:763] Scheduler::registered took 21516ns I1219 23:04:12.923923 23547 hierarchical.cpp:1785] No inverse offers to send out! 
I1219 23:04:12.923933 23547 hierarchical.cpp:1292] Performed allocation for 0 agents in 27585ns I1219 23:04:12.924137 23548 provisioner.cpp:253] Provisioner recovery complete I1219 23:04:12.924263 23545 slave.cpp:5407] Finished recovery I1219 23:04:12.924439 23545 slave.cpp:5581] Querying resource estimator for oversubscribable resources I1219 23:04:12.924515 23545 slave.cpp:924] New master detected at master@172.16.10.223:52614 I1219 23:04:12.924522 23547 status_update_manager.cpp:177] Pausing sending status updates I1219 23:04:12.924528 23545 slave.cpp:983] Authenticating with master master@172.16.10.223:52614 I1219 23:04:12.924545 23545 slave.cpp:994] Using default CRAM-MD5 authenticatee I1219 23:04:12.924588 23545 slave.cpp:956] Detecting new master I1219 23:04:12.924609 23551 authenticatee.cpp:121] Creating new client SASL connection I1219 23:04:12.924648 23545 slave.cpp:5595] Received oversubscribable resources {} from the resource estimator I1219 23:04:12.924757 23551 master.cpp:6751] Authenticating slave(36)@172.16.10.223:52614 I1219 23:04:12.924804 23551 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(101)@172.16.10.223:52614 I1219 23:04:12.924855 23551 authenticator.cpp:98] Creating new server SASL connection I1219 23:04:12.924983 23551 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1219 23:04:12.924993 23551 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1219 23:04:12.925047 23547 authenticator.cpp:204] Received SASL authentication start I1219 23:04:12.925068 23547 authenticator.cpp:326] Authentication requires more steps I1219 23:04:12.925098 23547 authenticatee.cpp:259] Received SASL authentication step I1219 23:04:12.925143 23547 authenticator.cpp:232] Received SASL authentication step I1219 23:04:12.925158 23547 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-223.mesosphere.io' server FQDN: 'ip-172-16-10-223.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1219 23:04:12.925165 23547 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I1219 23:04:12.925171 23547 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1219 23:04:12.925176 23547 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-223.mesosphere.io' server FQDN: 'ip-172-16-10-223.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1219 23:04:12.925180 23547 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1219 23:04:12.925184 23547 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1219 23:04:12.925189 23547 authenticator.cpp:318] Authentication success I1219 23:04:12.925221 23547 authenticatee.cpp:299] Authentication success I1219 23:04:12.925243 23545 master.cpp:6781] Successfully authenticated principal 'test-principal' at slave(36)@172.16.10.223:52614 I1219 23:04:12.925264 23546 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(101)@172.16.10.223:52614 I1219 23:04:12.925400 23547 slave.cpp:1078] Successfully authenticated with master master@172.16.10.223:52614 I1219 23:04:12.925447 23547 slave.cpp:1493] Will retry registration in 8.623943ms if necessary I1219 23:04:12.925516 23545 master.cpp:5162] Registering agent at slave(36)@172.16.10.223:52614 
(ip-172-16-10-223.mesosphere.io) with id 4daa3046-9990-49c7-b601-958964306799-S0 I1219 23:04:12.925616 23547 registrar.cpp:461] Applied 1 operations in 8538ns; attempting to update the registry I1219 23:04:12.925874 23548 registrar.cpp:506] Successfully updated the registry in 0ns I1219 23:04:12.926054 23548 slave.cpp:4263] Received ping from slave-observer(30)@172.16.10.223:52614 I1219 23:04:12.926064 23547 master.cpp:5233] Registered agent 4daa3046-9990-49c7-b601-958964306799-S0 at slave(36)@172.16.10.223:52614 (ip-172-16-10-223.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I1219 23:04:12.926103 23548 slave.cpp:1124] Registered with master master@172.16.10.223:52614; given agent ID 4daa3046-9990-49c7-b601-958964306799-S0 I1219 23:04:12.926118 23548 fetcher.cpp:90] Clearing fetcher cache I1219 23:04:12.926123 23550 hierarchical.cpp:491] Added agent 4daa3046-9990-49c7-b601-958964306799-S0 (ip-172-16-10-223.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: {}) I1219 23:04:12.926188 23547 status_update_manager.cpp:184] Resuming sending status updates I1219 23:04:12.926255 23548 slave.cpp:1147] Checkpointing SlaveInfo to '/mnt/teamcity/temp/buildTmp/FaultToleranceTest_FrameworkReregister_ytYsKg/meta/slaves/4daa3046-9990-49c7-b601-958964306799-S0/slave.info' I1219 23:04:12.926273 23550 hierarchical.cpp:1785] No inverse offers to send out! I1219 23:04:12.926287 23550 hierarchical.cpp:1315] Performed allocation for agent 4daa3046-9990-49c7-b601-958964306799-S0 in 148466ns I1219 23:04:12.926336 23548 slave.cpp:1184] Forwarding total oversubscribed resources {} I1219 23:04:12.926396 23549 master.cpp:6580] Sending 1 offers to framework 4daa3046-9990-49c7-b601-958964306799-0000 (default) at scheduler-09a58709-8c77-40d4-a431-5faba5cbbe7d@172.16.10.223:52614 I1219 23:04:12.926475 23549 master.cpp:5636] Received update of agent 4daa3046-9990-49c7-b601-958964306799-S0 at slave(36)@172.16.10.223:52614 (ip-172-16-10-223.mesosphere.io) with total oversubscribed resources {} I1219 23:04:12.926549 23550 sched.cpp:923] Scheduler::resourceOffers took 13898ns I1219 23:04:12.926579 23548 hierarchical.cpp:561] Agent 4daa3046-9990-49c7-b601-958964306799-S0 (ip-172-16-10-223.mesosphere.io) updated with oversubscribed resources {} (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) I1219 23:04:12.926609 23548 hierarchical.cpp:1690] No allocations performed I1219 23:04:12.926614 23548 hierarchical.cpp:1785] No inverse offers to send out! 
I1219 23:04:12.926620 23548 hierarchical.cpp:1315] Performed allocation for agent 4daa3046-9990-49c7-b601-958964306799-S0 in 22643ns I1219 23:04:12.926808 23547 sched.cpp:330] Scheduler::disconnected took 5874ns I1219 23:04:12.926820 23547 sched.cpp:336] New master detected at master@172.16.10.223:52614 I1219 23:04:12.926839 23547 sched.cpp:402] Authenticating with master master@172.16.10.223:52614 I1219 23:04:12.926846 23547 sched.cpp:409] Using default CRAM-MD5 authenticatee I1219 23:04:12.926908 23547 authenticatee.cpp:121] Creating new client SASL connection I1219 23:04:12.927048 23547 master.cpp:6751] Authenticating scheduler-09a58709-8c77-40d4-a431-5faba5cbbe7d@172.16.10.223:52614 I1219 23:04:12.927093 23547 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(102)@172.16.10.223:52614 I1219 23:04:12.927150 23550 authenticator.cpp:98] Creating new server SASL connection I1219 23:04:12.927280 23550 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1219 23:04:12.927294 23550 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1219 23:04:12.927328 23550 authenticator.cpp:204] Received SASL authentication start I1219 23:04:12.927351 23550 authenticator.cpp:326] Authentication requires more steps I1219 23:04:12.927383 23550 authenticatee.cpp:259] Received SASL authentication step I1219 23:04:12.927435 23547 authenticator.cpp:232] Received SASL authentication step I1219 23:04:12.927450 23547 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-223.mesosphere.io' server FQDN: 'ip-172-16-10-223.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1219 23:04:12.927456 23547 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I1219 23:04:12.927461 23547 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1219 23:04:12.927466 23547 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-223.mesosphere.io' server FQDN: 'ip-172-16-10-223.mesosphere.io' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1219 23:04:12.927471 23547 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1219 23:04:12.927474 23547 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1219 23:04:12.927482 23547 authenticator.cpp:318] Authentication success I1219 23:04:12.927520 23547 master.cpp:6781] Successfully authenticated principal 'test-principal' at scheduler-09a58709-8c77-40d4-a431-5faba5cbbe7d@172.16.10.223:52614 I1219 23:04:12.927542 23544 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(102)@172.16.10.223:52614 I1219 23:04:12.927557 23550 authenticatee.cpp:299] Authentication success I1219 23:04:12.927649 23549 sched.cpp:508] Successfully authenticated with master master@172.16.10.223:52614 I1219 23:04:12.927662 23549 sched.cpp:826] Sending SUBSCRIBE call to master@172.16.10.223:52614 I1219 23:04:12.927709 23549 sched.cpp:859] Will retry registration in 880.434321ms if necessary I1219 23:04:12.927744 23548 master.cpp:2634] Received SUBSCRIBE call for framework 'default' at scheduler-09a58709-8c77-40d4-a431-5faba5cbbe7d@172.16.10.223:52614 I1219 23:04:12.927759 23548 master.cpp:2082] Authorizing framework principal 'test-principal' to receive offers for role '*' I1219 23:04:12.927870 23545 
master.cpp:2710] Subscribing framework default with checkpointing disabled and capabilities [ ] I1219 23:04:12.927881 23545 master.cpp:2788] Updating info for framework 4daa3046-9990-49c7-b601-958964306799-0000 I1219 23:04:12.927911 23545 master.cpp:2804] Allowing framework 4daa3046-9990-49c7-b601-958964306799-0000 (default) at scheduler-09a58709-8c77-40d4-a431-5faba5cbbe7d@172.16.10.223:52614 to subscribe with an already used id I1219 23:04:12.928001 23549 sched.cpp:826] Sending SUBSCRIBE call to master@172.16.10.223:52614 I1219 23:04:12.928040 23549 sched.cpp:859] Will retry registration in 1.080834119secs if necessary I1219 23:04:12.928081 23548 hierarchical.cpp:1024] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: {}) on agent 4daa3046-9990-49c7-b601-958964306799-S0 from framework 4daa3046-9990-49c7-b601-958964306799-0000 I1219 23:04:12.928102 23549 sched.cpp:935] Ignoring rescind offer message because the driver is disconnected! I1219 23:04:12.928150 23549 sched.cpp:791] Framework re-registered with 4daa3046-9990-49c7-b601-958964306799-0000 I1219 23:04:12.928181 23549 sched.cpp:805] Scheduler::reregistered took 18438ns I1219 23:04:12.928247 23548 hierarchical.cpp:1785] No inverse offers to send out! I1219 23:04:12.928254 23545 master.cpp:2634] Received SUBSCRIBE call for framework 'default' at scheduler-09a58709-8c77-40d4-a431-5faba5cbbe7d@172.16.10.223:52614 I1219 23:04:12.928261 23548 hierarchical.cpp:1292] Performed allocation for 1 agents in 109229ns I1219 23:04:12.928268 23545 master.cpp:2082] Authorizing framework principal 'test-principal' to receive offers for role '*' I1219 23:04:12.928400 23545 master.cpp:6580] Sending 1 offers to framework 4daa3046-9990-49c7-b601-958964306799-0000 (default) at scheduler-09a58709-8c77-40d4-a431-5faba5cbbe7d@172.16.10.223:52614 I1219 23:04:12.928444 23545 master.cpp:2710] Subscribing framework default with checkpointing disabled and capabilities [ ] I1219 23:04:12.928453 23545 master.cpp:2788] Updating info for framework 4daa3046-9990-49c7-b601-958964306799-0000 I1219 23:04:12.928472 23545 master.cpp:2804] Allowing framework 4daa3046-9990-49c7-b601-958964306799-0000 (default) at scheduler-09a58709-8c77-40d4-a431-5faba5cbbe7d@172.16.10.223:52614 to subscribe with an already used id I1219 23:04:12.928524 23546 sched.cpp:923] Scheduler::resourceOffers took 26359ns I1219 23:04:12.928570 23546 sched.cpp:949] Rescinded offer 4daa3046-9990-49c7-b601-958964306799-O1 I1219 23:04:12.928592 23546 sched.cpp:960] Scheduler::offerRescinded took 9442ns I1219 23:04:12.928618 23546 sched.cpp:778] Ignoring framework re-registered message because the driver is already connected! I1219 23:04:12.928642 23547 hierarchical.cpp:1024] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: {}) on agent 4daa3046-9990-49c7-b601-958964306799-S0 from framework 4daa3046-9990-49c7-b601-958964306799-0000 I1219 23:04:12.929006 23548 hierarchical.cpp:1785] No inverse offers to send out! 
I1219 23:04:12.929029 23548 hierarchical.cpp:1292] Performed allocation for 1 agents in 150224ns I1219 23:04:12.929085 23545 master.cpp:6580] Sending 1 offers to framework 4daa3046-9990-49c7-b601-958964306799-0000 (default) at scheduler-09a58709-8c77-40d4-a431-5faba5cbbe7d@172.16.10.223:52614 ../../src/tests/fault_tolerance_tests.cpp:833: Failure Mock function called more times than expected - returning directly. Function call: resourceOffers(0x7ffc5fce0260, @0x7f7e348a4ba0 { 152-byte object <90-E5 85-3F 7E-7F 00-00 00-00 00-00 00-00 00-00 1F-00 00-00 00-00 00-00 10-0A 01-24 7E-7F 00-00 40-0A 01-24 7E-7F 00-00 C0-24 00-24 7E-7F 00-00 A0-65 00-24 7E-7F 00-00 B0-93 00-24 7E-7F 00-00 ... 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00> }) Expected: to be called once Actual: called twice - over-saturated and active I1219 23:04:12.929224 23545 sched.cpp:923] Scheduler::resourceOffers took 69622ns I1219 23:04:12.929244 23544 process.cpp:3679] Handling HTTP event for process 'master' with path: '/master/state' I1219 23:04:12.929577 23551 http.cpp:402] HTTP GET for /master/state from 172.16.10.223:49741 I1219 23:04:12.930861 23530 sched.cpp:2008] Asked to stop the driver I1219 23:04:12.930907 23550 sched.cpp:1193] Stopping framework 4daa3046-9990-49c7-b601-958964306799-0000 I1219 23:04:12.930974 23551 master.cpp:7291] Processing TEARDOWN call for framework 4daa3046-9990-49c7-b601-958964306799-0000 (default) at scheduler-09a58709-8c77-40d4-a431-5faba5cbbe7d@172.16.10.223:52614 I1219 23:04:12.930989 23551 master.cpp:7303] Removing framework 4daa3046-9990-49c7-b601-958964306799-0000 (default) at scheduler-09a58709-8c77-40d4-a431-5faba5cbbe7d@172.16.10.223:52614 I1219 23:04:12.931046 23548 slave.cpp:2581] Asked to shut down framework 4daa3046-9990-49c7-b601-958964306799-0000 by master@172.16.10.223:52614 I1219 23:04:12.931061 23548 slave.cpp:2596] Cannot shut down unknown framework 4daa3046-9990-49c7-b601-958964306799-0000 I1219 23:04:12.931068 23550 hierarchical.cpp:392] Deactivated framework 4daa3046-9990-49c7-b601-958964306799-0000 I1219 23:04:12.931175 23550 hierarchical.cpp:1024] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: {}) on agent 4daa3046-9990-49c7-b601-958964306799-S0 from framework 4daa3046-9990-49c7-b601-958964306799-0000 I1219 23:04:12.931290 23550 hierarchical.cpp:343] Removed framework 4daa3046-9990-49c7-b601-958964306799-0000 I1219 23:04:12.931711 23547 slave.cpp:796] Agent terminating I1219 23:04:12.931782 23544 master.cpp:1258] Agent 4daa3046-9990-49c7-b601-958964306799-S0 at slave(36)@172.16.10.223:52614 (ip-172-16-10-223.mesosphere.io) disconnected I1219 23:04:12.931797 23544 master.cpp:2978] Disconnecting agent 4daa3046-9990-49c7-b601-958964306799-S0 at slave(36)@172.16.10.223:52614 (ip-172-16-10-223.mesosphere.io) I1219 23:04:12.931813 23544 master.cpp:2997] Deactivating agent 4daa3046-9990-49c7-b601-958964306799-S0 at slave(36)@172.16.10.223:52614 (ip-172-16-10-223.mesosphere.io) I1219 23:04:12.931854 23548 hierarchical.cpp:590] Agent 4daa3046-9990-49c7-b601-958964306799-S0 deactivated I1219 23:04:12.933393 23530 master.cpp:1097] Master terminating I1219 23:04:12.933498 23544 hierarchical.cpp:523] Removed agent 4daa3046-9990-49c7-b601-958964306799-S0 [ FAILED ] FaultToleranceTest.FrameworkReregister (20 ms) 
",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6822","12/20/2016 16:01:11",2,"CNI reports confusing error message for failed interface setup. ""Saw this today: which is produced by this code: https://github.com/apache/mesos/blob/1e72605e9892eb4e518442ab9c1fe2a1a1696748/src/slave/containerizer/mesos/isolators/network/cni/cni.cpp#L1854-L1859 Note that ssh'ing into the machine confirmed that {{ifconfig}} is available in {{PATH}}. Full log: http://pastebin.com/hVdNz6yk"""," Failed to bring up the loopback interface in the new network namespace of pid 17067: Success ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0 +"MESOS-6826","12/21/2016 17:09:19",3,"OsTest.User fails on recent Arch Linux. "" Appeared relatively recently (last two weeks). Cause appears to be that {{getpwnam\_r}} now returns {{EINVAL}} for an invalid input, which {{os::getuid()}} and {{os::getgid()}} are not prepared to handle."""," [ RUN ] OsTest.User ../../../mesos/3rdparty/stout/tests/os_tests.cpp:683: Failure Value of: os::getuid(UUID::random().toString()).isNone() Actual: false Expected: true ../../../mesos/3rdparty/stout/tests/os_tests.cpp:684: Failure Value of: os::getgid(UUID::random().toString()).isNone() Actual: false Expected: true [ FAILED ] OsTest.User (12 ms) ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6837","12/22/2016 22:30:34",3,"FaultToleranceTest.FrameworkReregister is flaky ""Observed on internal CI: Looks like another instance of MESOS-4695."""," [21:27:38] : [Step 11/11] /mnt/teamcity/work/4240ba9ddd0997c3/src/tests/fault_tolerance_tests.cpp:892: Failure [21:27:38] : [Step 11/11] Value of: framework.values[""""registered_time""""].as().as() [21:27:38] : [Step 11/11] Actual: 1482442093 [21:27:38] : [Step 11/11] Expected: static_cast(registerTime.secs()) [21:27:38] : [Step 11/11] Which is: 1482442094 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6840","12/27/2016 11:23:40",5,"Tests for quota capacity heuristic. ""We need more tests to ensure capacity heuristic works as expected.""","",0,0,0,1,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6860","01/05/2017 17:40:49",5,"Some tests use CHECK instead of ASSERT ""Some tests check preconditions with {{CHECK}} instead of e.g., {{ASSERT_TRUE}}. When such a check fails it leads to a undesirable complete abort of the test run, potentially dumping core. We should make sure tests check preconditions in a proper way, e.g., with {{ASSERT_TRUE}}.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6868","01/05/2017 23:13:02",3,"Transition Windows away from `os::killtree`. ""Windows does not have as robust a notion of a process hierarchy as Unix, and thus functions like `os::killtree` will always have critical limitations and semantic mismatches between Unix and Windows. 
We should transition away from this function when we can, and replace it with something similar to how we kill a cgroup.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6874","01/06/2017 20:08:05",5,"Agent silently ignores FS isolation when protobuf is malformed ""cc [~vinodkone] I accidentally set my Mesos ContainerInfo to include a DockerInfo instead of a MesosInfo: I would have expected a validation error before or during containerization, but instead, the agent silently decided to ignore filesystem isolation altogether, and launch my executor on the host filesystem. """," executorInfoBuilder.setContainer( Protos.ContainerInfo.newBuilder() .setType(Protos.ContainerInfo.Type.MESOS) .setDocker(Protos.ContainerInfo.DockerInfo.newBuilder() .setImage(podSpec.getContainer().get().getImageName())) ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6886","01/07/2017 02:15:57",3,"Add authorization tests for debug API handlers ""Should test authz of all 3 debug calls.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6892","01/07/2017 23:27:52",5,"Reconsider process creation primitives on Windows ""Windows does not have the same notions of process hierarchies as Unix, and so killing groups of processes requires us to make sure all processes are contained in a job object, which acts something like a cgroup. This is particularly important when we decide to kill a task, as there is no way to reliably do this unless all the processes you'd like to kill are in the job object. This causes us a number of issues; it is a big reason we needed to fork the command executor, and it is the reason tasks are currently unkillable in the default executor. As we clean this issue up, we need to think carefully about the process governance semantics of Mesos, and how we can map them to a reliable, simple Windows implementation.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6894","01/08/2017 22:18:39",5,"Checkpoint 'ContainerConfig' in Mesos Containerizer. ""This information can be used ford image GC in Mesos Containerizer, as well as other purposes.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6900","01/10/2017 14:41:13",2,"Add test for framework upgrading to multi-role capability. ""Frameworks can upgrade to multi-role capability as long as the framework's role remains the same. We consider the framework roles unchanged if * a framework previously didn't specify a {{role}} now has {{roles=()}}, or * a framework which previously had {{role=A}} and now has {{roles=(A)}}.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6934","01/17/2017 16:46:19",8,"Support pulling Docker images with V2 Schema 2 image manifest ""MESOS-3505 added support for pulling Docker images by their digest to the Mesos Containerizer provisioner. However currently it only works with images that were pushed with Docker 1.9 and older or with Registry 2.2.1 and older. Newer versions use Schema 2 manifests by default. Because of CAS constraints the registry does not convert those manifests on-the-fly to Schema 1 when they are being pulled by digest. 
Compatibility details are documented here: https://docs.docker.com/registry/compatibility/ Image Manifest V2, Schema 2 is documented here: https://docs.docker.com/registry/spec/manifest-v2-2/""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6938","01/17/2017 23:16:45",3,"Libprocess reinitialization is flaky, can segfault. ""This was observed on ASF CI. Based on the placement of the stacktrace, the segfault seems to occur during libprocess reinitialization, when {{process::initialize}} is called: """," [----------] 4 tests from Encryption/NetSocketTest [ RUN ] Encryption/NetSocketTest.EOFBeforeRecv/0 I0117 15:18:35.320691 27596 openssl.cpp:419] CA file path is unspecified! NOTE: Set CA file path with LIBPROCESS_SSL_CA_FILE= I0117 15:18:35.320714 27596 openssl.cpp:424] CA directory path unspecified! NOTE: Set CA directory path with LIBPROCESS_SSL_CA_DIR= I0117 15:18:35.320719 27596 openssl.cpp:429] Will not verify peer certificate! NOTE: Set LIBPROCESS_SSL_VERIFY_CERT=1 to enable peer certificate verification I0117 15:18:35.320726 27596 openssl.cpp:435] Will only verify peer certificate if presented! NOTE: Set LIBPROCESS_SSL_REQUIRE_CERT=1 to require peer certificate verification I0117 15:18:35.335141 27596 process.cpp:1234] libprocess is initialized on 172.17.0.3:46415 with 16 worker threads [ OK ] Encryption/NetSocketTest.EOFBeforeRecv/0 (422 ms) [ RUN ] Encryption/NetSocketTest.EOFBeforeRecv/1 I0117 15:18:35.390697 27596 process.cpp:1234] libprocess is initialized on 172.17.0.3:39822 with 16 worker threads [ OK ] Encryption/NetSocketTest.EOFBeforeRecv/1 (6 ms) [ RUN ] Encryption/NetSocketTest.EOFAfterRecv/0 I0117 15:18:35.998528 27596 openssl.cpp:419] CA file path is unspecified! NOTE: Set CA file path with LIBPROCESS_SSL_CA_FILE= I0117 15:18:35.998559 27596 openssl.cpp:424] CA directory path unspecified! NOTE: Set CA directory path with LIBPROCESS_SSL_CA_DIR= I0117 15:18:35.998566 27596 openssl.cpp:429] Will not verify peer certificate! NOTE: Set LIBPROCESS_SSL_VERIFY_CERT=1 to enable peer certificate verification I0117 15:18:35.998572 27596 openssl.cpp:435] Will only verify peer certificate if presented! 
NOTE: Set LIBPROCESS_SSL_REQUIRE_CERT=1 to require peer certificate verification I0117 15:18:36.010643 27596 process.cpp:1234] libprocess is initialized on 172.17.0.3:47429 with 16 worker threads [ OK ] Encryption/NetSocketTest.EOFAfterRecv/0 (664 ms) [ RUN ] Encryption/NetSocketTest.EOFAfterRecv/1 I0117 15:18:36.079453 27596 process.cpp:1234] libprocess is initialized on 172.17.0.3:38149 with 16 worker threads [ OK ] Encryption/NetSocketTest.EOFAfterRecv/1 (19 ms) *** Aborted at 1484666316 (unix time) try """"date -d @1484666316"""" if you are using GNU date *** PC: @ 0x7f7643ad7c56 __memcpy_ssse3_back *** SIGSEGV (@0x57c10f8) received by PID 27596 (TID 0x7f76393c2700) from PID 92016888; stack trace: *** @ 0x7f7644ba0370 (unknown) @ 0x7f7643ad7c56 __memcpy_ssse3_back @ 0x7f76443248e0 (unknown) @ 0x7f7644324f8c (unknown) @ 0x422a4d process::UPID::UPID() I0117 15:18:36.090376 27596 process.cpp:1234] libprocess is initialized on 172.17.0.3:43835 with 16 worker threads [----------] 4 tests from Encryption/NetSocketTest (1116 ms total) [----------] 6 tests from SSLVerifyIPAdd/SSLTest [ RUN ] SSLVerifyIPAdd/SSLTest.BasicSameProcess/0 @ 0x8ae4a8 process::DispatchEvent::DispatchEvent() @ 0x8a6a5e process::internal::dispatch() @ 0x8c0b44 process::dispatch<>() @ 0x8a598a process::ProcessBase::route() @ 0x98be53 process::ProcessBase::route<>() @ 0x988096 process::Help::initialize() @ 0x89ef2a process::ProcessManager::resume() @ 0x89b976 _ZZN7process14ProcessManager12init_threadsEvENKUt_clEv @ 0x8adb3c _ZNSt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUt_vEE9_M_invokeIIEEEvSt12_Index_tupleIIXspT_EEE @ 0x8ada80 _ZNSt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUt_vEEclEv @ 0x8ada0a _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUt_vEEE6_M_runEv @ 0x7f764431b230 (unknown) @ 0x7f7644b98dc5 start_thread @ 0x7f7643a8473d __clone make[7]: *** [check-local] Segmentation fault ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6949","01/19/2017 00:08:39",2,"SchedulerTest.MasterFailover is flaky ""This was observed in a CentOS 7 VM, with libevent and SSL enabled: Find attached the entire log from a failed run."""," W0118 22:38:33.789465 3407 scheduler.cpp:513] Dropping SUBSCRIBE: Scheduler is in state DISCONNECTED I0118 22:38:33.811820 3408 scheduler.cpp:361] Connected with the master at http://127.0.0.1:43211/master/api/v1/scheduler ../../src/tests/scheduler_tests.cpp:315: Failure Mock function called more times than expected - returning directly. Function call: connected(0x7fff97227550) Expected: to be called once Actual: called twice - over-saturated and active ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6950","01/19/2017 00:13:59",2,"Launching two tasks with the same Docker image simultaneously may cause a staging dir never cleaned up ""If user launches two tasks with the same Docker image simultaneously (e.g., run {{mesos-executor}} twice with the same Docker image), there will be a staging directory which is for the second task never cleaned up, like this: """," └── store └── docker ├── layers │ ... ├── staging │   └── a6rXWC └── storedImages ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6951","01/19/2017 01:55:56",3,"Docker containerizer: mangled environment when env value contains LF byte. 
""Consider this Marathon app definition: The JSON-encoded newline in the value of the {{TESTVAR}} environment variable leads to a corrupted task environment. What follows is a subset of the resulting task environment (as printed via {{env}}, i.e. in key=value notation): That is, the trailing part of the intended value ended up being interpreted as variable name, and only the leading part of the intended value was used as actual value for {{TESTVAR}}. Common application scenarios that would badly break with that involve pretty-printed JSON documents or YAML documents passed along via the environment. Following the code and information flow led to the conclusion that Docker's {{--env-file}} command line interface is the weak point in the flow. It is currently used in Mesos' Docker containerizer for passing the environment to the container: (Ref: [code|https://github.com/apache/mesos/blob/c0aee8cc10b1d1f4b2db5ff12b771372fdd5b1f3/src/docker/docker.cpp#L584]) Docker's {{--env-file}} argument behavior is documented via {quote} The --env-file flag takes a filename as an argument and expects each line to be in the VAR=VAL format, {quote} (Ref: https://docs.docker.com/engine/reference/commandline/run/) That is, Docker identifies individual environment variable key/value pair definitions based on newline bytes in that file which explains the observed environment variable value fragmentation. Notably, Docker does not provide a mechanism for escaping newline bytes in the values specified in this environment file. I think it is important to understand that Docker's {{--env-file}} mechanism is ill-posed in the sense that it is not capable of transmitting the whole range of environment variable values allowed by POSIX. That's what the Single UNIX Specification, Version 3 has to say about environment variable values: {quote} the value shall be composed of characters from the portable character set (except NUL and as indicated below). {quote} (Ref: http://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap08.html) About """"The portable character set"""": http://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap06.html#tagtcjh_3 It includes (among others) the LF byte. Understandably, the current Docker {{--env-file}} behavior will not change, so this is not an issue that can be deferred to Docker: https://github.com/docker/docker/issues/12997 Notably, the {{--env-file}} method for communicating environment variables to Docker containers was just recently introduced to Mesos as of https://issues.apache.org/jira/browse/MESOS-6566, for not leaking secrets through the process listing. Previously, we specified env key/value pairs on the command line which leaked secrets to the process list and probably also did not support the full range of valid environment variable values. We need a solution that 1) does not leak sensitive values (i.e. is compliant with MESOS-6566). 2) allows for passing arbitrary environment variable values. It seems that Docker's {{--env}} method can be used for that. It can be used to define _just the names of the environment variables_ to-be-passed-along, in which case the docker binary will read the corresponding values from its own environment, which we can clearly prepare appropriately when we invoke the corresponding child process. 
This method would still leak environment variable _names_ to the process listing, but (especially if documented) this should be fine."""," { """"id"""": """"/testapp"""", """"cmd"""": """"env && tail -f /dev/null"""", """"env"""":{ """"TESTVAR"""":""""line1\nline2"""" }, """"cpus"""": 0.1, """"mem"""": 10, """"instances"""": 1, """"container"""": { """"type"""": """"DOCKER"""", """"docker"""": { """"image"""": """"alpine"""" } } } line2= TESTVAR=line1 argv.push_back(""""--env-file""""); argv.push_back(environmentFile); ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6959","01/20/2017 00:47:43",3,"Separate the mesos-containerizer binary into a static binary, which only depends on stout ""The {{mesos-containerizer}} binary currently has [three commands|https://github.com/apache/mesos/blob/6cf3a94a52e87a593c9cba373bf433cfc4178639/src/slave/containerizer/mesos/main.cpp#L46-L48]: * [MesosContainerizerLaunch|https://github.com/apache/mesos/blob/6cf3a94a52e87a593c9cba373bf433cfc4178639/src/slave/containerizer/mesos/launch.cpp] * [MesosContainerizerMount|https://github.com/apache/mesos/blob/6cf3a94a52e87a593c9cba373bf433cfc4178639/src/slave/containerizer/mesos/mount.cpp] * [NetworkCniIsolatorSetup|https://github.com/apache/mesos/blob/6cf3a94a52e87a593c9cba373bf433cfc4178639/src/slave/containerizer/mesos/isolators/network/cni/cni.cpp#L1776-L1997] These commands are all heavily dependent on stout, and have no need to be linked to libprocess. In fact, adding an erroneous call to {{process::initialize}} (either explicitly, or by accidentally using a libprocess method) will break {{mesos-containerizer}} can cause several Mesos containerizer tests to fail. (The tasks fail to launch, saying {{Failed to synchronize with agent (it's probably exited)}}). Because this binary only depends on stout, we can separate it from the other source files and make this a static binary. ""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6961","01/20/2017 14:00:15",1,"Executors don't use glog for logging. ""Built-in Mesos executors use {{cout}}/{{cerr}} for logging. This is not only inconsistent with the rest of the codebase, it also complicates debugging, since, e.g., a stack trace is not printed on an abort. Having timestamps will be also a huge plus. Consider migrating logging in all built-in executors to glog. There have been reported issues related to glog internal state races when a process that has glog initialized {{fork-exec}}s another process that also initialize glog. We should investigate how this issue is related to this ticket, cc [~tillt], [~vinodkone], [~bmahler].""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-6989","01/25/2017 18:59:28",1,"Docker executor segfaults in ~MesosExecutorDriver() ""With the current Mesos master state (commit 42e515bc5c175a318e914d34473016feda4db6ff), the Docker executor segfaults during shutdown. 
Steps to reproduce: 1) Start master: (note that building it at 13:37 is not part of the repro) 2) Start agent: 3) Run {{mesos-execute}} with the Docker containerizer: Relevant agent output that shows the executor segfault: The complete task stderr: """," $ ./bin/mesos-master.sh --ip=127.0.0.1 --work_dir=/tmp/jp/mesos WARNING: Logging before InitGoogleLogging() is written to STDERR I0125 13:41:15.963775 14744 main.cpp:278] Build: 2017-01-25 13:37:42 by jp I0125 13:41:15.963868 14744 main.cpp:279] Version: 1.2.0 I0125 13:41:15.963877 14744 main.cpp:286] Git SHA: 42e515bc5c175a318e914d34473016feda4db6ff $ ./bin/mesos-slave.sh --containerizers=mesos,docker --master=127.0.0.1:5050 --work_dir=/tmp/jp/mesos $ ./src/mesos-execute --master=127.0.0.1:5050 --name=testcommand --containerizer=docker --docker_image=debian --command=env I0125 13:43:59.704973 14951 scheduler.cpp:184] Version: 1.2.0 I0125 13:43:59.706425 14952 scheduler.cpp:470] New master detected at master@127.0.0.1:5050 Subscribed with ID 57596743-06f4-45f1-a975-348cf70589b1-0000 Submitted task 'testcommand' to agent '57596743-06f4-45f1-a975-348cf70589b1-S0' Received status update TASK_RUNNING for task 'testcommand' source: SOURCE_EXECUTOR Received status update TASK_FINISHED for task 'testcommand' message: 'Container exited with status 0' source: SOURCE_EXECUTOR [...] I0125 13:44:16.249191 14823 slave.cpp:4328] Got exited event for executor(1)@192.99.40.208:33529 I0125 13:44:16.347095 14830 docker.cpp:2358] Executor for container 396282a9-7bf0-48ee-ba07-3ff2ca801d53 has exited I0125 13:44:16.347127 14830 docker.cpp:2052] Destroying container 396282a9-7bf0-48ee-ba07-3ff2ca801d53 I0125 13:44:16.347439 14830 docker.cpp:2179] Running docker stop on container 396282a9-7bf0-48ee-ba07-3ff2ca801d53 I0125 13:44:16.349215 14826 slave.cpp:4691] Executor 'testcommand' of framework 57596743-06f4-45f1-a975-348cf70589b1-0000 terminated with signal Segmentation fault (core dumped) [...] 
$ cat /tmp/jp/mesos/slaves/57596743-06f4-45f1-a975-348cf70589b1-S0/frameworks/57596743-06f4-45f1-a975-348cf70589b1-0000/executors/testcommand/runs/latest/stderr I0125 13:44:12.850073 15030 exec.cpp:162] Version: 1.2.0 I0125 13:44:12.864229 15050 exec.cpp:237] Executor registered on agent 57596743-06f4-45f1-a975-348cf70589b1-S0 I0125 13:44:12.865842 15054 docker.cpp:850] Running docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 134217728 --env-file /tmp/xFZ8G9 -v /tmp/jp/mesos/slaves/57596743-06f4-45f1-a975-348cf70589b1-S0/frameworks/57596743-06f4-45f1-a975-348cf70589b1-0000/executors/testcommand/runs/396282a9-7bf0-48ee-ba07-3ff2ca801d53:/mnt/mesos/sandbox --net host --entrypoint /bin/sh --name mesos-57596743-06f4-45f1-a975-348cf70589b1-S0.396282a9-7bf0-48ee-ba07-3ff2ca801d53 debian -c env I0125 13:44:15.248721 15064 exec.cpp:410] Executor asked to shutdown *** Aborted at 1485369856 (unix time) try """"date -d @1485369856"""" if you are using GNU date *** PC: @ 0x7fb38f153dd0 (unknown) *** SIGSEGV (@0x68) received by PID 15030 (TID 0x7fb3961a88c0) from PID 104; stack trace: *** @ 0x7fb38f15b5c0 (unknown) @ 0x7fb38f153dd0 (unknown) @ 0x7fb39332c607 __gthread_mutex_lock() @ 0x7fb39332c657 __gthread_recursive_mutex_lock() @ 0x7fb39332edca std::recursive_mutex::lock() @ 0x7fb393337bd8 _ZZ11synchronizeISt15recursive_mutexE12SynchronizedIT_EPS2_ENKUlPS0_E_clES5_ @ 0x7fb393337bf8 _ZZ11synchronizeISt15recursive_mutexE12SynchronizedIT_EPS2_ENUlPS0_E_4_FUNES5_ @ 0x7fb39333ba6b Synchronized<>::Synchronized() @ 0x7fb393337cac synchronize<>() @ 0x7fb39492f15c process::ProcessManager::wait() @ 0x7fb3949353f0 process::wait() @ 0x55fd63f31fe5 process::wait() @ 0x7fb39332ce3c mesos::MesosExecutorDriver::~MesosExecutorDriver() @ 0x55fd63f2bd86 main @ 0x7fb38e4fc401 __libc_start_main @ 0x55fd63f2ab5a _start ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6996","01/26/2017 07:29:23",2,"Add a 'Secret' protobuf message ""A {{Secret}} protobuf message should be added to serve as a generic message for sending credentials and other secrets throughout Mesos.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6997","01/26/2017 07:31:04",2,"Add the SecretGenerator module interface ""A new {{SecretGenerator}} module interface will be added to permit the agent to generate default executor credentials.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-6998","01/26/2017 07:38:22",5,"Add authentication support to agent's '/v1/executor' endpoint ""The new agent flag {{--authenticate_http_executors}} must be added. When set, it will require that requests received on the {{/v1/executor}} endpoint be authenticated, and the default JWT authenticator will be loaded. Note that this will require the addition of a new authentication realm for that endpoint.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-6999","01/26/2017 07:44:14",5,"Add agent support for generating and passing executor secrets ""The agent must generate and pass executor secrets to all executors using the V1 API. For MVP, the agent will have this behavior by default when compiled with SSL support. 
To accomplish this, the agent must: * load the default {{SecretGenerator}} module * call the secret generator when launching an executor * pass the generated secret into the executor's environment""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7000","01/26/2017 07:47:05",5,"Implement a JWT SecretGenerator ""The default {{SecretGenerator}} for the generation of default executor credentials will be a module which generates JSON web tokens. This module will be loaded by default when executor secret generation is enabled.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7001","01/26/2017 07:51:15",5,"Implement a JWT authenticator ""A JSON web token (JWT) authenticator module should be added to authenticate executors which use default credentials generated by the agent. This module will be loaded as an HTTP authenticator by default when {{--authenticate_http_executors}} is set, unless HTTP authenticators are specified explicitly.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7003","01/26/2017 07:55:40",5,"Introduce a 'Principal' type ""We will introduce a new type to represent the identity of an authenticated entity in Mesos: the {{Principal}}. To accomplish this, the following should be done: * Add the new {{Principal}} type * Update the {{AuthenticationResult}} to use {{Principal}} * Update all authenticated endpoint handlers to handle this new type * Update the default authenticator modules to use the new type""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-7004","01/26/2017 08:19:50",5,"Enable multiple HTTP authenticator modules ""To accommodate executor authentication, we will add support for the loading of multiple authenticator modules. The {{--http_authenticators}} flag is already set up for this, but we must relax the constraint in Mesos which enforces just a single authenticator. In order to load multiple authenticators for a realm, a new Mesos-level authenticator, the {{CombinedAuthenticator}}, will be added. This class will call multiple authenticators and combine their results if necessary.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7005","01/26/2017 08:21:46",3,"Add executor authentication documentation ""Documentation should be added regarding executor authentication. This will include updating: 1) the configuration docs to include new agent flags 2) the authentication documentation 3) the authorization documentation 4) the upgrade documentation 5) the CHANGELOG""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7007","01/26/2017 11:33:10",3,"filesystem/shared and --default_container_info broken since 1.1 ""I face this issue, that prevent me to upgrade to 1.1.0 (and the change was consequently introduced in this version): I'm using default_container_info to mount a /tmp volume in the container's mount namespace from its current sandbox, meaning that each container have a dedicated /tmp, thanks to the {{filesystem/shared}} isolator. I noticed through our automation pipeline that integration tests were failing and found that this is because /tmp (the one from the host!) contents is trashed each time a container is created. 
Here is my setup: * {{--isolation='cgroups/cpu,cgroups/mem,namespaces/pid,*disk/du,filesystem/shared,filesystem/linux*,docker/runtime'}} * {{--default_container_info='\{""""type"""":""""MESOS"""",""""volumes"""":\[\{""""host_path"""":""""tmp"""",""""container_path"""":""""/tmp"""",""""mode"""":""""RW""""\}\]\}'}} I discovered this issue in the early days of 1.1 (end of Nov, spoke with someone on Slack), but had unfortunately no time to dig into the symptoms a bit more. I found nothing interesting even using GLOGv=3. Maybe it's a bad usage of isolators that trigger this issue ? If it's the case, then at least a documentation update should be done. Let me know if more information is needed.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7008","01/26/2017 12:40:50",3,"Quota not recovered from registry in empty cluster. ""When a quota was set and the master is restarted, removal of the quota reliably leads to a {{CHECK}} failure for me. Start a master: Set a quota. This creates an implicit role. Restart the master process using the same {{work_dir}} and attempt to delete the quota after the master is started. The {{DELETE}} succeeds with an {{OK}}. After handling the request, the master hits a {{CHECK}} failure and is aborted. """," $ mesos-master --work_dir=work_dir $ cat quota.json { """"role"""": """"role2"""", """"force"""": true, """"guarantee"""": [ { """"name"""": """"cpus"""", """"type"""": """"SCALAR"""", """"scalar"""": { """"value"""": 1 } } ] } $ cat quota.json| http POST :5050/quota HTTP/1.1 200 OK Content-Length: 0 Date: Thu, 26 Jan 2017 12:33:38 GMT $ http GET :5050/quota HTTP/1.1 200 OK Content-Length: 108 Content-Type: application/json Date: Thu, 26 Jan 2017 12:33:56 GMT { """"infos"""": [ { """"guarantee"""": [ { """"name"""": """"cpus"""", """"role"""": """"*"""", """"scalar"""": { """"value"""": 1.0 }, """"type"""": """"SCALAR"""" } ], """"role"""": """"role2"""" } ] } $ http GET :5050/roles HTTP/1.1 200 OK Content-Length: 106 Content-Type: application/json Date: Thu, 26 Jan 2017 12:34:10 GMT { """"roles"""": [ { """"frameworks"""": [], """"name"""": """"role2"""", """"resources"""": { """"cpus"""": 0, """"disk"""": 0, """"gpus"""": 0, """"mem"""": 0 }, """"weight"""": 1.0 } ] } $ http DELETE :5050/quota/role2 HTTP/1.1 200 OK Content-Length: 0 Date: Thu, 26 Jan 2017 12:36:04 GMT $ mesos-master --work_dir=work_dir WARNING: Logging before InitGoogleLogging() is written to STDERR I0126 13:34:57.528599 3145483200 main.cpp:278] Build: 2017-01-23 07:57:34 by bbannier I0126 13:34:57.529131 3145483200 main.cpp:279] Version: 1.2.0 I0126 13:34:57.529139 3145483200 main.cpp:286] Git SHA: dd07d025d40975ec660ed17031d95ec0dba842d2 [warn] kq_init: detected broken kqueue; not using.: No such process I0126 13:34:57.758896 3145483200 main.cpp:385] Using 'HierarchicalDRF' allocator I0126 13:34:57.764276 3145483200 replica.cpp:778] Replica recovered with log positions 3 -> 4 with 0 holes and 0 unlearned I0126 13:34:57.765278 256114688 recover.cpp:451] Starting replica recovery I0126 13:34:57.765547 256114688 recover.cpp:477] Replica is in VOTING status I0126 13:34:57.795964 257187840 master.cpp:383] Master 569073cc-1195-45e9-b0d4-e2e1bf0d13d5 (172.18.9.56) started on 172.18.9.56:5050 I0126 13:34:57.796023 257187840 master.cpp:385] Flags at startup: --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""false"""" 
--authenticate_frameworks=""""false"""" --authenticate_http_frameworks=""""false"""" --authenticate_http_readonly=""""false"""" --authenticate_http_readwrite=""""false"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_unreachable_tasks_per_framework=""""1000"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""replicated_log"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""20secs"""" --registry_strict=""""false"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""work_dir"""" --zk_session_timeout=""""10secs"""" I0126 13:34:57.796478 257187840 master.cpp:437] Master allowing unauthenticated frameworks to register I0126 13:34:57.796507 257187840 master.cpp:451] Master allowing unauthenticated agents to register I0126 13:34:57.796517 257187840 master.cpp:465] Master allowing HTTP frameworks to register without authentication I0126 13:34:57.796540 257187840 master.cpp:507] Using default 'crammd5' authenticator W0126 13:34:57.796573 257187840 authenticator.cpp:512] No credentials provided, authentication requests will be refused I0126 13:34:57.796584 257187840 authenticator.cpp:519] Initializing server SASL I0126 13:34:57.825337 255578112 master.cpp:2121] Elected as the leading master! 
I0126 13:34:57.825362 255578112 master.cpp:1643] Recovering from registrar I0126 13:34:57.825736 255578112 log.cpp:553] Attempting to start the writer I0126 13:34:57.826889 258260992 replica.cpp:495] Replica received implicit promise request from __req_res__(1)@172.18.9.56:5050 with proposal 2 I0126 13:34:57.828855 258260992 replica.cpp:344] Persisted promised to 2 I0126 13:34:57.829273 258260992 coordinator.cpp:238] Coordinator attempting to fill missing positions I0126 13:34:57.829375 259334144 log.cpp:569] Writer started with ending position 4 I0126 13:34:57.830878 257187840 registrar.cpp:362] Successfully fetched the registry (159B) in 5.427968ms I0126 13:34:57.831029 257187840 registrar.cpp:461] Applied 1 operations in 24us; attempting to update the registry I0126 13:34:57.836194 259334144 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 5 I0126 13:34:57.836676 257724416 replica.cpp:539] Replica received write request for position 5 from __req_res__(2)@172.18.9.56:5050 I0126 13:34:57.837102 255578112 replica.cpp:693] Replica received learned notice for position 5 from @0.0.0.0:0 I0126 13:34:57.837745 257187840 registrar.cpp:506] Successfully updated the registry in 6.685184ms I0126 13:34:57.837806 257187840 registrar.cpp:392] Successfully recovered registrar I0126 13:34:57.837924 255578112 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 6 I0126 13:34:57.838132 256651264 master.cpp:1759] Recovered 0 agents from the registry (159B); allowing 10mins for agents to re-register I0126 13:34:57.838312 257187840 replica.cpp:539] Replica received write request for position 6 from __req_res__(3)@172.18.9.56:5050 I0126 13:34:57.838692 256651264 replica.cpp:693] Replica received learned notice for position 6 from @0.0.0.0:0 I0126 13:36:04.887257 256114688 http.cpp:420] HTTP DELETE for /master/quota/role2 from 127.0.0.1:51458 with User-Agent='HTTPie/0.9.8' I0126 13:36:04.887512 255578112 registrar.cpp:461] Applied 1 operations in 42us; attempting to update the registry I0126 13:36:04.892643 255578112 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 7 I0126 13:36:04.893127 258797568 replica.cpp:539] Replica received write request for position 7 from __req_res__(4)@172.18.9.56:5050 I0126 13:36:04.895309 257187840 replica.cpp:693] Replica received learned notice for position 7 from @0.0.0.0:0 I0126 13:36:04.895814 258260992 registrar.cpp:506] Successfully updated the registry in 8.2688ms F0126 13:36:04.895956 256114688 hierarchical.cpp:1180] Check failed: quotas.contains(role) *** Check failure stack trace: *** I0126 13:36:04.895961 255578112 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 8 I0126 13:36:04.896437 257187840 replica.cpp:539] Replica received write request for position 8 from __req_res__(5)@172.18.9.56:5050 I0126 13:36:04.896908 259334144 replica.cpp:693] Replica received learned notice for position 8 from @0.0.0.0:0 @ 0x10b5e52aa google::LogMessage::Fail() E0126 13:36:04.905042 259870720 process.cpp:2419] Failed to shutdown socket with fd 11: Socket is not connected @ 0x10b5e282c google::LogMessage::SendToLog() @ 0x10b5e3959 google::LogMessage::Flush() @ 0x10b5ee159 google::LogMessageFatal::~LogMessageFatal() @ 0x10b5e5795 google::LogMessageFatal::~LogMessageFatal() @ 0x1089e8d17 mesos::internal::master::allocator::internal::HierarchicalAllocatorProcess::removeQuota() @ 0x107ebbc13 
_ZZN7process8dispatchIN5mesos8internal6master9allocator21MesosAllocatorProcessERKNSt3__112basic_stringIcNS6_11char_traitsIcEENS6_9allocatorIcEEEESC_EEvRKNS_3PIDIT_EEMSG_FvT0_ET1_ENKUlPNS_11ProcessBaseEE_clESP_ @ 0x107ebbab0 _ZNSt3__128__invoke_void_return_wrapperIvE6__callIJRZN7process8dispatchIN5mesos8internal6master9allocator21MesosAllocatorProcessERKNS_12basic_stringIcNS_11char_traitsIcEENS_9allocatorIcEEEESF_EEvRKNS3_3PIDIT_EEMSJ_FvT0_ET1_EUlPNS3_11ProcessBaseEE_SS_EEEvDpOT_ @ 0x107ebb7b9 _ZNSt3__110__function6__funcIZN7process8dispatchIN5mesos8internal6master9allocator21MesosAllocatorProcessERKNS_12basic_stringIcNS_11char_traitsIcEENS_9allocatorIcEEEESE_EEvRKNS2_3PIDIT_EEMSI_FvT0_ET1_EUlPNS2_11ProcessBaseEE_NSC_ISS_EEFvSR_EEclEOSR_ @ 0x10b38ba27 std::__1::function<>::operator()() @ 0x10b38b96c process::ProcessBase::visit() @ 0x10b40415e process::DispatchEvent::visit() @ 0x107665171 process::ProcessBase::serve() @ 0x10b385c07 process::ProcessManager::resume() @ 0x10b47db90 process::ProcessManager::init_threads()::$_0::operator()() @ 0x10b47d7e0 _ZNSt3__114__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN7process14ProcessManager12init_threadsEvE3$_0EEEEEPvSB_ @ 0x7fffb2b8eaab _pthread_body @ 0x7fffb2b8e9f7 _pthread_start @ 0x7fffb2b8e1fd thread_start [2] 59343 abort mesos-master --work_dir=work_dir ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7009","01/26/2017 17:07:49",1,"Add a 'secret' field to the 'Environment' message ""A new field of type {{Secret}} should be added to the {{Environment}} message to enable the inclusion of secrets in executor and task environments.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7011","01/26/2017 17:30:00",1,"Add an '--executor_secret_key' flag to the agent ""A new {{\-\-executor_secret_key}} flag should be added to the agent to allow the operator to specify a secret file to be loaded into the default executor JWT authenticator and SecretGenerator modules. This secret will be used to generate default executor secrets when {{\-\-generate_executor_secrets}} is set, and will be used to verify those secrets when {{\-\-authenticate_http_executors}} is set.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7012","01/26/2017 17:34:04",2,"Add authorization actions for V1 executor calls ""Authorization actions should be added for the V1 executor calls: * Subscribe * Update * Message""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-7013","01/26/2017 17:44:07",2,"Update the authorizer interface for executor authentication ""The authorizer interface must be updated to accommodate changes introduced by the implementation of executor authentication: * The {{authorization::Subject}} message must be extended to include the {{claims}} from a {{Principal}} * The local authorizer must be updated to accommodate this interface change""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7014","01/26/2017 17:48:52",3,"Add implicit executor authorization to local authorizer ""The local authorizer should be updated to perform implicit authorization of executor actions. When executors authenticate using a default executor secret, the authorizer will receive an authorization {{Subject}} which contains claims, but no principal. 
In this case, implicit authorization should be performed. Implicit authorization rules should enforce that an executor can perform actions on itself; i.e., subscribe as itself, send messages as itself, launch nested containers within itself.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7022","01/27/2017 20:50:52",3,"Update framework authorization to support multiple roles ""Currently the master assumes that a framework is only in a single role, see {{Master::authorizeFramework}}. This code should be updated to support frameworks with multiple roles. In particular it should get authorization of the framework's principal to register in each of the framework's roles.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7026","01/27/2017 23:42:19",5,"Update authorization / authorization-filtering to handle hierarchical roles. ""Authorization and endpoint filtering will need to be updated in order to allow the authorization to be performed in a hierarchical manner (e.g. a user can see all beneath /eng/* vs. a user can see all beneath /eng/frontend/*).""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7028","01/28/2017 16:21:06",5,"NetSocketTest.EOFBeforeRecv is flaky. ""This was observed on ASF CI: """," [ RUN ] Encryption/NetSocketTest.EOFBeforeRecv/0 I0128 03:48:51.444228 27745 openssl.cpp:419] CA file path is unspecified! NOTE: Set CA file path with LIBPROCESS_SSL_CA_FILE= I0128 03:48:51.444252 27745 openssl.cpp:424] CA directory path unspecified! NOTE: Set CA directory path with LIBPROCESS_SSL_CA_DIR= I0128 03:48:51.444257 27745 openssl.cpp:429] Will not verify peer certificate! NOTE: Set LIBPROCESS_SSL_VERIFY_CERT=1 to enable peer certificate verification I0128 03:48:51.444262 27745 openssl.cpp:435] Will only verify peer certificate if presented! NOTE: Set LIBPROCESS_SSL_REQUIRE_CERT=1 to require peer certificate verification I0128 03:48:51.447341 27745 process.cpp:1246] libprocess is initialized on 172.17.0.2:45515 with 16 worker threads ../../../3rdparty/libprocess/src/tests/socket_tests.cpp:196: Failure Failed to wait 15secs for client->recv() [ FAILED ] Encryption/NetSocketTest.EOFBeforeRecv/0, where GetParam() = """"SSL"""" (15269 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7042","01/31/2017 20:04:58",2,"Send SIGKILL after SIGTERM to IOSwitchboard after container termination. ""This is follow up for MESOS-6664""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7047","02/01/2017 18:00:44",2,"Update agent for hierarchical roles. ""Agents use the role name in the file system path for persistent volumes: a persistent volume is written to {{work_dir/volumes/roles//}}. When using hierarchical roles, {{role-name}} might contain slashes. It seems like there are three options here: # When converting the role name into the file system path, escape any slashes that appear. # Hash the role name before using it in the file system path. # Create a directory hierarchy that corresponds to the nesting in the role name. So a volume for role {{a/b/c/d}} would be stored in {{roles/a/b/c/d/}}. 
If we adopt #3, we'd probably also want to cleanup the filesystem when a volume is removed.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7051","02/02/2017 00:53:49",2,"Introduce a new http::Headers abstraction. ""Introduce a new http::Headers abstraction to replace the previous hashmap 'Headers'. The benefit is that it can be embedded with other header classes (e.g., WWW-Authenticate) to parse a header content, as well as doing validation.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7069","02/06/2017 20:14:51",2,"The linux filesystem isolator should set mode and ownership for host volumes. ""If the host path is a relative path, the linux filesystem isolator should set the mode and ownership for this host volume since it allows non-root user to write to the volume. Note that this is the case of sharing the host fileysystem (without rootfs).""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7076","02/07/2017 15:30:27",8,"libprocess tests fail when using libevent 2.1.8 ""Running {{libprocess-tests}} on Mesos compiled with {{--enable-libevent --enable-ssl}} on an operating system using libevent 2.1.8, SSL related tests fail like Tests failing are """," [ RUN ] SSLTest.SSLSocket I0207 15:20:46.017881 2528580544 openssl.cpp:419] CA file path is unspecified! NOTE: Set CA file path with LIBPROCESS_SSL_CA_FILE= I0207 15:20:46.017904 2528580544 openssl.cpp:424] CA directory path unspecified! NOTE: Set CA directory path with LIBPROCESS_SSL_CA_DIR= I0207 15:20:46.017918 2528580544 openssl.cpp:429] Will not verify peer certificate! NOTE: Set LIBPROCESS_SSL_VERIFY_CERT=1 to enable peer certificate verification I0207 15:20:46.017923 2528580544 openssl.cpp:435] Will only verify peer certificate if presented! NOTE: Set LIBPROCESS_SSL_REQUIRE_CERT=1 to require peer certificate verification WARNING: Logging before InitGoogleLogging() is written to STDERR I0207 15:20:46.033001 2528580544 openssl.cpp:419] CA file path is unspecified! NOTE: Set CA file path with LIBPROCESS_SSL_CA_FILE= I0207 15:20:46.033179 2528580544 openssl.cpp:424] CA directory path unspecified! NOTE: Set CA directory path with LIBPROCESS_SSL_CA_DIR= I0207 15:20:46.033196 2528580544 openssl.cpp:429] Will not verify peer certificate! NOTE: Set LIBPROCESS_SSL_VERIFY_CERT=1 to enable peer certificate verification I0207 15:20:46.033201 2528580544 openssl.cpp:435] Will only verify peer certificate if presented! 
NOTE: Set LIBPROCESS_SSL_REQUIRE_CERT=1 to require peer certificate verification ../../../3rdparty/libprocess/src/tests/ssl_tests.cpp:257: Failure Failed to wait 15secs for Socket(socket.get()).recv() [ FAILED ] SSLTest.SSLSocket (15196 ms) SSLTest.SSLSocket SSLTest.NoVerifyBadCA SSLTest.VerifyCertificate SSLTest.ProtocolMismatch SSLTest.ECDHESupport SSLTest.PeerAddress SSLTest.HTTPSGet SSLTest.HTTPSPost SSLTest.SilentSocket SSLTest.ShutdownThenSend SSLVerifyIPAdd/SSLTest.BasicSameProcess/0, where GetParam() = """"false"""" SSLVerifyIPAdd/SSLTest.BasicSameProcess/1, where GetParam() = """"true"""" SSLVerifyIPAdd/SSLTest.BasicSameProcessUnix/0, where GetParam() = """"false"""" SSLVerifyIPAdd/SSLTest.BasicSameProcessUnix/1, where GetParam() = """"true"""" SSLVerifyIPAdd/SSLTest.RequireCertificate/0, where GetParam() = """"false"""" SSLVerifyIPAdd/SSLTest.RequireCertificate/1, where GetParam() = """"true"""" ",0,0,1,0,0,0,0,0,0,0,1,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7099","02/09/2017 23:31:17",5,"Quota can be exceeded due to coarse-grained offer technique. ""The current implementation of quota allocation allocates the entire available resources on an agent when trying to satisfy the quota. What this means is that quota can be exceeded by the size of an agent. This is especially problematic for large machines, consider a 48 core, 512 GB memory server where a role is given 4 cores and 4GB of memory. Given our current approach, we will send an offer for the entire 48 cores and 512 GB of memory! This ticket is to perform fine grained offers when the allocation will exceed the quota.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7102","02/10/2017 00:35:44",2,"Crash when sending a SIGUSR1 signal to the agent. ""Looks like sending a {{SIGUSR1}} to the agent crashes it. This is a regression and used to work fine in the 1.1 release. Note that the agent does unregisters with the master and the crash happens after that. Steps to reproduce: - Start the agent. - Send it a {{SIGUSR1}} signal. 
The agent should crash with a stack trace similar to this: """," I0209 16:19:46.210819 31977472 slave.cpp:851] Received SIGUSR1 signal from user gmann; unregistering and shutting down I0209 16:19:46.210960 31977472 slave.cpp:803] Agent terminating *** Aborted at 1486685986 (unix time) try """"date -d @1486685986"""" if you are using GNU date *** PC: @ 0x7fffbc4904fc _pthread_key_global_init *** SIGSEGV (@0x38) received by PID 88894 (TID 0x7fffc50c83c0) stack trace: *** @ 0x7fffbc488bba _sigtramp @ 0x7fe8a5d03f38 (unknown) @ 0x10b6d67d9 _ZZ11synchronizeINSt3__115recursive_mutexEE12SynchronizedIT_EPS3_ENKUlPS1_E_clES6_ @ 0x10b6d67b8 _ZZ11synchronizeINSt3__115recursive_mutexEE12SynchronizedIT_EPS3_ENUlPS1_E_8__invokeES6_ @ 0x10b6d6889 Synchronized<>::Synchronized() @ 0x10b6d678d Synchronized<>::Synchronized() @ 0x10b6a708a synchronize<>() @ 0x10e2f148d process::ProcessManager::wait() @ 0x10e2e9a78 process::wait() @ 0x10b30614f process::wait() @ 0x10c9619dc mesos::internal::slave::StatusUpdateManager::~StatusUpdateManager() @ 0x10c961a55 mesos::internal::slave::StatusUpdateManager::~StatusUpdateManager() @ 0x10b1ab035 main @ 0x7fffbc27b255 start [1] 88894 segmentation fault bin/mesos-agent.sh —master=127.0.0.1:5050 ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7124","02/14/2017 09:09:17",3,"Replace monadic type get() functions with operator* ""In MESOS-2757 we introduced {{T* operator->}} for {{Option}}, {{Future}} and {{Try}}. This provided a convenient short-hand for existing member functions {{T& get}} providing identical functionality. To finalize the work of MESOS-2757 we should replace the existing {{T& get()}} member functions with functions {{T& operator*}}. This is desirable as having both {{operator->}} and {{get}} in the code base at the same time lures developers into using the old-style {{get}} instead of {{operator->}} where it is not needed, e.g., instead of We still require the functionality of {{get}} to directly access the contained value, but the current API unnecessarily conflates two (at least from a usage perspective) unrelated aspects; in these instances, we should use an {{operator*}} instead, Using {{operator*}} in these instances makes it much less likely that users would use it in instances when they wanted to call functions of the wrapped value, i.e., appears more natural than Note that this proposed change is in line with the interface of {{std::optional}}. Also, {{std::shared_ptr}}'s {{get}} is a useful function and implements an unrelated interface: it surfaces the wrapped pointer as opposed to its {{operator*}} which dereferences the wrapped pointer. Similarly, our current {{get}} also produce values, and are unrelated to {{std::shared_ptr}}'s {{get}}."""," m.get().fun(); m->fun(); void f(const T&); Try m = ..; f(*m); // instead of: f(m.get()); m->fun(); (*m).fun(); ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7130","02/15/2017 18:52:52",2,"port_mapping isolator: executor hangs when running on EC2 ""Hi, I'm experiencing a weird issue: I'm using a CI to do testing on infrastructure automation. I recently activated the {{network/port_mapping}} isolator. I'm able to make the changes work and pass the test for bare-metal servers and virtualbox VMs using this configuration. But when I try on EC2 (on which my CI pipeline rely) it systematically fails to run any container. 
It appears that the sandbox is created and the port_mapping isolator seems to be OK according to the logs in stdout and stderr and the {tc} output : Then the executor never come back in REGISTERED state and hang indefinitely. {GLOG_v=3} doesn't help here. My skills in this area are limited, but trying to load the symbols and attach a gdb to the mesos-executor process, I'm able to print this stack: I concluded that the underlying shell script launched by the isolator or the task itself is just .. blocked. But I don't understand why. Here is a process tree to show that I've no task running but the executor is: If someone has a clue about the issue I could experience on EC2, I would be interested to talk..."""," + mount --make-rslave /run/netns + test -f /proc/sys/net/ipv6/conf/all/disable_ipv6 + echo 1 + ip link set lo address 02:44:20:bb:42:cf mtu 9001 up + ethtool -K eth0 rx off (...) + tc filter show dev eth0 parent ffff:0 + tc filter show dev lo parent ffff:0 I0215 16:01:13.941375 1 exec.cpp:161] Version: 1.0.2 #0 0x00007feffc1386d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib64/libpthread.so.0 #1 0x00007feffbed69ec in std::condition_variable::wait(std::unique_lock&) () from /usr/lib64/libstdc++.so.6 #2 0x00007ff0003dd8ec in void synchronized_wait(std::condition_variable*, std::mutex*) () from /usr/lib64/libmesos-1.0.2.so #3 0x00007ff0017d595d in Gate::arrive(long) () from /usr/lib64/libmesos-1.0.2.so #4 0x00007ff0017c00ed in process::ProcessManager::wait(process::UPID const&) () from /usr/lib64/libmesos-1.0.2.so #5 0x00007ff0017c5c05 in process::wait(process::UPID const&, Duration const&) () from /usr/lib64/libmesos-1.0.2.so #6 0x00000000004ab26f in process::wait(process::ProcessBase const*, Duration const&) () #7 0x00000000004a3903 in main () root 28420 0.8 3.0 1061420 124940 ? Ssl 17:56 0:25 /usr/sbin/mesos-slave --advertise_ip=127.0.0.1 --attributes=platform:centos;platform_major_version:7;type:base --cgroups_enable_cfs --cgroups_hierarchy=/sys/fs/cgroup --cgroups_net_cls_primary_handle=0xC370 --container_logger=org_apache_mesos_LogrotateContainerLogger --containerizers=mesos,docker --credential=file:///etc/mesos-chef/slave-credential --default_container_info={""""type"""":""""MESOS"""",""""volumes"""":[{""""host_path"""":""""tmp"""",""""container_path"""":""""/tmp"""",""""mode"""":""""RW""""}]} --default_role=default --docker_registry=/usr/share/mesos/users --docker_store_dir=/var/opt/mesos/store/docker --egress_unique_flow_per_container --enforce_container_disk_quota --ephemeral_ports_per_container=128 --executor_environment_variables={""""PATH"""":""""/bin:/usr/bin:/usr/sbin"""",""""CRITEO_DC"""":""""par"""",""""CRITEO_ENV"""":""""prod""""} --image_providers=docker --image_provisioner_backend=copy --isolation=cgroups/cpu,cgroups/mem,cgroups/net_cls,namespaces/pid,disk/du,filesystem/shared,filesystem/linux,docker/runtime,network/cni,network/port_mapping --logging_level=INFO --master=zk://mesos:test@localhost.localdomain:2181/mesos --modules=file:///etc/mesos-chef/slave-modules.json --port=5051 --recover=reconnect --resources=ports:[31000-32000];ephemeral_ports:[32768-57344] --strict --work_dir=/var/opt/mesos root 28484 0.0 2.3 433676 95016 ? 
Ssl 17:56 0:00 \_ mesos-logrotate-logger --help=false --log_filename=/var/opt/mesos/slaves/cdf94219-87b2-4af2-9f61-5697f0442915-S0/frameworks/366e8ed2-730e-4423-9324-086704d182b0-0000/executors/group_simplehttp.16f7c2ee-f3a8-11e6-be1c-0242b44d071f/runs/1d3e6b1c-cda8-47e5-92c4-a161429a7ac6/stdout --logrotate_options=rotate 5 --logrotate_path=logrotate --max_size=10MB root 28485 0.0 2.3 499212 94724 ? Ssl 17:56 0:00 \_ mesos-logrotate-logger --help=false --log_filename=/var/opt/mesos/slaves/cdf94219-87b2-4af2-9f61-5697f0442915-S0/frameworks/366e8ed2-730e-4423-9324-086704d182b0-0000/executors/group_simplehttp.16f7c2ee-f3a8-11e6-be1c-0242b44d071f/runs/1d3e6b1c-cda8-47e5-92c4-a161429a7ac6/stderr --logrotate_options=rotate 5 --logrotate_path=logrotate --max_size=10MB marathon 28487 0.0 2.4 635780 97388 ? Ssl 17:56 0:00 \_ mesos-executor --launcher_dir=/usr/libexec/mesos ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-7153","02/21/2017 22:43:28",3,"The new http::Headers abstraction may break some modules. ""In the favor of the new http::Headers abstraction, the headers class was changed from a hashmap to a class. However, this change may potentially break some modules since functionalities like constructor using initializer or calling methods from undered_map. We should have the new class derived from the hashmap instead. ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7154","02/21/2017 23:28:27",2,"Document provisioner auto backend support. ""Document the provisioner auto backend semantic in container-image.md""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7160","02/22/2017 21:19:15",3,"Parsing of perf version segfaults ""Parsing the perf version [fails with a segfault in ASF CI|https://builds.apache.org/job/Mesos-Buildbot/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu:14.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/3294/], """," E0222 20:54:03.033464 805 perf.cpp:237] Failed to get perf version: Failed to execute perf: terminated with signal Aborted (core dumped) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7168","02/24/2017 21:24:40",3,"Agent should validate that the nested container ID does not exceed certain length. ""This is related to MESOS-691. Since nested container ID is generated by the executor, the agent should verify that the length of it does not exceed certain length.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7193","03/01/2017 16:24:42",5,"Use of `GTEST_IS_THREADSAFE` in asserts is problematic. ""Some test cases in libprocess use {{ASSERT_TRUE(GTEST_IS_THREADSAFE)}}. This is a misuse of that define, [the documentation in GTest says|https://github.com/google/googletest/blob/master/googletest/include/gtest/internal/gtest-port.h#L155-L163]: Currently, the use of {{GTEST_IS_THREADSAFE}} works fine in the assert, because it is defined to be {{1}}. But newer upstream versions of GTest use a more complicated define, that can yield to be undefined, causing compilation errors."""," Macros indicating which Google Test features are available (a macro is defined to 1 if the corresponding feature is supported; otherwise UNDEFINED -- it's never defined to 0.). 
Google Test defines these macros automatically. Code outside Google Test MUST NOT define them. ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7197","03/02/2017 09:01:53",3,"Requesting tiny amount of CPU crashes master. ""If a task is submitted with a tiny CPU request e.g. 0.0004, then when it completes the master crashes due to a CHECK failure: I can reproduce this with the following command: If I replace 0.0004 with 0.001 the issue no longer occurs."""," F0302 10:48:26.654909 15391 sorter.cpp:291] Check failed: allocations[name].resources[slaveId].contains(resources) mesos-execute --command='sleep 5' --master=$MASTER --name=crashtest --resources='cpus:0.0004;mem:128' ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7200","03/02/2017 17:29:56",5,"Add support for hierarchical roles to the local authorizer ""We should update the local authorizer so that role values for role-based actions matching whole role subhierarchies are understood, e.g., given roles {{a/b/c}}, {{a/b/d}} and {{a/e}} it should be possible to specify a role value {{a/b/%}} matching actions on roles {{a/b/c}} and {{a/b/d}}, or a value {{a/%}} matching actions on all above roles.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7208","03/06/2017 01:57:46",3,"Persistent volume ownership is set to root when task is running with non-root user ""I’m running docker container in universal containerizer, mesos 1.1.0. switch_user=true, isolator=filesystem/linux,docker/runtime. Container is launched with marathon, “user”:”someappuser”. I’d want to use persistent volume, but it’s exposed to container with root user permissions even if root folder is created with someppuser ownership (looks like mesos do chown to this folder). 
here logs for my container: """," I0305 22:51:36.414655 10175 slave.cpp:1701] Launching task 'md_hdfs_journal.23f813ab-01dd-11e7-a012-0242ce94d92a' for framework e9d0e39e-b67d-4142-b95d-b0987998eb92-0000 I0305 22:51:36.415118 10175 paths.cpp:536] Trying to chown '/export/intssd/mesos-slave/workdir/slaves/85150805-a201-4b23-ab21-b332a458fc97-S10/frameworks/e9d0e39e-b67d-4142-b95d-b0987998eb92-0000/executors/md_hdfs_journal.23f813ab-01dd-11e7-a012-0242ce94d92a/runs/e978d4eb-5ec1-44ad-b50a-9ae6bfe1065a' to user 'root' I0305 22:51:36.422992 10175 slave.cpp:6179] Launching executor 'md_hdfs_journal.23f813ab-01dd-11e7-a012-0242ce94d92a' of framework e9d0e39e-b67d-4142-b95d-b0987998eb92-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/export/intssd/mesos-slave/workdir/slaves/85150805-a201-4b23-ab21-b332a458fc97-S10/frameworks/e9d0e39e-b67d-4142-b95d-b0987998eb92-0000/executors/md_hdfs_journal.23f813ab-01dd-11e7-a012-0242ce94d92a/runs/e978d4eb-5ec1-44ad-b50a-9ae6bfe1065a' I0305 22:51:36.424278 10175 slave.cpp:1987] Queued task 'md_hdfs_journal.23f813ab-01dd-11e7-a012-0242ce94d92a' for executor 'md_hdfs_journal.23f813ab-01dd-11e7-a012-0242ce94d92a' of framework e9d0e39e-b67d-4142-b95d-b0987998eb92-0000 I0305 22:51:36.424347 10158 docker.cpp:1000] Skipping non-docker container I0305 22:51:36.425639 10142 containerizer.cpp:938] Starting container e978d4eb-5ec1-44ad-b50a-9ae6bfe1065a for executor 'md_hdfs_journal.23f813ab-01dd-11e7-a012-0242ce94d92a' of framework e9d0e39e-b67d-4142-b95d-b0987998eb92-0000 I0305 22:51:36.428725 10166 provisioner.cpp:294] Provisioning image rootfs '/export/intssd/mesos-slave/workdir/provisioner/containers/e978d4eb-5ec1-44ad-b50a-9ae6bfe1065a/backends/copy/rootfses/0e2181e9-1bf2-42d4-8cb0-ee70e466c3ae' for container e978d4eb-5ec1-44ad-b50a-9ae6bfe1065a I0305 22:51:42.981240 10149 linux.cpp:695] Changing the ownership of the persistent volume at '/export/intssd/mesos-slave/data/volumes/roles/general_marathon_service_role/md_hdfs_journal#data#23f813aa-01dd-11e7-a012-0242ce94d92a' with uid 0 and gid 0 I0305 22:51:42.986593 10136 linux_launcher.cpp:421] Launching container e978d4eb-5ec1-44ad-b50a-9ae6bfe1065a and cloning with namespaces CLONE_NEWNS ls -la /export/intssd/mesos-slave/workdir/slaves/85150805-a201-4b23-ab21-b332a458fc97-S10/frameworks/e9d0e39e-b67d-4142-b95d-b0987998eb92-0000/executors/md_hdfs_journal.23f813ab-01dd-11e7-a012-0242ce94d92a/runs/e978d4eb-5ec1-44ad-b50a-9ae6bfe1065a/ drwxr-xr-x 3 someappuser someappgroup 4096 22:51 . drwxr-xr-x 3 root root 4096 22:51 .. 
drwxr-xr-x 2 root root 4096 22:51 data -rw-r--r-- 1 root root 169 22:51 stderr -rw-r--r-- 1 root root 183012 23:00 stdout ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7210","03/06/2017 08:04:37",3,"HTTP health check doesn't work when mesos runs with --docker_mesos_image ""When running mesos-slave with option """"docker_mesos_image"""" like: from the container that was started with option """"pid: host"""" like: and example marathon job, that use MESOS_HTTP checks like: I see the errors like: Looks like option docker_mesos_image makes, that newly started mesos job is not using """"pid host"""" option same as mother container was started, but has his own PID namespace (so it doesn't matter if mother container was started with """"pid host"""" or not it will never be able to find PID)"""," --master=zk://standalone:2181/mesos --containerizers=docker,mesos --executor_registration_timeout=5mins --hostname=standalone --ip=0.0.0.0 --docker_stop_timeout=5secs --gc_delay=1days --docker_socket=/var/run/docker.sock --no-systemd_enable_support --work_dir=/tmp/mesos --docker_mesos_image=panteras/paas-in-a-box:0.4.0 net: host privileged: true pid: host { """"id"""": """"python-example-stable"""", """"cmd"""": """"python3 -m http.server 8080"""", """"mem"""": 16, """"cpus"""": 0.1, """"instances"""": 2, """"container"""": { """"type"""": """"DOCKER"""", """"docker"""": { """"image"""": """"python:alpine"""", """"network"""": """"BRIDGE"""", """"portMappings"""": [ { """"containerPort"""": 8080, """"hostPort"""": 0, """"protocol"""": """"tcp"""" } ] } }, """"env"""": { """"SERVICE_NAME"""" : """"python"""" }, """"healthChecks"""": [ { """"path"""": """"/"""", """"portIndex"""": 0, """"protocol"""": """"MESOS_HTTP"""", """"gracePeriodSeconds"""": 30, """"intervalSeconds"""": 10, """"timeoutSeconds"""": 30, """"maxConsecutiveFailures"""": 3 } ] } F0306 07:41:58.844293 35 health_checker.cpp:94] Failed to enter the net namespace of task (pid: '13527'): Pid 13527 does not exist *** Check failure stack trace: *** @ 0x7f51770b0c1d google::LogMessage::Fail() @ 0x7f51770b29d0 google::LogMessage::SendToLog() @ 0x7f51770b0803 google::LogMessage::Flush() @ 0x7f51770b33f9 google::LogMessageFatal::~LogMessageFatal() @ 0x7f517647ce46 _ZNSt17_Function_handlerIFivEZN5mesos8internal6health14cloneWithSetnsERKSt8functionIS0_E6OptionIiERKSt6vectorINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESaISG_EEEUlvE_E9_M_invokeERKSt9_Any_data @ 0x7f517647bf2b mesos::internal::health::cloneWithSetns() @ 0x7f517648374b std::_Function_handler<>::_M_invoke() @ 0x7f5177068167 process::internal::cloneChild() @ 0x7f5177065c32 process::subprocess() @ 0x7f5176481a9d mesos::internal::health::HealthCheckerProcess::_httpHealthCheck() @ 0x7f51764831f7 mesos::internal::health::HealthCheckerProcess::_healthCheck() @ 0x7f517701f38c process::ProcessBase::visit() @ 0x7f517702c8b3 process::ProcessManager::resume() @ 0x7f517702fb77 _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUt_vEEE6_M_runEv @ 0x7f51754ddc80 (unknown) @ 0x7f5174cf06ba start_thread @ 0x7f5174a2682d (unknown) I0306 07:41:59.077986 9 health_checker.cpp:199] Ignoring failure as health check still in grace period ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7220","03/08/2017 11:29:35",1,"'EXPECT_SOME' and other asserts don't work with 'Try's that have a custom error state. 
""MESOS-5110 introduced an additional template parameter for {{Try}} to support custom error types. Using these values with {{EXPECT_SOME}} doesn't work, i.e. won't compile. The other assertions in {{stout/gtest.hpp}} are likely affected as well."""," Try foo = bar(); EXPECT_SOME(foo); ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7226","03/09/2017 23:08:15",5,"Introduce precompiled headers (on Windows) ""Precompiled headers (PCHs) exist on both Windows and Linux. For Linux, you can refer to https://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html. Straight from the GNU CC documentation: """"The time the compiler takes to process these header files over and over again can account for nearly all of the time required to build the project."""" PCHs are only being proposed for the CMake system. In theory, we can introduce this change with only a few, non-intrusive code changes. The feature will primarily be a CMake change. See: https://github.com/sakra/cotire""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7246","03/14/2017 21:27:44",1,"Add documentation for AGENT_ADDED/AGENT_REMOVED events. ""We need to add documentation to the existing Mesos Operator API docs for the newly added {{AGENT_ADDED}}/{{AGENT_REMOVED}} events. The protobuf definition for the events can be found here: https://github.com/apache/mesos/blob/master/include/mesos/v1/master/master.proto""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7251","03/16/2017 03:32:02",2,"Support pulling images from AliCloud private registry. ""The image puller via curl doesn't work when I'm specifying the image name as: registry.cn-hangzhou.aliyuncs.com/kaiyu/pytorch-cuda75 400 BAD REQUEST But the docker pulls it successfully bq. docker pull registry.cn-hangzhou.aliyuncs.com/kaiyu/pytorch-cuda75""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7252","03/16/2017 05:15:14",2,"Need to fix resource check in long-lived framework ""The multi-role changes in Mesos changed the implementation of `Resources::contains`. This results in the search for a given resource to be performed only for unallocated resources. For allocated resources the search is actually performed only for a given role. Due to this change the resource check in both the long-lived framework are failing leading to these frameworks not launching any tasks. The fix would be to unallocate all resources in a given offer and than do the `contains` check.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7258","03/16/2017 19:03:36",13,"Provide scheduler calls to subscribe to additional roles and unsubscribe from roles. ""The current support for schedulers to subscribe to additional roles or unsubscribe from some of their roles requires that the scheduler obtain a new subscription with the master which invalidates the event stream. A more lightweight mechanism would be to provide calls for the scheduler to subscribe to additional roles or unsubscribe from some roles such that the existing event stream remains open and offers to the new roles arrive on the existing event stream. E.g. SUBSCRIBE_TO_ROLE UNSUBSCRIBE_FROM_ROLE One open question pertains to the terminology here, whether we would want to avoid using """"subscribe"""" in this context. 
An alternative would be: UPDATE_FRAMEWORK_INFO Which provides a generic mechanism for a framework to perform framework info updates without obtaining a new event stream. In addition, it would be easier to use if it returned 200 on success and an error response if invalid, etc. Rather than returning 202. *NOTE*: Not specific to this issue, but we need to figure out how to allow the framework to not leak reservations, e.g. MESOS-7651.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0 +"MESOS-7263","03/20/2017 02:16:23",3,"User supplied task environment variables cause warnings in sandbox stdout. ""The default executor causes task/command environment variables to get duplicated internally, causing warnings in the resulting sandbox {{stdout}}. Result in {{stdout}} of the sandbox: """," $ ./src/mesos-execute --name=""""test"""" --env='{""""key1"""":""""value1""""}' --command='sleep 1000' --master=127.0.0.1:5050 Overwriting environment variable 'key1', original: 'value1', new: 'value1' ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-7264","03/20/2017 02:49:06",1,"Possibly duplicate environment variables should not leak values to the sandbox. ""When looking into MESOS-7263, the following also came up. Within the contents of `stdout`: There seems no obvious need to warn the user as the value is identical."""," ./src/mesos-execute --name=""""test"""" --env='{""""key1"""":""""value1""""}' --command='sleep 1000' --master=127.0.0.1:5050 Overwriting environment variable 'key1', original: 'value1', new: 'value1' ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7265","03/20/2017 03:13:54",3,"Containerizer startup may cause sensitive data to leak into sandbox logs. ""The task sandbox logging does show the callup for the containerizer launch with all of its flags. This is not safe when assuming that we may not want to leak sensitive data into the sandbox logging. 
Example: """," Received SUBSCRIBED event Subscribed executor on lobomacpro2.fritz.box Received LAUNCH event Starting task test /Users/till/Development/mesos-private/build/src/mesos-containerizer launch --help=""""false"""" --launch_info=""""{""""command"""":{""""environment"""":{""""variables"""":[{""""name"""":""""key1"""",""""type"""":""""VALUE"""",""""value"""":""""value1""""}]},""""shell"""":true,""""value"""":""""sleep 1000""""},""""environment"""":{""""variables"""":[{""""name"""":""""BIN_SH"""",""""type"""":""""VALUE"""",""""value"""":""""xpg4""""},{""""name"""":""""DUALCASE"""",""""type"""":""""VALUE"""",""""value"""":""""1""""},{""""name"""":""""DYLD_LIBRARY_PATH"""",""""type"""":""""VALUE"""",""""value"""":""""\/Users\/till\/Development\/mesos-private\/build\/src\/.libs""""},{""""name"""":""""LIBPROCESS_PORT"""",""""type"""":""""VALUE"""",""""value"""":""""0""""},{""""name"""":""""MESOS_AGENT_ENDPOINT"""",""""type"""":""""VALUE"""",""""value"""":""""192.168.178.20:5051""""},{""""name"""":""""MESOS_CHECKPOINT"""",""""type"""":""""VALUE"""",""""value"""":""""0""""},{""""name"""":""""MESOS_DIRECTORY"""",""""type"""":""""VALUE"""",""""value"""":""""\/tmp\/mesos\/slaves\/816619b6-f5ce-42d6-ad6b-2ef2001adc0a-S0\/frameworks\/4c8a82d4-8a5b-47f5-a660-5fef15da71a5-0000\/executors\/test\/runs\/b4bd0251-b42a-4ab3-9f02-60ede75bf3b1""""},{""""name"""":""""MESOS_EXECUTOR_ID"""",""""type"""":""""VALUE"""",""""value"""":""""test""""},{""""name"""":""""MESOS_EXECUTOR_SHUTDOWN_GRACE_PERIOD"""",""""type"""":""""VALUE"""",""""value"""":""""5secs""""},{""""name"""":""""MESOS_FRAMEWORK_ID"""",""""type"""":""""VALUE"""",""""value"""":""""4c8a82d4-8a5b-47f5-a660-5fef15da71a5-0000""""},{""""name"""":""""MESOS_HTTP_COMMAND_EXECUTOR"""",""""type"""":""""VALUE"""",""""value"""":""""0""""},{""""name"""":""""MESOS_SANDBOX"""",""""type"""":""""VALUE"""",""""value"""":""""\/tmp\/mesos\/slaves\/816619b6-f5ce-42d6-ad6b-2ef2001adc0a-S0\/frameworks\/4c8a82d4-8a5b-47f5-a660-5fef15da71a5-0000\/executors\/test\/runs\/b4bd0251-b42a-4ab3-9f02-60ede75bf3b1""""},{""""name"""":""""MESOS_SLAVE_ID"""",""""type"""":""""VALUE"""",""""value"""":""""816619b6-f5ce-42d6-ad6b-2ef2001adc0a-S0""""},{""""name"""":""""MESOS_SLAVE_PID"""",""""type"""":""""VALUE"""",""""value"""":""""slave(1)@192.168.178.20:5051""""},{""""name"""":""""PATH"""",""""type"""":""""VALUE"""",""""value"""":""""\/usr\/local\/sbin:\/usr\/local\/bin:\/usr\/sbin:\/usr\/bin:\/sbin:\/bin""""},{""""name"""":""""PWD"""",""""type"""":""""VALUE"""",""""value"""":""""\/private\/tmp\/mesos\/slaves\/816619b6-f5ce-42d6-ad6b-2ef2001adc0a-S0\/frameworks\/4c8a82d4-8a5b-47f5-a660-5fef15da71a5-0000\/executors\/test\/runs\/b4bd0251-b42a-4ab3-9f02-60ede75bf3b1""""},{""""name"""":""""SHLVL"""",""""type"""":""""VALUE"""",""""value"""":""""0""""},{""""name"""":""""__CF_USER_TEXT_ENCODING"""",""""type"""":""""VALUE"""",""""value"""":""""0x1F5:0x0:0x0""""},{""""name"""":""""key1"""",""""type"""":""""VALUE"""",""""value"""":""""value1""""},{""""name"""":""""key1"""",""""type"""":""""VALUE"""",""""value"""":""""value1""""}]}}"""" Forked command at 16329 ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-7270","03/21/2017 00:27:37",1,"Java V1 Framwork Test failed on macOS ""macOS's scheduler make ExamplesTest.JavaV1Framework terminate before the scheduler driver stopped, which results in an exception and a test failure. 
This failure is not seen in an Linux environment yet but there's a possibility that it would also happen.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7272","03/21/2017 05:34:44",2,"Unified containerizer does not support docker registry version < 2.3. ""in file `src/uri/fetchers/docker.cpp` ``` Option contentType = response.headers.get(""""Content-Type""""); if (contentType.isSome() && !strings::startsWith( contentType.get(), """"application/vnd.docker.distribution.manifest.v1"""")) { return Failure( """"Unsupported manifest MIME type: """" + contentType.get()); } ``` Docker fetcher check the contentType strictly, while docker registry with version < 2.3 returns manifests with contentType `application/json`, that leading failure like `E0321 13:27:27.572402 40370 slave.cpp:4650] Container 'xxx' for executor 'xxx' of framework xxx failed to start: Unsupported manifest MIME type: application/json; charset=utf-8`.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7280","03/22/2017 02:12:02",2,"Unified containerizer provisions docker image error with COPY backend ""Error occurs on some specific docker images with COPY backend, both 1.0.2 and 1.2.0. It works well with OVERLAY backend on 1.2.0. {quote} I0321 09:36:07.308830 27613 paths.cpp:528] Trying to chown '/data/mesos/slaves/55f6df5e-2812-40a0-baf5-ce96f20677d3-S102/frameworks/20151223-150303-2677017098-5050-30032-0000/executors/ct:Transcoding_Test_114489497_1490060156172:3/runs/7e518538-7b56-4b14-a3c9-bee43c669bd7' to user 'root' I0321 09:36:07.319628 27613 slave.cpp:5703] Launching executor ct:Transcoding_Test_114489497_1490060156172:3 of framework 20151223-150303-2677017098-5050-30032-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/data/mesos/slaves/55f6df5e-2812-40a0-baf5-ce96f20677d3-S102/frameworks/20151223-150303-2677017098-5050-30032-0000/executors/ct:Transcoding_Test_114489497_1490060156172:3/runs/7e518538-7b56-4b14-a3c9-bee43c669bd7' I0321 09:36:07.321436 27615 containerizer.cpp:781] Starting container '7e518538-7b56-4b14-a3c9-bee43c669bd7' for executor 'ct:Transcoding_Test_114489497_1490060156172:3' of framework '20151223-150303-2677017098-5050-30032-0000' I0321 09:36:37.902195 27600 provisioner.cpp:294] Provisioning image rootfs '/data/mesos/provisioner/containers/7e518538-7b56-4b14-a3c9-bee43c669bd7/backends/copy/rootfses/8d2f7fe8-71ff-4317-a33c-a436241a93d9' for container 7e518538-7b56-4b14-a3c9-bee43c669bd7 *E0321 09:36:58.707718 27606 slave.cpp:4000] Container '7e518538-7b56-4b14-a3c9-bee43c669bd7' for executor 'ct:Transcoding_Test_114489497_1490060156172:3' of framework 20151223-150303-2677017098-5050-30032-0000 failed to start: Collect failed: Failed to copy layer: cp: cannot create regular file ‘/data/mesos/provisioner/containers/7e518538-7b56-4b14-a3c9-bee43c669bd7/backends/copy/rootfses/8d2f7fe8-71ff-4317-a33c-a436241a93d9/usr/bin/python’: Text file busy* I0321 09:36:58.707991 27608 containerizer.cpp:1622] Destroying container '7e518538-7b56-4b14-a3c9-bee43c669bd7' I0321 09:36:58.708468 27607 provisioner.cpp:434] Destroying container rootfs at '/data/mesos/provisioner/containers/7e518538-7b56-4b14-a3c9-bee43c669bd7/backends/copy/rootfses/8d2f7fe8-71ff-4317-a33c-a436241a93d9' for container 7e518538-7b56-4b14-a3c9-bee43c669bd7 {quote} Docker image is a private one, so that i have to try to reproduce this bug with some sample Dockerfile as 
possible.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7304","03/24/2017 19:27:03",3,"Fetcher should not depend on SlaveID. ""Currently, various Fetcher interfaces depends on SlaveID, which is an unnecessary coupling. For instance: Looks like the only reason we need a SlaveID is because we need to calculate the fetcher cache directory based on that. We should calculate the fetcher cache directory in the caller and pass that directory to Fetcher."""," Try Fetcher::recover(const SlaveID& slaveId, const Flags& flags); Future Fetcher::fetch( const ContainerID& containerId, const CommandInfo& commandInfo, const string& sandboxDirectory, const Option& user, const SlaveID& slaveId, const Flags& flags); ",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7305","03/24/2017 19:56:17",8,"Adjust the recover logic of MesosContainerizer to allow standalone containers. ""The current recovery logic in MesosContainerizer assumes that all top level containers are tied to some Mesos executors. Adding standalone containers will invalid this assumption. The recovery logic must be changed to adapt to that.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7306","03/24/2017 20:35:31",8,"Support mount propagation for host volumes. ""Currently, all mounts in a container are marked as 'slave' by default. However, for some cases, we may want mounts under certain directory in a container to be propagate back to the root mount namespace. This is useful for the case where we want the mounts to survive container failures. See more documentation about mount propagation in: https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt Given mount propagation is very hard for users to understand, probably worth limiting this to just host volumes because we only see use case for that at the moment. Some relevant discussion can be found here: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/propagation.md""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7314","03/27/2017 14:50:27",5,"Add offer operations for converting disk resources ""One should be able to convert {{RAW}} and {{BLOCK}} disk resources into a different types by applying operations to them. The offer operations and the related validation and resource handling needs to be implemented.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7316","03/27/2017 21:39:47",1,"Upgrading Mesos to 1.2.0 results in some information missing from the `/flags` endpoint. ""From OSS Mesos Slack: I recently tried upgrading one of our Mesos clusters from 1.1.0 to 1.2.0. After doing this, it looks like the {{zk}} field on the {{/master/flags}} endpoint is no longer present. 
This looks related to the recent {{Flags}} refactoring that was done which resulted in some flags no longer being populated since they were not part of {{master::Flags}} in {{src/master/flags.hpp}}.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7329","03/31/2017 10:28:09",3,"Authorize offer operations for converting disk resources ""All offer operations are authorized, hence authorization logic has to be added to new offer operations as well.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7347","04/05/2017 08:44:20",8,"Prototype resource offer operation handling in the master ""Prototype the following workflow in the master, in accordance with the resource provider design; * Handle accept calls including resource provider related offer operations ({{CREATE_VOLUME}}, ...) * Implement internal bookkeeping of the disk resources these operations will be applied on * Implement resource bookkeeping for resource providers in the master * Send resource provider operations to resource providers""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7349","04/05/2017 16:51:18",3,"Document Mesos ""check"" feature. ""This should include framework authors recommendations about how and when to use general checks as well as comparison with health checks.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7355","04/06/2017 10:42:55",1,"Set MESOS_SANDBOX in debug containers. ""Currently {{MESOS_SANDBOX}} is not set for debug containers, see [https://github.com/apache/mesos/blob/7f04cf886fc2ed59414bf0056a2f351959a2d1f8/src/slave/containerizer/mesos/containerizer.cpp#L1392-L1407]. The most reasonable value seems to be task's sandbox.""","",0,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7361","04/06/2017 15:28:13",3,"Command checks via agent pollute agent logs. ""Command checks via agent leverage debug container API of the agent to start checks. Each such invocation triggers a bunch of logs on the agent, because the API was not originally designed with periodic invocations in mind. We should find a way to avoid excessive logging on the agent.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7364","04/06/2017 18:16:35",3,"Upgrade vendored GMock / GTest ""We currently vendor gmock 1.7.0. The latest upstream version of gmock is 1.8.0, which fixes at least one annoying warning (MESOS-6539).""","",0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7367","04/07/2017 13:39:22",1,"MasterAPITest.GetRoles is flaky on machines with non-C locale. ""{{MasterAPITest.GetRoles}} test sets role weight to a real number using {{.}} as a decimal mark. This however is not correct on machines with non-standard locale, because weight parsing code relies on locale: [https://github.com/apache/mesos/blob/7f04cf886fc2ed59414bf0056a2f351959a2d1f8/src/master/master.cpp#L727-L750]. This leads to test failures: [https://pastebin.com/sQR2Tr2Q]. There are several solutions here. h4. 1. Change parsing code to be locale-agnostic. This seems to be the most robust solution. However, the {{--weights}} flag is deprecated and will probably be removed soon, together with the parsing code. h4. 2. 
Fix call sites in our tests to ensure decimal mark is locale dependent. This seems like a reasonable solution, but I'd argue we can do even better. h4. 3. Use locale-agnostic format for doubles in tests. Instead of saying {{""""2.5""""}} we can say {{""""25e-1""""}} which is locale agnostic.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7374","04/10/2017 03:57:00",3,"Running DOCKER images in Mesos Container Runtime without `linux/filesystem` isolation enabled renders host unusable ""If I run the pod below (using Marathon 1.4.2) against a mesos agent that has the flags (also below), then the overlay filesystem replaces the system root mount, effectively rendering the host unusable until reboot. flags: - {{--containerizers mesos,docker}} - {{--image_providers APPC,DOCKER}} - {{--isolation cgroups/cpu,cgroups/mem,docker/runtime}} pod definition for Marathon: Mesos should probably check for this and avoid replacing the system root mount point at startup or launch time."""," { """"id"""": """"/simplepod"""", """"scaling"""": { """"kind"""": """"fixed"""", """"instances"""": 1 }, """"containers"""": [ { """"name"""": """"sleep1"""", """"exec"""": { """"command"""": { """"shell"""": """"sleep 1000"""" } }, """"resources"""": { """"cpus"""": 0.1, """"mem"""": 32 }, """"image"""": { """"id"""": """"alpine"""", """"kind"""": """"DOCKER"""" } } ], """"networks"""": [ {""""mode"""": """"host""""} ] } ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7377","04/11/2017 16:57:32",2,"Add authentication to the checker and health checker libraries ""The health checker library in {{src/checks/health_checker.cpp}} must be updated to authenticate with the agent's HTTP operator API.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-7414","04/24/2017 09:50:46",5,"Enable authorization for master's logging API calls: GET_LOGGING_LEVEL and SET_LOGGING_LEVEL ""The Operator API calls {{GET_LOGGING_LEVEL}} and {{SET_LOGGING_LEVEL}} lack authorization so any recognized user will be able to change the logging level of a given master. The v0 endpoint {{/logging/toggle}} has authorization through the {{GET_ENDPOINT_WITH_PATH}} action. We need to decide whether it should also use additional authorization. Note that there are already actions defined for authorization of these actions as they were already implemented in the agent.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7415","04/24/2017 14:08:21",3,"Add authorization to master's operator maintenance API in v0 and v1 ""None of the maintenance primitives in either API v0 or API v1 have any kind of authorization, which allows any user with valid credentials to do things such as shutting down a machine, schedule time off on an agent, modify maintenance schedule, etc. 
The authorization support needs to be added to the v0 endpoints: * {{/master/machine/up}} * {{/master/machine/down}} * {{/master/maintenance/schedule}} * {{/master/maintenance/status}} as well as to the v1 calls: * {{GET_MAINTENANCE_STATUS}} * {{GET_MAINTENANCE_SCHEDULE}} * {{UPDATE_MAINTENANCE_SCHEDULE}} * {{START_MAINTENANCE}} * {{STOP_MAINTENANCE}}""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7416","04/24/2017 14:24:23",5,"Filter results of `/master/slaves` and the v1 call GET_AGENTS ""The results returned by both the endpoint {{/master/slaves}} and the API v1 {{GET_AGENTS}} return full information about the agent state which probably need to be filtered for certain uses, particularly in a multi-tenancy scenario. The kind of leaked data includes specific role names and their specific allocations.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7431","04/27/2017 03:55:37",5,"Registry puller cannot fetch manifests from Google GCR: 403 Forbidden. ""When the registry puller is pulling a repository from Google's GCE Container Registry, a '403 Forbidden' error occurs instead of 401 when fetching manifests.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7433","04/27/2017 22:53:40",3,"Set working directory in DEBUG containers. ""Currently working directory is not set for DEBUG containers. The most reasonable value seems to be parent's working directory.""","",0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7438","04/28/2017 23:42:02",2,"Double free or corruption when using parallel test runner ""I observed the following when using the parallel test runner: Not sure how reproducible this is, appears to occur in the authentication path of the agent."""," /home/bmahler/git/mesos/build/../support/mesos-gtest-runner.py --sequential=*ROOT_* ./mesos-tests .. 
*** Error in `/home/bmahler/git/mesos/build/src/.libs/mesos-tests': double free or corruption (out): 0x00007fa818001310 *** ======= Backtrace: ========= /usr/lib64/libc.so.6(+0x7c503)[0x7fa87f27e503] /usr/lib64/libsasl2.so.3(+0x866d)[0x7fa880f0d66d] /usr/lib64/libsasl2.so.3(sasl_dispose+0x3b)[0x7fa880f1075b] /home/bmahler/git/mesos/build/src/.libs/libmesos-1.3.0.so(_ZN5mesos8internal8cram_md527CRAMMD5AuthenticateeProcessD1Ev+0x5d)[0x7fa88708f67d] /home/bmahler/git/mesos/build/src/.libs/libmesos-1.3.0.so(_ZN5mesos8internal8cram_md527CRAMMD5AuthenticateeProcessD0Ev+0x18)[0x7fa88708f734] /home/bmahler/git/mesos/build/src/.libs/libmesos-1.3.0.so(_ZN5mesos8internal8cram_md520CRAMMD5AuthenticateeD1Ev+0xfb)[0x7fa88708a065] /home/bmahler/git/mesos/build/src/.libs/libmesos-1.3.0.so(_ZN5mesos8internal8cram_md520CRAMMD5AuthenticateeD0Ev+0x18)[0x7fa88708a0b4] /home/bmahler/git/mesos/build/src/.libs/libmesos-1.3.0.so(_ZN5mesos8internal5slave5Slave13_authenticateEv+0x67)[0x7fa8879ff579] /home/bmahler/git/mesos/build/src/.libs/libmesos-1.3.0.so(_ZZN7process8dispatchIN5mesos8internal5slave5SlaveEEEvRKNS_3PIDIT_EEMS6_FvvEENKUlPNS_11ProcessBaseEE_clESD_+0xe2)[0x7fa887a60b7a] /home/bmahler/git/mesos/build/src/.libs/libmesos-1.3.0.so(_ZNSt17_Function_handlerIFvPN7process11ProcessBaseEEZNS0_8dispatchIN5mesos8internal5slave5SlaveEEEvRKNS0_3PIDIT_EEMSA_FvvEEUlS2_E_E9_M_invokeERKSt9_Any_dataS2_+0x37)[0x7fa887aa0efe] /home/bmahler/git/mesos/build/src/.libs/libmesos-1.3.0.so(_ZNKSt8functionIFvPN7process11ProcessBaseEEEclES2_+0x49)[0x7fa8888d1177] /home/bmahler/git/mesos/build/src/.libs/libmesos-1.3.0.so(_ZN7process11ProcessBase5visitERKNS_13DispatchEventE+0x2f)[0x7fa8888b5063] /home/bmahler/git/mesos/build/src/.libs/libmesos-1.3.0.so(_ZNK7process13DispatchEvent5visitEPNS_12EventVisitorE+0x2e)[0x7fa8888c0422] /home/bmahler/git/mesos/build/src/.libs/mesos-tests(_ZN7process11ProcessBase5serveERKNS_5EventE+0x2e)[0xb088c8] /home/bmahler/git/mesos/build/src/.libs/libmesos-1.3.0.so(_ZN7process14ProcessManager6resumeEPNS_11ProcessBaseE+0x525)[0x7fa8888b10d5] /home/bmahler/git/mesos/build/src/.libs/libmesos-1.3.0.so(+0x5f1a880)[0x7fa8888ad880] /home/bmahler/git/mesos/build/src/.libs/libmesos-1.3.0.so(+0x5f2ca8a)[0x7fa8888bfa8a] /home/bmahler/git/mesos/build/src/.libs/libmesos-1.3.0.so(+0x5f2c9ce)[0x7fa8888bf9ce] /home/bmahler/git/mesos/build/src/.libs/libmesos-1.3.0.so(+0x5f2c958)[0x7fa8888bf958] /usr/lib64/libstdc++.so.6(+0xb5230)[0x7fa87fb90230] /usr/lib64/libpthread.so.0(+0x7dc5)[0x7fa88040ddc5] /usr/lib64/libc.so.6(clone+0x6d)[0x7fa87f2f973d] ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7449","05/01/2017 22:24:20",5,"Refactor containerizers to not depend on TaskInfo or ExecutorInfo ""The Containerizer interfaces should be refactored so that they do not depend on {{TaskInfo}} or {{ExecutorInfo}}, as a standalone container will have neither. Currently, the {{launch}} interface depends on those fields. 
Instead, we should consistently use {{ContainerInfo}} and {{CommandInfo}} in Containerizer and isolators.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7457","05/03/2017 21:44:53",2,"HierarchicalAllocatorTest.NestedRoleQuota is flaky "" """," 10:25:30 [ RUN ] HierarchicalAllocatorTest.NestedRoleQuota 10:25:30 I0503 03:25:30.335433 1601536 hierarchical.cpp:158] Initialized hierarchical allocator process 10:25:30 I0503 03:25:30.335777 1601536 hierarchical.cpp:273] Added framework framework1 10:25:30 I0503 03:25:30.335902 1601536 hierarchical.cpp:1286] Set quota cpus(*):2; mem(*):1024 for role 'a/b' 10:25:30 I0503 03:25:30.335971 1601536 hierarchical.cpp:1850] No allocations performed 10:25:30 I0503 03:25:30.335984 1601536 hierarchical.cpp:1940] No inverse offers to send out! 10:25:30 I0503 03:25:30.336010 1601536 hierarchical.cpp:1434] Performed allocation for 0 agents in 73us 10:25:30 I0503 03:25:30.336104 1601536 hierarchical.cpp:525] Added agent agent1 (agent1) with cpus(*):1; mem(*):512 (allocated: {}) 10:25:30 I0503 03:25:30.336408 1601536 hierarchical.cpp:1940] No inverse offers to send out! 10:25:30 I0503 03:25:30.336423 1601536 hierarchical.cpp:1434] Performed allocation for 1 agents in 287us 10:25:30 I0503 03:25:30.336890 3211264 hierarchical.cpp:1114] Recovered cpus(*)(allocated: a/b):1; mem(*)(allocated: a/b):512 (total: cpus(*):1; mem(*):512, allocated: {}) on agent agent1 from framework framework1 10:25:30 I0503 03:25:30.336913 3211264 hierarchical.cpp:1151] Framework framework1 filtered agent agent1 for 10secs 10:25:30 I0503 03:25:30.337071 3211264 hierarchical.cpp:273] Added framework framework2 10:25:30 I0503 03:25:30.337180 3211264 hierarchical.cpp:2084] Filtered offer with cpus(*):1; mem(*):512 on agent agent1 for role a/b of framework framework1 10:25:30 I0503 03:25:30.337229 3211264 hierarchical.cpp:1850] No allocations performed 10:25:30 I0503 03:25:30.337249 3211264 hierarchical.cpp:1940] No inverse offers to send out! 10:25:30 I0503 03:25:30.337263 3211264 hierarchical.cpp:1434] Performed allocation for 1 agents in 150us 10:25:30 I0503 03:25:30.337530 1601536 hierarchical.cpp:273] Added framework framework3 10:25:30 I0503 03:25:30.337641 1601536 hierarchical.cpp:2084] Filtered offer with cpus(*):1; mem(*):512 on agent agent1 for role a/b of framework framework1 10:25:30 I0503 03:25:30.337684 1601536 hierarchical.cpp:1850] No allocations performed 10:25:30 I0503 03:25:30.337699 1601536 hierarchical.cpp:1940] No inverse offers to send out! 10:25:30 I0503 03:25:30.337713 1601536 hierarchical.cpp:1434] Performed allocation for 1 agents in 160us 10:25:30 I0503 03:25:30.337847 1601536 hierarchical.cpp:1286] Set quota cpus(*):1; mem(*):512 for role 'a/b/d' 10:25:45 ../../src/tests/hierarchical_allocator_tests.cpp:4498: Failure 10:25:45 Failed to wait 15secs for allocation 10:25:45 [ FAILED ] HierarchicalAllocatorTest.NestedRoleQuota (15020 ms) ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7471","05/08/2017 20:40:40",2,"Provisioner recover should not always assume 'rootfses' dir exists. ""The mesos agent would restart due to many reasons (e.g., disk full). Always assume the provisioner 'rootfses' dir exists would block the agent to recover. 
This issue may occur due to the race between removing the provisioner container dir and the agent restarts: In provisioner recover, when listing the container rootfses, it is possible that the 'rootfses' dir does not exist. Because a possible race between the provisioner destroy and the agent restart. For instance, while the provisioner is destroying the container dir the agent restarts. Due to os::rmdir() is recursive by traversing the FTS tree, it is possible that 'rootfses' dir is removed but the others (e.g., scratch dir) are not. Currently, we are returning an error if the 'rootfses' dir does not exist, which blocks the agent from recovery. We should skip it if 'rootfses' does not exist."""," Failed to perform recovery: Collect failed: Unable to list rootfses belonged to container a30b74d5-53ac-4fbf-b8f3-5cfba58ea847: Unable to list the backend directory: Failed to opendir '/var/lib/mesos/slave/provisioner/containers/a30b74d5-53ac-4fbf-b8f3-5cfba58ea847/backends/overlay/rootfses': No such file or directory May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.058349 11441 linux_launcher.cpp:429] Launching container a30b74d5-53ac-4fbf-b8f3-5cfba58ea847 and cloning with namespaces CLONE_NEWNS | CLONE_NEWPID May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.072191 11441 systemd.cpp:96] Assigned child process '11577' to 'mesos_executors.slice' May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.075932 11439 containerizer.cpp:1592] Checkpointing container's forked pid 11577 to '/var/lib/mesos/slave/meta/slaves/36a25adb-4ea2-49d3-a195-448cff1dc146-S34/frameworks/6dd898d6-7f3a-406c-8ead-24b4d55ed262-0008/executors/node__fc5e0825-f10e-465c-a2e2-938b9dc3fe05/runs/a30b74d5-53ac-4fbf-b8f3-5cfba58ea847/pids/forked.pid' May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.081516 11438 linux_launcher.cpp:429] Launching container 03a57a37-eede-46ec-8420-dda3cc54e2e0 and cloning with namespaces CLONE_NEWNS | CLONE_NEWPID May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.083516 11438 systemd.cpp:96] Assigned child process '11579' to 'mesos_executors.slice' May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.087345 11444 containerizer.cpp:1592] Checkpointing container's forked pid 11579 to '/var/lib/mesos/slave/meta/slaves/36a25adb-4ea2-49d3-a195-448cff1dc146-S34/frameworks/36a25adb-4ea2-49d3-a195-448cff1dc146-0002/executors/66897/runs/03a57a37-eede-46ec-8420-dda3cc54e2e0/pids/forked.pid' May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[17142]: Failed to write: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: W0505 02:14:32.213049 11440 fetcher.cpp:896] Begin fetcher log (stderr in sandbox) for container 6aebb9e0-fd2c-4a42-b8f4-bd6ba11e9eac from running command: /opt/mesosphere/packages/mesos--aaedd03eee0d57f5c0d49c74ff1e5721862cad98/libexec/mesos/mesos-fetcher May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.006201 11561 fetcher.cpp:531] Fetcher Info: 
{""""cache_directory"""":""""\/tmp\/mesos\/fetch\/slaves\/36a25adb-4ea2-49d3-a195-448cff1dc146-S34\/root"""",""""items"""":[{""""action"""":""""BYPASS_CACHE"""",""""uri"""":{""""extract"""":true,""""value"""":""""https:\/\/downloads.mesosphere.com\/libmesos-bundle\/libmesos-bundle-1.9.0-rc2-1.2.0-rc2-1.tar.gz""""}},{""""action"""":""""BYPASS_CACHE"""", May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.009678 11561 fetcher.cpp:442] Fetching URI 'https://downloads.mesosphere.com/libmesos-bundle/libmesos-bundle-1.9.0-rc2-1.2.0-rc2-1.tar.gz' May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.009693 11561 fetcher.cpp:283] Fetching directly into the sandbox directory May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.009711 11561 fetcher.cpp:220] Fetching URI 'https://downloads.mesosphere.com/libmesos-bundle/libmesos-bundle-1.9.0-rc2-1.2.0-rc2-1.tar.gz' May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.009723 11561 fetcher.cpp:163] Downloading resource from 'https://downloads.mesosphere.com/libmesos-bundle/libmesos-bundle-1.9.0-rc2-1.2.0-rc2-1.tar.gz' to '/var/lib/mesos/slave/slaves/36a25adb-4ea2-49d3-a195-448cff1dc146-S34/frameworks/6dd898d6-7f3a-406c-8ead-24b4d55ed262-0011/executors/hello__91922a16-889e-4e94-9dab-9f6754f091de/ May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: Failed to fetch 'https://downloads.mesosphere.com/libmesos-bundle/libmesos-bundle-1.9.0-rc2-1.2.0-rc2-1.tar.gz': Error downloading resource: Failed writing received data to disk/application May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: End fetcher log for container 6aebb9e0-fd2c-4a42-b8f4-bd6ba11e9eac May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: E0505 02:14:32.213114 11440 fetcher.cpp:558] Failed to run mesos-fetcher: Failed to fetch all URIs for container '6aebb9e0-fd2c-4a42-b8f4-bd6ba11e9eac' with exit status: 256 May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: E0505 02:14:32.213351 11444 slave.cpp:4642] Container '6aebb9e0-fd2c-4a42-b8f4-bd6ba11e9eac' for executor 'hello__91922a16-889e-4e94-9dab-9f6754f091de' of framework 6dd898d6-7f3a-406c-8ead-24b4d55ed262-0011 failed to start: Failed to fetch all URIs for container '6aebb9e0-fd2c-4a42-b8f4-bd6ba11e9eac' with exit status: 256 May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.213614 11443 containerizer.cpp:2071] Destroying container 6aebb9e0-fd2c-4a42-b8f4-bd6ba11e9eac in FETCHING state May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.213977 11443 linux_launcher.cpp:505] Asked to destroy container 6aebb9e0-fd2c-4a42-b8f4-bd6ba11e9eac May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.214757 11443 linux_launcher.cpp:548] Using freezer to destroy cgroup mesos/6aebb9e0-fd2c-4a42-b8f4-bd6ba11e9eac May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[17142]: Failed to write: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.216047 11444 cgroups.cpp:2692] Freezing cgroup /sys/fs/cgroup/freezer/mesos/6aebb9e0-fd2c-4a42-b8f4-bd6ba11e9eac May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.218407 11443 cgroups.cpp:1405] Successfully froze 
cgroup /sys/fs/cgroup/freezer/mesos/6aebb9e0-fd2c-4a42-b8f4-bd6ba11e9eac after 2.326016ms May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.220391 11445 cgroups.cpp:2710] Thawing cgroup /sys/fs/cgroup/freezer/mesos/6aebb9e0-fd2c-4a42-b8f4-bd6ba11e9eac May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.222124 11445 cgroups.cpp:1434] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/6aebb9e0-fd2c-4a42-b8f4-bd6ba11e9eac after 1.693952ms May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[17142]: Failed to write: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: E0505 02:14:32.239018 11441 fetcher.cpp:558] Failed to run mesos-fetcher: Failed to create 'stdout' file: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: E0505 02:14:32.239162 11442 slave.cpp:4642] Container 'a30b74d5-53ac-4fbf-b8f3-5cfba58ea847' for executor 'node__fc5e0825-f10e-465c-a2e2-938b9dc3fe05' of framework 6dd898d6-7f3a-406c-8ead-24b4d55ed262-0008 failed to start: Failed to create 'stdout' file: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.239284 11445 containerizer.cpp:2071] Destroying container a30b74d5-53ac-4fbf-b8f3-5cfba58ea847 in FETCHING state May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.239390 11444 linux_launcher.cpp:505] Asked to destroy container a30b74d5-53ac-4fbf-b8f3-5cfba58ea847 May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.240103 11444 linux_launcher.cpp:548] Using freezer to destroy cgroup mesos/a30b74d5-53ac-4fbf-b8f3-5cfba58ea847 May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.241353 11440 cgroups.cpp:2692] Freezing cgroup /sys/fs/cgroup/freezer/mesos/a30b74d5-53ac-4fbf-b8f3-5cfba58ea847 May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.243120 11444 cgroups.cpp:1405] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/a30b74d5-53ac-4fbf-b8f3-5cfba58ea847 after 1.726976ms May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[17142]: Failed to write: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.245045 11440 cgroups.cpp:2710] Thawing cgroup /sys/fs/cgroup/freezer/mesos/a30b74d5-53ac-4fbf-b8f3-5cfba58ea847 May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.246800 11440 cgroups.cpp:1434] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/a30b74d5-53ac-4fbf-b8f3-5cfba58ea847 after 1.715968ms May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[17142]: Failed to write: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[17142]: Failed to write: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[17142]: Failed to write: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[17142]: Failed to write: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[17142]: Failed to write: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[17142]: Failed to write: No space left on device May 05 02:14:32 
ip-172-31-7-83.us-west-2.compute.internal mesos-agent[17142]: Failed to write: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.285477 11438 slave.cpp:1625] Got assigned task 'dse-1-agent__720d6f09-9d60-4667-b224-abcd495e0e58' for framework 6dd898d6-7f3a-406c-8ead-24b4d55ed262-0009 May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[17142]: Failed to write: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: F0505 02:14:32.296481 11438 slave.cpp:6381] CHECK_SOME(state::checkpoint(path, info)): Failed to create temporary file: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: *** Check failure stack trace: *** May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: @ 0x7f5856be857d google::LogMessage::Fail() May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[17142]: Failed to write: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: @ 0x7f5856bea3ad google::LogMessage::SendToLog() May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: @ 0x7f5856be816c google::LogMessage::Flush() May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: @ 0x7f5856beaca9 google::LogMessageFatal::~LogMessageFatal() May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[17142]: Failed to write: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: @ 0x7f5855e4b5e9 _CheckFatal::~_CheckFatal() May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.314082 11445 containerizer.cpp:2434] Container 6aebb9e0-fd2c-4a42-b8f4-bd6ba11e9eac has exited May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.314826 11440 containerizer.cpp:2434] Container a30b74d5-53ac-4fbf-b8f3-5cfba58ea847 has exited May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[17142]: Failed to write: No space left on device May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.316660 11439 container_assigner.cpp:101] Unregistering container_id[value: """"6aebb9e0-fd2c-4a42-b8f4-bd6ba11e9eac""""]. May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.316761 11474 container_assigner_strategy.cpp:202] Closing ephemeral-port reader for container[value: """"6aebb9e0-fd2c-4a42-b8f4-bd6ba11e9eac""""] at endpoint[198.51.100.1:34273]. May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.316804 11474 container_reader_impl.cpp:38] Triggering ContainerReader shutdown May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.316833 11474 sync_util.hpp:39] Dispatching and waiting <=5s for ticket 7: ~ContainerReaderImpl:shutdown May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.316769 11439 container_assigner.cpp:101] Unregistering container_id[value: """"a30b74d5-53ac-4fbf-b8f3-5cfba58ea847""""]. May 05 02:14:32 ip-172-31-7-83.us-west-2.compute.internal mesos-agent[11432]: I0505 02:14:32.316864 11474 container_reader ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0 +"MESOS-7474","05/08/2017 22:27:25",5,"Mesos fetcher cache doesn't retry when missed. 
""Mesos Fetcher doesn't retry when a cache is missed. It needs to have the ability to pull from source when it fails. 421 15:52:53.022902 32751 fetcher.cpp:498] Fetcher Info: {""""cache_directory"""":""""\/tmp\/mesos\/fetch\/slaves\/"""",""""items"""":[\{""""action"""":""""RETRIEVE_FROM_CACHE"""",""""cache_filename"""":"""")"""",""""uri"""":\{""""cache"""":true,""""executable"""":false,""""extract"""":true,""""value"""":""""https:\/\/\/""""}}],""""sandbox_directory"""":""""\/var\/lib\/mesos\/slave\/slaves\/\/frameworks\\/executors\/name\/runs\/""""} I0421 15:52:53.024926 32751 fetcher.cpp:409] Fetching URI '""""https:\/\/\/"""" I0421 15:52:53.024942 32751 fetcher.cpp:306] Fetching from cache I0421 15:52:53.024958 32751 fetcher.cpp:84] Extracting with command: tar -C """"\/var\/lib\/mesos\/slave\/slaves\/\/frameworks\\/executors\/name\/runs\/' -xf '/tmp/mesos/fetch/slaves/f3feeab8-a2fe-4ac1-afeb-ec7bd4ce7b0d-S29/c1-docker-hub.tar.gz' tar: /""""https:\/\/\/"""": Cannot open: No such file or directory tar: Error is not recoverable: exiting now Failed to fetch '""""https:\/\/\/""""': Failed to extract: command tar -C '""""\/var\/lib\/mesos\/slave\/slaves\/\/frameworks\\/executors\/name\/runs\/' -xf '/tmp/mesos/fetch/slaves/""""' exited with status: 512 ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7488","05/10/2017 07:41:32",5,"Add `--ip6` and `--ip6_discovery_command` flag to Mesos agent ""As a first step to support IPv6 containers on Mesos, we need to provide {{--ip6}} and {{--ip6_discovery_command}} flags to the agent so that the operator can specify an IPv6 address for the {{libprocess}} actor on the agent. In this ticket we will not aim to add IPv6 communication support for Mesos but will aim to use the IPv6 address provided by the operator to fill in the v6 address for any containers running on the host network in a dual stack environment.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7502","05/11/2017 21:29:26",1,"Build error on Windows when using ""int"" for a file descriptor ""There is a build error for mesos-tests in src/tests/check_tests.cpp on Windows associated with the use of an """"int"""" file descriptor: C:\mesos\mesos\src\tests\check_tests.cpp(1890): error C2440: 'initializing': cannot convert from 'Try,Error>' to 'Try,Error>' [C:\mesos\mesos\build\src\tests\mesos-tests.vcxproj]""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7504","05/15/2017 18:54:44",3,"Parent's mount namespace cannot be determined when launching a nested container. ""I've observed this failure twice in different Linux environments. 
Here is an example of such failure: """," [ RUN ] NestedMesosContainerizerTest.ROOT_CGROUPS_DestroyDebugContainerOnRecover I0509 21:53:25.471657 17167 containerizer.cpp:221] Using isolation: cgroups/cpu,filesystem/linux,namespaces/pid,network/cni,volume/image I0509 21:53:25.475124 17167 linux_launcher.cpp:150] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher I0509 21:53:25.475407 17167 provisioner.cpp:249] Using default backend 'overlay' I0509 21:53:25.481232 17186 containerizer.cpp:608] Recovering containerizer I0509 21:53:25.482295 17186 provisioner.cpp:410] Provisioner recovery complete I0509 21:53:25.482587 17187 containerizer.cpp:1001] Starting container 21bc372c-0f2c-49f5-b8ab-8d32c232b95d for executor 'executor' of framework I0509 21:53:25.482918 17189 cgroups.cpp:410] Creating cgroup at '/sys/fs/cgroup/cpu,cpuacct/mesos_test_d989f526-efe0-4553-bf79-936ad66c3753/21bc372c-0f2c-49f5-b8ab-8d32c232b95d' for container 21bc372c-0f2c-49f5-b8ab-8d32c232b95d I0509 21:53:25.484103 17190 cpu.cpp:101] Updated 'cpu.shares' to 1024 (cpus 1) for container 21bc372c-0f2c-49f5-b8ab-8d32c232b95d I0509 21:53:25.484808 17186 containerizer.cpp:1524] Launching 'mesos-containerizer' with flags '--help=""""false"""" --launch_info=""""{""""clone_namespaces"""":[131072,536870912],""""command"""":{""""shell"""":true,""""value"""":""""sleep 1000""""},""""environment"""":{""""variables"""":[{""""name"""":""""MESOS_SANDBOX"""",""""type"""":""""VALUE"""",""""value"""":""""\/tmp\/NestedMesosContainerizerTest_ROOT_CGROUPS_DestroyDebugContainerOnRecover_zlywyr""""}]},""""pre_exec_commands"""":[{""""arguments"""":[""""mesos-containerizer"""",""""mount"""",""""--help=false"""",""""--operation=make-rslave"""",""""--path=\/""""],""""shell"""":false,""""value"""":""""\/home\/ubuntu\/workspace\/mesos\/Mesos_CI-build\/FLAG\/SSL\/label\/mesos-ec2-ubuntu-16.04\/mesos\/build\/src\/mesos-containerizer""""},{""""shell"""":true,""""value"""":""""mount -n -t proc proc \/proc -o nosuid,noexec,nodev""""}],""""working_directory"""":""""\/tmp\/NestedMesosContainerizerTest_ROOT_CGROUPS_DestroyDebugContainerOnRecover_zlywyr""""}"""" --pipe_read=""""29"""" --pipe_write=""""32"""" --runtime_directory=""""/tmp/NestedMesosContainerizerTest_ROOT_CGROUPS_DestroyDebugContainerOnRecover_sKhtj7/containers/21bc372c-0f2c-49f5-b8ab-8d32c232b95d"""" --unshare_namespace_mnt=""""false""""' I0509 21:53:25.484978 17189 linux_launcher.cpp:429] Launching container 21bc372c-0f2c-49f5-b8ab-8d32c232b95d and cloning with namespaces CLONE_NEWNS | CLONE_NEWPID I0509 21:53:25.513890 17186 containerizer.cpp:1623] Checkpointing container's forked pid 1873 to '/tmp/NestedMesosContainerizerTest_ROOT_CGROUPS_DestroyDebugContainerOnRecover_Rdjw6M/meta/slaves/frameworks/executors/executor/runs/21bc372c-0f2c-49f5-b8ab-8d32c232b95d/pids/forked.pid' I0509 21:53:25.515878 17190 fetcher.cpp:353] Starting to fetch URIs for container: 21bc372c-0f2c-49f5-b8ab-8d32c232b95d, directory: /tmp/NestedMesosContainerizerTest_ROOT_CGROUPS_DestroyDebugContainerOnRecover_zlywyr I0509 21:53:25.517715 17193 containerizer.cpp:1791] Starting nested container 21bc372c-0f2c-49f5-b8ab-8d32c232b95d.ea991d38-e1a5-44fe-a522-622b15142e35 I0509 21:53:25.518569 17193 switchboard.cpp:545] Launching 'mesos-io-switchboard' with flags '--heartbeat_interval=""""30secs"""" --help=""""false"""" --socket_address=""""/tmp/mesos-io-switchboard-ca463cf2-70ba-4121-a5c6-1a170ae40c1b"""" --stderr_from_fd=""""36"""" --stderr_to_fd=""""2"""" --stdin_to_fd=""""32"""" 
--stdout_from_fd=""""33"""" --stdout_to_fd=""""1"""" --tty=""""false"""" --wait_for_connection=""""true""""' for container 21bc372c-0f2c-49f5-b8ab-8d32c232b95d.ea991d38-e1a5-44fe-a522-622b15142e35 I0509 21:53:25.521229 17193 switchboard.cpp:575] Created I/O switchboard server (pid: 1881) listening on socket file '/tmp/mesos-io-switchboard-ca463cf2-70ba-4121-a5c6-1a170ae40c1b' for container 21bc372c-0f2c-49f5-b8ab-8d32c232b95d.ea991d38-e1a5-44fe-a522-622b15142e35 I0509 21:53:25.522195 17191 containerizer.cpp:1524] Launching 'mesos-containerizer' with flags '--help=""""false"""" --launch_info=""""{""""command"""":{""""shell"""":true,""""value"""":""""sleep 1000""""},""""enter_namespaces"""":[131072,536870912],""""environment"""":{}}"""" --pipe_read=""""32"""" --pipe_write=""""33"""" --runtime_directory=""""/tmp/NestedMesosContainerizerTest_ROOT_CGROUPS_DestroyDebugContainerOnRecover_sKhtj7/containers/21bc372c-0f2c-49f5-b8ab-8d32c232b95d/containers/ea991d38-e1a5-44fe-a522-622b15142e35"""" --unshare_namespace_mnt=""""false""""' ../../src/tests/containerizer/nested_mesos_containerizer_tests.cpp:543: Failure (launch).failure(): Cannot get target mount namespace from process 1873: Cannot get 'mnt' namespace for child process '1885' I0509 21:53:25.536957 17191 cgroups.cpp:2692] Freezing cgroup /sys/fs/cgroup/freezer/mesos_test_d989f526-efe0-4553-bf79-936ad66c3753/21bc372c-0f2c-49f5-b8ab-8d32c232b95d I0509 21:53:25.638844 17192 cgroups.cpp:1405] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos_test_d989f526-efe0-4553-bf79-936ad66c3753/21bc372c-0f2c-49f5-b8ab-8d32c232b95d after 101.84192ms I0509 21:53:25.639927 17189 cgroups.cpp:2710] Thawing cgroup /sys/fs/cgroup/freezer/mesos_test_d989f526-efe0-4553-bf79-936ad66c3753/21bc372c-0f2c-49f5-b8ab-8d32c232b95d I0509 21:53:25.640831 17189 cgroups.cpp:1434] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos_test_d989f526-efe0-4553-bf79-936ad66c3753/21bc372c-0f2c-49f5-b8ab-8d32c232b95d after 872960ns I0509 21:53:25.642843 17189 cgroups.cpp:2692] Freezing cgroup /sys/fs/cgroup/freezer/mesos_test_d989f526-efe0-4553-bf79-936ad66c3753 I0509 21:53:25.745189 17186 cgroups.cpp:1405] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos_test_d989f526-efe0-4553-bf79-936ad66c3753 after 102.276096ms I0509 21:53:25.746119 17189 cgroups.cpp:2710] Thawing cgroup /sys/fs/cgroup/freezer/mesos_test_d989f526-efe0-4553-bf79-936ad66c3753 I0509 21:53:25.747002 17189 cgroups.cpp:1434] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos_test_d989f526-efe0-4553-bf79-936ad66c3753 after 856064ns [ FAILED ] NestedMesosContainerizerTest.ROOT_CGROUPS_DestroyDebugContainerOnRecover (325 ms) ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7506","05/15/2017 19:58:30",8,"Multiple tests leave orphan containers. ""I've observed a number of flaky tests that leave orphan containers upon cleanup. A typical log looks like this: All currently affected tests: """," ../../src/tests/cluster.cpp:580: Failure Value of: containers->empty() Actual: false Expected: true Failed to destroy containers: { da3e8aa8-98e7-4e72-a8fd-5d0bae960014 } SlaveTest.RestartSlaveRequireExecutorAuthentication // cannot reproduce any more ROOT_IsolatorFlags // see MESOS-8489 ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7540","05/22/2017 19:45:37",1,"Add an agent flag for executor re-registration timeout. ""Currently, the executor re-register timeout is hard-coded at 2 seconds. 
It would be beneficial to allow operators to specify this value.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7542","05/22/2017 22:25:07",3,"Add executor reconnection retry logic to the agent ""Currently, the agent sends a single {{ReconnectExecutorMessage}} to PID-based executors during recovery. It would be more robust to have the agent retry these messages until {{executor_reregister_timeout}} has elapsed.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-7546","05/23/2017 00:26:38",3,"WAIT_NESTED_CONTAINER sometimes returns 404 ""{{WAIT_NESTED_CONTAINER}} sometimes returns 404s even though the nested container has already exited and the parent task/executor is still running. This happens when an agent uses more than one containerizer (e.g., {{docker,mesos}}, {{WAIT_NESTED_CONTAINER}} and the exit status of the nested container has already been checkpointed. The root cause of this is a bug in the {{ComposingContainerizer}} in the following lines: https://github.com/apache/mesos/blob/1c7ffbeb505b3f5ab759202195f0b946a20cb803/src/slave/containerizer/composing.cpp#L620-L628 ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7555","05/24/2017 13:20:24",5,"Add resource provider IDs to the registry ""To support resource provider re-registration following a master fail-over, the IDs of registered resource providers need to be kept in the registry. An operation to commit those IDs using the registrar needs to be added as well.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7558","05/24/2017 13:32:09",3,"Add resource provider validation ""Similar to how it's done during agent registration/re-registration, the informations provided by a resource provider need to get validation during certain operation (e.g. re-registration, while applying offer operations, ...). Some of these validations only cover the provided informations (e.g. are the resources in {{ResourceProviderInfo}} only of type {{disk}}), others take the current cluster state into account (e.g. do the resources that a task wants to use exist on the resource provider).""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7561","05/24/2017 22:28:14",2,"Add storage resource provider specific information in ResourceProviderInfo. ""For storage resource provider, there will be some specific configuration information. For instance, the most important one is the `ContainerConfig` of the CSI Plugin container. That config information will be sent to the corresponding agent that will use the resources provided by the resource provider. For storage resource provider particularly, the agent needs to launch the CSI Node Plugin to mount the volumes. Comparing to adding first class storage resource provider information, an alternative is to add a generic labels field in ResourceProviderInfo and let resource provider itself figure out the format of the labels. However, I believe a first class solution is better and more clear.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-7564","05/25/2017 01:24:49",5,"Introduce a heartbeat mechanism for v1 HTTP executor <-> agent communication. ""Currently, we do not have heartbeats for executor <-> agent communication. 
This is especially problematic in scenarios when IPFilters are enabled since the default conntrack keep alive timeout is 5 days. When that timeout elapses, the executor doesn't get notified via a socket disconnection when the agent process restarts. The executor would then get killed if it doesn't re-register when the agent recovery process is completed. Enabling application level heartbeats or TCP KeepAlive's can be a possible way for fixing this issue. We should also update executor API documentation to explain the new behavior.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-7578","05/26/2017 22:21:24",5,"Write a proposal to make the I/O Switchboards optional ""Right now DEBUG containers can only be started using the LaunchNestedContainerSession API call. They will enter its parent’s namespaces, inherit environment variables, stream its I/O, and Mesos will tie their life-cycle to the lifetime of the HTTP connection. Streaming the I/O of a container requires an I/O Switchboard and adds some overhead and complexity: - Mesos will launch an extra process, called an I/O Switchboard for each nested container. These process aren’t free, they take some time to create/destroy and consume resources. - I/O Switchboards are managed by a complex isolator. - /O Swichboards introduce new race conditions, and have been a source of deadlocks in the past. Some use cases require some of the features provided by DEBUG containers, but don’t need the functionality provided by the I/O switchboard. For instance, the Default Executor uses DEBUG containers to perform (health)checks, but it doesn’t need to stream anything to/from the container. ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7581","05/29/2017 09:34:17",1,"Specifying an unbundled dependency can cause build to pick up wrong Boost version ""Specifying an unbundled dependency can cause the build to pick up a wrong Boost version. Assuming we have e.g., both protobuf and Boost installed in {{PREFIX}}, configuring with {{--with-protobuf=PREFIX}} causes the build to pick up the Boost version from {{PREFIX}} instead of using the bundled one. This appears to be due to how we specify Boost include paths. Boost paths are added with {{-isystem}} to suppress warnings; the protobuf include path, on the other hand, would be added with {{-I}}. GCC and for compatibility clang first search all paths specified with {{-I}} left-to-right before looking at paths given with {{-isystem}}, see [the GCC documenation|https://gcc.gnu.org/onlinedocs/gcc/Directory-Options.html].""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7594","05/31/2017 13:36:13",5,"Implement 'apply' for resource provider related operations ""Resource providers provide new offer operations ({{CREATE_BLOCK}}, {{DESTROY_BLOCK}}, {{CREATE_VOLUME}}, {{DESTROY_VOLUME}}). These operations can be applied by frameworks when they accept on offer. Handling of these operations has to be added to the master's {{accept}} call. I.e. the corresponding resource provider needs be extracted from the offer's resources and a {{resource_provider::Event::OPERATION}} has to be sent to the resource provider. 
The resource provider will answer with a {{resource_provider::Call::Update}} which needs to be handled as well.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7604","06/02/2017 01:27:12",1,"SlaveTest.ExecutorReregistrationTimeoutFlag aborts on Windows """""," [ RUN ] SlaveTest.ExecutorReregistrationTimeoutFlag rk ae9679b1-67c9-4db6-8187-0641b0e929d2-0000 I0601 23:53:23.488337 2748 master.cpp:1156] Master terminating I0601 23:53:23.492337 2728 hierarchical.cpp:579] Removed agent ae9679b1-67c9-4db6-8187-0641b0e929d2-S0 I0601 23:53:23.530340 1512 cluster.cpp:162] Creating default 'local' authorizer I0601 23:53:23.544342 2728 master.cpp:436] Master f07f4fdd-cd91-4d62-bf33-169b20d02020 (ip-172-20-128-1.ec2.internal) started on 172.20.128.1:51241 I0601 23:53:23.545341 2728 master.cpp:438] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""false"""" --authenticate_frameworks=""""false"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""C:\temp\FWZORI\credentials"""" --filter_gpu_resources=""""true"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_unreachable_tasks_per_framework=""""1000"""" --port=""""5050"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""in_memory"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/webui"""" --work_dir=""""C:\temp\FWZORI\master"""" --zk_session_timeout=""""10secs"""" I0601 23:53:23.550338 2728 master.cpp:515] Master only allowing authenticated HTTP frameworks to register I0601 23:53:23.550338 2728 credentials.hpp:37] Loading credentials for authentication from 'C:\temp\FWZORI\credentials' I0601 23:53:23.552338 2728 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readonly' I0601 23:53:23.553339 2728 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' I0601 23:53:23.554340 2728 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' I0601 23:53:23.555341 2728 master.cpp:640] Authorization enabled I0601 23:53:23.570340 2124 master.cpp:2159] Elected as the leading master! 
I0601 23:53:23.570340 2124 master.cpp:1698] Recovering from registrar I0601 23:53:23.573341 1920 registrar.cpp:389] Successfully fetched the registry (0B) in 0ns I0601 23:53:23.573341 1920 registrar.cpp:493] Applied 1 operations in 0ns; attempting to update the registry I0601 23:53:23.575342 1920 registrar.cpp:550] Successfully updated the registry in 0ns I0601 23:53:23.576344 1920 registrar.cpp:422] Successfully recovered registrar I0601 23:53:23.577342 2728 master.cpp:1797] Recovered 0 agents from the registry (167B); allowing 10mins for agents to re-register I0601 23:53:23.595341 1512 containerizer.cpp:230] Using isolation: windows/cpu,filesystem/windows,environment_secret I0601 23:53:23.596343 1512 provisioner.cpp:255] Using default backend 'copy' I0601 23:53:23.626343 3976 slave.cpp:248] Mesos agent started on (133)@172.20.128.1:51241 I0601 23:53:23.627342 3976 slave.cpp:249] Flags at startup: --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""C:\temp\kglZbS\store\appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""//./pipe/docker_engine"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""C:\temp\kglZbS\store\docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_reregistration_timeout=""""15secs"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""C:\temp\kglZbS\fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_command_executor=""""false"""" --http_credentials=""""C:\temp\kglZbS\http_credentials"""" --http_heartbeat_interval=""""30secs"""" --initialize_driver_logging=""""true"""" --isolation=""""windows/cpu"""" --launcher=""""windows"""" --launcher_dir=""""C:\Users\Administrator\workspace\mesos\Mesos_CI-build\FLAG\Plain\label\mesos-ec2-windows\mesos\build\src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --oversubscribed_resources_interval=""""15secs"""" --port=""""5051"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --runtime_dir=""""C:\temp\kglZbS"""" --sandbox_directory=""""C:\mesos\sandbox"""" --strict=""""true"""" --version=""""false"""" --work_dir=""""C:\temp\b1wVnd"""" I0601 23:53:23.632310 3976 credentials.hpp:37] Loading credentials for authentication from 'C:\temp\kglZbS\http_credentials' I0601 23:53:23.634342 3976 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' I0601 23:53:23.635347 3976 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' I0601 23:53:23.640344 3976 slave.cpp:552] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; 
ports(*):[31000-32000] I0601 23:53:23.641345 3976 slave.cpp:560] Agent attributes: [ ] I0601 23:53:23.641345 3976 slave.cpp:565] Agent hostname: ip-172-20-128-1.ec2.internal I0601 23:53:23.641345 2124 status_update_manager.cpp:177] Pausing sending status updates I0601 23:53:23.643345 1512 sched.cpp:232] Version: 1.4.0 I0601 23:53:23.645345 1920 sched.cpp:336] New master detected at master@172.20.128.1:51241 I0601 23:53:23.646344 1920 sched.cpp:365] Authentication is not available on this platform. Attempting to register without authentication I0601 23:53:23.647344 4168 master.cpp:2811] Received SUBSCRIBE call for framework 'default' at scheduler-895fb702-afb9-42fe-8802-83c47f72432e@172.20.128.1:51241 I0601 23:53:23.647344 4168 master.cpp:2195] Authorizing framework principal 'test-principal' to receive offers for roles '{ * }' I0601 23:53:23.648345 2728 state.cpp:62] Recovering state from 'C:\temp\b1wVnd\meta' I0601 23:53:23.649345 4168 master.cpp:2811] Received SUBSCRIBE call for framework 'default' at scheduler-895fb702-afb9-42fe-8802-83c47f72432e@172.20.128.1:51241 I0601 23:53:23.649345 4168 master.cpp:2195] Authorizing framework principal 'test-principal' to receive offers for roles '{ * }' I0601 23:53:23.649345 3976 status_update_manager.cpp:203] Recovering status update manager I0601 23:53:23.650308 4168 master.cpp:2888] Subscribing framework default with checkpointing enabled and capabilities [ ] I0601 23:53:23.650308 4496 containerizer.cpp:582] Recovering containerizer I0601 23:53:23.652308 4168 master.cpp:2888] Subscribing framework default with checkpointing enabled and capabilities [ ] I0601 23:53:23.652308 1920 sched.cpp:759] Framework registered with f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 I0601 23:53:23.652308 4168 master.cpp:2898] Framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 (default) at scheduler-895fb702-afb9-42fe-8802-83c47f72432e@172.20.128.1:51241 already subscribed, resending acknowledgement I0601 23:53:23.653309 3976 hierarchical.cpp:294] Added framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 I0601 23:53:23.658309 1920 provisioner.cpp:416] Provisioner recovery complete I0601 23:53:23.659309 2748 slave.cpp:6119] Finished recovery I0601 23:53:23.662308 4496 status_update_manager.cpp:177] Pausing sending status updates I0601 23:53:23.662308 2748 slave.cpp:945] New master detected at master@172.20.128.1:51241 I0601 23:53:23.663310 2748 slave.cpp:969] No credentials provided. 
Attempting to register without authentication I0601 23:53:23.663310 2748 slave.cpp:980] Detecting new master I0601 23:53:23.664309 3976 master.cpp:5425] Received register agent message from slave(133)@172.20.128.1:51241 (ip-172-20-128-1.ec2.internal) I0601 23:53:23.665309 3976 master.cpp:3657] Authorizing agent without a principal I0601 23:53:23.666309 2124 master.cpp:5564] Registering agent at slave(133)@172.20.128.1:51241 (ip-172-20-128-1.ec2.internal) with id f07f4fdd-cd91-4d62-bf33-169b20d02020-S0 I0601 23:53:23.668309 3976 registrar.cpp:493] Applied 1 operations in 0ns; attempting to update the registry I0601 23:53:23.670310 3976 registrar.cpp:550] Successfully updated the registry in 0ns I0601 23:53:23.674311 1920 slave.cpp:1148] Registered with master master@172.20.128.1:51241; given agent ID f07f4fdd-cd91-4d62-bf33-169b20d02020-S0 I0601 23:53:23.674311 4168 master.cpp:5642] Registered agent f07f4fdd-cd91-4d62-bf33-169b20d02020-S0 at slave(133)@172.20.128.1:51241 (ip-172-20-128-1.ec2.internal) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0601 23:53:23.675309 3976 status_update_manager.cpp:184] Resuming sending status updates I0601 23:53:23.676309 2728 hierarchical.cpp:546] Added agent f07f4fdd-cd91-4d62-bf33-169b20d02020-S0 (ip-172-20-128-1.ec2.internal) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: {}) I0601 23:53:23.681309 1920 slave.cpp:1206] Forwarding total oversubscribed resources {} I0601 23:53:23.682309 2124 master.cpp:6295] Received update of agent f07f4fdd-cd91-4d62-bf33-169b20d02020-S0 at slave(133)@172.20.128.1:51241 (ip-172-20-128-1.ec2.internal) with total oversubscribed resources {} I0601 23:53:23.686309 2124 master.cpp:7252] Sending 1 offers to framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 (default) at scheduler-895fb702-afb9-42fe-8802-83c47f72432e@172.20.128.1:51241 I0601 23:53:23.694308 2728 master.cpp:3872] Processing ACCEPT call for offers: [ f07f4fdd-cd91-4d62-bf33-169b20d02020-O0 ] on agent f07f4fdd-cd91-4d62-bf33-169b20d02020-S0 at slave(133)@172.20.128.1:51241 (ip-172-20-128-1.ec2.internal) for framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 (default) at scheduler-895fb702-afb9-42fe-8802-83c47f72432e@172.20.128.1:51241 I0601 23:53:23.694308 2728 master.cpp:3424] Authorizing framework principal 'test-principal' to launch task 792f1a13-d0ee-4e98-a4c8-82b9849adfbe I0601 23:53:23.705309 2728 master.cpp:9265] Adding task 792f1a13-d0ee-4e98-a4c8-82b9849adfbe with resources cpus(*)(allocated: *):2; mem(*)(allocated: *):1024; disk(*)(allocated: *):1024; ports(*)(allocated: *):[31000-32000] on agent f07f4fdd-cd91-4d62-bf33-169b20d02020-S0 at slave(133)@172.20.128.1:51241 (ip-172-20-128-1.ec2.internal) I0601 23:53:23.707309 2728 master.cpp:4527] Launching task 792f1a13-d0ee-4e98-a4c8-82b9849adfbe of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 (default) at scheduler-895fb702-afb9-42fe-8802-83c47f72432e@172.20.128.1:51241 with resources cpus(*)(allocated: *):2; mem(*)(allocated: *):1024; disk(*)(allocated: *):1024; ports(*)(allocated: *):[31000-32000] on agent f07f4fdd-cd91-4d62-bf33-169b20d02020-S0 at slave(133)@172.20.128.1:51241 (ip-172-20-128-1.ec2.internal) I0601 23:53:23.710311 4496 slave.cpp:1632] Got assigned task '792f1a13-d0ee-4e98-a4c8-82b9849adfbe' for framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 I0601 23:53:23.721310 3976 hierarchical.cpp:871] Updated allocation of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 on agent f07f4fdd-cd91-4d62-bf33-169b20d02020-S0 from 
cpus(*)(allocated: *):2; mem(*)(allocated: *):1024; disk(*)(allocated: *):1024; ports(*)(allocated: *):[31000-32000] to cpus(*)(allocated: *):2; mem(*)(allocated: *):1024; disk(*)(allocated: *):1024; ports(*)(allocated: *):[31000-32000] I0601 23:53:23.723309 4496 slave.cpp:1913] Authorizing task '792f1a13-d0ee-4e98-a4c8-82b9849adfbe' for framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 I0601 23:53:23.727311 4496 slave.cpp:2100] Launching task '792f1a13-d0ee-4e98-a4c8-82b9849adfbe' for framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 I0601 23:53:23.738310 4496 slave.cpp:7078] Launching executor '792f1a13-d0ee-4e98-a4c8-82b9849adfbe' of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 with resources cpus(*)(allocated: *):0.1; mem(*)(allocated: *):32 in work directory 'C:\temp\b1wVnd\slaves\f07f4fdd-cd91-4d62-bf33-169b20d02020-S0\frameworks\f07f4fdd-cd91-4d62-bf33-169b20d02020-0000\executors\792f1a13-d0ee-4e98-a4c8-82b9849adfbe\runs\1cbce8a6-ae59-484f-b898-e2ea6396d2a9' I0601 23:53:23.741310 4496 slave.cpp:2795] Launching container 1cbce8a6-ae59-484f-b898-e2ea6396d2a9 for executor '792f1a13-d0ee-4e98-a4c8-82b9849adfbe' of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 I0601 23:53:23.743311 3976 containerizer.cpp:1056] Starting container 1cbce8a6-ae59-484f-b898-e2ea6396d2a9 I0601 23:53:23.750272 4496 slave.cpp:2329] Queued task '792f1a13-d0ee-4e98-a4c8-82b9849adfbe' for executor '792f1a13-d0ee-4e98-a4c8-82b9849adfbe' of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 I0601 23:53:23.797314 1856 launcher.cppReceived SUBSCRIBED event Subscribed executor on ip-172-20-128-1.ec2.internal Received LAUNCH event Starting task 792f1a13-d0ee-4e98-a4c8-82b9849adfbe Running 'C:\Users\Administrator\workspace\mesos\Mesos_CI-build\FLAG\Plain\label\mesos-ec2-windows\mesos\build\src\mesos-containerizer.exe launch ' Forked command at 1180 :140] Forked child with pid '4980' for container '1cbce8a6-ae59-484f-b898-e2ea6396d2a9' I0601 23:53:23.798315 1856 containerizer.cpp:1722] Checkpointing container's forked pid 4980 to 'C:\temp\b1wVnd\meta\slaves\f07f4fdd-cd91-4d62-bf33-169b20d02020-S0\frameworks\f07f4fdd-cd91-4d62-bf33-169b20d02020-0000\executors\792f1a13-d0ee-4e98-a4c8-82b9849adfbe\runs\1cbce8a6-ae59-484f-b898-e2ea6396d2a9\pids\forked.pid' I0601 23:53:24.029322 1856 slave.cpp:3825] Got registration for executor '792f1a13-d0ee-4e98-a4c8-82b9849adfbe' of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 from executor(1)@172.20.128.1:51637 I0601 23:53:24.043324 2124 slave.cpp:2542] Sending queued task '792f1a13-d0ee-4e98-a4c8-82b9849adfbe' to executor '792f1a13-d0ee-4e98-a4c8-82b9849adfbe' of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 at executor(1)@172.20.128.1:51637 I0601 23:53:24.185328 2728 slave.cpp:4295] Handling status update TASK_RUNNING (UUID: d814fd3c-25f8-4307-a8a7-3235177322b9) for task 792f1a13-d0ee-4e98-a4c8-82b9849adfbe of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 from executor(1)@172.20.128.1:51637 I0601 23:53:24.192329 2124 status_update_manager.cpp:323] Received status update TASK_RUNNING (UUID: d814fd3c-25f8-4307-a8a7-3235177322b9) for task 792f1a13-d0ee-4e98-a4c8-82b9849adfbe of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 I0601 23:53:24.195328 2124 status_update_manager.cpp:834] Checkpointing UPDATE for status update TASK_RUNNING (UUID: d814fd3c-25f8-4307-a8a7-3235177322b9) for task 792f1a13-d0ee-4e98-a4c8-82b9849adfbe of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 I0601 23:53:24.197327 2728 slave.cpp:4735] Forwarding the update 
TASK_RUNNING (UUID: d814fd3c-25f8-4307-a8a7-3235177322b9) for task 792f1a13-d0ee-4e98-a4c8-82b9849adfbe of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 to master@172.20.128.1:51241 I0601 23:53:24.199362 4496 master.cpp:6440] Status update TASK_RUNNING (UUID: d814fd3c-25f8-4307-a8a7-3235177322b9) for task 792f1a13-d0ee-4e98-a4c8-82b9849adfbe of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 from agent f07f4fdd-cd91-4d62-bf33-169b20d02020-S0 at slave(133)@172.20.128.1:51241 (ip-172-20-128-1.ec2.internal) I0601 23:53:24.198328 2728 slave.cpp:4645] Sending acknowledgement for status update TASK_RUNNING (UUID: d814fd3c-25f8-4307-a8a7-3235177322b9) for task 792f1a13-d0ee-4e98-a4c8-82b9849adfbe of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 to executor(1)@172.20.128.1:51637 I0601 23:53:24.199362 4496 master.cpp:6502] Forwarding status update TASK_RUNNING (UUID: d814fd3c-25f8-4307-a8a7-3235177322b9) for task 792f1a13-d0ee-4e98-a4c8-82b9849adfbe of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 I0601 23:53:24.200330 4496 master.cpp:8507] Updating the state of task 792f1a13-d0ee-4e98-a4c8-82b9849adfbe of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0601 23:53:24.207330 4496 master.cpp:5190] Processing ACKNOWLEDGE call d814fd3c-25f8-4307-a8a7-3235177322b9 for task 792f1a13-d0ee-4e98-a4c8-82b9849adfbe of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 (default) at scheduler-895fb702-afb9-42fe-8802-83c47f72432e@172.20.128.1:51241 on agent f07f4fdd-cd91-4d62-bf33-169b20d02020-S0 I0601 23:53:24.208281 4168 status_update_manager.cpp:395] Received status update acknowledgement (UUID: d814fd3c-25f8-4307-a8a7-3235177322b9) for task 792f1a13-d0ee-4e98-a4c8-82b9849adfbe of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 I0601 23:53:24.209329 4168 status_update_manager.cpp:834] Checkpointing ACK for status update TASK_RUNNING (UUID: d814fd3c-25f8-4307-a8a7-3235177322b9) for task 792f1a13-d0ee-4e98-a4c8-82b9849adfbe of framework f07f4fdd-cd91-4d62-bf33-169b20d02020-0000 I0601 23:53:24.211320 4496 slave.cpp:817] Agent terminating I0601 23:53:24.212329 4168 master.cpp:1314] Agent f07f4fdd-cd91-4d62-bf33-169b20d02020-S0 at slave(133)@172.20.128.1:51241 (ip-172-20-128-1.ec2.internal) disconnected I0601 23:53:24.212329 4168 master.cpp:3195] Disconnecting agent f07f4fdd-cd91-4d62-bf33-169b20d02020-S0 at slave(133)@172.20.128.1:51241 (ip-172-20-128-1.ec2.internal) I0601 23:53:24.212329 4168 master.cpp:3214] Deactivating agent f07f4fdd-cd91-4d62-bf33-169b20d02020-S0 at slave(133)@172.20.128.1:51241 (ip-172-20-128-1.ec2.internal) I0601 23:53:24.213330 1920 hierarchical.cpp:674] Agent f07f4fdd-cd91-4d62-bf33-169b20d02020-S0 deactivated I0601 23:53:24.214330 1512 containerizer.cpp:230] Using isolation: windows/cpu,filesystem/windows,environment_secret I0601 23:53:24.215328 1512 provisioner.cpp:255] Using default backend 'copy' I0601 23:53:24.254333 2728 slave.cpp:248] Mesos agent started on (134)@172.20.128.1:51241 I0601 23:53:24.254333 2728 slave.cpp:249] Flags at startup: --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""C:\temp\kglZbS\store\appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" 
--docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""//./pipe/docker_engine"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""C:\temp\kglZbS\store\docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_reregistration_timeout=""""15secs"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""C:\temp\kglZbS\fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_command_executor=""""false"""" --http_credentials=""""C:\temp\kglZbS\http_credentials"""" --http_heartbeat_interval=""""30secs"""" --initialize_driver_logging=""""true"""" --isolation=""""windows/cpu"""" --launcher=""""windows"""" --launcher_dir=""""C:\Users\Administrator\workspace\mesos\Mesos_CI-build\FLAG\Plain\label\mesos-ec2-windows\mesos\build\src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --oversubscribed_resources_interval=""""15secs"""" --port=""""5051"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --runtime_dir=""""C:\temp\kglZbS"""" --sandbox_directory=""""C:\mesos\sandbox"""" --strict=""""true"""" --version=""""false"""" --work_dir=""""C:\temp\b1wVnd"""" I0601 23:53:24.258334 2728 credentials.hpp:37] Loading credentials for authentication from 'C:\temp\kglZbS\http_credentials' I0601 23:53:24.261334 2728 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' I0601 23:53:24.262333 2728 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' I0601 23:53:24.269332 2728 slave.cpp:552] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] I0601 23:53:24.269332 2728 slave.cpp:560] Agent attributes: [ ] I0601 23:53:24.269332 2728 slave.cpp:565] Agent hostname: ip-172-20-128-1.ec2.internal I0601 23:53:24.270332 2748 status_update_manager.cpp:177] Pausing sending status updates I0601 23:53:24.279335 4496 state.cpp:62] Recovering state from 'C:\temp\b1wVnd\meta' I0601 23:53:24.280333 4496 state.cpp:710] No committed checkpointed resources found at 'C:\temp\b1wVnd\meta\resources\resources.info' E0601 23:53:24.308333 3976 slave.cpp:6110] EXIT with status 1: Failed to perform recovery: Incompatible agent info detected. 
------------------------------------------------------------ Old agent info: hostname: """"ip-172-20-128-1.ec2.internal"""" resources { name: """"cpus"""" type: SCALAR scalar { value: 2 } role: """"*"""" } resources { name: """"mem"""" type: SCALAR scalar { value: 1024 } role: """"*"""" } resources { name: """"disk"""" type: SCALAR scalar { value: 1024 } role: """"*"""" } resources { name: """"ports"""" type: RANGES ranges { range { begin: 31000 end: 32000 } } role: """"*"""" } id { value: """"f07f4fdd-cd91-4d62-bf33-169b20d02020-S0"""" } checkpoint: true port: 51241 ------------------------------------------------------------ New agent info: hostname: """"ip-172-20-128-1.ec2.internal"""" resources { name: """"cpus"""" type: SCALAR scalar { value: 2 } role: """"*"""" } resources { name: """"mem"""" type: SCALAR scalar { value: 1024 } role: """"*"""" } resources { name: """"disk"""" type: SCALAR scalar { value: 1024 } role: """"*"""" } resources { name: """"ports"""" type: RANGES ranges { range { begin: 31000 end: 32000 } } role: """"*"""" } id { value: """"latest"""" } checkpoint: true port: 51241 ------------------------------------------------------------ To remedy this do as follows: Step 1: rm -f C:\temp\b1wVnd\meta\slaves\latest This ensures agent doesn't recover old live executors. Step 2: Restart the agent. ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7627","06/06/2017 12:51:36",3,"Mesos slave gets stuck ""*Description of the problem* Sometimes all containers on a mesos-slave become unhealthy for no apparent reason. Mesos then tries to kill them, without success. As a result, the old containers keep running in an unhealthy state and new containers are not started. You can see what happens on the host machine and in the mesos-slave docker container. We have seen this problem several times a month, on different hosts and in different clusters. Restarting mesos-slave resolves the problem, but it is not a solution.
{quote} Mesos 1.1 Marathon 1.3.6 Docker version 17.03.0-ce {quote} stderr of container: stdout of container n container with mesos-slave Container with mesos-slave Container with mesos-slave host-machine Mesos slave logs I ask you to take a look on this problem before mesos-slave is not restared and I can collect some additional information."""," W0531 12:06:07.396096 1716 health_checker.cpp:205] Health check failed 1 times consecutively: COMMAND health check failed: Command has not returned after 5secs; aborting W0531 12:06:17.455224 1719 health_checker.cpp:205] Health check failed 2 times consecutively: COMMAND health check failed: Command has not returned after 5secs; aborting W0531 12:06:40.555650 1710 health_checker.cpp:205] Health check failed 1 times consecutively: COMMAND health check failed: Command has not returned after 5secs; aborting W0531 12:19:46.499477 1703 health_checker.cpp:205] Health check failed 1 times consecutively: COMMAND health check failed: Command has not returned after 5secs; aborting W0531 12:23:08.441121 1703 health_checker.cpp:205] Health check failed 1 times consecutively: COMMAND health check failed: Command has not returned after 5secs; aborting W0531 12:23:18.486780 1684 health_checker.cpp:205] Health check failed 2 times consecutively: COMMAND health check failed: Command has not returned after 5secs; aborting W0531 12:23:29.194329 1685 health_checker.cpp:205] Health check failed 3 times consecutively: COMMAND health check failed: Command has not returned after 5secs; aborting W0531 12:23:39.251193 1712 health_checker.cpp:205] Health check failed 4 times consecutively: COMMAND health check failed: Command has not returned after 5secs; aborting W0531 12:23:49.306136 1701 health_checker.cpp:205] Health check failed 5 times consecutively: COMMAND health check failed: Command has not returned after 5secs; aborting Health check failed:COMMAND health check failed: Command has not returned after 5secs; aborting Received task health update, healthy: true Received task health update, healthy: false Received task health update, healthy: false Received task health update, healthy: true Received task health update, healthy: false Received task health update, healthy: true Received task health update, healthy: false Received task health update, healthy: true Received task health update, healthy: false Received task health update, healthy: false Received task health update, healthy: false Received task health update, healthy: false Received task health update, healthy: false Received killTask for task wgt21_wgcmp_adm.5f308cf3-45f9-11e7-8c9b-fa163e3c4349 # lsof /var/run/docker.sock /# ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 9268 3948 ? Ssl May25 0:01 /usr/sbin/kogia --skip-init --skip-post --skip-env mesos-slave --resources=cpus:192;mem:320000;disk:95000 --credential=/etc/mes root 12 1.2 0.1 3944032 159520 ? Sl May25 223:16 mesos-slave --resources=cpus:192;mem:320000;disk:95000 --credential=/etc/mesos-slave-creds --gc_delay=10mins --docker_remove_de root 721 1.0 0.0 3769696 33156 ? Ssl May31 86:45 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.0f7d29da-7a0c-4db1-b782-72b8135f720b --docker=d root 778 0.0 0.0 1380488 17836 ? Sl May31 1:47 docker -H unix:///var/run/docker.sock run --cpu-shares 256 --memory 536870912 -e SERVICE_APP=datap -e DP_AMQP_DOCUMENTS_REQUEST root 857 0.9 0.0 3769696 33340 ? 
Ssl May25 173:55 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.8b398a22-39e1-40f5-a801-763aa55c61be --docker=d root 866 0.9 0.0 3769696 31044 ? Ssl May25 173:09 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.c350451b-c2ea-4e79-93e6-b649fca6f589 --docker=d root 1074 0.0 0.0 1273236 17752 ? Sl May25 2:14 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 134217728 -e SERVICE_APP=monitoring -e SERVICE_DNS_SUFFIX=i root 1082 0.0 0.0 1453448 15552 ? Sl May25 1:19 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 268435456 -e MARATHON_APP_VERSION=2017-05-19T17:40:55.101Z root 1349 1.0 0.0 3769700 33340 ? Ssl May25 182:37 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.13500c0e-6d3c-401f-87f4-64534eeda8c9 --docker=d root 1400 0.0 0.0 1469592 15864 ? Sl May25 2:46 docker -H unix:///var/run/docker.sock run --cpu-shares 512 --memory 536870912 -e SERVICE_APP=ordo -e SERVICE_DNS_SUFFIX=ixs.wgc root 1522 0.9 0.0 3769696 32356 ? Ssl May25 172:17 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.59ab5c53-f023-4196-8d49-3120f59904c1 --docker=d root 1577 0.0 0.0 1674136 18032 ? Sl May25 2:04 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 1073741824 -e SERVICE_APP=kibana -e SERVICE_DNS_SUFFIX=ixs root 1678 0.9 0.0 3769700 33568 ? Ssl May31 77:57 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.3764dd96-acf1-46c3-aabd-c1e224e0cf50 --docker=d root 1729 0.0 0.0 1510032 14592 ? Sl May31 0:47 docker -H unix:///var/run/docker.sock run --cpu-shares 512 --memory 1073741824 -e SERVICE_APP=wgcmp -e SERVICE_CONFIG_DIR=/conf root 2337 1.0 0.0 3769700 33304 ? Ssl May29 109:38 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.6c44e901-43b9-48c8-9adf-ee591ec64d6d --docker=d root 2389 0.0 0.0 1584272 15496 ? Sl May29 2:07 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 1073741824 -e SERVICE_APP=exp -e SERVICE_DNS_SUFFIX=ixs.wg root 3099 0.9 0.0 3769696 32716 ? Ssl May25 173:10 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.ca19e6d0-670e-4829-8644-3edd3951d474 --docker=d root 3149 0.0 0.0 1452940 17740 ? Sl May25 2:03 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 536870912 -e SERVICE_APP=grafana -e SERVICE_DNS_SUFFIX=ixs root 3269 0.9 0.0 3769696 32352 ? Ssl May25 174:29 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.0dc4fd8b-5b61-4b5d-a143-ca81082eb542 --docker=d root 3319 0.0 0.0 1592720 15224 ? Sl May25 2:05 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 134217728 -e SERVICE_APP=qaapp -e SERVICE_CONFIG_DIR=/confi root 3676 0.9 0.0 3769700 33784 ? Ssl May31 77:51 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.f56f6342-4fd0-4e60-bf04-44b63209eb70 --docker=d root 3726 0.0 0.0 1511568 13076 ? Sl May31 1:02 docker -H unix:///var/run/docker.sock run --cpu-shares 6144 --memory 838860800 -e SERVICE_APP=bckr -e VOLUME_BCKR_WORKERS_LOG_B root 3776 0.9 0.0 3769700 33512 ? Ssl May29 100:38 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.f0fbd4de-8849-40a2-b508-567984c6cf09 --docker=d root 3881 0.0 0.0 1658520 16040 ? Sl May29 1:20 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 268435456 -e SERVICE_APP=ftitleconfig -e APP___SETTINGS_INF root 4638 0.8 0.0 3769700 32740 ? 
Ssl May29 91:52 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.7f6c9f81-a8e2-437c-85e8-fb3627cf269e --docker=d root 4820 0.0 0.0 1378956 16276 ? Sl May29 1:19 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 536870912 -e SERVICE_APP=wgnc -e SERVICE_DNS_SUFFIX=ixs.wg root 7181 1.0 0.0 3769700 32916 ? Ssl May29 110:13 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.af2cbf6f-5716-414a-a711-fb618bb43f79 --docker=d root 7231 0.0 0.0 1379468 15776 ? Sl May29 2:07 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 536870912 -e SERVICE_APP=wgnc -e SERVICE_DNS_SUFFIX=ixs.wg root 9480 0.9 0.0 3769704 34612 ? Ssl Jun04 32:26 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.1bb8b395-9e3f-4931-a5b5-1f678a1778e4 --docker=d root 9533 0.0 0.0 1511572 13524 ? Sl Jun04 0:29 docker -H unix:///var/run/docker.sock run --cpu-shares 512 --memory 536870912 -e SERVICE_APP=wgrs -e APP_BW_LIB_PACKAGE=wowp-bw root 10097 0.9 0.0 3769700 33868 ? Ssl 10:21 0:37 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.36516a1a-6c9b-414d-9052-9fa653f7a20f --docker=d root 10098 0.9 0.0 3769700 33000 ? Ssl 10:21 0:36 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.dc048d4f-7c5c-4086-953a-bb378b5865df --docker=d root 10150 0.0 0.0 1239692 15168 ? Sl 10:21 0:00 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 536870912 -e SERVICE_APP=bckr -e VOLUME_BCKR_CONFIGS_BACKU root 10214 0.0 0.0 1371532 14160 ? Sl 10:21 0:00 docker -H unix:///var/run/docker.sock run --cpu-shares 512 --memory 1073741824 -e SERVICE_APP=wgcmp -e SERVICE_CONFIG_DIR=/conf root 10268 0.8 0.0 3769700 33036 ? Ssl 10:21 0:33 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.28e151bf-87c8-437e-b412-1a862389aaf2 --docker=d root 10319 0.0 0.0 1371016 15040 ? Sl 10:21 0:00 docker -H unix:///var/run/docker.sock run --cpu-shares 512 --memory 536870912 -e SERVICE_APP=banw -e PORT_10002=31069 -e SERVIC root 10450 0.8 0.0 3769696 32276 ? Ssl 10:21 0:33 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.dc8f81e7-e540-4cbf-8d39-7e15732e6389 --docker=d root 10505 0.0 0.0 1437060 14304 ? Sl 10:21 0:00 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 134217728 -e SERVICE_APP=helworld -e SERVICE_DNS_SUFFIX=ixs root 11270 0.9 0.0 3769704 34452 ? Ssl 10:22 0:34 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.853a11ed-0f4e-4d5d-bc38-d7902b6dd937 --docker=d root 11324 0.0 0.0 1435152 14796 ? Sl 10:22 0:00 docker -H unix:///var/run/docker.sock run --cpu-shares 204 --memory 268435456 -e SERVICE_APP=wgrs -e APP_BW_LIB_PACKAGE=wowp-bw root 11499 0.9 0.0 3769700 32480 ? Ssl 10:22 0:36 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.ff5317fe-c978-4234-b89e-c643a797a8e5 --docker=d root 11549 0.0 0.0 1239944 14552 ? Sl 10:22 0:00 docker -H unix:///var/run/docker.sock run --cpu-shares 51 --memory 134217728 -e SERVICE_APP=cwh -e SERVICE_CONFIG_DIR=/configs root 11813 0.9 0.0 3769700 32844 ? Ssl 10:22 0:35 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.3a0127f8-8c50-406f-a7a0-1561f2dcb6c0 --docker=d root 11863 0.0 0.0 1501836 14836 ? Sl 10:22 0:00 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 536870912 -e SERVICE_APP=tmscore -e SERVICE_DNS_SUFFIX=ixs root 12126 0.9 0.0 3769700 33128 ? 
Ssl 09:58 0:48 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.11782dee-90f8-4419-9073-c0c145dcd42a --docker=d root 12176 0.0 0.0 1567880 15052 ? Sl 09:58 0:00 docker -H unix:///var/run/docker.sock run --cpu-shares 512 --memory 268435456 -e SERVICE_APP=wgus -e VOLUME_HTDOCS_BACKUP_SCHED root 12393 0.9 0.0 3769700 33252 ? Ssl 10:22 0:34 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.70c4e304-6f4e-4547-9bfd-68359dfbef04 --docker=d root 12443 0.0 0.0 1370760 14060 ? Sl 10:22 0:00 docker -H unix:///var/run/docker.sock run --cpu-shares 51 --memory 536870912 -e SERVICE_APP=cwh -e SERVICE_CONFIG_DIR=/configs root 12451 0.9 0.0 3769700 33452 ? Ssl Jun01 66:16 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.c3bef4bf-7ef7-4b3d-ad4d-44d423b7b06d --docker=d root 12501 0.0 0.0 1510288 15280 ? Sl Jun01 0:51 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 1073741824 -e SERVICE_APP=spa -e PORT_10031=31686 -e SERVIC root 13824 0.8 0.0 3769696 33212 ? Ssl 10:22 0:32 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.436d61a5-281c-4a81-8dc2-d34941948ecf --docker=d root 13874 0.0 0.0 1502084 14920 ? Sl 10:22 0:00 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 1073741824 -e SERVICE_APP=mkdocs -e SERVICE_DNS_SUFFIX=ixs root 14872 0.9 0.0 3769700 33356 ? Ssl Jun01 65:48 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.62ae27c2-40eb-4a5d-bc74-75f827af812e --docker=d root 14922 0.0 0.0 1249680 15112 ? Sl Jun01 0:52 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 1073741824 -e SERVICE_APP=spa -e SERVICE_CONFIG_DIR=/config root 18590 0.9 0.0 3769696 32504 ? Ssl Jun04 26:35 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.a0580a9a-c574-46b9-bc33-309936259fef --docker=d root 18640 0.0 0.0 1510792 14384 ? Sl Jun04 0:21 docker -H unix:///var/run/docker.sock run --cpu-shares 307 --memory 1073741824 -e SERVICE_APP=fgateway -e SERVICE_8080_NAME=gat root 20144 0.9 0.0 3769696 33264 ? Ssl Jun05 7:28 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.1493e114-f578-4e53-bd16-5d4078576da6 --docker=d root 20194 0.0 0.0 1641608 16240 ? Sl Jun05 0:06 docker -H unix:///var/run/docker.sock run --cpu-shares 307 --memory 1073741824 -e SERVICE_APP=ftitleconfig -e SERVICE_8080_NAME root 25084 0.8 0.0 3769696 32932 ? Ssl Jun04 24:06 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.02898f3d-809a-4e5a-ad58-0ab202cb4b49 --docker=d root 25137 0.0 0.0 1445000 14148 ? Sl Jun04 0:21 docker -H unix:///var/run/docker.sock run --cpu-shares 307 --memory 1073741824 -e SERVICE_APP=fwebhook -e SERVICE_8080_NAME=web root 25861 0.8 0.0 3769700 32988 ? Ssl 07:54 1:52 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.c8aaaacb-228b-4c6b-8bee-0e02784035ee --docker=d root 25911 0.0 0.0 1437064 14080 ? Sl 07:54 0:04 docker -H unix:///var/run/docker.sock run --cpu-shares 512 --memory 536870912 -e SERVICE_APP=ordo -e VOLUME_ORDO_APP_LOGS_BACKU root 28888 0.9 0.0 3769700 32720 ? Ssl Jun01 68:07 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.dcc3517f-caff-4919-91ed-3e13e8c86abd --docker=d root 28952 0.0 0.0 1453456 16300 ? Sl Jun01 1:23 docker -H unix:///var/run/docker.sock run --cpu-shares 512 --memory 1073741824 -e SERVICE_APP=wgpm -e SERVICE_DNS_SUFFIX=ixs.wg root 29016 0.9 0.0 3769700 33788 ? 
Ssl 10:46 0:20 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.1fd9bade-cf98-43ce-9c79-2aabeec2a31f --docker=d root 29066 0.0 0.0 1566856 14716 ? Sl 10:46 0:00 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 134217728 -e SERVICE_APP=wghub -e VOLUME_WGHUB_MAINTENANCE root 29984 0.9 0.0 3769700 33088 ? Ssl 10:47 0:20 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.15712a04-2477-431e-85ea-d4d6eb49bd13 --docker=d root 30054 0.0 0.0 1314444 14676 ? Sl 10:47 0:00 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 536870912 -e SERVICE_APP=wghub -e VOLUME_HTDOCS_BACKUP_SCH root 30286 0.9 0.0 3769700 33520 ? Ssl May30 96:49 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.07b4c879-8e50-4f4c-83b4-d84375215c1a --docker=d root 30336 0.0 0.0 1453716 15664 ? Sl May30 2:01 docker -H unix:///var/run/docker.sock run --cpu-shares 10 --memory 536870912 -e SERVICE_APP=tmscore -e SERVICE_DNS_SUFFIX=ixs.w root 31794 0.9 0.0 3769700 33076 ? Ssl 10:47 0:20 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.87f75c50-2a10-48c0-b74c-cc600a7e28d9 --docker=d root 31854 0.0 0.0 1369864 14300 ? Sl 10:47 0:00 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 268435456 -e SERVICE_APP=wghub -e VOLUME_HTDOCS_BACKUP_SCH root 33076 0.0 0.0 18176 3312 ? Ss 11:23 0:00 bash root 34722 0.3 0.0 383328 15616 ? Sl 11:24 0:00 docker -H unix:///var/run/docker.sock pull registry.wdo.io/exp/exp-base:docker-52 root 35061 0.0 0.0 4456 796 ? S 11:24 0:00 sh -c docker exec mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.eaacd1e1-5b0d-4742-a85c-24248dea04d8 sh -c """" python /home/httpd root 35062 0.5 0.0 605684 15440 ? Sl 11:24 0:00 docker exec mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.eaacd1e1-5b0d-4742-a85c-24248dea04d8 sh -c python /home/httpd/app/sr root 35108 0.0 0.0 4456 780 ? S 11:24 0:00 sh -c docker exec mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.b3bba1cb-f367-413a-952d-4275c1592143 sh -c """" ls /update_done >/ root 35109 2.0 0.0 449768 15272 ? Sl 11:24 0:00 docker exec mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.b3bba1cb-f367-413a-952d-4275c1592143 sh -c ls /update_done >/dev/nul root 35121 0.0 0.0 4452 768 ? S 11:24 0:00 sh -c docker exec mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.87f75c50-2a10-48c0-b74c-cc600a7e28d9 sh -c """" ss -4tln 2>/dev/nu root 35122 0.0 0.0 217076 14380 ? Sl 11:24 0:00 docker exec mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.87f75c50-2a10-48c0-b74c-cc600a7e28d9 sh -c ss -4tln 2>/dev/null | eg root 35130 0.0 0.0 4456 808 ? S 11:24 0:00 sh -c docker exec mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.853a11ed-0f4e-4d5d-bc38-d7902b6dd937 sh -c """" curl -i localhost/ root 35131 1.0 0.0 383080 15568 ? Sl 11:24 0:00 docker exec mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.853a11ed-0f4e-4d5d-bc38-d7902b6dd937 sh -c curl -i localhost/api/pin root 35143 1.0 0.0 15568 2052 ? R+ 11:24 0:00 ps aux root 39774 1.0 0.0 3769700 33316 ? Ssl May29 118:57 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.5d73de0c-67c2-4fd4-b5a0-31f753ccfa44 --docker=d root 39875 0.0 0.0 1526680 15584 ? Sl May29 2:18 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 536870912 -e SERVICE_APP=wgcmp -e SERVICE_CONFIG_DIR=/confi root 42589 0.8 0.0 3769700 33008 ? 
Ssl May31 72:28 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.fdb9e620-17bf-4e58-bacd-d0b903bc47bb --docker=d root 42590 0.9 0.0 3769700 33248 ? Ssl May31 79:16 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.ca428c85-1ab5-4018-867e-eb6afa74d04b --docker=d root 42591 0.9 0.0 3769704 34976 ? Ssl May31 79:58 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.eaacd1e1-5b0d-4742-a85c-24248dea04d8 --docker=d root 42746 0.0 0.0 1190804 14168 ? Sl May31 1:04 docker -H unix:///var/run/docker.sock run --cpu-shares 2048 --memory 1258291200 -e SERVICE_APP=tmscore -e SERVICE_DNS_SUFFIX=ix root 42748 0.0 0.0 1059732 14316 ? Sl May31 1:04 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 536870912 -e SERVICE_APP=tmscore -e SERVICE_DNS_SUFFIX=ixs root 42768 0.0 0.0 1600676 13796 ? Sl May31 1:07 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 1073741824 -e SERVICE_APP=wgrs -e APP_AMQP_WGNC_PORT=5672 root 42834 0.8 0.0 3769700 33504 ? Ssl May31 70:05 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.154fdb16-ae30-4b8d-82f9-efb5a504942f --docker=d root 42884 0.0 0.0 1723540 14508 ? Sl May31 1:00 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 1073741824 -e SERVICE_APP=wi -e SERVICE_CONFIG_DIR=/config root 43023 0.9 0.0 3769696 32368 ? Ssl May31 79:05 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.d40dd321-9357-42eb-a46a-2895ea203d7a --docker=d root 43073 0.0 0.0 1461648 15884 ? Sl May31 1:01 docker -H unix:///var/run/docker.sock run --cpu-shares 512 --memory 536870912 -e SERVICE_APP=esmeter -e SERVICE_DNS_SUFFIX=ixs. root 43188 0.8 0.0 3769700 33452 ? Ssl May31 70:24 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.a3d32462-3506-4180-a731-6a260ea8f8a5 --docker=d root 43238 0.0 0.0 1330068 16588 ? Sl May31 1:03 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 1073741824 -e SERVICE_APP=wi -e SERVICE_CONFIG_DIR=/config root 43454 0.9 0.0 3769704 35308 ? Ssl May31 80:40 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.cc4da9b4-1171-462d-8fe4-06a20e31528a --docker=d root 43455 0.9 0.0 3769708 35084 ? Ssl May31 80:46 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.b3bba1cb-f367-413a-952d-4275c1592143 --docker=d root 43555 0.0 0.0 1585056 16176 ? Sl May31 1:11 docker -H unix:///var/run/docker.sock run --cpu-shares 204 --memory 268435456 -e SERVICE_APP=wgrs -e APP_AMQP_WGNC_PORT=5672 -e root 43556 0.0 0.0 1452700 12764 ? Sl May31 1:08 docker -H unix:///var/run/docker.sock run --cpu-shares 1536 --memory 2147483648 -e SERVICE_APP=wgrs -e APP_AMQP_WGNC_PORT=5672 root 43606 0.9 0.0 3769696 33244 ? Ssl May31 79:51 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.6e078ea4-0bd6-4875-9bd7-9555e174ec46 --docker=d root 43672 0.0 0.0 1470100 18752 ? Sl May31 1:03 docker -H unix:///var/run/docker.sock run --cpu-shares 512 --memory 134217728 -e SERVICE_APP=esmeter -e SERVICE_DNS_SUFFIX=ixs. root 43698 0.9 0.0 3769696 32688 ? Ssl May31 80:51 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.ad446a7a-be8a-48b5-a34b-6362cc112268 --docker=d root 43699 0.9 0.0 3769696 32588 ? 
Ssl May31 79:29 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.cfe58f39-a41e-470f-94f7-85837773eddf --docker=d root 43800 0.0 0.0 1584012 17480 ? Sl May31 1:02 docker -H unix:///var/run/docker.sock run --cpu-shares 512 --memory 3212836864 -e SERVICE_APP=ftools -e SERVICE_8080_NAME=tools root 43807 0.0 0.0 1510792 19184 ? Sl May31 1:02 docker -H unix:///var/run/docker.sock run --cpu-shares 409 --memory 3212836864 -e SERVICE_APP=fgateway -e SERVICE_8080_NAME=gat root 44401 0.9 0.0 3769700 33272 ? Ssl May31 78:57 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.cb2f3f84-da02-412a-9a09-bfbc12a4c9db --docker=d root 44457 0.0 0.0 1461400 15536 ? Sl May31 1:03 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 268435456 -e SERVICE_APP=fmetadata -e APP___SETTINGS_INFRAS root 44480 0.9 0.0 3769708 34832 ? Ssl May31 79:52 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.34d5f2b7-0fa0-4c49-8037-8c5b19075113 --docker=d root 44533 0.0 0.0 1453472 15752 ? Sl May31 1:02 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 268435456 -e SERVICE_APP=fimporter -e APP___SETTINGS_INFRAS root 44585 0.8 0.0 3769696 32808 ? Ssl May31 70:32 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.6d710be6-d107-40f1-83ec-8e77bed07b34 --docker=d root 44641 0.0 0.0 1641864 15292 ? Sl May31 1:04 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 268435456 -e SERVICE_APP=fimporter -e SERVICE_DNS_SUFFIX=ix root 47895 0.9 0.0 3769700 32888 ? Ssl May26 152:53 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.7f5cf99e-b2b6-45b9-bb55-7f26b9835c6e --docker=d root 47945 0.0 0.0 1395604 14492 ? Sl May26 2:14 docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 536870912 -e SERVICE_APP=wghub -e VOLUME_HTDOCS_BACKUP_SCH root 48825 0.8 0.0 3769696 32736 ? Ssl May29 90:56 mesos-docker-executor --container=mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.7dadf638-4692-44d9-ad16-4e5d40d390a2 --docker=d root 48880 0.0 0.0 1182088 15716 ? 
Sl May29 1:13 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 134217728 -e SERVICE_APP=helloworld -e SERVICE_DNS_SUFFIX=i # netstat -aenp Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name tcp 0 0 0.0.0.0:9631 0.0.0.0:* LISTEN 0 3836267470 12451/mesos-docker- tcp 0 0 0.0.0.0:16127 0.0.0.0:* LISTEN 0 3740429388 44480/mesos-docker- tcp 0 0 0.0.0.0:16863 0.0.0.0:* LISTEN 0 3079243357 1522/mesos-docker-e tcp 0 0 0.0.0.0:24321 0.0.0.0:* LISTEN 0 3416640112 39774/mesos-docker- tcp 0 0 0.0.0.0:10050 0.0.0.0:* LISTEN 994 34159425 - tcp 0 0 0.0.0.0:5507 0.0.0.0:* LISTEN 0 40095735 10268/mesos-docker- tcp 0 0 0.0.0.0:13891 0.0.0.0:* LISTEN 0 40087189 10098/mesos-docker- tcp 0 0 0.0.0.0:23011 0.0.0.0:* LISTEN 0 3740296756 43188/mesos-docker- tcp 0 0 0.0.0.0:23075 0.0.0.0:* LISTEN 0 3576234276 30286/mesos-docker- tcp 0 0 0.0.0.0:29861 0.0.0.0:* LISTEN 0 45564595 35331/mesos-docker- tcp 0 0 0.0.0.0:24549 0.0.0.0:* LISTEN 0 40206737 12393/mesos-docker- tcp 0 0 0.0.0.0:1221 0.0.0.0:* LISTEN 0 40104554 10097/mesos-docker- tcp 0 0 0.0.0.0:17413 0.0.0.0:* LISTEN 0 37388352 12126/mesos-docker- tcp 0 0 0.0.0.0:29477 0.0.0.0:* LISTEN 0 3079299412 3269/mesos-docker-e tcp 0 0 0.0.0.0:15557 0.0.0.0:* LISTEN 0 3079253460 1349/mesos-docker-e tcp 0 0 0.0.0.0:5159 0.0.0.0:* LISTEN 0 40141701 11270/mesos-docker- tcp 0 0 0.0.0.0:18119 0.0.0.0:* LISTEN 0 3740389923 44585/mesos-docker- tcp 0 0 0.0.0.0:28775 0.0.0.0:* LISTEN 0 3492969080 2337/mesos-docker-e tcp 0 0 127.0.0.1:199 0.0.0.0:* LISTEN 0 17695 - tcp 0 0 0.0.0.0:26857 0.0.0.0:* LISTEN 0 42414195 31794/mesos-docker- tcp 0 0 0.0.0.0:2281 0.0.0.0:* LISTEN 0 3190246186 47895/mesos-docker- tcp 0 0 0.0.0.0:8010 0.0.0.0:* LISTEN 0 206135709 - tcp 0 0 0.0.0.0:3755 0.0.0.0:* LISTEN 0 4282305848 20144/mesos-docker- tcp 0 0 0.0.0.0:27083 0.0.0.0:* LISTEN 0 3740398611 43455/mesos-docker- tcp 0 0 0.0.0.0:26987 0.0.0.0:* LISTEN 0 3740277671 42591/mesos-docker- tcp 0 0 0.0.0.0:26347 0.0.0.0:* LISTEN 0 3493006337 3776/mesos-docker-e tcp 0 0 0.0.0.0:12171 0.0.0.0:* LISTEN 0 3079250755 866/mesos-docker-ex tcp 0 0 0.0.0.0:33005 0.0.0.0:* LISTEN 0 3836398246 14872/mesos-docker- tcp 0 0 0.0.0.0:3021 0.0.0.0:* LISTEN 0 3740352491 44401/mesos-docker- tcp 0 0 0.0.0.0:29773 0.0.0.0:* LISTEN 0 3740389822 43698/mesos-docker- tcp 0 0 0.0.0.0:31631 0.0.0.0:* LISTEN 0 40164733 11499/mesos-docker- tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 0 43172 - tcp 0 0 0.0.0.0:33169 0.0.0.0:* LISTEN 0 3740403906 43699/mesos-docker- tcp 0 0 0.0.0.0:24977 0.0.0.0:* LISTEN 0 3740366634 43606/mesos-docker- tcp 0 0 0.0.0.0:10227 0.0.0.0:* LISTEN 0 3740315146 43454/mesos-docker- tcp 0 0 0.0.0.0:18195 0.0.0.0:* LISTEN 0 3492900013 48825/mesos-docker- tcp 0 0 0.0.0.0:21653 0.0.0.0:* LISTEN 0 40226280 13824/mesos-docker- tcp 0 0 0.0.0.0:15957 0.0.0.0:* LISTEN 0 4091781125 9480/mesos-docker-e tcp 0 0 0.0.0.0:23861 0.0.0.0:* LISTEN 0 3842051309 28888/mesos-docker- tcp 0 0 0.0.0.0:18549 0.0.0.0:* LISTEN 0 3740279582 43023/mesos-docker- tcp 0 0 0.0.0.0:11317 0.0.0.0:* LISTEN 0 3740272554 42834/mesos-docker- tcp 0 0 0.0.0.0:26069 0.0.0.0:* LISTEN 0 3740296746 42589/mesos-docker- tcp 0 0 0.0.0.0:2327 0.0.0.0:* LISTEN 0 40184926 11813/mesos-docker- tcp 0 0 0.0.0.0:20951 0.0.0.0:* LISTEN 0 4136235820 18590/mesos-docker- tcp 0 0 0.0.0.0:30199 0.0.0.0:* LISTEN 0 3737010452 1678/mesos-docker-e tcp 0 0 0.0.0.0:8183 0.0.0.0:* LISTEN 0 3493082896 7181/mesos-docker-e tcp 0 0 0.0.0.0:16983 0.0.0.0:* LISTEN 0 3493019145 
4638/mesos-docker-e tcp 0 0 0.0.0.0:11671 0.0.0.0:* LISTEN 0 3079226641 857/mesos-docker-ex tcp 0 0 0.0.0.0:2200 0.0.0.0:* LISTEN 0 60103 - tcp 0 0 0.0.0.0:3545 0.0.0.0:* LISTEN 0 40107941 10450/mesos-docker- tcp 0 0 0.0.0.0:11481 0.0.0.0:* LISTEN 0 3756359351 3676/mesos-docker-e tcp 0 0 0.0.0.0:7065 0.0.0.0:* LISTEN 0 3079305706 3099/mesos-docker-e tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 0 82553638 - tcp 0 0 0.0.0.0:28251 0.0.0.0:* LISTEN 0 42296084 29984/mesos-docker- tcp 0 0 0.0.0.0:28635 0.0.0.0:* LISTEN 0 29558012 25861/mesos-docker- tcp 0 0 0.0.0.0:13979 0.0.0.0:* LISTEN 0 3740704798 721/mesos-docker-ex tcp 0 0 0.0.0.0:5051 0.0.0.0:* LISTEN 0 3079224878 12/mesos-slave tcp 0 0 0.0.0.0:14139 0.0.0.0:* LISTEN 0 29608 - tcp 0 0 0.0.0.0:27037 0.0.0.0:* LISTEN 0 42281013 29016/mesos-docker- tcp 0 0 0.0.0.0:3997 0.0.0.0:* LISTEN 0 4136447646 25084/mesos-docker- tcp 0 0 0.0.0.0:21949 0.0.0.0:* LISTEN 0 3740352281 42590/mesos-docker- tcp 0 0 0.0.0.0:25117 0.0.0.0:* LISTEN 29 27399 - tcp 0 0 10.224.250.141:10050 10.140.130.254:62552 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:28520 10.224.250.141:5051 ESTABLISHED 0 4091763249 9480/mesos-docker-e tcp 0 0 10.224.250.141:10050 10.140.130.254:57312 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:8934 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:36762 10.224.250.141:5051 ESTABLISHED 0 3842055271 28888/mesos-docker- tcp 0 0 10.224.250.141:14322 10.224.250.141:5051 ESTABLISHED 0 3493016892 4638/mesos-docker-e tcp 0 0 10.224.250.141:30694 10.224.250.141:5051 ESTABLISHED 0 3740405839 42834/mesos-docker- tcp 0 0 10.224.250.141:31118 10.224.250.141:5051 ESTABLISHED 0 3740315149 43455/mesos-docker- tcp 0 0 10.224.250.141:16127 10.224.250.141:2136 ESTABLISHED 0 3740425432 44480/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:65014 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:57380 10.224.251.19:9200 ESTABLISHED 1005 45243046 - tcp 0 0 10.224.250.141:10050 10.140.130.254:51942 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:59102 10.140.75.52:111 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:51098 10.224.250.141:5051 ESTABLISHED 0 40107944 10450/mesos-docker- tcp 0 0 10.224.250.141:5051 10.224.250.141:31128 ESTABLISHED 0 3740408956 12/mesos-slave tcp 0 0 10.224.250.141:61572 10.224.250.141:5051 ESTABLISHED 0 3836278216 12451/mesos-docker- tcp 0 0 10.224.250.141:26069 10.224.250.141:54606 ESTABLISHED 0 3740272547 42589/mesos-docker- tcp 0 0 127.0.0.1:30408 127.0.0.1:8010 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:54800 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:24549 10.224.250.141:60958 ESTABLISHED 0 40179599 12393/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:48432 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:51378 ESTABLISHED 0 40165659 12/mesos-slave tcp 0 0 10.224.250.141:18195 10.224.250.141:64554 ESTABLISHED 0 3492887400 48825/mesos-docker- tcp 0 0 10.224.250.141:29477 10.224.250.141:39770 ESTABLISHED 0 3079308632 3269/mesos-docker-e tcp 0 0 10.224.250.141:5051 10.224.250.141:31382 ESTABLISHED 0 3740378890 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:9340 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:19074 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:37020 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:65440 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:13018 10.224.250.141:5051 ESTABLISHED 0 3492893802 48825/mesos-docker- tcp 0 0 10.224.250.141:13946 10.224.250.141:29861 ESTABLISHED 0 45543234 12/mesos-slave tcp 0 0 10.224.250.141:26857 10.224.250.141:49086 ESTABLISHED 0 
42367846 31794/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:51534 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:44702 10.140.75.14:5672 ESTABLISHED 993 307438065 - tcp 0 0 10.224.250.141:10050 10.140.130.254:37058 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:34040 10.224.250.141:5051 ESTABLISHED 0 3079310551 3099/mesos-docker-e tcp 0 0 10.224.250.141:10050 10.140.130.254:12150 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:35828 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:30854 ESTABLISHED 0 3740315068 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:18850 ESTABLISHED 0 45562856 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:23298 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:24500 10.224.250.141:11671 ESTABLISHED 0 3079228591 12/mesos-slave tcp 0 0 10.224.250.141:28635 10.224.250.141:47972 ESTABLISHED 0 29541087 25861/mesos-docker- tcp 0 0 10.224.250.141:62184 10.224.250.141:5051 ESTABLISHED 0 3836392704 14872/mesos-docker- tcp 0 0 10.224.250.141:43192 10.224.250.141:24977 ESTABLISHED 0 3740330824 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:51722 ESTABLISHED 0 40200439 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:16608 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:11538 10.224.250.141:5051 ESTABLISHED 0 3736980401 1678/mesos-docker-e tcp 0 0 10.224.250.141:10050 10.140.130.254:41256 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:31116 ESTABLISHED 0 3740390755 12/mesos-slave tcp 0 0 10.224.250.141:16863 10.224.250.141:6480 ESTABLISHED 0 3079236519 1522/mesos-docker-e tcp 0 0 10.224.250.141:64370 10.224.251.13:2181 ESTABLISHED 0 3730176202 12/mesos-slave tcp 0 0 10.224.250.141:18119 10.224.250.141:34626 ESTABLISHED 0 3740424814 44585/mesos-docker- tcp 0 0 10.224.250.141:30648 10.224.250.141:5051 ESTABLISHED 0 3740365088 42591/mesos-docker- tcp 0 0 10.224.250.141:50982 10.224.250.141:5051 ESTABLISHED 0 40045454 10098/mesos-docker- tcp 0 0 10.224.250.141:23738 10.224.250.141:3545 ESTABLISHED 0 40130634 12/mesos-slave tcp 0 0 10.224.250.141:31128 10.224.250.141:5051 ESTABLISHED 0 3740330823 43606/mesos-docker- tcp 0 0 10.224.250.141:58678 10.224.251.21:9200 ESTABLISHED 1005 41419755 - tcp 0 0 10.224.250.141:10050 10.140.130.254:47348 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:56496 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:28775 10.224.250.141:54772 ESTABLISHED 0 3492959561 2337/mesos-docker-e tcp 0 0 10.224.250.141:10050 10.140.130.254:52518 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:22788 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:17436 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:15130 ESTABLISHED 0 3493122090 12/mesos-slave tcp 0 0 10.224.250.141:11481 10.224.250.141:50172 ESTABLISHED 0 3756365736 3676/mesos-docker-e tcp 0 0 10.224.250.141:9682 10.224.250.141:28251 ESTABLISHED 0 42315490 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:42820 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.128.36.129:65072 ESTABLISHED 0 45381323 12/mesos-slave tcp 0 0 127.0.0.1:30406 127.0.0.1:8010 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:38442 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:31134 ESTABLISHED 0 3740406029 12/mesos-slave tcp 0 13660 10.224.250.141:2200 10.128.36.129:65083 ESTABLISHED 0 45483842 - tcp 0 0 10.224.250.141:27083 10.224.250.141:54754 ESTABLISHED 0 3740366626 43455/mesos-docker- tcp 0 0 10.224.250.141:15557 10.224.250.141:57042 ESTABLISHED 0 3079249857 1349/mesos-docker-e tcp 0 0 127.0.0.1:30410 
127.0.0.1:8010 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.128.36.129:65076 ESTABLISHED 0 45396145 12/mesos-slave tcp 0 0 127.0.0.1:30420 127.0.0.1:8010 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:21062 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:49086 10.224.250.141:26857 ESTABLISHED 0 42405313 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:47642 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:8956 10.224.250.141:2281 ESTABLISHED 0 3190244930 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:30638 ESTABLISHED 0 3740329280 12/mesos-slave tcp 0 0 10.224.250.141:33005 10.224.250.141:25360 ESTABLISHED 0 3836421329 14872/mesos-docker- tcp 0 0 10.224.250.141:11671 10.224.250.141:24500 ESTABLISHED 0 3078547456 857/mesos-docker-ex tcp 0 0 10.224.250.141:5051 10.224.250.141:31138 ESTABLISHED 0 3740390763 12/mesos-slave tcp 0 0 10.224.250.141:60958 10.224.250.141:24549 ESTABLISHED 0 40194337 12/mesos-slave tcp 0 0 10.224.250.141:29861 10.224.250.141:13946 ESTABLISHED 0 45578946 35331/mesos-docker- tcp 0 0 10.224.250.141:46802 10.224.250.141:26987 ESTABLISHED 0 3740405831 12/mesos-slave tcp 0 0 10.224.250.141:42868 10.224.250.141:15957 ESTABLISHED 0 4091753445 12/mesos-slave tcp 0 0 10.224.250.141:1472 10.224.250.141:5507 ESTABLISHED 0 40122532 12/mesos-slave tcp 0 0 10.224.250.141:20951 10.224.250.141:40300 ESTABLISHED 0 4136248788 18590/mesos-docker- tcp 0 0 10.224.250.141:45844 10.224.250.141:13979 ESTABLISHED 0 3740683761 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:38374 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:33126 10.224.250.141:5051 ESTABLISHED 0 3079243360 1522/mesos-docker-e tcp 0 0 10.224.250.141:31500 10.224.251.21:9200 ESTABLISHED 0 3079193552 - tcp 0 0 10.224.250.141:10050 10.140.130.254:22484 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:63828 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:30648 ESTABLISHED 0 3740327179 12/mesos-slave tcp 0 0 10.224.250.141:32106 10.224.250.141:5051 ESTABLISHED 0 3190272070 47895/mesos-docker- tcp 0 579 10.224.250.141:59484 10.224.251.20:9200 ESTABLISHED 1005 45332607 - tcp 0 0 127.0.0.1:8010 127.0.0.1:30426 TIME_WAIT 0 0 - tcp 0 0 127.0.0.1:30412 127.0.0.1:8010 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:31400 ESTABLISHED 0 3740404369 12/mesos-slave tcp 0 0 10.224.250.141:24977 10.224.250.141:43192 ESTABLISHED 0 3740380963 43606/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:54564 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:56736 10.224.251.19:5046 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:59660 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:38572 10.224.250.141:5051 ESTABLISHED 0 37366406 12126/mesos-docker- tcp 0 0 10.224.250.141:5051 10.224.250.141:54186 ESTABLISHED 0 3576153019 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:36442 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:16118 10.224.250.141:23011 ESTABLISHED 0 3740378650 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:33034 ESTABLISHED 0 3079226645 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:36762 ESTABLISHED 0 3842031448 12/mesos-slave tcp 0 0 10.224.250.141:51722 10.224.250.141:5051 ESTABLISHED 0 40155848 12393/mesos-docker- tcp 0 0 10.224.250.141:5051 10.224.250.141:61572 ESTABLISHED 0 3836299412 12/mesos-slave tcp 0 0 10.224.250.141:51376 10.224.250.141:23861 ESTABLISHED 0 3842056313 12/mesos-slave tcp 0 0 10.224.250.141:54772 10.224.250.141:28775 ESTABLISHED 0 3492954430 12/mesos-slave tcp 0 0 10.224.250.141:55216 10.224.250.141:16983 ESTABLISHED 0 3493028861 
12/mesos-slave tcp 0 0 10.224.250.141:61842 10.224.250.141:29773 ESTABLISHED 0 3740416004 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:38572 ESTABLISHED 0 37387297 12/mesos-slave tcp 0 0 10.224.250.141:3021 10.224.250.141:21132 ESTABLISHED 0 3740394147 44401/mesos-docker- tcp 0 0 10.224.250.141:19218 10.224.250.141:5051 ESTABLISHED 0 4136264768 18590/mesos-docker- tcp 0 0 10.224.250.141:5051 10.224.250.141:56588 ESTABLISHED 0 3416674768 12/mesos-slave tcp 0 0 10.224.250.141:8298 10.224.251.19:9200 ESTABLISHED 0 3079193555 - tcp 0 0 10.224.250.141:5051 10.224.250.141:33126 ESTABLISHED 0 3079245621 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:33028 ESTABLISHED 0 3079230543 12/mesos-slave tcp 0 0 10.224.250.141:49888 10.224.250.141:3755 ESTABLISHED 0 4282222554 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:51514 ESTABLISHED 0 40153565 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:1804 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:37158 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:51000 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.128.36.129:65089 ESTABLISHED 0 45635232 12/mesos-slave tcp 0 0 10.224.250.141:17413 10.224.250.141:5954 ESTABLISHED 0 37382319 12126/mesos-docker- tcp 0 0 10.224.250.141:1221 10.224.250.141:58114 ESTABLISHED 0 40124604 10097/mesos-docker- tcp 0 0 10.224.250.141:33076 10.224.251.13:2181 CLOSE_WAIT 0 3142153646 2337/mesos-docker-e tcp 0 0 10.224.250.141:54650 10.224.250.141:27037 ESTABLISHED 0 42265209 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:13018 ESTABLISHED 0 3492843260 12/mesos-slave tcp 0 0 10.224.250.141:54186 10.224.250.141:5051 ESTABLISHED 0 3576215341 30286/mesos-docker- tcp 0 0 10.224.250.141:31382 10.224.250.141:5051 ESTABLISHED 0 3740408171 44401/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:12920 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:50978 ESTABLISHED 0 40097343 12/mesos-slave tcp 0 0 10.224.250.141:51064 10.224.250.141:5051 ESTABLISHED 0 40130607 10268/mesos-docker- tcp 0 0 10.224.250.141:39770 10.224.250.141:29477 ESTABLISHED 0 3079314551 12/mesos-slave tcp 0 0 10.224.250.141:10227 10.224.250.141:15888 ESTABLISHED 0 3740390757 43454/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:56288 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:982 10.140.75.52:2049 ESTABLISHED 0 3023625350 - tcp 0 0 10.224.250.141:5051 10.224.250.141:50982 ESTABLISHED 0 40120056 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:6162 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:30642 10.224.250.141:5051 ESTABLISHED 0 3740260352 42590/mesos-docker- tcp 0 0 10.224.250.141:58908 10.224.251.19:9200 ESTABLISHED 1005 45472510 - tcp 0 0 10.224.250.141:10050 10.140.130.254:24152 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:58200 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:60126 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:13768 10.224.250.141:5051 ESTABLISHED 0 3492965845 2337/mesos-docker-e tcp 0 0 10.224.250.141:10050 10.140.130.254:21856 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:65020 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:50156 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:7072 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:23075 10.224.250.141:29312 ESTABLISHED 0 3576242221 30286/mesos-docker- tcp 0 0 10.224.250.141:26987 10.224.250.141:46802 ESTABLISHED 0 3740399629 42591/mesos-docker- tcp 0 0 10.224.250.141:5051 10.224.250.141:51098 ESTABLISHED 0 40045511 12/mesos-slave tcp 0 0 
10.224.250.141:18850 10.224.250.141:5051 ESTABLISHED 0 45596782 35331/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:14944 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:57274 10.224.251.19:5046 ESTABLISHED 0 45544074 - tcp 0 0 10.224.250.141:11317 10.224.250.141:51934 ESTABLISHED 0 3740393558 42834/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:39736 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:33086 ESTABLISHED 0 3079254240 12/mesos-slave tcp 0 0 10.224.250.141:23011 10.224.250.141:16118 ESTABLISHED 0 3740403779 43188/mesos-docker- tcp 0 0 10.224.250.141:28251 10.224.250.141:9682 ESTABLISHED 0 42327602 29984/mesos-docker- tcp 0 0 10.224.250.141:5051 10.224.250.141:64762 ESTABLISHED 0 42357686 12/mesos-slave tcp 0 0 10.224.250.141:15957 10.224.250.141:42868 ESTABLISHED 0 4091760400 9480/mesos-docker-e tcp 0 0 127.0.0.1:30414 127.0.0.1:8010 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:56250 91.189.88.152:80 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:3997 10.224.250.141:32076 ESTABLISHED 0 4136435601 25084/mesos-docker- tcp 0 0 10.224.250.141:5159 10.224.250.141:43638 ESTABLISHED 0 40141704 11270/mesos-docker- tcp 0 0 10.224.250.141:55802 10.224.250.141:21653 ESTABLISHED 0 40247900 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:18400 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:30990 ESTABLISHED 0 3740388750 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:11538 ESTABLISHED 0 3737010455 12/mesos-slave tcp 0 0 10.224.250.141:15130 10.224.250.141:5051 ESTABLISHED 0 3493068724 7181/mesos-docker-e tcp 0 0 10.224.250.141:10050 10.140.130.254:14608 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:14026 ESTABLISHED 0 3492953674 12/mesos-slave tcp 0 0 10.224.250.141:30638 10.224.250.141:5051 ESTABLISHED 0 3740352284 42589/mesos-docker- tcp 0 0 10.224.250.141:47972 10.224.250.141:28635 ESTABLISHED 0 29514216 12/mesos-slave tcp 0 0 10.224.250.141:30854 10.224.250.141:5051 ESTABLISHED 0 3740277685 43023/mesos-docker- tcp 0 0 10.224.250.141:31388 10.224.250.141:5051 ESTABLISHED 0 3740276594 44480/mesos-docker- tcp 0 0 10.224.250.141:33086 10.224.250.141:5051 ESTABLISHED 0 3079250764 1349/mesos-docker-e tcp 0 0 10.224.250.141:40300 10.224.250.141:20951 ESTABLISHED 0 4136227525 12/mesos-slave tcp 0 0 10.224.250.141:51444 10.224.250.141:5051 ESTABLISHED 0 40172036 11499/mesos-docker- tcp 0 0 10.224.250.141:61052 10.224.251.20:9200 ESTABLISHED 1005 45594172 - tcp 0 0 10.224.250.141:10050 10.140.130.254:35174 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:30990 10.224.250.141:5051 ESTABLISHED 0 3740276407 43188/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:21936 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:15888 10.224.250.141:10227 ESTABLISHED 0 3740403901 12/mesos-slave tcp 0 0 10.224.250.141:17538 10.224.250.141:24321 ESTABLISHED 0 3416683666 12/mesos-slave tcp 0 0 10.224.250.141:33169 10.224.250.141:59776 ESTABLISHED 0 3740330825 43699/mesos-docker- tcp 0 0 10.224.250.141:50172 10.224.250.141:11481 ESTABLISHED 0 3756393512 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:51064 ESTABLISHED 0 40116621 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:54530 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:31388 ESTABLISHED 0 3740425431 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:52962 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:7065 10.224.250.141:63078 ESTABLISHED 0 3079306536 3099/mesos-docker-e tcp 0 0 10.224.250.141:16983 10.224.250.141:55216 ESTABLISHED 0 3493000675 4638/mesos-docker-e tcp 0 0 
10.224.250.141:51378 10.224.250.141:5051 ESTABLISHED 0 40152310 11270/mesos-docker- tcp 0 0 10.224.250.141:5051 10.224.250.141:62184 ESTABLISHED 0 3836414537 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:61994 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:18710 10.224.250.141:9631 ESTABLISHED 0 3836272474 12/mesos-slave tcp 0 0 10.224.250.141:14026 10.224.250.141:5051 ESTABLISHED 0 3492975360 3776/mesos-docker-e tcp 0 0 10.224.250.141:13979 10.224.250.141:45844 ESTABLISHED 0 3740656304 721/mesos-docker-ex tcp 0 0 10.224.250.141:10050 10.140.130.254:55474 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:46322 10.224.250.141:5051 ESTABLISHED 0 29524900 25861/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:23328 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:19218 ESTABLISHED 0 4136238308 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:58418 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:27037 10.224.250.141:54650 ESTABLISHED 0 42243756 29016/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:4322 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:54036 ESTABLISHED 0 4282218231 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:23936 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:32106 ESTABLISHED 0 3190252979 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:40160 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:22510 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:911 10.140.75.52:32803 ESTABLISHED 0 3079299436 - tcp 0 0 10.224.250.141:10050 10.140.130.254:35576 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:30694 ESTABLISHED 0 3740380816 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:59884 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:34626 10.224.250.141:18119 ESTABLISHED 0 3740420429 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:19528 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:38952 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:43638 10.224.250.141:5159 ESTABLISHED 0 40168530 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:51968 ESTABLISHED 0 40252644 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:31118 ESTABLISHED 0 3740390756 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.251.14:24084 ESTABLISHED 0 3141724602 12/mesos-slave tcp 0 0 10.224.250.141:50978 10.224.250.141:5051 ESTABLISHED 0 39872370 10097/mesos-docker- tcp 0 0 127.0.0.1:30426 127.0.0.1:8010 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:20598 ESTABLISHED 0 4136444754 12/mesos-slave tcp 0 0 10.224.250.141:33028 10.224.250.141:5051 ESTABLISHED 0 3079228590 857/mesos-docker-ex tcp 0 0 10.224.250.141:10050 10.140.130.254:14040 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:12856 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:34118 10.224.250.141:5051 ESTABLISHED 0 3079292926 3269/mesos-docker-e tcp 0 0 10.224.250.141:10050 10.140.130.254:37600 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:31138 10.224.250.141:5051 ESTABLISHED 0 3740366638 43699/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:1308 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:54754 10.224.250.141:27083 ESTABLISHED 0 3740403902 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:16476 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:64180 10.224.250.141:5051 ESTABLISHED 0 42323416 29984/mesos-docker- tcp 0 0 10.224.250.141:36912 10.224.250.141:26347 ESTABLISHED 0 3492979393 12/mesos-slave tcp 0 0 10.224.250.141:63078 10.224.250.141:7065 ESTABLISHED 0 3079310552 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:52034 
TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:55784 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:48570 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:28520 ESTABLISHED 0 4091776119 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:60258 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:6480 10.224.250.141:16863 ESTABLISHED 0 3079246321 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:53182 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:58726 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:22694 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:59558 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:42682 10.224.250.141:2327 ESTABLISHED 0 40186891 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:30642 ESTABLISHED 0 3740276395 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:63980 ESTABLISHED 0 42282107 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:8868 ESTABLISHED 0 3756352160 12/mesos-slave tcp 0 0 10.224.250.141:54036 10.224.250.141:5051 ESTABLISHED 0 4282248858 20144/mesos-docker- tcp 0 0 10.224.250.141:64554 10.224.250.141:18195 ESTABLISHED 0 3492851468 12/mesos-slave tcp 0 0 10.224.250.141:51934 10.224.250.141:11317 ESTABLISHED 0 3740366487 12/mesos-slave tcp 0 0 10.224.250.141:16140 10.224.250.141:12171 ESTABLISHED 0 3077727572 12/mesos-slave tcp 0 0 10.224.250.141:30199 10.224.250.141:64320 ESTABLISHED 0 3736989201 1678/mesos-docker-e tcp 0 0 10.224.250.141:5051 10.224.250.141:46322 ESTABLISHED 0 29545727 12/mesos-slave tcp 0 0 10.224.250.141:31134 10.224.250.141:5051 ESTABLISHED 0 3740390762 43698/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:41346 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:18549 10.224.250.141:62406 ESTABLISHED 0 3740328098 43023/mesos-docker- tcp 0 0 10.224.250.141:2281 10.224.250.141:8956 ESTABLISHED 0 3190264603 47895/mesos-docker- tcp 0 0 10.224.250.141:31116 10.224.250.141:5051 ESTABLISHED 0 3740314193 43454/mesos-docker- tcp 0 0 10.224.250.141:61896 10.224.250.141:8183 ESTABLISHED 0 3493125166 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:13768 ESTABLISHED 0 3492927823 12/mesos-slave tcp 0 0 10.224.250.141:12171 10.224.250.141:16140 ESTABLISHED 0 3079052786 866/mesos-docker-ex tcp 0 0 10.224.250.141:9631 10.224.250.141:18710 ESTABLISHED 0 3836280810 12451/mesos-docker- tcp 0 0 10.224.250.141:39030 10.224.250.141:13891 ESTABLISHED 0 40016891 12/mesos-slave tcp 0 0 10.224.250.141:5507 10.224.250.141:1472 ESTABLISHED 0 40012793 10268/mesos-docker- tcp 0 0 10.224.250.141:62406 10.224.250.141:18549 ESTABLISHED 0 3740352300 12/mesos-slave tcp 0 0 10.224.250.141:23861 10.224.250.141:51376 ESTABLISHED 0 3842042448 28888/mesos-docker- tcp 0 0 10.224.250.141:2200 10.128.36.70:62281 ESTABLISHED 0 44678236 - tcp 0 0 10.224.250.141:5051 10.224.250.141:34118 ESTABLISHED 0 3079299415 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:64252 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:14322 ESTABLISHED 0 3493005230 12/mesos-slave tcp 0 0 10.224.250.141:11176 50.31.164.148:443 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:1432 10.140.75.23:9700 ESTABLISHED 0 27359744 - tcp 0 0 10.224.250.141:21132 10.224.250.141:3021 ESTABLISHED 0 3740404294 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:42444 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:31631 10.224.250.141:9908 ESTABLISHED 0 39998400 11499/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:60824 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:58114 10.224.250.141:1221 ESTABLISHED 0 
40096477 12/mesos-slave tcp 0 0 127.0.0.1:30404 127.0.0.1:8010 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:21949 10.224.250.141:4472 ESTABLISHED 0 3740399628 42590/mesos-docker- tcp 0 0 10.224.250.141:5051 10.224.250.141:34040 ESTABLISHED 0 3079300351 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:47340 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:9908 10.224.250.141:31631 ESTABLISHED 0 40168711 12/mesos-slave tcp 0 0 10.224.250.141:21653 10.224.250.141:55802 ESTABLISHED 0 40218355 13824/mesos-docker- tcp 0 0 10.224.250.141:26347 10.224.250.141:36912 ESTABLISHED 0 3492979394 3776/mesos-docker-e tcp 0 0 10.224.250.141:64320 10.224.250.141:30199 ESTABLISHED 0 3737017345 12/mesos-slave tcp 0 0 10.224.250.141:13891 10.224.250.141:39030 ESTABLISHED 0 40012771 10098/mesos-docker- tcp 0 0 10.224.250.141:2327 10.224.250.141:42682 ESTABLISHED 0 40173961 11813/mesos-docker- tcp 0 0 10.224.250.141:8868 10.224.250.141:5051 ESTABLISHED 0 3756333775 3676/mesos-docker-e tcp 0 0 10.224.250.141:2136 10.224.250.141:16127 ESTABLISHED 0 3740394960 12/mesos-slave tcp 0 0 10.224.250.141:57042 10.224.250.141:15557 ESTABLISHED 0 3079243342 12/mesos-slave tcp 0 0 10.224.250.141:51514 10.224.250.141:5051 ESTABLISHED 0 40156752 11813/mesos-docker- tcp 0 0 10.224.250.141:5954 10.224.250.141:17413 ESTABLISHED 0 37326685 12/mesos-slave tcp 0 0 10.224.250.141:63980 10.224.250.141:5051 ESTABLISHED 0 42277657 29016/mesos-docker- tcp 0 0 127.0.0.1:30424 127.0.0.1:8010 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:31400 10.224.250.141:5051 ESTABLISHED 0 3740389926 44585/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:46582 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:40416 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:23606 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:51394 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:3755 10.224.250.141:49888 ESTABLISHED 0 4282269463 20144/mesos-docker- tcp 0 0 10.224.250.141:29773 10.224.250.141:61842 ESTABLISHED 0 3740327341 43698/mesos-docker- tcp 0 0 10.224.250.141:62070 10.224.251.14:5050 ESTABLISHED 0 3141745693 12/mesos-slave tcp 0 0 10.224.250.141:32076 10.224.250.141:3997 ESTABLISHED 0 4136459320 12/mesos-slave tcp 0 0 10.224.250.141:59776 10.224.250.141:33169 ESTABLISHED 0 3740393697 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:50914 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:25360 10.224.250.141:33005 ESTABLISHED 0 3836423256 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:39172 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:33034 10.224.250.141:5051 ESTABLISHED 0 3079248059 866/mesos-docker-ex tcp 0 0 10.224.250.141:31210 10.224.251.14:2181 ESTABLISHED 1005 3740697745 - tcp 0 0 10.224.250.141:10050 10.140.130.254:63180 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:3545 10.224.250.141:23738 ESTABLISHED 0 40045512 10450/mesos-docker- tcp 0 0 10.224.250.141:10050 10.140.130.254:34700 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:5051 10.224.250.141:64180 ESTABLISHED 0 42313164 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:33692 ESTABLISHED 0 3740699806 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:7448 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:64762 10.224.250.141:5051 ESTABLISHED 0 42414198 31794/mesos-docker- tcp 0 0 10.224.250.141:56588 10.224.250.141:5051 ESTABLISHED 0 3416659879 39774/mesos-docker- tcp 0 0 10.224.250.141:24321 10.224.250.141:17538 ESTABLISHED 0 3416667590 39774/mesos-docker- tcp 0 0 10.224.250.141:51968 10.224.250.141:5051 ESTABLISHED 0 40247899 13824/mesos-docker- tcp 0 0 10.224.250.141:20598 
10.224.250.141:5051 ESTABLISHED 0 4136456492 25084/mesos-docker- tcp 0 0 10.224.250.141:29312 10.224.250.141:23075 ESTABLISHED 0 3576193897 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:50030 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:54606 10.224.250.141:26069 ESTABLISHED 0 3740399627 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:38522 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:52698 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:4472 10.224.250.141:21949 ESTABLISHED 0 3740329281 12/mesos-slave tcp 0 0 10.224.250.141:5051 10.224.250.141:51444 ESTABLISHED 0 40170822 12/mesos-slave tcp 0 0 10.224.250.141:10050 10.140.130.254:37214 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:56864 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:10050 10.140.130.254:55382 TIME_WAIT 0 0 - tcp 0 0 10.224.250.141:33692 10.224.250.141:5051 ESTABLISHED 0 3740656303 721/mesos-docker-ex tcp 0 0 10.224.250.141:8183 10.224.250.141:61896 ESTABLISHED 0 3493103916 7181/mesos-docker-e tcp6 0 0 :::10050 :::* LISTEN 994 34159426 - tcp6 0 0 :::54055 :::* LISTEN 29 27403 - tcp6 0 0 :::111 :::* LISTEN 0 43175 - tcp6 0 0 :::19573 :::* LISTEN 0 29610 - udp 0 0 10.224.250.141:48091 10.224.251.19:12201 ESTABLISHED 0 42328356 - udp 0 0 10.224.250.141:48652 10.224.251.19:12201 ESTABLISHED 0 3836414631 - udp 0 0 10.224.250.141:48714 10.224.251.19:12201 ESTABLISHED 0 3079260350 - udp 0 0 10.224.250.141:49501 10.224.251.21:12201 ESTABLISHED 0 40185956 - udp 0 0 10.224.250.141:49540 10.224.251.21:12201 ESTABLISHED 0 3836279276 - udp 0 0 10.224.250.141:50117 10.224.251.20:12201 ESTABLISHED 0 4282292814 - udp 0 0 10.224.250.141:50299 10.224.251.20:12201 ESTABLISHED 0 3079301861 - udp 0 0 10.224.250.141:51896 10.224.251.20:12201 ESTABLISHED 0 3740409103 - udp 0 0 10.224.250.141:53510 10.224.251.20:12201 ESTABLISHED 0 4136450588 - udp 0 0 10.224.250.141:56156 10.224.251.21:12201 ESTABLISHED 0 3842032406 - udp 0 0 10.224.250.141:57900 10.224.251.21:12201 ESTABLISHED 0 3079299441 - udp 0 0 10.224.250.141:59313 10.224.251.20:12201 ESTABLISHED 0 3740393274 - udp 0 0 10.224.250.141:62643 10.224.251.19:12201 ESTABLISHED 0 42283057 - udp 0 0 10.224.250.141:62735 10.224.251.20:12201 ESTABLISHED 0 40143546 - udp 0 0 10.224.250.141:64571 10.224.251.21:12201 ESTABLISHED 0 3079253519 - udp 0 0 0.0.0.0:111 0.0.0.0:* 0 43170 - udp 0 0 172.18.0.1:123 0.0.0.0:* 38 2363789221 - udp 0 0 172.17.0.1:123 0.0.0.0:* 38 63555 - udp 0 0 10.224.254.141:123 0.0.0.0:* 38 29624 - udp 0 0 10.224.250.141:123 0.0.0.0:* 38 29623 - udp 0 0 127.0.0.1:123 0.0.0.0:* 0 23669 - udp 0 0 0.0.0.0:123 0.0.0.0:* 0 23662 - udp 0 0 0.0.0.0:161 0.0.0.0:* 0 17693 - udp 0 0 127.0.0.1:755 0.0.0.0:* 0 54381 - udp 0 0 0.0.0.0:756 0.0.0.0:* 0 43171 - udp 0 0 10.224.250.141:2123 10.224.251.20:12201 ESTABLISHED 0 3740409135 - udp 0 0 10.224.250.141:2506 10.224.251.20:12201 ESTABLISHED 0 40168728 - udp 0 0 10.224.250.141:6617 10.224.251.19:12201 ESTABLISHED 0 3493091885 - udp 0 0 10.224.250.141:7408 10.224.251.20:12201 ESTABLISHED 0 3740409109 - udp 0 0 10.224.250.141:8134 10.224.251.20:12201 ESTABLISHED 0 3740408895 - udp 0 0 10.224.250.141:8652 10.224.251.19:12201 ESTABLISHED 0 40238643 - udp 0 0 10.224.250.141:9349 10.224.251.21:12201 ESTABLISHED 0 29514238 - udp 0 0 10.224.250.141:10729 10.224.251.20:12201 ESTABLISHED 0 3190196205 - udp 0 0 10.224.250.141:12051 10.224.251.21:12201 ESTABLISHED 0 3740408915 - udp 0 0 10.224.250.141:15835 10.224.251.20:12201 ESTABLISHED 0 3492901715 - udp 0 0 10.224.250.141:16814 10.224.251.19:12201 ESTABLISHED 0 37353383 - 
udp 0 0 10.224.250.141:16947 10.224.251.20:12201 ESTABLISHED 0 4091750367 - udp 0 0 10.224.250.141:18518 10.224.251.20:12201 ESTABLISHED 0 3740409094 - udp 0 0 10.224.250.141:18882 10.224.251.21:12201 ESTABLISHED 0 3492876816 - udp 0 0 0.0.0.0:20642 0.0.0.0:* 0 29607 - udp 0 0 10.224.250.141:22242 10.224.251.19:12201 ESTABLISHED 0 40045482 - udp 0 0 10.224.250.141:22319 10.224.251.21:12201 ESTABLISHED 0 3740390694 - udp 0 0 10.224.250.141:22789 10.224.251.19:12201 ESTABLISHED 0 40045506 - udp 0 0 10.224.250.141:23279 10.224.251.21:12201 ESTABLISHED 0 4136226448 - udp 0 0 10.224.250.141:24338 10.224.251.21:12201 ESTABLISHED 0 3493019159 - udp 0 0 10.224.250.141:25377 10.224.251.20:12201 ESTABLISHED 0 3756364212 - udp 0 0 10.224.250.141:26252 10.224.251.20:12201 ESTABLISHED 0 3416673179 - udp 0 0 10.224.250.141:28815 10.224.251.21:12201 ESTABLISHED 0 3740395861 - udp 0 0 0.0.0.0:30380 0.0.0.0:* 29 27397 - udp 0 0 0.0.0.0:31527 0.0.0.0:* 0 17692 - udp 0 0 10.224.250.141:31949 10.224.251.21:12201 ESTABLISHED 0 3740427504 - udp 0 0 10.224.250.141:32906 10.224.251.20:12201 ESTABLISHED 0 3576186665 - udp 0 0 10.224.250.141:34876 10.140.130.253:1514 ESTABLISHED 991 2586286298 - udp 0 0 10.224.250.141:35787 10.224.251.19:12201 ESTABLISHED 0 3079248205 - udp 0 0 10.224.250.141:37369 10.224.251.19:12201 ESTABLISHED 0 3740429985 - udp 0 0 10.224.250.141:38479 10.224.251.21:12201 ESTABLISHED 0 3493018649 - udp 0 0 10.224.250.141:40455 10.224.251.20:12201 ESTABLISHED 0 40186293 - udp 0 0 10.224.250.141:40464 10.224.251.21:12201 ESTABLISHED 0 3740428886 - udp 0 0 10.224.250.141:40562 10.224.251.20:12201 ESTABLISHED 0 3740683794 - udp 0 0 10.224.250.141:44205 10.224.251.20:12201 ESTABLISHED 0 42408309 - udp 0 0 10.224.250.141:45143 10.224.251.20:12201 ESTABLISHED 0 40116619 - udp 0 0 10.224.250.141:46073 10.224.251.21:12201 ESTABLISHED 0 3740394289 - udp 0 0 10.224.250.141:46197 10.224.251.21:12201 ESTABLISHED 0 45592027 - udp 0 0 10.224.250.141:46422 10.224.251.19:12201 ESTABLISHED 0 40104693 - udp 0 0 10.224.250.141:46612 10.224.251.21:12201 ESTABLISHED 0 3740393873 - udp 0 0 10.224.250.141:47592 10.224.251.21:12201 ESTABLISHED 0 3736989214 - udp6 0 0 :::63854 :::* 0 29609 - udp6 0 0 :::111 :::* 0 43173 - udp6 0 0 :::123 :::* 0 23663 - udp6 0 0 :::755 :::* 0 43174 - udp6 0 0 :::32615 :::* 29 27401 - Active UNIX domain sockets (servers and established) Proto RefCnt Flags Type State I-Node PID/Program name Path unix 4 [ ] DGRAM 2586299470 - /queue/ossec/queue unix 2 [ ACC ] STREAM LISTENING 45485752 - /tmp/ssh-OdD0XoNo4U/agent.5621 unix 2 [ ACC ] STREAM LISTENING 82553652 - private/tlsmgr unix 2 [ ACC ] STREAM LISTENING 52381 - /var/lib/gssproxy/default.sock unix 2 [ ACC ] STREAM LISTENING 17694 - /var/agentx/master unix 2 [ ACC ] STREAM LISTENING 44678384 - /tmp/ssh-fKSoZJI37n/agent.13011 unix 2 [ ACC ] STREAM LISTENING 29483 - /run/gssproxy.sock unix 3 [ ] DGRAM 2586276599 - /var/ossec/queue/alerts/execq unix 2 [ ACC ] STREAM LISTENING 82553688 - private/error unix 2 [ ACC ] STREAM LISTENING 82553691 - private/retry unix 2 [ ACC ] STREAM LISTENING 82553694 - private/discard unix 2 [ ACC ] STREAM LISTENING 82553697 - private/local unix 2 [ ACC ] STREAM LISTENING 82553700 - private/virtual unix 2 [ ACC ] STREAM LISTENING 82553703 - private/lmtp unix 2 [ ACC ] STREAM LISTENING 82553706 - private/anvil unix 2 [ ACC ] STREAM LISTENING 82553709 - private/scache unix 2 [ ACC ] STREAM LISTENING 82553655 - private/rewrite unix 2 [ ACC ] STREAM LISTENING 82553641 - public/pickup unix 2 [ ACC ] STREAM LISTENING 
82553658 - private/bounce unix 2 [ ACC ] STREAM LISTENING 82553645 - public/cleanup unix 2 [ ACC ] STREAM LISTENING 82553661 - private/defer unix 2 [ ACC ] STREAM LISTENING 82553648 - public/qmgr unix 2 [ ACC ] STREAM LISTENING 82553670 - public/flush unix 2 [ ACC ] STREAM LISTENING 82553664 - private/trace unix 2 [ ACC ] STREAM LISTENING 82553685 - public/showq unix 2 [ ACC ] STREAM LISTENING 82553667 - private/verify unix 2 [ ACC ] STREAM LISTENING 82553673 - private/proxymap unix 2 [ ACC ] STREAM LISTENING 82553676 - private/proxywrite unix 2 [ ACC ] STREAM LISTENING 82553679 - private/smtp unix 2 [ ACC ] STREAM LISTENING 82553682 - private/relay unix 2 [ ACC ] STREAM LISTENING 26698 - /run/lvm/lvmetad.socket unix 2 [ ACC ] STREAM LISTENING 26706 - /run/lvm/lvmpolld.socket unix 2 [ ] DGRAM 83 - /run/systemd/notify unix 2 [ ] DGRAM 85 - /run/systemd/cgroups-agent unix 2 [ ACC ] STREAM LISTENING 94 - /run/systemd/journal/stdout unix 3 [ ] DGRAM 97 - /run/systemd/journal/socket unix 25 [ ] DGRAM 99 - /dev/log unix 2 [ ACC ] STREAM LISTENING 35970 - /var/run/rpcbind.sock unix 2 [ ACC ] STREAM LISTENING 35972 - /var/run/dbus/system_bus_socket unix 2 [ ACC ] STREAM LISTENING 265343638 - /var/run/docker.sock unix 2 [ ] DGRAM 24744 - /run/systemd/shutdownd unix 2 [ ] DGRAM 24746 - /run/systemd/journal/syslog unix 2 [ ACC ] STREAM LISTENING 265316524 - /run/docker/libnetwork/0797dcb126b2f4b47506ac3a10a113ac32e9b10e6a44a655dd9b56db50fa79d5.sock unix 2 [ ACC ] SEQPACKET LISTENING 24750 - /run/udev/control unix 2 [ ACC ] SEQPACKET LISTENING 17633 - /var/run/teamd/bond2.sock unix 2 [ ACC ] SEQPACKET LISTENING 51427 - /var/run/teamd/bond0.sock unix 2 [ ACC ] STREAM LISTENING 8931 - /run/systemd/private unix 2 [ ACC ] SEQPACKET LISTENING 3201955817 - /var/run/ladvd.sock unix 2 [ ACC ] STREAM LISTENING 4135349244 - /var/run/docker/libcontainerd/docker-containerd.sock unix 3 [ ] STREAM CONNECTED 3836398245 14872/mesos-docker- unix 3 [ ] STREAM CONNECTED 3836295456 12501/docker unix 3 [ ] STREAM CONNECTED 3740396046 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079260332 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537449422 - unix 3 [ ] STREAM CONNECTED 12582 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 40168715 11549/docker unix 3 [ ] STREAM CONNECTED 37373737 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3842043251 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537453114 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740291960 43699/mesos-docker- unix 3 [ ] STREAM CONNECTED 3079241588 1082/docker unix 3 [ ] STREAM CONNECTED 82553675 - unix 3 [ ] STREAM CONNECTED 40156563 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 29514226 25911/docker unix 3 [ ] STREAM CONNECTED 3756376353 3726/docker unix 3 [ ] STREAM CONNECTED 3537437029 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537445038 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537432885 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40116592 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740378572 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 45592011 35409/docker unix 3 [ ] STREAM CONNECTED 4282305847 20144/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537449434 - unix 3 [ ] STREAM CONNECTED 27837 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 3740366716 43807/docker unix 3 [ ] STREAM CONNECTED 3740296754 43188/mesos-docker- unix 3 [ ] STREAM CONNECTED 40186283 12443/docker unix 3 [ ] STREAM CONNECTED 3537445030 - /var/run/docker.sock unix 2 [ ] DGRAM 366704 - unix 3 [ ] 
STREAM CONNECTED 40171440 11863/docker unix 3 [ ] STREAM CONNECTED 3740382980 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537417037 - unix 3 [ ] STREAM CONNECTED 29558010 25861/mesos-docker- unix 2 [ ] STREAM CONNECTED 3738901294 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537450141 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537445024 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537449436 - unix 3 [ ] STREAM CONNECTED 20640 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 45479559 - unix 3 [ ] STREAM CONNECTED 29524901 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740315059 42768/docker unix 3 [ ] STREAM CONNECTED 3537417051 - unix 3 [ ] STREAM CONNECTED 82553678 - unix 3 [ ] STREAM CONNECTED 53437 - unix 3 [ ] STREAM CONNECTED 3737010450 1678/mesos-docker-e unix 3 [ ] STREAM CONNECTED 3537440114 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537449426 - unix 3 [ ] STREAM CONNECTED 82553666 - unix 3 [ ] STREAM CONNECTED 32897 - unix 3 [ ] STREAM CONNECTED 40117861 11324/docker unix 3 [ ] STREAM CONNECTED 40117441 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740364579 44457/docker unix 3 [ ] STREAM CONNECTED 45567413 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 29558011 25861/mesos-docker- unix 3 [ ] STREAM CONNECTED 19641 - unix 3 [ ] STREAM CONNECTED 82553674 - unix 3 [ ] STREAM CONNECTED 3537417060 - unix 3 [ ] STREAM CONNECTED 3537453116 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079300352 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079241592 1400/docker unix 3 [ ] STREAM CONNECTED 3740690196 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740296755 43188/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537437027 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537433975 - unix 2 [ ] DGRAM 2586277606 - unix 3 [ ] STREAM CONNECTED 37387365 12176/docker unix 3 [ ] STREAM CONNECTED 4091756536 9533/docker unix 3 [ ] STREAM CONNECTED 3537417046 - unix 3 [ ] STREAM CONNECTED 3537433967 - unix 3 [ ] STREAM CONNECTED 3537449432 - unix 3 [ ] STREAM CONNECTED 3537446213 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3492930026 2337/mesos-docker-e unix 3 [ ] STREAM CONNECTED 3416673020 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40247902 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3756391372 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3737010017 1729/docker unix 3 [ ] STREAM CONNECTED 40164732 11499/mesos-docker- unix 3 [ ] STREAM CONNECTED 3740399677 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 42367863 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537433961 - unix 2 [ ] DGRAM 372825 - unix 3 [ ] STREAM CONNECTED 3537437025 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537449420 - unix 3 [ ] STREAM CONNECTED 3201955819 - unix 3 [ ] STREAM CONNECTED 3079242349 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3493019144 4638/mesos-docker-e unix 3 [ ] STREAM CONNECTED 40132428 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 4135361160 - /var/run/docker/libcontainerd/docker-containerd.sock unix 3 [ ] STREAM CONNECTED 3836398252 14922/docker unix 3 [ ] STREAM CONNECTED 3537453120 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537400655 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 45603433 - unix 3 [ ] STREAM CONNECTED 3756396689 3726/docker unix 3 [ ] STREAM CONNECTED 3740380967 43800/docker unix 3 [ ] STREAM CONNECTED 3737010016 1729/docker unix 3 [ ] STREAM CONNECTED 33111134 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 40192479 - 
/var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537450147 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537433969 - unix 3 [ ] STREAM CONNECTED 42274061 29066/docker unix 3 [ ] STREAM CONNECTED 3740364440 43800/docker unix 3 [ ] STREAM CONNECTED 3576222247 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537449412 - unix 3 [ ] STREAM CONNECTED 3416640111 39774/mesos-docker- unix 3 [ ] STREAM CONNECTED 3190257276 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40141700 11270/mesos-docker- unix 3 [ ] STREAM CONNECTED 3756396690 3726/docker unix 3 [ ] STREAM CONNECTED 82553672 - unix 2 [ ] DGRAM 48233 - unix 3 [ ] STREAM CONNECTED 40012783 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740357942 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537450143 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537445026 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537428692 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079313673 3319/docker unix 3 [ ] STREAM CONNECTED 82553677 - unix 3 [ ] STREAM CONNECTED 40138243 11324/docker unix 3 [ ] STREAM CONNECTED 40010114 10505/docker unix 3 [ ] STREAM CONNECTED 3492980578 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3836383053 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740389821 43698/mesos-docker- unix 3 [ ] STREAM CONNECTED 3740395446 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3736993494 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740358466 44533/docker unix 3 [ ] STREAM CONNECTED 3537440120 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537433971 - unix 3 [ ] STREAM CONNECTED 45497224 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40242672 13874/docker unix 3 [ ] STREAM CONNECTED 45596785 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537450139 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740403910 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40104552 10097/mesos-docker- unix 3 [ ] STREAM CONNECTED 3740315144 43454/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537450145 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537433979 - unix 3 [ ] STREAM CONNECTED 3079295962 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 45482612 - unix 3 [ ] STREAM CONNECTED 3190246184 47895/mesos-docker- unix 3 [ ] STREAM CONNECTED 82553681 - unix 3 [ ] STREAM CONNECTED 45592809 35409/docker unix 3 [ ] STREAM CONNECTED 3537450137 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537434881 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537453118 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40116590 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079244634 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537437002 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079254252 1400/docker unix 3 [ ] STREAM CONNECTED 3740341712 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3493004109 - /var/run/docker.sock unix 2 [ ] DGRAM 82537048 - unix 3 [ ] STREAM CONNECTED 3836424366 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537449438 - unix 3 [ ] STREAM CONNECTED 1446398707 - unix 3 [ ] STREAM CONNECTED 40141699 11270/mesos-docker- unix 3 [ ] STREAM CONNECTED 4136236521 18640/docker unix 3 [ ] STREAM CONNECTED 3740352279 42590/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537450151 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079243329 857/mesos-docker-ex unix 3 [ ] STREAM CONNECTED 33046866 - unix 3 [ ] STREAM CONNECTED 3740407812 42748/docker unix 3 [ ] STREAM CONNECTED 3537450149 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40156561 - 
/var/run/docker.sock unix 3 [ ] STREAM CONNECTED 29548048 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740417093 43556/docker unix 3 [ ] STREAM CONNECTED 3836398305 14922/docker unix 3 [ ] STREAM CONNECTED 3756359350 3676/mesos-docker-e unix 3 [ ] STREAM CONNECTED 3537447057 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 45614704 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40250615 13874/docker unix 3 [ ] STREAM CONNECTED 45479560 - unix 3 [ ] STREAM CONNECTED 3740398614 42884/docker unix 3 [ ] STREAM CONNECTED 3740315145 43454/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537417049 - unix 3 [ ] STREAM CONNECTED 3537417043 - unix 3 [ ] STREAM CONNECTED 40099109 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740330917 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537440116 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537433965 - unix 3 [ ] STREAM CONNECTED 3416640110 39774/mesos-docker- unix 3 [ ] STREAM CONNECTED 82553710 - unix 3 [ ] STREAM CONNECTED 3740272553 42834/mesos-docker- unix 3 [ ] STREAM CONNECTED 35998 - /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 55349 - unix 3 [ ] STREAM CONNECTED 3537417055 - unix 2 [ ] DGRAM 265343636 - unix 3 [ ] STREAM CONNECTED 3493040535 4820/docker unix 2 [ ] DGRAM 2586286299 - unix 3 [ ] STREAM CONNECTED 48265 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 40184932 11863/docker unix 3 [ ] STREAM CONNECTED 4135372010 - unix 3 [ ] STREAM CONNECTED 3537417044 - unix 3 [ ] STREAM CONNECTED 82553711 - unix 3 [ ] STREAM CONNECTED 45261 - /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 82553671 - unix 3 [ ] STREAM CONNECTED 3740399772 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740399679 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537417056 - unix 3 [ ] STREAM CONNECTED 3537437004 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 18601 - unix 3 [ ] STREAM CONNECTED 3740402434 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537449428 - unix 3 [ ] STREAM CONNECTED 45603435 - unix 3 [ ] STREAM CONNECTED 3740366684 43672/docker unix 3 [ ] STREAM CONNECTED 3740352280 42590/mesos-docker- unix 3 [ ] STREAM CONNECTED 40128299 10214/docker unix 3 [ ] STREAM CONNECTED 3740315049 42746/docker unix 3 [ ] STREAM CONNECTED 3537437031 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537445034 - /var/run/docker.sock unix 2 [ ] DGRAM 62529 - unix 3 [ ] STREAM CONNECTED 40115461 10214/docker unix 3 [ ] STREAM CONNECTED 3842051308 28888/mesos-docker- unix 3 [ ] STREAM CONNECTED 3492902793 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 45595012 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40206736 12393/mesos-docker- unix 3 [ ] STREAM CONNECTED 3842053173 28952/docker unix 3 [ ] STREAM CONNECTED 3740393585 42748/docker unix 3 [ ] STREAM CONNECTED 3537417045 - unix 3 [ ] STREAM CONNECTED 3537449418 - unix 3 [ ] STREAM CONNECTED 3493026056 4820/docker unix 3 [ ] STREAM CONNECTED 3079236437 12/mesos-slave unix 3 [ ] STREAM CONNECTED 40104562 10150/docker unix 3 [ ] STREAM CONNECTED 3740394821 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 82553663 - unix 3 [ ] STREAM CONNECTED 51233 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 40104553 10097/mesos-docker- unix 3 [ ] STREAM CONNECTED 3756377517 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537449450 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537433977 - unix 3 [ ] STREAM CONNECTED 29558017 25911/docker unix 3 [ ] STREAM CONNECTED 3740287696 44533/docker unix 3 [ ] STREAM CONNECTED 3493037074 - 
/var/run/docker.sock unix 3 [ ] STREAM CONNECTED 38007 - /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 3537400651 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 45610260 - unix 3 [ ] STREAM CONNECTED 3079243330 857/mesos-docker-ex unix 3 [ ] STREAM CONNECTED 3537417053 - unix 3 [ ] STREAM CONNECTED 3201955820 - unix 3 [ ] STREAM CONNECTED 45259 - unix 3 [ ] STREAM CONNECTED 3740378898 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537428699 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740429779 44457/docker unix 3 [ ] STREAM CONNECTED 3537449430 - unix 3 [ ] STREAM CONNECTED 3416660391 39875/docker unix 3 [ ] STREAM CONNECTED 1446478406 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 82553707 - unix 3 [ ] STREAM CONNECTED 39472899 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40012797 10319/docker unix 3 [ ] STREAM CONNECTED 3537433981 - unix 3 [ ] STREAM CONNECTED 45260 - unix 3 [ ] STREAM CONNECTED 40010115 10505/docker unix 3 [ ] STREAM CONNECTED 4282222555 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3842051307 28888/mesos-docker- unix 3 [ ] STREAM CONNECTED 3740279580 43023/mesos-docker- unix 3 [ ] STREAM CONNECTED 3078918642 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537453112 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537400653 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 82553708 - unix 3 [ ] DGRAM 51284 - unix 3 [ ] STREAM CONNECTED 45605276 - unix 3 [ ] STREAM CONNECTED 4091778157 9533/docker unix 3 [ ] STREAM CONNECTED 3537417052 - unix 3 [ ] STREAM CONNECTED 3537445036 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 57955 - unix 3 [ ] STREAM CONNECTED 45567412 - unix 3 [ ] STREAM CONNECTED 3756359349 3676/mesos-docker-e unix 3 [ ] STREAM CONNECTED 3537449416 - unix 2 [ ] DGRAM 27352598 - unix 3 [ ] DGRAM 51285 - unix 3 [ ] STREAM CONNECTED 4091780137 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079228259 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 17635 - unix 3 [ ] STREAM CONNECTED 3537417061 - unix 3 [ ] STREAM CONNECTED 40210180 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40168228 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3492977293 3881/docker unix 2 [ ] DGRAM 34159421 - unix 3 [ ] STREAM CONNECTED 3740429782 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537440118 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3736989209 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3492964269 2389/docker unix 3 [ ] STREAM CONNECTED 82553705 - unix 3 [ ] STREAM CONNECTED 3740677835 778/docker unix 3 [ ] STREAM CONNECTED 3740674773 - unix 3 [ ] STREAM CONNECTED 3537417057 - unix 3 [ ] STREAM CONNECTED 3076255639 1577/docker unix 3 [ ] STREAM CONNECTED 3737013474 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537453108 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079314555 3319/docker unix 2 [ ] DGRAM 1446472377 - unix 3 [ ] STREAM CONNECTED 37388351 12126/mesos-docker- unix 3 [ ] STREAM CONNECTED 3740395445 44457/docker unix 3 [ ] STREAM CONNECTED 82553665 - unix 3 [ ] STREAM CONNECTED 40164731 11499/mesos-docker- unix 3 [ ] STREAM CONNECTED 40012785 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537417054 - unix 3 [ ] STREAM CONNECTED 3537445028 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537453104 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3493019143 4638/mesos-docker-e unix 3 [ ] STREAM CONNECTED 40206735 12393/mesos-docker- unix 3 [ ] STREAM CONNECTED 40174742 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740276503 - 
/var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537417048 - unix 3 [ ] STREAM CONNECTED 3537449414 - unix 3 [ ] STREAM CONNECTED 3493035112 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079236438 12/mesos-slave unix 3 [ ] STREAM CONNECTED 40115492 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 82553669 - unix 3 [ ] STREAM CONNECTED 43535 - unix 3 [ ] STREAM CONNECTED 3537433973 - unix 3 [ ] STREAM CONNECTED 3537437006 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079242330 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740404300 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740402721 42768/docker unix 3 [ ] STREAM CONNECTED 3737010451 1678/mesos-docker-e unix 3 [ ] STREAM CONNECTED 3537417047 - unix 3 [ ] STREAM CONNECTED 3190270238 47945/docker unix 3 [ ] STREAM CONNECTED 3492958047 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740678486 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 3740398615 42884/docker unix 3 [ ] STREAM CONNECTED 3537449452 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 4282289544 20194/docker unix 3 [ ] STREAM CONNECTED 3740279581 43023/mesos-docker- unix 3 [ ] STREAM CONNECTED 3576232394 30336/docker unix 3 [ ] STREAM CONNECTED 3493020735 4820/docker unix 3 [ ] STREAM CONNECTED 21703 - unix 3 [ ] STREAM CONNECTED 37388350 12126/mesos-docker- unix 3 [ ] STREAM CONNECTED 4136228801 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3836278373 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3756364206 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079307664 3149/docker unix 3 [ ] STREAM CONNECTED 43088 - unix 3 [ ] STREAM CONNECTED 3537417050 - unix 3 [ ] STREAM CONNECTED 3537445032 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537449424 - unix 3 [ ] STREAM CONNECTED 3190246185 47895/mesos-docker- unix 3 [ ] STREAM CONNECTED 206083933 - unix 3 [ ] STREAM CONNECTED 48231 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 3740327326 43073/docker unix 3 [ ] STREAM CONNECTED 3492930027 2337/mesos-docker-e unix 3 [ ] STREAM CONNECTED 45603424 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 51411 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 3740352286 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537417059 - unix 3 [ ] STREAM CONNECTED 3537433939 - unix 3 [ ] STREAM CONNECTED 42282115 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3492972588 2389/docker unix 3 [ ] STREAM CONNECTED 53374 - unix 3 [ ] STREAM CONNECTED 42363668 31854/docker unix 3 [ ] STREAM CONNECTED 4282305846 20144/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537433963 - unix 3 [ ] STREAM CONNECTED 40010669 11549/docker unix 3 [ ] STREAM CONNECTED 3836398244 14872/mesos-docker- unix 3 [ ] STREAM CONNECTED 3740272552 42834/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537437033 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 82553668 - unix 3 [ ] STREAM CONNECTED 3537440122 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740402751 43807/docker unix 3 [ ] STREAM CONNECTED 3740389820 43698/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537434879 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079299416 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740291961 43699/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537417062 - unix 3 [ ] STREAM CONNECTED 3079255813 1074/docker unix 2 [ ] DGRAM 2586269189 - unix 3 [ ] STREAM CONNECTED 51541 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 3740314137 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537417058 - unix 3 [ ] STREAM CONNECTED 3537433941 - unix 3 
[ ] STREAM CONNECTED 3737001479 1729/docker unix 3 [ ] STREAM CONNECTED 3079225004 1577/docker unix 3 [ ] STREAM CONNECTED 206128718 - /run/systemd/journal/stdout unix 2 [ ] STREAM CONNECTED 4135351212 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740390640 42746/docker unix 3 [ ] STREAM CONNECTED 3537433959 - unix 3 [ ] STREAM CONNECTED 20748 - unix 3 [ ] STREAM CONNECTED 37391367 12176/docker unix 3 [ ] STREAM CONNECTED 4282285568 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537434885 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537437008 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079246473 - unix 3 [ ] STREAM CONNECTED 4136210209 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3492860907 - /var/run/docker.sock unix 3 [ ] DGRAM 3201976997 - unix 2 [ ] DGRAM 45478766 - unix 3 [ ] STREAM CONNECTED 3740393862 43555/docker unix 3 [ ] STREAM CONNECTED 3416667591 39875/docker unix 3 [ ] STREAM CONNECTED 3202001529 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 40192520 12443/docker unix 3 [ ] STREAM CONNECTED 3537445012 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3492900012 48825/mesos-docker- unix 3 [ ] STREAM CONNECTED 40013190 10268/mesos-docker- unix 3 [ ] STREAM CONNECTED 3079250754 866/mesos-docker-ex unix 3 [ ] STREAM CONNECTED 3836421387 14922/docker unix 3 [ ] STREAM CONNECTED 3740398610 43455/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537437000 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079247239 1400/docker unix 3 [ ] STREAM CONNECTED 3078678751 1522/mesos-docker-e unix 3 [ ] STREAM CONNECTED 82553687 - unix 3 [ ] STREAM CONNECTED 3836267469 12451/mesos-docker- unix 3 [ ] STREAM CONNECTED 3493118647 7231/docker unix 3 [ ] STREAM CONNECTED 3079299359 - /var/run/docker.sock unix 2 [ ] DGRAM 31784 - unix 3 [ ] STREAM CONNECTED 40107939 10450/mesos-docker- unix 3 [ ] STREAM CONNECTED 4136442568 25137/docker unix 3 [ ] STREAM CONNECTED 3079299437 3319/docker unix 3 [ ] STREAM CONNECTED 42417420 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 4136441394 - /var/run/docker.sock unix 2 [ ] DGRAM 45543068 - unix 3 [ ] STREAM CONNECTED 40245652 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 4136209148 18640/docker unix 3 [ ] STREAM CONNECTED 82553660 - unix 3 [ ] STREAM CONNECTED 13537 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 3537445006 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 22667 - /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 37353376 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537437016 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 29494 - /run/gssproxy.sock unix 3 [ ] STREAM CONNECTED 3740286651 43555/docker unix 3 [ ] STREAM CONNECTED 3740398609 43455/mesos-docker- unix 3 [ ] STREAM CONNECTED 82553699 - unix 3 [ ] STREAM CONNECTED 39105 - unix 3 [ ] STREAM CONNECTED 29554517 25911/docker unix 3 [ ] STREAM CONNECTED 3740420939 44641/docker unix 3 [ ] STREAM CONNECTED 3576244711 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079288546 3149/docker unix 3 [ ] STREAM CONNECTED 48243 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 3740678545 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 45579737 35409/docker unix 3 [ ] STREAM CONNECTED 42172789 29066/docker unix 3 [ ] STREAM CONNECTED 4136460438 25137/docker unix 3 [ ] STREAM CONNECTED 3537433937 - unix 3 [ ] STREAM CONNECTED 3079238475 1082/docker unix 3 [ ] STREAM CONNECTED 82553693 - unix 3 [ ] STREAM CONNECTED 82553640 - unix 3 [ ] STREAM CONNECTED 3740420940 - /var/run/docker.sock unix 3 [ ] STREAM 
CONNECTED 4282222577 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 4136447645 25084/mesos-docker- unix 3 [ ] STREAM CONNECTED 3740329294 42748/docker unix 2 [ ] DGRAM 45543063 - unix 3 [ ] STREAM CONNECTED 3740315165 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3493113998 7231/docker unix 3 [ ] STREAM CONNECTED 82553698 - unix 3 [ ] STREAM CONNECTED 4091781124 9480/mesos-docker-e unix 3 [ ] STREAM CONNECTED 3576234274 30286/mesos-docker- unix 3 [ ] STREAM CONNECTED 40226279 13824/mesos-docker- unix 3 [ ] STREAM CONNECTED 49339 - unix 3 [ ] STREAM CONNECTED 42281011 29016/mesos-docker- unix 3 [ ] STREAM CONNECTED 3740626624 - unix 3 [ ] STREAM CONNECTED 3740341814 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740405847 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079253458 1349/mesos-docker-e unix 3 [ ] STREAM CONNECTED 3740341980 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 82553657 - unix 3 [ ] STREAM CONNECTED 40169888 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740393881 43238/docker unix 3 [ ] STREAM CONNECTED 3493003714 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3416686803 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 47182 - unix 3 [ ] STREAM CONNECTED 40087208 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740683788 778/docker unix 3 [ ] STREAM CONNECTED 35951 - unix 3 [ ] STREAM CONNECTED 42286161 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740622629 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537449442 - unix 2 [ ] DGRAM 45543066 - unix 3 [ ] STREAM CONNECTED 42317189 30054/docker unix 3 [ ] STREAM CONNECTED 40184925 11813/mesos-docker- unix 3 [ ] STREAM CONNECTED 3493106094 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 4136235818 18590/mesos-docker- unix 3 [ ] STREAM CONNECTED 3836280823 12501/docker unix 3 [ ] STREAM CONNECTED 3740390641 42746/docker unix 3 [ ] STREAM CONNECTED 3492870057 48880/docker unix 3 [ ] STREAM CONNECTED 3079273260 3149/docker unix 3 [ ] STREAM CONNECTED 3537433949 - unix 3 [ ] STREAM CONNECTED 14440 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 45624553 - unix 3 [ ] STREAM CONNECTED 44653335 - unix 3 [ ] STREAM CONNECTED 37381205 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537417041 - unix 3 [ ] STREAM CONNECTED 42271987 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40238639 13874/docker unix 3 [ ] STREAM CONNECTED 40093395 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537437014 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3078678752 1522/mesos-docker-e unix 3 [ ] STREAM CONNECTED 82553684 - unix 3 [ ] STREAM CONNECTED 42414194 31794/mesos-docker- unix 3 [ ] STREAM CONNECTED 40226278 13824/mesos-docker- unix 3 [ ] STREAM CONNECTED 4136447644 25084/mesos-docker- unix 3 [ ] STREAM CONNECTED 4136457435 25137/docker unix 3 [ ] STREAM CONNECTED 3537440110 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537433985 - unix 3 [ ] STREAM CONNECTED 3537400660 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079299410 3269/mesos-docker-e unix 3 [ ] STREAM CONNECTED 3842024427 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740389922 44585/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537436998 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 82553696 - unix 3 [ ] STREAM CONNECTED 29549954 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740409007 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3190162181 47945/docker unix 3 [ ] STREAM CONNECTED 265352667 - /run/systemd/journal/stdout unix 3 [ ] STREAM CONNECTED 3740461011 44641/docker 
unix 3 [ ] STREAM CONNECTED 3537426259 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3416682881 39875/docker unix 3 [ ] STREAM CONNECTED 3079224996 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 4091750360 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740330922 43555/docker unix 3 [ ] STREAM CONNECTED 82553647 - unix 3 [ ] STREAM CONNECTED 33067274 - unix 3 [ ] STREAM CONNECTED 3740402435 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40192521 12443/docker unix 3 [ ] STREAM CONNECTED 46271 - unix 2 [ ] STREAM CONNECTED 4135351211 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537433991 - unix 3 [ ] STREAM CONNECTED 3493022747 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079250753 866/mesos-docker-ex unix 3 [ ] STREAM CONNECTED 42296087 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3492878067 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3201977000 - unix 3 [ ] STREAM CONNECTED 82553659 - unix 3 [ ] STREAM CONNECTED 3740403980 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740277670 42591/mesos-docker- unix 3 [ ] STREAM CONNECTED 3576213222 30336/docker unix 3 [ ] STREAM CONNECTED 3492852521 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079305704 3099/mesos-docker-e unix 3 [ ] STREAM CONNECTED 40101493 10319/docker unix 3 [ ] STREAM CONNECTED 40129540 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3842032402 28952/docker unix 3 [ ] STREAM CONNECTED 3576244710 30336/docker unix 3 [ ] STREAM CONNECTED 3537433953 - unix 3 [ ] STREAM CONNECTED 3740277674 42768/docker unix 3 [ ] STREAM CONNECTED 3493126211 7231/docker unix 3 [ ] STREAM CONNECTED 82553701 - unix 3 [ ] STREAM CONNECTED 45603426 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740415765 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 42414193 31794/mesos-docker- unix 3 [ ] STREAM CONNECTED 40209451 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 4136449411 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740296745 42589/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537437022 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537449440 - unix 3 [ ] STREAM CONNECTED 3079111999 1074/docker unix 3 [ ] STREAM CONNECTED 45625598 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 45564593 35331/mesos-docker- unix 3 [ ] STREAM CONNECTED 40184924 11813/mesos-docker- unix 3 [ ] STREAM CONNECTED 3740286719 44533/docker unix 3 [ ] STREAM CONNECTED 3537436996 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3492950015 3776/mesos-docker-e unix 3 [ ] DGRAM 3201976995 - unix 3 [ ] STREAM CONNECTED 82553690 - unix 3 [ ] STREAM CONNECTED 45610253 - unix 3 [ ] STREAM CONNECTED 3190239370 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537445008 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 45625599 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40174119 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 4282290686 20194/docker unix 3 [ ] STREAM CONNECTED 3740296744 42589/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537440112 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537433947 - unix 2 [ ] DGRAM 54374 - unix 3 [ ] STREAM CONNECTED 45499098 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740341813 43556/docker unix 3 [ ] STREAM CONNECTED 3079249891 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 82553646 - unix 3 [ ] STREAM CONNECTED 4091781123 9480/mesos-docker-e unix 3 [ ] STREAM CONNECTED 40110725 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537445014 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740314143 43073/docker unix 3 [ ] STREAM 
CONNECTED 3492933460 2389/docker unix 2 [ ] DGRAM 45543065 - unix 3 [ ] STREAM CONNECTED 3740341711 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 82553642 - unix 3 [ ] STREAM CONNECTED 3740388031 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740277669 42591/mesos-docker- unix 3 [ ] STREAM CONNECTED 40107940 10450/mesos-docker- unix 2 [ ] DGRAM 45543064 - unix 3 [ ] STREAM CONNECTED 40087188 10098/mesos-docker- unix 3 [ ] STREAM CONNECTED 3740393276 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 366705 - /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 4136235819 18590/mesos-docker- unix 3 [ ] STREAM CONNECTED 3740394763 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079305705 3099/mesos-docker-e unix 3 [ ] STREAM CONNECTED 3537417039 - unix 3 [ ] STREAM CONNECTED 35952 - unix 3 [ ] STREAM CONNECTED 45625603 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 44653336 - unix 2 [ ] DGRAM 45543070 - unix 3 [ ] STREAM CONNECTED 3537449444 - unix 3 [ ] STREAM CONNECTED 3492891258 48880/docker unix 3 [ ] STREAM CONNECTED 340398 - unix 3 [ ] STREAM CONNECTED 3740393337 43238/docker unix 3 [ ] STREAM CONNECTED 3740363297 - /var/run/docker.sock unix 2 [ ] DGRAM 47254 - unix 3 [ ] STREAM CONNECTED 42296083 29984/mesos-docker- unix 3 [ ] STREAM CONNECTED 3836299413 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740352490 44401/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537433957 - unix 3 [ ] STREAM CONNECTED 3492900011 48825/mesos-docker- unix 3 [ ] STREAM CONNECTED 3079274686 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079228244 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 42413573 31854/docker unix 3 [ ] STREAM CONNECTED 42271031 29066/docker unix 3 [ ] STREAM CONNECTED 3842044502 28952/docker unix 3 [ ] STREAM CONNECTED 3537433983 - unix 3 [ ] STREAM CONNECTED 82553654 - unix 3 [ ] STREAM CONNECTED 3740404382 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740393244 43672/docker unix 3 [ ] STREAM CONNECTED 265328347 - unix 3 [ ] STREAM CONNECTED 3836251132 12501/docker unix 3 [ ] STREAM CONNECTED 3740352489 44401/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537433951 - unix 3 [ ] STREAM CONNECTED 45625604 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 45624554 - unix 3 [ ] STREAM CONNECTED 37367725 12176/docker unix 3 [ ] STREAM CONNECTED 3537433945 - unix 3 [ ] STREAM CONNECTED 3079299411 3269/mesos-docker-e unix 3 [ ] STREAM CONNECTED 3537433931 - unix 3 [ ] DGRAM 3201976996 - unix 3 [ ] STREAM CONNECTED 82553702 - unix 3 [ ] STREAM CONNECTED 3740393149 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537434877 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537433989 - unix 3 [ ] STREAM CONNECTED 3537400662 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3492950016 3776/mesos-docker-e unix 3 [ ] STREAM CONNECTED 82553692 - unix 3 [ ] STREAM CONNECTED 33092130 - /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 40128298 10214/docker unix 3 [ ] STREAM CONNECTED 3740291921 42884/docker unix 3 [ ] STREAM CONNECTED 40013191 10268/mesos-docker- unix 2 [ ] DGRAM 82533020 - unix 3 [ ] STREAM CONNECTED 3740704797 721/mesos-docker-ex unix 3 [ ] STREAM CONNECTED 3740390801 43807/docker unix 3 [ ] STREAM CONNECTED 3740330919 43800/docker unix 3 [ ] STREAM CONNECTED 3537433933 - unix 3 [ ] STREAM CONNECTED 3493106107 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 82553683 - unix 3 [ ] STREAM CONNECTED 82553656 - unix 3 [ ] STREAM CONNECTED 3740390960 44641/docker unix 3 [ ] STREAM CONNECTED 3190123972 - /var/run/docker.sock unix 
3 [ ] STREAM CONNECTED 45609225 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40101492 10319/docker unix 3 [ ] STREAM CONNECTED 4136210203 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3842043253 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 45541308 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740705881 778/docker unix 3 [ ] STREAM CONNECTED 3416682031 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 45564594 35331/mesos-docker- unix 3 [ ] STREAM CONNECTED 3201976999 - unix 3 [ ] STREAM CONNECTED 82553689 - unix 3 [ ] STREAM CONNECTED 42313173 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3493091871 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 40209453 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740677783 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740366633 43606/mesos-docker- unix 3 [ ] STREAM CONNECTED 4091763262 9533/docker unix 3 [ ] STREAM CONNECTED 82553639 - unix 3 [ ] STREAM CONNECTED 42296101 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3836267468 12451/mesos-docker- unix 3 [ ] STREAM CONNECTED 3740394780 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 4282222576 20194/docker unix 3 [ ] STREAM CONNECTED 3537433955 - unix 3 [ ] STREAM CONNECTED 3493082895 7181/mesos-docker-e unix 3 [ ] STREAM CONNECTED 82553662 - unix 3 [ ] STREAM CONNECTED 3740429386 44480/mesos-docker- unix 3 [ ] STREAM CONNECTED 42357687 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 4091750352 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740389921 44585/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537433929 - unix 3 [ ] STREAM CONNECTED 3740363298 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 49338 - unix 3 [ ] STREAM CONNECTED 3740405836 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740415761 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3492897848 48880/docker unix 3 [ ] DGRAM 3201976994 - unix 3 [ ] STREAM CONNECTED 3740390761 43672/docker unix 3 [ ] STREAM CONNECTED 3740277680 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 45546237 - unix 3 [ ] STREAM CONNECTED 3740276545 43556/docker unix 3 [ ] STREAM CONNECTED 3537433987 - unix 3 [ ] STREAM CONNECTED 3537433943 - unix 3 [ ] STREAM CONNECTED 3079253459 1349/mesos-docker-e unix 2 [ ] STREAM CONNECTED 4135351209 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3836383051 - /var/run/docker.sock unix 2 [ ] STREAM CONNECTED 3738941114 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537433935 - unix 3 [ ] STREAM CONNECTED 40137597 11549/docker unix 3 [ ] STREAM CONNECTED 3836267495 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079305709 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 4136233849 18640/docker unix 2 [ ] STREAM CONNECTED 29598 - unix 3 [ ] STREAM CONNECTED 3740429387 44480/mesos-docker- unix 3 [ ] STREAM CONNECTED 3537434887 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3492954439 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079228243 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 82553680 - unix 2 [ ] DGRAM 45543067 - unix 3 [ ] STREAM CONNECTED 3537437012 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 82553686 - unix 3 [ ] STREAM CONNECTED 82553650 - unix 3 [ ] STREAM CONNECTED 3576234275 30286/mesos-docker- unix 3 [ ] STREAM CONNECTED 3201965700 - unix 3 [ ] STREAM CONNECTED 3190162182 47945/docker unix 3 [ ] STREAM CONNECTED 40107875 10150/docker unix 3 [ ] STREAM CONNECTED 3576217255 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079305853 - /var/run/docker.sock unix 2 [ ] STREAM CONNECTED 3738946634 - 
/var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537445004 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 42385324 31854/docker unix 3 [ ] STREAM CONNECTED 3079244662 1577/docker unix 3 [ ] STREAM CONNECTED 82553653 - unix 2 [ ] DGRAM 45628354 - unix 3 [ ] STREAM CONNECTED 3492982417 3881/docker unix 3 [ ] STREAM CONNECTED 3537445010 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3493082894 7181/mesos-docker-e unix 3 [ ] STREAM CONNECTED 40187079 11863/docker unix 3 [ ] STREAM CONNECTED 4136449409 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537453122 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3492988825 3881/docker unix 2 [ ] DGRAM 45543069 - unix 3 [ ] STREAM CONNECTED 3079259140 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 82553643 - unix 3 [ ] STREAM CONNECTED 42312478 30054/docker unix 3 [ ] STREAM CONNECTED 40141708 11324/docker unix 3 [ ] STREAM CONNECTED 3740393670 43073/docker unix 3 [ ] STREAM CONNECTED 3740366559 43238/docker unix 3 [ ] STREAM CONNECTED 40107874 10150/docker unix 2 [ ] DGRAM 35950 - unix 3 [ ] STREAM CONNECTED 3740461012 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 29487 - unix 3 [ ] STREAM CONNECTED 45488735 - unix 3 [ ] STREAM CONNECTED 40130639 10505/docker unix 3 [ ] STREAM CONNECTED 40087187 10098/mesos-docker- unix 3 [ ] STREAM CONNECTED 3740704796 721/mesos-docker-ex unix 3 [ ] STREAM CONNECTED 82553704 - unix 3 [ ] STREAM CONNECTED 82553649 - unix 3 [ ] STREAM CONNECTED 42312479 30054/docker unix 2 [ ] DGRAM 42102594 - unix 3 [ ] STREAM CONNECTED 40168532 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 42296082 29984/mesos-docker- unix 3 [ ] STREAM CONNECTED 40134667 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079247381 1074/docker unix 3 [ ] STREAM CONNECTED 3078964323 1082/docker unix 3 [ ] STREAM CONNECTED 42281012 29016/mesos-docker- unix 2 [ ] STREAM CONNECTED 4135351201 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3740366632 43606/mesos-docker- unix 3 [ ] STREAM CONNECTED 3740393283 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3537400664 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 3079256397 - /var/run/docker.sock unix 3 [ ] STREAM CONNECTED 82553695 - # lsof /var/run/docker.sock COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME dockerd 20181 root 5u unix 0xffff881b27f58000 0t0 265343638 /var/run/docker.sock dockerd 20181 root 6u unix 0xffff8804c947c000 0t0 3756377517 /var/run/docker.sock dockerd 20181 root 12u unix 0xffff880b0bfe3c00 0t0 3836299413 /var/run/docker.sock dockerd 20181 root 22u unix 0xffff881edcf80c00 0t0 3190257276 /var/run/docker.sock dockerd 20181 root 25u unix 0xffff8803f5928400 0t0 4091750352 /var/run/docker.sock dockerd 20181 root 26u unix 0xffff880df1395400 0t0 40169888 /var/run/docker.sock dockerd 20181 root 28u unix 0xffff8801b3cf5800 0t0 40132428 /var/run/docker.sock dockerd 20181 root 31u unix 0xffff8819406ab000 0t0 3740341814 /var/run/docker.sock dockerd 20181 root 37u unix 0xffff8809d4ac7400 0t0 3537450141 /var/run/docker.sock dockerd 20181 root 42u unix 0xffff8804e0f8f400 0t0 3079305709 /var/run/docker.sock dockerd 20181 root 53u unix 0xffff881b2792e000 0t0 39472899 /var/run/docker.sock dockerd 20181 root 56u unix 0xffff8803f592cc00 0t0 4091750360 /var/run/docker.sock dockerd 20181 root 58u unix 0xffff880cd1c65c00 0t0 3492902793 /var/run/docker.sock dockerd 20181 root 60u unix 0xffff880ad22ba400 0t0 4091780137 /var/run/docker.sock dockerd 20181 root 69u unix 0xffff8801b3c1d000 0t0 37381205 /var/run/docker.sock dockerd 20181 root 75u unix 
0xffff880f60d74800 0t0 3079256397 /var/run/docker.sock dockerd 20181 root 79u unix 0xffff8804cc9ab800 0t0 3079228243 /var/run/docker.sock dockerd 20181 root 83u unix 0xffff880bfd3eb400 0t0 42282115 /var/run/docker.sock dockerd 20181 root 84u unix 0xffff8804e0f8c800 0t0 3740393149 /var/run/docker.sock dockerd 20181 root 88u unix 0xffff88070e365000 0t0 3493035112 /var/run/docker.sock dockerd 20181 root 89u unix 0xffff8802ce6fbc00 0t0 40247902 /var/run/docker.sock dockerd 20181 root 90u unix 0xffff8803a36c9000 0t0 40210180 /var/run/docker.sock dockerd 20181 root 92u unix 0xffff880eeec88800 0t0 3493022747 /var/run/docker.sock dockerd 20181 root 94u unix 0xffff880a440d7c00 0t0 3079244634 /var/run/docker.sock dockerd 20181 root 98u unix 0xffff8801bbee9c00 0t0 3079228244 /var/run/docker.sock dockerd 20181 root 100u unix 0xffff8801791d0c00 0t0 40174742 /var/run/docker.sock dockerd 20181 root 101u unix 0xffff881b0d068000 0t0 40245652 /var/run/docker.sock dockerd 20181 root 103u unix 0xffff88189937d800 0t0 3740352286 /var/run/docker.sock dockerd 20181 root 110u unix 0xffff881025c7c400 0t0 40192479 /var/run/docker.sock dockerd 20181 root 112u unix 0xffff881030f5c400 0t0 3079260332 /var/run/docker.sock dockerd 20181 root 118u unix 0xffff880a440d1400 0t0 3079228259 /var/run/docker.sock dockerd 20181 root 124u unix 0xffff8817fb4c8000 0t0 3493004109 /var/run/docker.sock dockerd 20181 root 126u unix 0xffff88071db5d400 0t0 3079299416 /var/run/docker.sock dockerd 20181 root 127u unix 0xffff881daac12400 0t0 3493037074 /var/run/docker.sock dockerd 20181 root 134u unix 0xffff881868669000 0t0 42286161 /var/run/docker.sock dockerd 20181 root 138u unix 0xffff881ecb34c000 0t0 3740330917 /var/run/docker.sock dockerd 20181 root 142u unix 0xffff88102301cc00 0t0 3492954439 /var/run/docker.sock dockerd 20181 root 143u unix 0xffff880c68b12800 0t0 3576217255 /var/run/docker.sock dockerd 20181 root 153u unix 0xffff880cc13c4800 0t0 40209451 /var/run/docker.sock dockerd 20181 root 154u unix 0xffff880cc13c2000 0t0 40209453 /var/run/docker.sock dockerd 20181 root 165u unix 0xffff880c62b4fc00 0t0 3740357942 /var/run/docker.sock dockerd 20181 root 170u unix 0xffff88066473b400 0t0 42271987 /var/run/docker.sock dockerd 20181 root 173u unix 0xffff880a8e1d3000 0t0 40093395 /var/run/docker.sock dockerd 20181 root 178u unix 0xffff8807832ad800 0t0 3836278373 /var/run/docker.sock dockerd 20181 root 183u unix 0xffff880d258b1400 0t0 40174119 /var/run/docker.sock dockerd 20181 root 185u unix 0xffff880e75cf9800 0t0 3079242349 /var/run/docker.sock dockerd 20181 root 186u unix 0xffff880e3cd68000 0t0 3079242330 /var/run/docker.sock dockerd 20181 root 187u unix 0xffff880c3fa93c00 0t0 3079224996 /var/run/docker.sock dockerd 20181 root 188u unix 0xffff880b24a76400 0t0 3740415761 /var/run/docker.sock dockerd 20181 root 192u unix 0xffff8809192ac400 0t0 3079259140 /var/run/docker.sock dockerd 20181 root 193u unix 0xffff88089b007800 0t0 42357687 /var/run/docker.sock dockerd 20181 root 201u unix 0xffff88083a4e9800 0t0 3836267495 /var/run/docker.sock dockerd 20181 root 203u unix 0xffff88103405e800 0t0 4136210203 /var/run/docker.sock dockerd 20181 root 210u unix 0xffff880700e68000 0t0 40012785 /var/run/docker.sock dockerd 20181 root 212u unix 0xffff88038f1d1800 0t0 42417420 /var/run/docker.sock dockerd 20181 root 213u unix 0xffff8820267eb000 0t0 3740277680 /var/run/docker.sock dockerd 20181 root 220u unix 0xffff881f2ff70000 0t0 3190123972 /var/run/docker.sock dockerd 20181 root 221u unix 0xffff8801b3f01400 0t0 4282222555 /var/run/docker.sock dockerd 
20181 root 222u unix 0xffff8808c0869800 0t0 3740677783 /var/run/docker.sock dockerd 20181 root 225u unix 0xffff880a9b617400 0t0 37353376 /var/run/docker.sock dockerd 20181 root 226u unix 0xffff8801372f1c00 0t0 37373737 /var/run/docker.sock dockerd 20181 root 228u unix 0xffff881aed324800 0t0 3740315165 /var/run/docker.sock dockerd 20181 root 236u unix 0xffff88083cb20c00 0t0 4282285568 /var/run/docker.sock dockerd 20181 root 238u unix 0xffff88019ec73400 0t0 3492958047 /var/run/docker.sock dockerd 20181 root 239u unix 0xffff8807128f0800 0t0 4282222577 /var/run/docker.sock dockerd 20181 root 243u unix 0xffff88017ddd0800 0t0 29549954 /var/run/docker.sock dockerd 20181 root 245u unix 0xffff8806ed05c800 0t0 42367863 /var/run/docker.sock dockerd 20181 root 249u unix 0xffff880681b9f000 0t0 3740363297 /var/run/docker.sock dockerd 20181 root 267u unix 0xffff880e27a03c00 0t0 3079249891 /var/run/docker.sock dockerd 20181 root 274u unix 0xffff880d1ba4e800 0t0 3078918642 /var/run/docker.sock dockerd 20181 root 276u unix 0xffff880f60d71000 0t0 3842024427 /var/run/docker.sock dockerd 20181 root 278u unix 0xffff880697138000 0t0 3842043251 /var/run/docker.sock dockerd 20181 root 280u unix 0xffff880377ba1400 0t0 3842043253 /var/run/docker.sock dockerd 20181 root 281u unix 0xffff88085c409400 0t0 3079299359 /var/run/docker.sock dockerd 20181 root 283u unix 0xffff880b7a99c800 0t0 3492980578 /var/run/docker.sock dockerd 20181 root 285u unix 0xffff8809f233d800 0t0 3079300352 /var/run/docker.sock dockerd 20181 root 288u unix 0xffff88102b1ac000 0t0 3079305853 /var/run/docker.sock dockerd 20181 root 292u unix 0xffff881ecfd18400 0t0 3740405847 /var/run/docker.sock dockerd 20181 root 299u unix 0xffff88019ee84400 0t0 3079295962 /var/run/docker.sock dockerd 20181 root 319u unix 0xffff8802db38bc00 0t0 40168228 /var/run/docker.sock dockerd 20181 root 336u unix 0xffff88042e937400 0t0 3079274686 /var/run/docker.sock dockerd 20181 root 340u unix 0xffff880d3b04d400 0t0 4135351209 /var/run/docker.sock dockerd 20181 root 345u unix 0xffff881f2ff76000 0t0 3190239370 /var/run/docker.sock dockerd 20181 root 346u unix 0xffff880a9b842400 0t0 40156561 /var/run/docker.sock dockerd 20181 root 348u unix 0xffff880700e6cc00 0t0 40012783 /var/run/docker.sock dockerd 20181 root 349u unix 0xffff8801a44ae800 0t0 40156563 /var/run/docker.sock dockerd 20181 root 351u unix 0xffff880378415000 0t0 40115492 /var/run/docker.sock dockerd 20181 root 353u unix 0xffff8806443d6400 0t0 3756391372 /var/run/docker.sock dockerd 20181 root 366u unix 0xffff880ad8997400 0t0 40116590 /var/run/docker.sock dockerd 20181 root 369u unix 0xffff880a9b841800 0t0 40116592 /var/run/docker.sock dockerd 20181 root 377u unix 0xffff880220451400 0t0 4136228801 /var/run/docker.sock dockerd 20181 root 378u unix 0xffff880e27288c00 0t0 4136210209 /var/run/docker.sock dockerd 20181 root 395u unix 0xffff880b201dd800 0t0 3740363298 /var/run/docker.sock dockerd 20181 root 398u unix 0xffff88174db31800 0t0 3740405836 /var/run/docker.sock dockerd 20181 root 402u unix 0xffff880dc389b400 0t0 4135351201 /var/run/docker.sock dockerd 20181 root 406u unix 0xffff880c654fa000 0t0 4136441394 /var/run/docker.sock dockerd 20181 root 407u unix 0xffff880bdf264800 0t0 40129540 /var/run/docker.sock dockerd 20181 root 408u unix 0xffff880bdf264400 0t0 40110725 /var/run/docker.sock dockerd 20181 root 414u unix 0xffff8806443d0800 0t0 3756364206 /var/run/docker.sock dockerd 20181 root 418u unix 0xffff881bc0abc400 0t0 3492878067 /var/run/docker.sock dockerd 20181 root 426u unix 0xffff88189937d000 0t0 
3740314137 /var/run/docker.sock dockerd 20181 root 442u unix 0xffff880cb8faec00 0t0 4135351211 /var/run/docker.sock dockerd 20181 root 443u unix 0xffff88038faba400 0t0 4135351212 /var/run/docker.sock dockerd 20181 root 445u unix 0xffff88031d45a400 0t0 40168532 /var/run/docker.sock dockerd 20181 root 465u unix 0xffff880a84d2a000 0t0 4136449409 /var/run/docker.sock dockerd 20181 root 466u unix 0xffff880d67aa7800 0t0 4136449411 /var/run/docker.sock dockerd 20181 root 472u unix 0xffff88035af5bc00 0t0 40117441 /var/run/docker.sock dockerd 20181 root 473u unix 0xffff881d35e6f400 0t0 3576222247 /var/run/docker.sock dockerd 20181 root 513u unix 0xffff88021e164000 0t0 3740402434 /var/run/docker.sock dockerd 20181 root 515u unix 0xffff880b0bfe0800 0t0 40134667 /var/run/docker.sock dockerd 20181 root 516u unix 0xffff880b0bfe6800 0t0 40087208 /var/run/docker.sock dockerd 20181 root 517u unix 0xffff8801f93ac400 0t0 3740378572 /var/run/docker.sock dockerd 20181 root 518u unix 0xffff880681b99400 0t0 3740402435 /var/run/docker.sock dockerd 20181 root 528u unix 0xffff880b24a74000 0t0 3740393276 /var/run/docker.sock dockerd 20181 root 553u unix 0xffff8819fafa9000 0t0 3576244711 /var/run/docker.sock dockerd 20181 root 568u unix 0xffff88035af59000 0t0 40099109 /var/run/docker.sock dockerd 20181 root 623u unix 0xffff880038cc4000 0t0 3740415765 /var/run/docker.sock dockerd 20181 root 629u unix 0xffff880c058b9000 0t0 3740399677 /var/run/docker.sock dockerd 20181 root 639u unix 0xffff8819b7d09800 0t0 3740393283 /var/run/docker.sock dockerd 20181 root 640u unix 0xffff880137320400 0t0 3740396046 /var/run/docker.sock dockerd 20181 root 643u unix 0xffff880cf19f6000 0t0 29524901 /var/run/docker.sock dockerd 20181 root 651u unix 0xffff880c058bc800 0t0 3740399679 /var/run/docker.sock dockerd 20181 root 654u unix 0xffff880af04b8000 0t0 29548048 /var/run/docker.sock dockerd 20181 root 671u unix 0xffff881c39951c00 0t0 3492852521 /var/run/docker.sock dockerd 20181 root 674u unix 0xffff881a3c972800 0t0 3492860907 /var/run/docker.sock dockerd 20181 root 728u unix 0xffff8818f1f96000 0t0 3740276503 /var/run/docker.sock dockerd 20181 root 736u unix 0xffff881d06d64c00 0t0 3493003714 /var/run/docker.sock dockerd 20181 root 738u unix 0xffff880038cc3000 0t0 3740394763 /var/run/docker.sock dockerd 20181 root 745u unix 0xffff880681b9d800 0t0 3740394780 /var/run/docker.sock dockerd 20181 root 761u unix 0xffff880b12b50800 0t0 42296087 /var/run/docker.sock dockerd 20181 root 775u unix 0xffff8805cab6a000 0t0 42313173 /var/run/docker.sock dockerd 20181 root 776u unix 0xffff881ecfdcac00 0t0 3740341711 /var/run/docker.sock dockerd 20181 root 777u unix 0xffff881963580c00 0t0 3740341712 /var/run/docker.sock dockerd 20181 root 778u unix 0xffff880e40f58400 0t0 3740403910 /var/run/docker.sock dockerd 20181 root 780u unix 0xffff8805cab69c00 0t0 42296101 /var/run/docker.sock dockerd 20181 root 798u unix 0xffff8801f93ac800 0t0 3740382980 /var/run/docker.sock dockerd 20181 root 805u unix 0xffff880ba8c14400 0t0 3740395446 /var/run/docker.sock dockerd 20181 root 815u unix 0xffff880137324800 0t0 3740394821 /var/run/docker.sock dockerd 20181 root 843u unix 0xffff8804f6e2a400 0t0 3740403980 /var/run/docker.sock dockerd 20181 root 867u unix 0xffff880038cc3400 0t0 3740388031 /var/run/docker.sock dockerd 20181 root 868u unix 0xffff8804f6e2bc00 0t0 3740409007 /var/run/docker.sock dockerd 20181 root 905u unix 0xffff88086242ec00 0t0 3740404382 /var/run/docker.sock dockerd 20181 root 906u unix 0xffff88196aecc000 0t0 3740399772 /var/run/docker.sock dockerd 20181 
root 971u unix 0xffff881e2913e000 0t0 3416682031 /var/run/docker.sock dockerd 20181 root 979u unix 0xffff880519e5e000 0t0 3416673020 /var/run/docker.sock dockerd 20181 root 984u unix 0xffff8801664ab000 0t0 3416686803 /var/run/docker.sock dockerd 20181 root 1138u unix 0xffff881ecb34ac00 0t0 3740378898 /var/run/docker.sock dockerd 20181 root 1158u unix 0xffff881e12710800 0t0 3740404300 /var/run/docker.sock dockerd 20181 root 1171u unix 0xffff881ecfdc8800 0t0 3740341980 /var/run/docker.sock dockerd 20181 root 1493u unix 0xffff880d92cbc800 0t0 3740678545 /var/run/docker.sock dockerd 20181 root 1501u unix 0xffff8809eed63400 0t0 3537445024 /var/run/docker.sock dockerd 20181 root 1507u unix 0xffff880afa1c7800 0t0 3740622629 /var/run/docker.sock dockerd 20181 root 1518u unix 0xffff880f6c8acc00 0t0 3537450143 /var/run/docker.sock dockerd 20181 root 1519u unix 0xffff8808a4860800 0t0 3537445026 /var/run/docker.sock dockerd 20181 root 1538u unix 0xffff880d17a9d800 0t0 3537453104 /var/run/docker.sock dockerd 20181 root 1539u unix 0xffff880f6c8ad400 0t0 3537450145 /var/run/docker.sock dockerd 20181 root 1540u unix 0xffff8808a4864400 0t0 3537445028 /var/run/docker.sock dockerd 20181 root 1547u unix 0xffff880f6c8ac400 0t0 3537437025 /var/run/docker.sock dockerd 20181 root 1548u unix 0xffff880d17a99400 0t0 3537428692 /var/run/docker.sock dockerd 20181 root 1551u unix 0xffff880169a3d000 0t0 3740690196 /var/run/docker.sock dockerd 20181 root 1555u unix 0xffff8808a4862800 0t0 3537445030 /var/run/docker.sock dockerd 20181 root 1562u unix 0xffff880d17a9b800 0t0 3537432885 /var/run/docker.sock dockerd 20181 root 1569u unix 0xffff880f6c8afc00 0t0 3537437027 /var/run/docker.sock dockerd 20181 root 1570u unix 0xffff8817200c7400 0t0 3537453112 /var/run/docker.sock dockerd 20181 root 1577u unix 0xffff8808a4865000 0t0 3537445032 /var/run/docker.sock dockerd 20181 root 1578u unix 0xffff880f6c8ae000 0t0 3537437029 /var/run/docker.sock dockerd 20181 root 1579u unix 0xffff8808a4866800 0t0 3537445034 /var/run/docker.sock dockerd 20181 root 1580u unix 0xffff880f6c8aac00 0t0 3537437031 /var/run/docker.sock dockerd 20181 root 1581u unix 0xffff8808a4867800 0t0 3537445036 /var/run/docker.sock dockerd 20181 root 1582u unix 0xffff8809f233b800 0t0 3537437033 /var/run/docker.sock dockerd 20181 root 1583u unix 0xffff880caf456800 0t0 3537434885 /var/run/docker.sock dockerd 20181 root 1584u unix 0xffff8808a4862400 0t0 3537445038 /var/run/docker.sock dockerd 20181 root 1585u unix 0xffff8809eed60c00 0t0 3537434881 /var/run/docker.sock dockerd 20181 root 1586u unix 0xffff8817200c2800 0t0 3537453114 /var/run/docker.sock dockerd 20181 root 1587u unix 0xffff8809fa16f000 0t0 3537446213 /var/run/docker.sock dockerd 20181 root 1588u unix 0xffff8809fa16bc00 0t0 3537400651 /var/run/docker.sock dockerd 20181 root 1589u unix 0xffff8809d4ac4c00 0t0 3537450137 /var/run/docker.sock dockerd 20181 root 1590u unix 0xffff8809fa16b400 0t0 3537400653 /var/run/docker.sock dockerd 20181 root 1591u unix 0xffff8809db300400 0t0 3537437012 /var/run/docker.sock dockerd 20181 root 1592u unix 0xffff8809fa16c800 0t0 3537400655 /var/run/docker.sock dockerd 20181 root 1593u unix 0xffff8809d4ac4800 0t0 3537450139 /var/run/docker.sock dockerd 20181 root 1595u unix 0xffff8809db304800 0t0 3537437014 /var/run/docker.sock dockerd 20181 root 1601u unix 0xffff8817200c1000 0t0 3537447057 /var/run/docker.sock dockerd 20181 root 1602u unix 0xffff8817200c4800 0t0 3537453108 /var/run/docker.sock dockerd 20181 root 1603u unix 0xffff8817200c3400 0t0 3537453116 /var/run/docker.sock 
dockerd 20181 root 1604u unix 0xffff881b6e108800 0t0 3537453122 /var/run/docker.sock dockerd 20181 root 1605u unix 0xffff8817200c1800 0t0 3537453118 /var/run/docker.sock dockerd 20181 root 1606u unix 0xffff8817200c0800 0t0 3537453120 /var/run/docker.sock dockerd 20181 root 1607u unix 0xffff881b6e10d800 0t0 3537400660 /var/run/docker.sock dockerd 20181 root 1608u unix 0xffff880f60d76400 0t0 3537400662 /var/run/docker.sock dockerd 20181 root 1609u unix 0xffff88201d270c00 0t0 3537437002 /var/run/docker.sock dockerd 20181 root 1610u unix 0xffff8806a69d9c00 0t0 3537436998 /var/run/docker.sock dockerd 20181 root 1611u unix 0xffff880caf456400 0t0 3537437016 /var/run/docker.sock dockerd 20181 root 1612u unix 0xffff880caf450400 0t0 3537434887 /var/run/docker.sock dockerd 20181 root 1613u unix 0xffff8806a69dc400 0t0 3537400664 /var/run/docker.sock dockerd 20181 root 1614u unix 0xffff8806a69d8c00 0t0 3537436996 /var/run/docker.sock dockerd 20181 root 1615u unix 0xffff8806a69dbc00 0t0 3537437000 /var/run/docker.sock dockerd 20181 root 1616u unix 0xffff88201b0d9400 0t0 3537437008 /var/run/docker.sock dockerd 20181 root 1617u unix 0xffff88201d277c00 0t0 3537437004 /var/run/docker.sock dockerd 20181 root 1618u unix 0xffff88201d275000 0t0 3537437006 /var/run/docker.sock dockerd 20181 root 1619u unix 0xffff880a86979400 0t0 3537445012 /var/run/docker.sock dockerd 20181 root 1620u unix 0xffff88201b0dbc00 0t0 3537426259 /var/run/docker.sock dockerd 20181 root 1621u unix 0xffff88201b0de400 0t0 3537445004 /var/run/docker.sock dockerd 20181 root 1622u unix 0xffff880a86979c00 0t0 3537445006 /var/run/docker.sock dockerd 20181 root 1623u unix 0xffff880a8697a800 0t0 3537445008 /var/run/docker.sock dockerd 20181 root 1624u unix 0xffff880a8697e400 0t0 3537445010 /var/run/docker.sock dockerd 20181 root 1626u unix 0xffff880a8697ac00 0t0 3537445014 /var/run/docker.sock dockerd 20181 root 1627u unix 0xffff880a8697c000 0t0 3537434877 /var/run/docker.sock dockerd 20181 root 1628u unix 0xffff8809eed64400 0t0 3537434879 /var/run/docker.sock dockerd 20181 root 1629u unix 0xffff880caf455c00 0t0 3537437022 /var/run/docker.sock dockerd 20181 root 1630u unix 0xffff88037d2e0000 0t0 3537428699 /var/run/docker.sock dockerd 20181 root 1637u unix 0xffff881fac020800 0t0 3537440110 /var/run/docker.sock dockerd 20181 root 1638u unix 0xffff881fac021400 0t0 3537440112 /var/run/docker.sock dockerd 20181 root 1639u unix 0xffff8809d4ac6800 0t0 3537440114 /var/run/docker.sock dockerd 20181 root 1640u unix 0xffff8809d4ac3800 0t0 3537440116 /var/run/docker.sock dockerd 20181 root 1641u unix 0xffff8809d4ac1800 0t0 3537440118 /var/run/docker.sock dockerd 20181 root 1642u unix 0xffff880f6c8ac800 0t0 3537440120 /var/run/docker.sock dockerd 20181 root 1643u unix 0xffff880f6c8a9c00 0t0 3537440122 /var/run/docker.sock dockerd 20181 root 1644u unix 0xffff880f6c8ac000 0t0 3537450147 /var/run/docker.sock dockerd 20181 root 1645u unix 0xffff880f6c8aa800 0t0 3537449450 /var/run/docker.sock dockerd 20181 root 1646u unix 0xffff880f6c8a8800 0t0 3537449452 /var/run/docker.sock dockerd 20181 root 1647u unix 0xffff880f6c8adc00 0t0 3537450149 /var/run/docker.sock dockerd 20181 root 1648u unix 0xffff8809f233f800 0t0 3537450151 /var/run/docker.sock dockerd 20181 root 1659u unix 0xffff881ab087a800 0t0 3493106094 /var/run/docker.sock dockerd 20181 root 1674u unix 0xffff881da7106000 0t0 3493106107 /var/run/docker.sock dockerd 20181 root 1675u unix 0xffff881b27976c00 0t0 3493091871 /var/run/docker.sock dockerd 20181 root 1939u unix 0xffff8807aed91400 0t0 3737013474 
/var/run/docker.sock dockerd 20181 root 1942u unix 0xffff880ffa53c800 0t0 3736989209 /var/run/docker.sock dockerd 20181 root 1943u unix 0xffff880ffa538400 0t0 3736993494 /var/run/docker.sock dockerd 20181 root 2062u unix 0xffff88060b2b1800 0t0 3836424366 /var/run/docker.sock dockerd 20181 root 2066u unix 0xffff880652f94000 0t0 3836383051 /var/run/docker.sock dockerd 20181 root 2120u unix 0xffff88060b2b0c00 0t0 3836383053 /var/run/docker.sock dockerd 20181 root 2174u unix 0xffff8801ceee2c00 0t0 3738901294 /var/run/docker.sock dockerd 20181 root 2182u unix 0xffff880a7547c000 0t0 3738946634 /var/run/docker.sock dockerd 20181 root 2255u unix 0xffff8801894e2c00 0t0 3738941114 /var/run/docker.sock dockerd 20181 root 2560u unix 0xffff880a8b475800 0t0 3740420940 /var/run/docker.sock dockerd 20181 root 2568u unix 0xffff8810345c0800 0t0 3740429782 /var/run/docker.sock dockerd 20181 root 2578u unix 0xffff8808c086bc00 0t0 3740461012 /var/run/docker.sock 2017-05-31 10:52:06,201:12(0x7ff5626a7700):ZOO_ERROR@handle_socket_error_msg@1746: Socket [10.224.251.13:2181] zk retcode=-4, errno=112(Host is down): failed while receiving a server response 2017-05-31 10:52:06,202:12(0x7ff5626a7700):ZOO_ERROR@handle_socket_error_msg@1722: Socket [10.224.251.12:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2017-05-31 10:52:06,202:12(0x7ff5626a7700):ZOO_INFO@check_events@1728: initiated connection to server [10.224.251.14:2181] 2017-05-31 10:52:06,207:12(0x7ff5626a7700):ZOO_ERROR@handle_socket_error_msg@1746: Socket [10.224.251.14:2181] zk retcode=-4, errno=112(Host is down): failed while receiving a server response 2017-05-31 10:52:09,545:12(0x7ff5626a7700):ZOO_INFO@check_events@1728: initiated connection to server [10.224.251.13:2181] 2017-05-31 10:52:09,546:12(0x7ff5626a7700):ZOO_INFO@check_events@1775: session establishment complete on server [10.224.251.13:2181], sessionId=0x15c425a6bb20002, negotiated timeout=10000 2017-05-31 10:56:10,903:12(0x7ff5626a7700):ZOO_ERROR@handle_socket_error_msg@1746: Socket [10.224.251.13:2181] zk retcode=-4, errno=112(Host is down): failed while receiving a server response 2017-05-31 10:56:10,904:12(0x7ff5626a7700):ZOO_INFO@check_events@1728: initiated connection to server [10.224.251.12:2181] 2017-05-31 10:56:10,919:12(0x7ff5626a7700):ZOO_INFO@check_events@1775: session establishment complete on server [10.224.251.12:2181], sessionId=0x15c425a6bb20002, negotiated timeout=10000 2017-05-31 10:58:19,045:12(0x7ff5626a7700):ZOO_ERROR@handle_socket_error_msg@1746: Socket [10.224.251.12:2181] zk retcode=-4, errno=112(Host is down): failed while receiving a server response 2017-05-31 10:58:19,046:12(0x7ff5626a7700):ZOO_ERROR@handle_socket_error_msg@1722: Socket [10.224.251.14:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2017-05-31 10:58:22,383:12(0x7ff5626a7700):ZOO_INFO@check_events@1728: initiated connection to server [10.224.251.13:2181] 2017-05-31 10:58:22,387:12(0x7ff5626a7700):ZOO_INFO@check_events@1775: session establishment complete on server [10.224.251.13:2181], sessionId=0x15c425a6bb20002, negotiated timeout=10000 E0531 10:58:48.770263 61 process.cpp:2154] Failed to shutdown socket with fd 240: Transport endpoint is not connected E0531 11:35:02.120149 61 process.cpp:2154] Failed to shutdown socket with fd 240: Transport endpoint is not connected E0531 11:42:54.781968 27 slave.cpp:3903] Failed to update resources for container f5cfab67-dd9e-4dbf-a0f9-2cf0271863e2 of executor 
'wgt300_wgcmp_app.7b546762-4454-11e7-ad03-fa163e902889' running task wgt300_wgcmp_app.7b546762-4454-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/21939/cgroup: Failed to open file: No such file or directory E0531 11:42:54.850325 44 slave.cpp:3903] Failed to update resources for container ebe24281-c5bd-4232-92e6-bcc21655e720 of executor 'wgt21_wgcmp_nginx.06a6baaa-44c7-11e7-ad03-fa163e902889' running task wgt21_wgcmp_nginx.06a6baaa-44c7-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/6494/cgroup: Failed to open file: No such file or directory E0531 11:42:54.857800 44 slave.cpp:3903] Failed to update resources for container 16dbf843-df65-4e93-afae-eca982f2fbb8 of executor 'wgt300_wgcmp_adm.59db61ce-419d-11e7-8748-fa163e264aed' running task wgt300_wgcmp_adm.59db61ce-419d-11e7-8748-fa163e264aed on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/19521/cgroup: Failed to open file: No such file or directory E0531 11:42:54.858379 44 slave.cpp:3903] Failed to update resources for container d60f5cef-7967-45eb-9433-b7aec820784e of executor 'wgt14_cwh_uwsgicwh.7b5503ae-4454-11e7-ad03-fa163e902889' running task wgt14_cwh_uwsgicwh.7b5503ae-4454-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/28805/cgroup: Failed to open file: No such file or directory E0531 11:42:54.859057 44 slave.cpp:3903] Failed to update resources for container 828b8f66-b939-4d20-806c-d850db214373 of executor 'wgt21_wgcmp_app.dc61a3d4-422c-11e7-ad03-fa163e902889' running task wgt21_wgcmp_app.dc61a3d4-422c-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/41152/cgroup: Failed to open file: No such file or directory E0531 11:42:56.118301 61 process.cpp:2154] Failed to shutdown socket with fd 14: Transport endpoint is not connected E0531 11:43:19.610195 20 slave.cpp:3903] Failed to update resources for container 8c3e2714-a616-4b6a-94ca-a9edf1471075 of executor 'wgs11_freya_fnginx_nginx.02682686-44c7-11e7-ad03-fa163e902889' running task wgs11_freya_fnginx_nginx.02682686-44c7-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/6136/cgroup: Failed to open file: No such file or directory E0531 11:43:27.435741 39 slave.cpp:3903] Failed to update resources for container 2b434907-5425-494b-a058-630a237743cb of executor 'ci_esmeter_espe.026874aa-44c7-11e7-ad03-fa163e902889' running task ci_esmeter_espe.026874aa-44c7-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/24113/cgroup: Failed to open file: No such file or directory E0531 12:00:09.685317 16 slave.cpp:3903] Failed to update resources for container 71886d15-da52-45ac-8a5a-90e834a01e23 of executor 'wgt21_wot_wgrs_nginx.01cf1ad1-44c7-11e7-ad03-fa163e902889' running task wgt21_wot_wgrs_nginx.01cf1ad1-44c7-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/12048/cgroup: Failed to open file: No 
such file or directory 2017-05-31 12:23:20,964:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 47ms E0531 12:26:26.212996 29 slave.cpp:3903] Failed to update resources for container 9ffa884a-581e-412c-9384-5cb94c2a9f2d of executor 'wgt600_spa_workerscommon.c519346e-4103-11e7-8748-fa163e264aed' running task wgt600_spa_workerscommon.c519346e-4103-11e7-8748-fa163e264aed on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/11176/cgroup: Failed to open file: No such file or directory E0531 12:26:26.262909 23 slave.cpp:3903] Failed to update resources for container 8eac02d3-8b4c-499d-941d-67e37dfc2993 of executor 'wgt21_banw_maintenance.9f798d18-4103-11e7-8748-fa163e264aed' running task wgt21_banw_maintenance.9f798d18-4103-11e7-8748-fa163e264aed on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/41256/cgroup: Failed to open file: No such file or directory E0531 12:26:26.275084 23 slave.cpp:3903] Failed to update resources for container fc5a371d-4e78-4efc-a230-f8ef385d49d0 of executor 'banwperf_banw_app.2227f205-4103-11e7-8748-fa163e264aed' running task banwperf_banw_app.2227f205-4103-11e7-8748-fa163e264aed on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/1469/cgroup: Failed to open file: No such file or directory E0531 12:26:26.358110 46 slave.cpp:3903] Failed to update resources for container 7d4ece2d-98cb-4c4c-988a-aaf1e52b11c1 of executor 'wgt21_banw_app.01cf1ad0-44c7-11e7-ad03-fa163e902889' running task wgt21_banw_app.01cf1ad0-44c7-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/1880/cgroup: Failed to open file: No such file or directory E0531 12:26:27.356802 61 process.cpp:2154] Failed to shutdown socket with fd 13: Transport endpoint is not connected E0531 12:26:27.358404 61 process.cpp:2154] Failed to shutdown socket with fd 13: Transport endpoint is not connected E0531 12:26:27.762145 61 process.cpp:2154] Failed to shutdown socket with fd 13: Transport endpoint is not connected E0531 12:26:28.541404 61 process.cpp:2154] Failed to shutdown socket with fd 13: Transport endpoint is not connected E0531 12:26:49.998587 24 slave.cpp:3903] Failed to update resources for container 066992d6-a912-4927-a205-031cf7dae447 of executor 'wgt600_spa_app.58679535-4103-11e7-8748-fa163e264aed' running task wgt600_spa_app.58679535-4103-11e7-8748-fa163e264aed on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/21590/cgroup: Failed to open file: No such file or directory E0531 12:39:00.287341 61 process.cpp:2154] Failed to shutdown socket with fd 31: Transport endpoint is not connected 2017-05-31 12:42:08,813:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 18ms 2017-05-31 12:42:55,562:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 35ms E0531 12:43:39.313490 53 slave.cpp:3903] Failed to update resources for container 9658300e-b2d4-4fc6-8c15-ed18b0831b85 of executor 'wgt600_spa_workersonevent.01360f1b-44c7-11e7-ad03-fa163e902889' running task wgt600_spa_workersonevent.01360f1b-44c7-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: 
Failed to read /proc/4324/cgroup: Failed to open file: No such file or directory E0531 12:43:39.376415 16 slave.cpp:3903] Failed to update resources for container c7fe430a-99f3-4ebc-bf57-d0ab1480dcc2 of executor 'wgt21_tmscore_app.be0ca00a-452e-11e7-ad03-fa163e902889' running task wgt21_tmscore_app.be0ca00a-452e-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/16202/cgroup: Failed to open file: No such file or directory E0531 12:43:39.402048 13 slave.cpp:3903] Failed to update resources for container 42b9d2ec-89bf-4c8e-b579-927b8ec9579b of executor 'wgt600_wotb_wi_app.beac55dd-42ad-11e7-ad03-fa163e902889' running task wgt600_wotb_wi_app.beac55dd-42ad-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/6640/cgroup: Failed to open file: No such file or directory E0531 12:43:39.402469 13 slave.cpp:3903] Failed to update resources for container e920ac57-59d1-4d05-9012-4e24441678fa of executor 'wgt600_spa_workersgame.a278000d-4103-11e7-8748-fa163e264aed' running task wgt600_spa_workersgame.a278000d-4103-11e7-8748-fa163e264aed on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/43037/cgroup: Failed to open file: No such file or directory E0531 12:43:39.402739 13 slave.cpp:3903] Failed to update resources for container f4094a8c-9e04-4261-b04c-3d4a311d1576 of executor 'wgt21_wowp_wgrs_maintenance.02682685-44c7-11e7-ad03-fa163e902889' running task wgt21_wowp_wgrs_maintenance.02682685-44c7-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/4140/cgroup: Failed to open file: No such file or directory E0531 12:43:39.403175 13 slave.cpp:3903] Failed to update resources for container 8c8e5492-d9b9-444d-b27f-eb50093e5402 of executor 'ci_esmeter_prometheus.564f6d5d-4104-11e7-8748-fa163e264aed' running task ci_esmeter_prometheus.564f6d5d-4104-11e7-8748-fa163e264aed on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/10239/cgroup: Failed to open file: No such file or directory E0531 12:43:39.403587 13 slave.cpp:3903] Failed to update resources for container 5c2db471-632f-457b-b21a-3b956d11a805 of executor 'wgs11_freya_fproduct_product.03010b2d-44c7-11e7-ad03-fa163e902889' running task wgs11_freya_fproduct_product.03010b2d-44c7-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/14522/cgroup: Failed to open file: No such file or directory E0531 12:43:39.403795 13 slave.cpp:3903] Failed to update resources for container a2dfa755-10fb-42ac-bdc5-017644643d87 of executor 'banwperf_banw_maintenance.01360f1a-44c7-11e7-ad03-fa163e902889' running task banwperf_banw_maintenance.01360f1a-44c7-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/1161/cgroup: Failed to open file: No such file or directory E0531 12:43:39.406555 25 slave.cpp:3903] Failed to update resources for container 769a79d1-b618-48c5-b5df-06b16e08ee6c of executor 'wgs11_freya_fwebhook_webhook.0399efd0-44c7-11e7-ad03-fa163e902889' running task 
wgs11_freya_fwebhook_webhook.0399efd0-44c7-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/14341/cgroup: Failed to open file: No such file or directory E0531 12:43:39.407330 25 slave.cpp:3903] Failed to update resources for container f506ecb3-1150-40be-a1f1-6a076963fe74 of executor 'wgs11_freya_fgateway_gatewayclient.311d98f6-4103-11e7-8748-fa163e264aed' running task wgs11_freya_fgateway_gatewayclient.311d98f6-4103-11e7-8748-fa163e264aed on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/9573/cgroup: Failed to open file: No such file or directory E0531 12:43:39.415478 25 slave.cpp:3903] Failed to update resources for container bf40b2cd-c529-4a4d-82b9-d51e7dfe0bd5 of executor 'wgt21_wot_wgrs_workers.02689bbc-44c7-11e7-ad03-fa163e902889' running task wgt21_wot_wgrs_workers.02689bbc-44c7-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/4179/cgroup: Failed to open file: No such file or directory E0531 12:43:39.416038 25 slave.cpp:3903] Failed to update resources for container 426e0db9-134c-4600-b4a9-8ca71f9934d4 of executor 'wgt21_wowp_wgrs_workers.668aa121-41d5-11e7-ad03-fa163e902889' running task wgt21_wowp_wgrs_workers.668aa121-41d5-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/46564/cgroup: Failed to open file: No such file or directory E0531 12:43:39.461283 30 slave.cpp:3903] Failed to update resources for container 6765183e-dbe2-4ae2-a554-b89d39f772a0 of executor 'wgt21_tmscore_workers.df65aea4-452c-11e7-ad03-fa163e902889' running task wgt21_tmscore_workers.df65aea4-452c-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/41692/cgroup: Failed to open file: No such file or directory E0531 12:43:39.548141 54 slave.cpp:3903] Failed to update resources for container f6ddc877-e146-4257-814e-ae46228b01dd of executor 'wgs11_freya_fgateway_gatewayserver.7b548e75-4454-11e7-ad03-fa163e902889' running task wgs11_freya_fgateway_gatewayserver.7b548e75-4454-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/25188/cgroup: Failed to open file: No such file or directory E0531 12:43:42.201082 61 process.cpp:2154] Failed to shutdown socket with fd 15: Transport endpoint is not connected E0531 12:43:43.122143 61 process.cpp:2154] Failed to shutdown socket with fd 85: Transport endpoint is not connected E0531 12:43:45.167506 61 process.cpp:2154] Failed to shutdown socket with fd 44: Transport endpoint is not connected E0531 12:43:45.170045 61 process.cpp:2154] Failed to shutdown socket with fd 44: Transport endpoint is not connected E0531 12:43:45.310912 61 process.cpp:2154] Failed to shutdown socket with fd 44: Transport endpoint is not connected E0531 12:43:45.445400 61 process.cpp:2154] Failed to shutdown socket with fd 44: Transport endpoint is not connected E0531 12:43:45.715076 61 process.cpp:2154] Failed to shutdown socket with fd 13: Transport endpoint is not connected E0531 12:43:45.724380 61 process.cpp:2154] Failed to shutdown socket with fd 61: Transport endpoint is not connected E0531 12:43:50.314066 16 
slave.cpp:3903] Failed to update resources for container fbc7d1a3-5b7e-4985-a2f2-cb7d1f2ff7ba of executor 'wgs11_freya_fimporter_dataimporter.cf3fc6d7-4103-11e7-8748-fa163e264aed' running task wgs11_freya_fimporter_dataimporter.cf3fc6d7-4103-11e7-8748-fa163e264aed on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/17264/cgroup: Failed to open file: No such file or directory E0531 12:43:50.418159 31 slave.cpp:3903] Failed to update resources for container b522f8c3-fe4b-4eda-809a-59109627a3f6 of executor 'wgs11_freya_fmetadata_postgresimporter.01cef3bf-44c7-11e7-ad03-fa163e902889' running task wgs11_freya_fmetadata_postgresimporter.01cef3bf-44c7-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/22231/cgroup: Failed to open file: No such file or directory E0531 12:43:50.469344 29 slave.cpp:3903] Failed to update resources for container ac92a55b-c084-4db8-88e0-48a241f94168 of executor 'wgs11_freya_fimporter_consulimporter.22288e4c-4103-11e7-8748-fa163e264aed' running task wgs11_freya_fimporter_consulimporter.22288e4c-4103-11e7-8748-fa163e264aed on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/7251/cgroup: Failed to open file: No such file or directory E0531 12:44:21.872993 61 process.cpp:2154] Failed to shutdown socket with fd 55: Transport endpoint is not connected 2017-05-31 12:44:39,161:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 166ms 2017-05-31 12:45:09,229:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 36ms 2017-05-31 12:45:26,379:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 471ms E0531 13:02:02.003201 61 process.cpp:2154] Failed to shutdown socket with fd 55: Transport endpoint is not connected 2017-05-31 14:29:02,921:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 14ms 2017-05-31 14:52:54,439:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 23ms 2017-05-31 15:02:05,042:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 40ms 2017-05-31 16:48:01,762:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 31ms 2017-05-31 16:54:22,155:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 11ms 2017-05-31 17:15:36,860:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 33ms E0601 01:58:19.483497 61 process.cpp:2154] Failed to shutdown socket with fd 55: Transport endpoint is not connected 2017-06-01 07:38:17,727:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 15ms E0601 08:13:17.876008 17 slave.cpp:3903] Failed to update resources for container 5dd7a23e-b068-42e0-93a8-0ff13a061747 of executor 'wgt21_wot_exp_maintenance.0481059d-46a2-11e7-8c9b-fa163e3c4349' running task wgt21_wot_exp_maintenance.0481059d-46a2-11e7-8c9b-fa163e3c4349 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/48829/cgroup: Failed to open file: No such file or directory E0601 08:14:24.227057 21 slave.cpp:3903] Failed to update resources for container 11ddb033-2347-41e4-b8af-169cc1330776 of executor 'wgt21_wot_exp_maintenance.2b0bf71e-46a2-11e7-8c9b-fa163e3c4349' running task wgt21_wot_exp_maintenance.2b0bf71e-46a2-11e7-8c9b-fa163e3c4349 on 
status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/14365/cgroup: Failed to open file: No such file or directory 2017-06-01 12:32:32,868:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 15ms 2017-06-01 12:59:34,608:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 43ms E0601 13:40:33.177947 39 slave.cpp:3903] Failed to update resources for container 678c2f45-1e6f-4e8b-8275-879b2a7190fd of executor 'wgt21_wgpm_uwsgi.036eb69e-4471-11e7-ad03-fa163e902889' running task wgt21_wgpm_uwsgi.036eb69e-4471-11e7-ad03-fa163e902889 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/25738/cgroup: Failed to open file: No such file or directory E0601 23:06:16.159608 61 process.cpp:2154] Failed to shutdown socket with fd 147: Transport endpoint is not connected E0602 07:46:05.424010 61 process.cpp:2154] Failed to shutdown socket with fd 20: Transport endpoint is not connected E0602 09:42:01.879622 59 slave.cpp:3903] Failed to update resources for container a1dae546-8ccf-4e5e-8e68-65226a3d3460 of executor 'wgt21_ordo_maintenance.590052c8-4103-11e7-8748-fa163e264aed' running task wgt21_ordo_maintenance.590052c8-4103-11e7-8748-fa163e264aed on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/20862/cgroup: Failed to open file: No such file or directory E0602 09:42:57.649808 45 slave.cpp:3903] Failed to update resources for container 2f660a8a-4aef-4713-abff-2948e3463b21 of executor 'wgt21_ordo_nginx.22292a91-4103-11e7-8748-fa163e264aed' running task wgt21_ordo_nginx.22292a91-4103-11e7-8748-fa163e264aed on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/1885/cgroup: Failed to open file: No such file or directory E0602 10:32:55.532558 61 process.cpp:2154] Failed to shutdown socket with fd 20: Transport endpoint is not connected E0602 12:35:20.841418 61 process.cpp:2154] Failed to shutdown socket with fd 69: Transport endpoint is not connected 2017-06-02 16:50:29,124:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 32ms E0602 17:09:15.157632 61 process.cpp:2154] Failed to shutdown socket with fd 20: Transport endpoint is not connected 2017-06-03 01:53:26,674:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 20ms E0603 03:53:18.972594 61 process.cpp:2154] Failed to shutdown socket with fd 20: Transport endpoint is not connected E0603 04:35:22.940129 61 process.cpp:2154] Failed to shutdown socket with fd 20: Transport endpoint is not connected E0603 07:22:34.595063 61 process.cpp:2154] Failed to shutdown socket with fd 20: Transport endpoint is not connected E0603 07:30:53.358197 61 process.cpp:2154] Failed to shutdown socket with fd 20: Transport endpoint is not connected 2017-06-03 10:16:35,055:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 16ms E0603 15:13:01.959837 61 process.cpp:2154] Failed to shutdown socket with fd 20: Transport endpoint is not connected 2017-06-03 16:28:55,172:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 22ms 2017-06-03 18:36:29,874:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 13ms 2017-06-03 18:50:40,770:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 11ms E0603 22:52:20.867681 
61 process.cpp:2154] Failed to shutdown socket with fd 20: Transport endpoint is not connected E0604 00:45:23.668624 61 process.cpp:2154] Failed to shutdown socket with fd 20: Transport endpoint is not connected 2017-06-04 00:55:40,358:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 16ms 2017-06-04 02:21:09,232:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 13ms E0604 03:24:53.075001 61 process.cpp:2154] Failed to shutdown socket with fd 20: Transport endpoint is not connected 2017-06-04 04:57:19,179:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 19ms E0604 05:38:24.357625 61 process.cpp:2154] Failed to shutdown socket with fd 20: Transport endpoint is not connected E0604 06:22:17.965489 61 process.cpp:2154] Failed to shutdown socket with fd 20: Transport endpoint is not connected E0604 06:59:33.082372 61 process.cpp:2154] Failed to shutdown socket with fd 20: Transport endpoint is not connected 2017-06-04 08:40:40,065:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 25ms E0604 10:24:44.604301 61 process.cpp:2154] Failed to shutdown socket with fd 20: Transport endpoint is not connected E0604 16:04:56.898818 61 process.cpp:2154] Failed to shutdown socket with fd 160: Transport endpoint is not connected E0604 16:24:39.959198 61 process.cpp:2154] Failed to shutdown socket with fd 160: Transport endpoint is not connected 2017-06-04 16:36:40,072:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 14ms E0604 19:56:22.129432 61 process.cpp:2154] Failed to shutdown socket with fd 160: Transport endpoint is not connected E0604 20:50:55.851435 61 process.cpp:2154] Failed to shutdown socket with fd 160: Transport endpoint is not connected 2017-06-04 22:38:10,051:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 17ms E0605 03:26:33.073452 61 process.cpp:2154] Failed to shutdown socket with fd 160: Transport endpoint is not connected 2017-06-05 06:54:11,405:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 15ms E0605 08:36:00.524884 61 process.cpp:2154] Failed to shutdown socket with fd 160: Transport endpoint is not connected E0605 09:38:17.197674 61 process.cpp:2154] Failed to shutdown socket with fd 160: Transport endpoint is not connected E0605 12:12:02.396128 61 process.cpp:2154] Failed to shutdown socket with fd 160: Transport endpoint is not connected 2017-06-05 12:46:33,755:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 18ms E0605 15:12:10.025334 61 process.cpp:2154] Failed to shutdown socket with fd 160: Transport endpoint is not connected 2017-06-05 15:40:28,052:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 19ms 2017-06-05 18:05:23,774:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 20ms 2017-06-05 18:11:34,173:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 14ms E0605 20:40:17.257387 61 process.cpp:2154] Failed to shutdown socket with fd 160: Transport endpoint is not connected E0605 21:55:19.333130 13 slave.cpp:3903] Failed to update resources for container 04a6d901-a44d-4de2-818a-b3cb3591528e of executor 'wgs11_freya_ftitleconfig_titleconfig.ca6adcb1-45fe-11e7-8c9b-fa163e3c4349' running task wgs11_freya_ftitleconfig_titleconfig.ca6adcb1-45fe-11e7-8c9b-fa163e3c4349 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/22374/cgroup: 
Failed to open file: No such file or directory E0605 23:03:18.848070 61 process.cpp:2154] Failed to shutdown socket with fd 160: Transport endpoint is not connected E0606 01:13:21.854162 61 process.cpp:2154] Failed to shutdown socket with fd 160: Transport endpoint is not connected 2017-06-06 01:20:34,329:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 17ms E0606 03:44:57.680379 61 process.cpp:2154] Failed to shutdown socket with fd 160: Transport endpoint is not connected E0606 06:48:26.621417 61 process.cpp:2154] Failed to shutdown socket with fd 160: Transport endpoint is not connected E0606 07:51:54.248551 61 process.cpp:2154] Failed to shutdown socket with fd 69: Transport endpoint is not connected 2017-06-06 08:04:46,678:12(0x7ff5626a7700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 12ms E0606 08:42:38.354423 61 process.cpp:2154] Failed to shutdown socket with fd 69: Transport endpoint is not connected E0606 09:08:32.764663 61 process.cpp:2154] Failed to shutdown socket with fd 92: Transport endpoint is not connected E0606 10:13:48.171314 45 slave.cpp:4423] Container 'e049c91a-f34b-43fc-833f-ab1aa82584f2' for executor 'wgt21_wot_wgus_nginx.b44f2d93-4aa0-11e7-adaf-fa163e902889' of framework a9e7de85-a973-44a7-ab6f-d5258493b9df-0000 failed to start: Failed to run 'docker -H unix:///var/run/docker.sock pull registry.wdo.io/wgus/wgus-nginx:2017.06.06_09.49_feature_docker_29046b': exited with status 1; stderr='unexpected EOF ' E0606 10:13:48.183207 50 slave.cpp:3903] Failed to update resources for container e049c91a-f34b-43fc-833f-ab1aa82584f2 of executor 'wgt21_wot_wgus_nginx.b44f2d93-4aa0-11e7-adaf-fa163e902889' running task wgt21_wot_wgus_nginx.b44f2d93-4aa0-11e7-adaf-fa163e902889 on status update for terminal task, destroying container: Container not found E0606 10:14:45.475520 34 slave.cpp:4423] Container '492e1610-7978-4719-92ce-754d1b55a941' for executor 'wgt21_wot_wgus_nginx.d4b69e65-4aa0-11e7-adaf-fa163e902889' of framework a9e7de85-a973-44a7-ab6f-d5258493b9df-0000 failed to start: Failed to run 'docker -H unix:///var/run/docker.sock pull registry.wdo.io/wgus/wgus-nginx:2017.06.06_09.49_feature_docker_29046b': exited with status 1; stderr='unexpected EOF ' E0606 10:14:45.478303 33 slave.cpp:3903] Failed to update resources for container 492e1610-7978-4719-92ce-754d1b55a941 of executor 'wgt21_wot_wgus_nginx.d4b69e65-4aa0-11e7-adaf-fa163e902889' running task wgt21_wot_wgus_nginx.d4b69e65-4aa0-11e7-adaf-fa163e902889 on status update for terminal task, destroying container: Container not found E0606 10:19:46.384974 40 slave.cpp:4423] Container '7621d499-5be4-44e2-809a-915ce63f88a3' for executor 'wgt21_wot_wgus_nginx.f72502c7-4aa0-11e7-adaf-fa163e902889' of framework a9e7de85-a973-44a7-ab6f-d5258493b9df-0000 failed to start: future discarded E0606 10:19:46.385099 40 slave.cpp:4529] Termination of executor 'wgt21_wot_wgus_nginx.f72502c7-4aa0-11e7-adaf-fa163e902889' of framework a9e7de85-a973-44a7-ab6f-d5258493b9df-0000 failed: unknown container E0606 10:19:47.412472 43 slave.cpp:3903] Failed to update resources for container 7621d499-5be4-44e2-809a-915ce63f88a3 of executor 'wgt21_wot_wgus_nginx.f72502c7-4aa0-11e7-adaf-fa163e902889' running task wgt21_wot_wgus_nginx.f72502c7-4aa0-11e7-adaf-fa163e902889 on status update for terminal task, destroying container: Failed to run 'docker -H unix:///var/run/docker.sock inspect mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.7621d499-5be4-44e2-809a-915ce63f88a3': exited with status 1; stderr='Error: No such 
object: mesos-a9e7de85-a973-44a7-ab6f-d5258493b9df-S8.7621d499-5be4-44e2-809a-915ce63f88a3 ' E0606 10:40:28.557216 61 process.cpp:2154] Failed to shutdown socket with fd 69: Transport endpoint is not connected E0606 11:10:00.787925 61 process.cpp:2154] Failed to shutdown socket with fd 69: Transport endpoint is not connected ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-7630","06/06/2017 18:10:34",5,"Add simple filtering to unversioned operator API ""Add filtering for the following endpoints: - {{/frameworks}} - {{/slaves}} - {{/tasks}} - {{/containers}} We should investigate whether we should use RESTful style or query string to filter the specific resource. We should also figure out whether it's necessary to filter a list of resources.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7643","06/08/2017 17:42:36",2,"The order of isolators provided in '--isolation' flag is not preserved and instead sorted alphabetically ""According to documentation and comments in code the order of the entries in the --isolation flag should specify the ordering of the isolators. Specifically, the `create` and `prepare` calls for each isolator should run serially in the order in which they appear in the --isolation flag, while the `cleanup` call should be serialized in reverse order (with exception of filesystem isolator which is always first). But in fact, the isolators provided in '--isolation' flag are sorted alphabetically. That happens in [this line of code|https://github.com/apache/mesos/blob/master/src/slave/containerizer/mesos/containerizer.cpp#L377]. In this line use of 'set' is done (apparently instead of list or vector) and set is a sorted container.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7652","06/10/2017 03:39:10",3,"Docker image with universal containerizer does not work if WORKDIR is missing in the rootfs. ""hello, used the following docker image recently quay.io/spinnaker/front50:master https://quay.io/repository/spinnaker/front50 Here the link to the Dockerfile https://github.com/spinnaker/front50/blob/master/Dockerfile and here the source {color:blue}FROM java:8 MAINTAINER delivery-engineering@netflix.com COPY . workdir/ WORKDIR workdir RUN GRADLE_USER_HOME=cache ./gradlew buildDeb -x test && \ dpkg -i ./front50-web/build/distributions/*.deb && \ cd .. && \ rm -rf workdir CMD [""""/opt/front50/bin/front50""""]{color} The image works fine with the docker containerizer, but the universal containerizer shows the following in stderr. """"Failed to chdir into current working directory '/workdir': No such file or directory"""" The problem comes from the fact that the Dockerfile creates a workdir but then later removes the created dir as part of a RUN. The docker containerizer has no problem with it if you do docker run -ti --rm quay.io/spinnaker/front50:master bash you get into the working dir, but the universal containerizer fails with the error. 
thanks for your help, Michael""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7660","06/14/2017 00:32:10",3,"HierarchicalAllocator uses the default filter instead of a very long one ""If a framework accepts/refuses an offer using a very long filter, [the {{HierarchicalAllocator}} will use the default {{Filter}} instead|https://github.com/apache/mesos/blob/master/src/master/allocator/mesos/hierarchical.cpp#L1046-L1052]. Meaning that it will filter the resources for only 5 seconds. This can happen when a framework sets {{Filter::refuse_seconds}} to a number of seconds [larger than what fits in {{Duration}}|https://github.com/apache/mesos/blob/13cae29e7832d8bb879c68847ad0df449d227f17/3rdparty/stout/include/stout/duration.hpp#L401-L405]. The following [tests are flaky|https://issues.apache.org/jira/browse/MESOS-7514] because of this: {{ReservationTest.ReserveShareWithinRole}} and {{ReservationTest.PreventUnreservingAlienResources}}.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7661","06/14/2017 00:46:07",3,"Libprocess timers with long durations trigger immediately ""{{process::delay()}} will schedule a method to be run right ahead when called with a veeeery long {{Duration}}. This happens because [{{Timeout}} tries to add two long durations|https://github.com/apache/mesos/blob/13cae29e7832d8bb879c68847ad0df449d227f17/3rdparty/libprocess/include/process/timeout.hpp#L33-L38], leading to an [integer overflow in {{Duration}}|https://github.com/apache/mesos/blob/13cae29e7832d8bb879c68847ad0df449d227f17/3rdparty/stout/include/stout/duration.hpp#L116]. I'd expect libprocess to either: 1. Never run the method. 2. Schedule it in the longest possible {{Duration}}. {{Duration::operator+=()}} should probably also handle integer overflows differently. If an addition leads to an integer overflow, it might make more sense to return {{Duration::max()}} than a negative duration.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7662","06/14/2017 01:18:51",2,"Documentation regarding TASK_LOST is misleading ""Our protos describe {{TASK_LOST}} as a terminal state [\[1\]|https://github.com/apache/mesos/blob/fb54d469dcadf762e9c3f8a2fed78ed7b306120a/include/mesos/mesos.proto#L1722] [\[2\]|https://github.com/apache/mesos/blob/fb54d469dcadf762e9c3f8a2fed78ed7b306120a/include/mesos/mesos.proto#L64-L73]. A task might go from {{TASK_LOST}} to {{TASK_RUNNING}} or another state if Mesos is not using a strict register, so the documentation is misleading. Marathon used to assume that {{TASK_LOST}} was a terminal past and that resulted in production pain for some users. We should update the documentation to make the life of frameworks developers a bit better =).""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7663","06/14/2017 02:42:33",2,"Update the documentation to reflect the addition of reservation refinement. ""There are a few things we need to be sure to document: * What reservation refinement is. * The new """"format"""" for Resource, when using the RESERVATION_REFINEMENT capability. * The filtering of resources if a framework is not RESERVATION_REFINEMENT capable. 
* The current limitations that only a single reservation can be pushed / popped within a single RESERVE / UNRESERVE operation.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7675","06/15/2017 01:15:01",8,"Isolate network ports. ""If a task uses network ports, there is no isolator that can enforce that it only listens on the ports that it has resources for. Implement a ports isolator that can limit tasks to listen only on allocated TCP ports. Roughly, the algorithm for this follows what standard tools like {{lsof}} and {{ss}} do. * Find all the listening TCP sockets (using netlink) * Index the sockets by their node (from the netlink information) * Find all the open sockets on the system (by scanning {{/proc/\*/fd/\*}} links) * For each open socket, check whether its node (given in the link target) in the set of listen sockets that we scanned * If the socket is a listening socket and the corresponding PID is in the task, send a resource limitation for the task Matching pids to tasks depends on using cgroup isolation, otherwise we would have to build a full process tree, which would be nice to avoid. Scanning all the open sockets can be avoided by using the {{net_cls}} isolator with kernel + libnl3 patches to publish the socket classid when we find the listening socket. Design Doc: https://docs.google.com/document/d/1BGmANq8IW-H4-YVUlpdf6qZFTZnDe-OKAY_e7uNp7LA Kernel Patch: http://marc.info/?l=linux-kernel&m=150293015025396&w=2""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7691","06/17/2017 17:54:51",8,"Support local enabled cgroups subsystems automatically. ""Currently, each cgroup subsystem needs to be turned on as an isolator, e.g., """"cgroups/blkio"""". Ideally, mesos should be able to detect all local enabled cgroup subsystems and turn them on automatically (or we call it auto cgroups).""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7695","06/20/2017 03:26:36",3,"Add heartbeats to master stream API ""Just like master uses heartbeats for scheduler API to keep the connection alive, it should do the same for the streaming API.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7699","06/20/2017 19:54:08",8,"""stdlib.h: No such file or directory"" when building with GCC 6 (Debian stable freshly released) ""Hi, It seems the issue comes from a workaround added a while ago: https://reviews.apache.org/r/40326/ https://reviews.apache.org/r/40327/ When building with external libraries it turns out creating build commands line with -isystem /usr/include which is clearly stated as being wrong, according to GCC guys: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70129 I'll do some testing by reverting all -isystem to -I and I'll let it know if it gets built. Regards, Adam. 
"""," configure:21642: result: no configure:21642: checking glog/logging.h presence configure:21642: g++ -E -I/usr/include -I/usr/include/apr-1 -I/usr/include/apr-1.0 -Wdate-time -D_FORTIFY_SOURCE=2 -isystem /usr/include -I/usr/include conftest.cpp In file included from /usr/include/c++/6/ext/string_conversions.h:41:0, from /usr/include/c++/6/bits/basic_string.h:5417, from /usr/include/c++/6/string:52, from /usr/include/c++/6/bits/locale_classes.h:40, from /usr/include/c++/6/bits/ios_base.h:41, from /usr/include/c++/6/ios:42, from /usr/include/c++/6/ostream:38, from /usr/include/glog/logging.h:43, from conftest.cpp:32: /usr/include/c++/6/cstdlib:75:25: fatal error: stdlib.h: No such file or directory #include_next ^ compilation terminated. configure:21642: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME """"mesos"""" | #define PACKAGE_TARNAME """"mesos"""" | #define PACKAGE_VERSION """"1.2.0"""" | #define PACKAGE_STRING """"mesos 1.2.0"""" | #define PACKAGE_BUGREPORT """""""" | #define PACKAGE_URL """""""" | #define PACKAGE """"mesos"""" | #define VERSION """"1.2.0"""" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define HAVE_DLFCN_H 1 | #define LT_OBJDIR """".libs/"""" | #define HAVE_CXX11 1 | #define HAVE_PTHREAD_PRIO_INHERIT 1 | #define HAVE_PTHREAD 1 | #define HAVE_LIBZ 1 | #define HAVE_FTS_H 1 | #define HAVE_APR_POOLS_H 1 | #define HAVE_LIBAPR_1 1 | #define HAVE_BOOST_VERSION_HPP 1 | #define HAVE_LIBCURL 1 | /* end confdefs.h. */ | #include configure:21642: result: no configure:21642: checking for glog/logging.h configure:21642: result: no configure:21674: error: cannot find glog ------------------------------------------------------------------- You have requested the use of a non-bundled glog but no suitable glog could be found. You may want specify the location of glog by providing a prefix path via --with-glog=DIR, or check that the path you provided is correct if you're already doing this. ------------------------------------------------------------------- ",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7709","06/22/2017 16:56:34",5,"Add --default_container_dns flag to the agent. ""Mesos support both CNI (through `network/cni` isolator) and CNM (through docker) specification. Both these specifications allow for DNS entries for containers to be set on a per-container, and per-network basis. Currently, the behavior of the agent is to use the DNS nameservers set in /etc/resolv.conf when the CNI or CNM plugin that is used to attached the container to the CNI/CNM network doesnt' explicitly set the DNS for the container. This is a bit inflexible especially when we have a mix of v4 and v6 networks. The operator should be able to specify DNS nameservers for the networks he installs either the override the ones provided by the plugin or as defaults when the plugins are not going to specify DNS name servers. In order to achieve the above goal we need to introduce a `\--dns` flag to the agent. 
The `\--dns` flag should support a JSON (or a JSON file) with the following schema: """," { """"mesos"""": [ { """"network"""" : , """"nameservers"""": [] } ], """"docker"""": [ { """"network"""" : , """"nameservers"""": [] } ] } ",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7713","06/22/2017 21:24:16",3,"Optimize number of copies made in dispatch/defer mechanism ""Profiling agents reregistration for a large cluster shows, that many CPU cycles are spent on copying protobuf objects. This is partially due to copies made by a code like this: {{param}} could be copied 8-10 times before it reaches {{method}}. Specifically, {{reregisterSlave}} accepts vectors of rather complex objects, which are passed to {{defer}}. Currently there are some places in {{defer}}, {{dispatch}} and {{Future}} code, which could use {{std::move}} and {{std::forward}} to evade some of the copies."""," future.then(defer(self(), &Process::method, param); ",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7728","06/27/2017 19:16:49",3,"Java HTTP adapter crashes JVM when leading master disconnects. ""When a Java scheduler using HTTP v0-v1 adapter loses the leading Mesos master, {{V0ToV1AdapterProcess::disconnected()}} is invoked, which in turn invokes Java scheduler [code via JNI|https://github.com/apache/mesos/blob/87c38b9e2bc5b1030a071ddf0aab69db70d64781/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp#L446]. This call uses the wrong object, {{jmesos}} instead of {{jscheduler}}, which crashes JVM: """," # # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x00007f4bca3849bf, pid=21, tid=0x00007f4b2ac45700 # # JRE version: Java(TM) SE Runtime Environment (8.0_131-b11) (build 1.8.0_131-b11) # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b11 mixed mode linux-amd64 compressed oops) # Problematic frame: # V [libjvm.so+0x6d39bf] jni_invoke_nonstatic(JNIEnv_*, JavaValue*, _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)+0x1af Stack: [0x00007f4b2a445000,0x00007f4b2ac46000], sp=0x00007f4b2ac44a80, free space=8190k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0x6d39bf] jni_invoke_nonstatic(JNIEnv_*, JavaValue*, _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)+0x1af V [libjvm.so+0x6d7fef] jni_CallVoidMethodV+0x10f C [libmesos-1.2.0.so+0x1aa32d3] JNIEnv_::CallVoidMethod(_jobject*, _jmethodID*, ...)+0x93 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7742","06/29/2017 18:50:10",5,"Race conditions in IOSwitchboard: listening on unix socket and premature closing of the connection. ""Observed this on ASF CI and internal Mesosphere CI. Affected tests: This issue comes at least in three different flavours. Take {{AgentAPIStreamingTest.AttachInputToNestedContainerSession}} as an example. h5. Flavour 1 h5. Flavour 2 h5. 
Flavour 3 """," AgentAPIStreamingTest.AttachInputToNestedContainerSession AgentAPITest.LaunchNestedContainerSession AgentAPITest.AttachContainerInputAuthorization/0 AgentAPITest.LaunchNestedContainerSessionWithTTY/0 AgentAPITest.LaunchNestedContainerSessionDisconnected/1 ../../src/tests/api_tests.cpp:6473 Value of: (response).get().status Actual: """"503 Service Unavailable"""" Expected: http::OK().status Which is: """"200 OK"""" Body: """""""" ../../src/tests/api_tests.cpp:6473 Value of: (response).get().status Actual: """"500 Internal Server Error"""" Expected: http::OK().status Which is: """"200 OK"""" Body: """"Disconnected"""" /home/ubuntu/workspace/mesos/Mesos_CI-build/FLAG/CMake/label/mesos-ec2-ubuntu-16.04/mesos/src/tests/api_tests.cpp:6367 Value of: (sessionResponse).get().status Actual: """"500 Internal Server Error"""" Expected: http::OK().status Which is: """"200 OK"""" Body: """""""" ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7755","07/04/2017 15:37:04",3,"Update allocator to support updating agent total resources ""Agents encapsulate resource providers making their resources appear to the master as agent resources. In order to permit updates to the resources of a local resource provider (e.g., available disk expanded physically by adding another driver, resource provider resources disappeared since resource provider disappeared), we need to allow agents to change their total resources. Expected semantics for the hierarchical allocator would be that {{total}} can shrink independently of the current {{allocated}}; should {{allocated}} exceed {{total}} no allocations can be made until {{allocated < total}}.""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7757","07/04/2017 15:56:56",5,"Update master to handle updates to agent total resources ""With MESOS-7755 we update the allocator interface to support updating the total resources on an agent. These allocator invocations are driven by the master when it receives an update the an agent's total resources. We could transport the updates from agents to the master either as update to {{UpdateSlaveMessage}}, e.g., by adding a {{repeated Resource total}} field; in order to distinguish updates to {{oversubscribed}} to updates to {{total}} we would need to introduce an additional tag field (an empty list of {{Resource}} has the same representation as an absent list of {{Resource}}). Alternatively we could introduce a new message transporting just the update to {{total}}; it should be possible to reuse such a message for external resource providers which we will likely add at a later point.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7758","07/05/2017 07:22:38",2,"Stout doesn't build standalone. ""Stout doesn't build in a standalone configuration: Note that the build expects {{3rdparty/googletest-release-1.8.0/googlemock-build-stamp}}, but {{googletest}} hasn't been staged yet: """," $ cd ~/src/mesos/3rdparty/stout $ ./bootstrap $ cd ~/build/stout $ ~/src/mesos/3rdparty/stout/configure ... $ make ... make[1]: Leaving directory '/home/vagrant/build/stout/3rdparty' make[1]: Entering directory '/home/vagrant/build/stout/3rdparty' make[1]: *** No rule to make target 'googlemock-build-stamp'. Stop. 
make[1]: Leaving directory '/home/vagrant/build/stout/3rdparty' make: *** [Makefile:1902: 3rdparty/googletest-release-1.8.0/googlemock-build-stamp] Error 2 [vagrant@fedora-26 stout]$ ls -l 3rdparty/ total 44 drwxr-xr-x. 3 vagrant vagrant 4096 Jan 18 2016 boost-1.53.0 -rw-rw-r--. 1 vagrant vagrant 0 Jul 5 06:16 boost-1.53.0-stamp drwxrwxr-x. 8 vagrant vagrant 4096 Aug 15 2016 elfio-3.2 -rw-rw-r--. 1 vagrant vagrant 0 Jul 5 06:16 elfio-3.2-stamp drwxr-xr-x. 10 vagrant vagrant 4096 Jul 5 06:16 glog-0.3.3 -rw-rw-r--. 1 vagrant vagrant 0 Jul 5 06:16 glog-0.3.3-build-stamp -rw-rw-r--. 1 vagrant vagrant 0 Jul 5 06:16 glog-0.3.3-stamp -rw-rw-r--. 1 vagrant vagrant 734 Jul 5 06:03 gmock_sources.cc -rw-rw-r--. 1 vagrant vagrant 25657 Jul 5 06:03 Makefile ",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7761","07/05/2017 12:31:28",1,"Website ruby deps do not bundle on macOS ""When trying to bundle the ruby dependencies of the website on macOS-10.12.5 I get It seems eventmachine-1.0.3 has known and fixed issues on macOS-10.10.1 already. I suspect there might be a similar issue for the macOS-10.12.5 I am using."""," $ cd site/ $ bundle install Fetching gem metadata from https://rubygems.org/............ Fetching version metadata from https://rubygems.org/.. Fetching dependency metadata from https://rubygems.org/. Using coffee-script-source 1.6.3 Using multi_json 1.8.2 Using chunky_png 1.2.9 Using fssm 0.2.10 Using sass 3.2.12 Using tilt 1.3.7 Using kramdown 1.2.0 Using i18n 0.6.5 Using rb-fsevent 0.9.3 Using ffi 1.9.3 Using rack 1.5.2 Using thor 0.18.1 Using bundler 1.15.1 Using hike 1.2.3 Fetching eventmachine 1.0.3 Installing eventmachine 1.0.3 with native extensions Using http_parser.rb 0.5.3 Using addressable 2.3.5 Using atomic 1.1.14 Using rdiscount 2.1.7 Using htmlentities 4.3.2 Fetching libv8 3.16.14.15 Installing libv8 3.16.14.15 with native extensions Using ref 2.0.0 Using execjs 1.4.0 Using compass 0.12.2 Using haml 4.0.4 Using activesupport 3.2.15 Using rb-inotify 0.9.2 Using rb-kqueue 0.2.0 Using rack-test 0.6.2 Using rack-livereload 0.3.15 Using rouge 0.3.10 Using sprockets 2.10.0 Gem::Ext::BuildError: ERROR: Failed to build gem native extension. current directory: /Users/bbannier/src/mesos/site/vendor/ruby/2.4.0/gems/eventmachine-1.0.3/ext /Users/bbannier/src/homebrew/opt/ruby/bin/ruby -r ./siteconf20170705-46478-cy91ue.rb extconf.rb checking for rb_trap_immediate in ruby.h,rubysig.h... no checking for rb_thread_blocking_region()... no checking for inotify_init() in sys/inotify.h... no checking for __NR_inotify_init in sys/syscall.h... no checking for writev() in sys/uio.h... yes checking for rb_wait_for_single_fd()... yes checking for rb_enable_interrupt()... no checking for rb_time_new()... yes checking for sys/event.h... yes checking for sys/queue.h... 
yes creating Makefile current directory: /Users/bbannier/src/mesos/site/vendor/ruby/2.4.0/gems/eventmachine-1.0.3/ext make """"DESTDIR="""" clean current directory: /Users/bbannier/src/mesos/site/vendor/ruby/2.4.0/gems/eventmachine-1.0.3/ext make """"DESTDIR="""" compiling binder.cpp compiling cmain.cpp compiling ed.cpp compiling em.cpp compiling kb.cpp compiling page.cpp compiling pipe.cpp compiling rubymain.cpp compiling ssl.cpp In file included from pipe.cpp:20: In file included from ./project.h:149: ./binder.h:35:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long GetBinding() {return Binding;} ^~~~~~ In file included from page.cpp:21: In file included from ./project.h:149: ./binder.h:35:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long GetBinding() {return Binding;} ^~~~~~ In file included from kb.cpp:20: In file included from ./project.h:149: ./binder.h:35:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long GetBinding() {return Binding;} ^~~~~~ In file included from em.cpp:23: In file included from ./project.h:149: ./binder.h:35:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long GetBinding() {return Binding;} ^~~~~~ In file included from binder.cpp:20: In file included from ./project.h:149: ./binder.h:35:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long GetBinding() {return Binding;} ^~~~~~ In file included from rubymain.cpp:20: In file included from ./project.h:149: ./binder.h:35:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long GetBinding() {return Binding;} ^~~~~~ In file included from ed.cpp:20: In file included from ./project.h:149: ./binder.h:35:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long GetBinding() {return Binding;} ^~~~~~ In file included from cmain.cpp:20: In file included from ./project.h:149: ./binder.h:35:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long GetBinding() {return Binding;} ^~~~~~ In file included from ssl.cpp:23: In file included from ./project.h:149: ./binder.h:35:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long GetBinding() {return Binding;} ^~~~~~ In file included from pipe.cpp:20: In file included from ./project.h:150: ./em.h:84:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long InstallOneshotTimer (int); ^~~~~~ ./em.h:85:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long ConnectToServer (const char *, int, const char *, int); ^~~~~~ ./em.h:86:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long ConnectToUnixServer (const char *); ^~~~~~ ./em.h:88:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long CreateTcpServer (const char *, int); ^~~~~~ ./em.h:89:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long OpenDatagramSocket (const char *, int); ^~~~~~ ./em.h:90:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long 
CreateUnixDomainServer (const char*); ^~~~~~ ./em.h:91:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long OpenKeyboard(); ^~~~~~ ./em.h:93:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long Socketpair (char* const*); ^~~~~~ ./em.h:99:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long AttachFD (int, bool); ^~~~~~ ./em.h:116:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long WatchFile (const char*); ^~~~~~ ./em.h:125:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long WatchPid (int); ^~~~~~ In file included from binder.cpp:20: In file included from ./project.h:150: ./em.h:84:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long InstallOneshotTimer (int); ^~~~~~ ./em.h:In file included from kb.cpp:20: In file included from ./project.h:150: ./em.h:84:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] 85:3: const unsigned long InstallOneshotTimer (int); ^~~~~~ warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long ConnectToServer (const char *, int, const char *, int);./em.h: 85: ^~~~~~ ./em.h:86:3: 3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long ConnectToServer (const char *, int, const char *, int); ^~~~~~ ./em.h:86:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] In file included from const unsigned long ConnectToUnixServer (const char *); ^~~~~~ warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./em.h:88em.cpp:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] :23: In file included from ./project.h const unsigned long CreateTcpServer (const char *, int); ^~~~~~ :150: ./em.h:84:3: warning: ./em.h const unsigned long ConnectToUnixServer (const char *);'const' type qualifier on return type has no effect [-Wignored-qualifiers] ^~~~~~ const unsigned long InstallOneshotTimer (int); ^~~~~~ ./em.h:./em.h:85:3::88:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] 89: 3:warning: const unsigned long CreateTcpServer (const char *, int);'const' type qualifier on return type has no effect [-Wignored-qualifiers] warning: ^~~~~~ 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long OpenDatagramSocket (const char *, int); ^~~~~~ ./em.h:89:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./em.h:90:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long CreateUnixDomainServer (const char*); ^~~~~~ const unsigned long ConnectToServer (const char *, int, const char *, int); ^~~~~~ ./em.h:91:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long OpenKeyboard(); ^~~~~~ ./em.h:./em.h:93 const unsigned long OpenDatagramSocket (const char *, int); : ^~~~~~ 3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long Socketpair (char* const*); ^~~~~~ ./em.h:90:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./em.h:99:3 const unsigned long 
CreateUnixDomainServer (const char*);86: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ^~~~~~ const unsigned long AttachFD (int, bool); ^~~~~~ ./em.h:91:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long OpenKeyboard(); ^~~~~~ ./em.h:116:3:: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long WatchFile (const char*); ^~~~~~ ./em.h3:93:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] : warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long Socketpair (char* const*); ^~~~~~ ./em.h:125:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long ConnectToUnixServer (const char *); ^~~~~~ const unsigned long WatchPid (int); ^~~~~~ ./em.h:99:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./em.h:88:3: const unsigned long AttachFD (int, bool); ^~~~~~ warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long CreateTcpServer (const char *, int); ^~~~~~ ./em.h:89:3: ./em.h:warning116:3: : warning'const' type qualifier on return type has no effect [-Wignored-qualifiers] : 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long OpenDatagramSocket (const char *, int); ^~~~~~ const unsigned long WatchFile (const char*); ^~~~~~ ./em.h:90:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long CreateUnixDomainServer (const char*); ^~~~~~ ./em.h:125:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./em.h:91:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long WatchPid (int); const unsigned long OpenKeyboard(); ^~~~~~ ^~~~~~ ./em.h:93:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long Socketpair (char* const*); ^~~~~~ ./em.h:99:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long AttachFD (int, bool); ^~~~~~ ./em.h:116:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long WatchFile (const char*); ^~~~~~ ./em.h:125:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long WatchPid (int); ^~~~~~ In file included from rubymain.cpp:20: In file included from ./project.h:150: ./em.h:84:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] In file included from page.cpp:21: In file included from ./project.h:150 const unsigned long InstallOneshotTimer (int); ^~~~~~ : ./em.h:84:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long InstallOneshotTimer (int); ^~~~~~ ./em.h:85:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./em.h:85 const unsigned long ConnectToServer (const char *, int, const char *, int);: 3 ^~~~~~ : warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long ConnectToServer (const char *, int, const char *, int); ^~~~~~ ./em.h:86:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./em.h:86:3: warning: 'const' type qualifier on return type has no effect 
[-Wignored-qualifiers] const unsigned long ConnectToUnixServer (const char *); ^~~~~~ const unsigned long ConnectToUnixServer (const char *); ^~~~~~ ./em.h:88:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers]./em.h:88:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long CreateTcpServer (const char *, int); ^~~~~~ const unsigned long CreateTcpServer (const char *, int); ^~~~~~ ./em.h:89:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long OpenDatagramSocket (const char *, int); ^~~~~~ ./em.h:89:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./em.h:90:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long OpenDatagramSocket (const char *, int); ^~~~~~ const unsigned long CreateUnixDomainServer (const char*); ^~~~~~ ./em.h:90:3: warning: ./em.h:91:3'const' type qualifier on return type has no effect [-Wignored-qualifiers] : warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long CreateUnixDomainServer (const char*); const unsigned long OpenKeyboard(); ^~~~~~ ^~~~~~ ./em.h:91:./em.h:93:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] 3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long Socketpair (char* const*); ^~~~~~ const unsigned long OpenKeyboard(); ^~~~~~ ./em.h:93:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./em.h:99 const unsigned long Socketpair (char* const*);: 3 ^~~~~~: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long AttachFD (int, bool); ^~~~~~ ./em.h:99:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long AttachFD (int, bool); ^~~~~~ ./em.h:116:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long WatchFile (const char*); ^~~~~~ ./em.h:116:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./em.h:125:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long WatchFile (const char*); ^~~~~~ const unsigned long WatchPid (int); ^~~~~~ ./em.h:125:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long WatchPid (int); ^~~~~~ In file included from cmain.cpp:20: In file included from ./project.h:150: ./em.h:84:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long InstallOneshotTimer (int); ^~~~~~ ./em.h:85:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long ConnectToServer (const char *, int, const char *, int); ^~~~~~ ./em.h:86:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long ConnectToUnixServer (const char *); ^~~~~~ ./em.h:88:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long CreateTcpServer (const char *, int); ^~~~~~ ./em.h:89:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long OpenDatagramSocket (const char *, int); ^~~~~~ ./em.h:90:3: warning: 'const' type qualifier on return type has no effect 
[-Wignored-qualifiers] const unsigned long CreateUnixDomainServer (const char*); ^~~~~~ ./em.h:91:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long OpenKeyboard(); ^~~~~~ ./em.h:93:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long Socketpair (char* const*); ^~~~~~ ./em.h:99:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long AttachFD (int, bool); ^~~~~~ ./em.h:116:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long WatchFile (const char*); ^~~~~~ ./em.h:125:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long WatchPid (int); ^~~~~~ In file included from ed.cpp:20: In file included from ./project.h:150: ./em.h:84:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long InstallOneshotTimer (int); ^~~~~~ ./em.h:85:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long ConnectToServer (const char *, int, const char *, int); ^~~~~~ ./em.h:86:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long ConnectToUnixServer (const char *); ^~~~~~ ./em.h:88:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long CreateTcpServer (const char *, int); ^~~~~~ ./em.h:89:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long OpenDatagramSocket (const char *, int); ^~~~~~ ./em.h:90:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long CreateUnixDomainServer (const char*); ^~~~~~ ./em.h:91:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long OpenKeyboard(); ^~~~~~ ./em.h:93:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long Socketpair (char* const*); ^~~~~~ ./em.h:99:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long AttachFD (int, bool); ^~~~~~ ./em.h:116:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long WatchFile (const char*); ^~~~~~ ./em.h:125:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long WatchPid (int); ^~~~~~ In file included from ssl.cpp:23: In file included from ./project.h:150: ./em.h:84:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long InstallOneshotTimer (int); ^~~~~~ ./em.h:85:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long ConnectToServer (const char *, int, const char *, int); ^~~~~~ ./em.h:86:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long ConnectToUnixServer (const char *); ^~~~~~ ./em.h:88:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long CreateTcpServer (const char *, int); ^~~~~~ ./em.h:89:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long OpenDatagramSocket (const char *, int); ^~~~~~ ./em.h:90:3: warning: 'const' type qualifier on return type has no 
effect [-Wignored-qualifiers] const unsigned long CreateUnixDomainServer (const char*); ^~~~~~ ./em.h:91:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long OpenKeyboard(); ^~~~~~ ./em.h:93:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long Socketpair (char* const*); ^~~~~~ ./em.h:99:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long AttachFD (int, bool); ^~~~~~ ./em.h:116:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long WatchFile (const char*); ^~~~~~ ./em.h:125:3: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long WatchPid (int); ^~~~~~ In file included from pipe.cpp:20: In file included from ./project.h:154: ./eventmachine.h:46:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_install_oneshot_timer (int seconds); ^~~~~~ ./eventmachine.h:47:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_connect_to_server (const char *bind_addr, int bind_port, const char *server, int port); ^~~~~~ ./eventmachine.h:48:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_connect_to_unix_server (const char *server); ^~~~~~ ./eventmachine.h:50:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_attach_fd (int file_descriptor, int watch_mode); ^~~~~~ ./eventmachine.h:65:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_create_tcp_server (const char *address, int port); ^~~~~~ ./eventmachine.h:66:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_create_unix_domain_server (const char *filename); ^~~~~~ ./eventmachine.h:67:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_open_datagram_socket (const char *server, int port); ^~~~~~ ./eventmachine.h:68:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_open_keyboard(); ^~~~~~ ./eventmachine.h:103:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_popen (char * const*cmd_strings); ^~~~~~ ./eventmachine.h:105:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_watch_filename (const char *fname); ^~~~~~ ./eventmachine.h:108:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_watch_pid (int); ^~~~~~ In file included from binder.cpp:20: In file included from ./project.h:154: ./eventmachine.h:46:2: warningIn file included from em.cpp: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] :23: In file included from ./project.h const unsigned long evma_install_oneshot_timer (int seconds); ^~~~~~ :154: ./eventmachine.h:46:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./eventmachine.h:47:2: const unsigned long evma_install_oneshot_timer (int seconds); ^~~~~~warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long 
evma_connect_to_server (const char *bind_addr, int bind_port, const char *server, int port); ^~~~~~ ./eventmachine.h:47:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./eventmachine.h const unsigned long evma_connect_to_server (const char *bind_addr, int bind_port, const char *server, int port); ^~~~~~ :48:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./eventmachine.h:48:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_connect_to_unix_server (const char *server); ^~~~~~ const unsigned long evma_connect_to_unix_server (const char *server); ^~~~~~ ./eventmachine.h:50:./eventmachine.h:50:22: warning: warning: : 'const' type qualifier on return type has no effect [-Wignored-qualifiers] 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_attach_fd (int file_descriptor, int watch_mode); ^~~~~~ const unsigned long evma_attach_fd (int file_descriptor, int watch_mode); ^~~~~~ ./eventmachine.h:65:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./eventmachine.h:65:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_create_tcp_server (const char *address, int port); ^~~~~~ const unsigned long evma_create_tcp_server (const char *address, int port); ^~~~~~ ./eventmachine.h:66:2: ./eventmachine.h:66warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] :2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_create_unix_domain_server (const char *filename); ^~~~~~ const unsigned long evma_create_unix_domain_server (const char *filename); ^~~~~~ ./eventmachine.h:67:2: warning: ./eventmachine.h:'const' type qualifier on return type has no effect [-Wignored-qualifiers]67 :2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_open_datagram_socket (const char *server, int port); ^~~~~~ const unsigned long evma_open_datagram_socket (const char *server, int port); ^~~~~~ ./eventmachine.h./eventmachine.h::6868::22:: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_open_keyboard(); ^~~~~~ const unsigned long evma_open_keyboard(); ^~~~~~ ./eventmachine.h:103:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_popen (char * const*cmd_strings); ^~~~~~ ./eventmachine.h:103:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./eventmachine.h:105:2 const unsigned long evma_popen (char * const*cmd_strings);: ^~~~~~warning : 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_watch_filename (const char *fname); ^~~~~~ ./eventmachine.h:105:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./eventmachine.h:108:2: warning: const unsigned long evma_watch_filename (const char *fname); 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ^~~~~~ const unsigned long evma_watch_pid (int); ^~~~~~ In file included from page.cpp:21: In file included from ./project.h:154: ./eventmachine.h:46:2./eventmachine.h:108: warning: 'const' type qualifier on return type has no 
effect [-Wignored-qualifiers] :2: const unsigned long evma_install_oneshot_timer (int seconds); ^~~~~~ warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_watch_pid (int); ^~~~~~ ./eventmachine.h:47:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_connect_to_server (const char *bind_addr, int bind_port, const char *server, int port); ^~~~~~ ./eventmachine.h:48:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_connect_to_unix_server (const char *server); ^~~~~~ ./eventmachine.h:50:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_attach_fd (int file_descriptor, int watch_mode); ^~~~~~ ./eventmachine.h:65:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_create_tcp_server (const char *address, int port); ^~~~~~ ./eventmachine.h:66:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_create_unix_domain_server (const char *filename); ^~~~~~ ./eventmachine.h:67:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_open_datagram_socket (const char *server, int port); ^~~~~~ ./eventmachine.h:68:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_open_keyboard(); ^~~~~~ In file included from kb.cppIn file included from rubymain.cpp:20: :In file included from 20: In file included from ./project.h:154./project.h:154: ./eventmachine.h:46:2: ./eventmachine.h: warning: :46'const' type qualifier on return type has no effect [-Wignored-qualifiers]:2 : warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_install_oneshot_timer (int seconds); ^~~~~~ const unsigned long evma_install_oneshot_timer (int seconds); ^~~~~~ ./eventmachine.h:47:2:./eventmachine.h:47 :2: warning: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers]'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_connect_to_server (const char *bind_addr, int bind_port, const char *server, int port); ^~~~~~ const unsigned long evma_connect_to_server (const char *bind_addr, int bind_port, const char *server, int port); ^~~~~~ ./eventmachine.h:103:2: warning: ./eventmachine.h:'const' type qualifier on return type has no effect [-Wignored-qualifiers] 48:2./eventmachine.h:48:: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_popen (char * const*cmd_strings); 2 ^~~~~~ : warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_connect_to_unix_server (const char *server); ^~~~~~ const unsigned long evma_connect_to_unix_server (const char *server); ^~~~~~ ./eventmachine.h:105:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_watch_filename (const char *fname); ^~~~~~ ./eventmachine.h:50:2:./eventmachine.h:50:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_attach_fd (int file_descriptor, int watch_mode); ^~~~~~ ./eventmachine.h const 
unsigned long evma_attach_fd (int file_descriptor, int watch_mode);: 108 ^~~~~~: 2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_watch_pid (int); ^~~~~~ ./eventmachine.h:65:2: ./eventmachine.h:65:2: warning: warning'const' type qualifier on return type has no effect [-Wignored-qualifiers] : 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_create_tcp_server (const char *address, int port); ^~~~~~ const unsigned long evma_create_tcp_server (const char *address, int port); ^~~~~~ ./eventmachine.h:66:./eventmachine.h2: warning: :'const' type qualifier on return type has no effect [-Wignored-qualifiers] 66:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_create_unix_domain_server (const char *filename); ^~~~~~ const unsigned long evma_create_unix_domain_server (const char *filename); ^~~~~~ ./eventmachine.h:67:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./eventmachine.h:67:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_open_datagram_socket (const char *server, int port); ^~~~~~ const unsigned long evma_open_datagram_socket (const char *server, int port); ^~~~~~ ./eventmachine.h:68:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./eventmachine.h:68:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_open_keyboard(); ^~~~~~ const unsigned long evma_open_keyboard(); ^~~~~~ ./eventmachine.h:103:2: warning./eventmachine.h:103:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] : 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_popen (char * const*cmd_strings); ^~~~~~ const unsigned long evma_popen (char * const*cmd_strings); ^~~~~~ ./eventmachine.h./eventmachine.h::105105::2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] 2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_watch_filename (const char *fname); ^~~~~~ const unsigned long evma_watch_filename (const char *fname); ^~~~~~ ./eventmachine.h:108:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] ./eventmachine.h:108:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_watch_pid (int); ^~~~~~ const unsigned long evma_watch_pid (int); ^~~~~~ page.cpp:55:15: warning: implicit conversion loses integer precision: 'size_t' (aka 'unsigned long') to 'int' [-Wshorten-64-to-32] *length = p.Size; ~ ~~^~~~ In file included from cmain.cpp:20: In file included from ./project.h:154: ./eventmachine.h:46:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_install_oneshot_timer (int seconds); ^~~~~~ ./eventmachine.h:47:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_connect_to_server (const char *bind_addr, int bind_port, const char *server, int port); ^~~~~~ ./eventmachine.h:48:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_connect_to_unix_server (const char *server); ^~~~~~ ./eventmachine.h:50:2: warning: 'const' type 
qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_attach_fd (int file_descriptor, int watch_mode); ^~~~~~ ./eventmachine.h:65:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_create_tcp_server (const char *address, int port); ^~~~~~ ./eventmachine.h:66:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_create_unix_domain_server (const char *filename); ^~~~~~ ./eventmachine.h:67:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_open_datagram_socket (const char *server, int port); ^~~~~~ ./eventmachine.h:68:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_open_keyboard(); ^~~~~~ ./eventmachine.h:103:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_popen (char * const*cmd_strings); ^~~~~~ ./eventmachine.h:105:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_watch_filename (const char *fname); ^~~~~~ ./eventmachine.h:108:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_watch_pid (int); ^~~~~~ cmain.cpp:96:12: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] extern """"C"""" const unsigned long evma_install_oneshot_timer (int seconds) ^~~~~~ cmain.cpp:107:12: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] extern """"C"""" const unsigned long evma_connect_to_server (const char *bind_addr, int bind_port, const char *server, int port) ^~~~~~ cmain.cpp:117:12: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] extern """"C"""" const unsigned long evma_connect_to_unix_server (const char *server) ^~~~~~ In file included from ed.cpp:20: In file included from ./project.h:154: cmain.cpp:127:12: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers]./eventmachine.h :46:2: extern """"C"""" const unsigned long evma_attach_fd (int file_descriptor, int watch_mode) ^~~~~~ warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_install_oneshot_timer (int seconds); ^~~~~~ ./eventmachine.h:47:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_connect_to_server (const char *bind_addr, int bind_port, const char *server, int port); ^~~~~~ ./eventmachine.h:48:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_connect_to_unix_server (const char *server); ^~~~~~ ./eventmachine.h:50:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_attach_fd (int file_descriptor, int watch_mode); ^~~~~~ ./eventmachine.h:65:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_create_tcp_server (const char *address, int port); ^~~~~~ ./eventmachine.h:66:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_create_unix_domain_server (const char *filename); ^~~~~~ ./eventmachine.h:67:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long 
evma_open_datagram_socket (const char *server, int port); ^~~~~~ ./eventmachine.h:68:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_open_keyboard(); ^~~~~~ ./eventmachine.h:103:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_popen (char * const*cmd_strings); ^~~~~~ ./eventmachine.h:105:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_watch_filename (const char *fname); ^~~~~~ rubymain.cpp:803:13: warning: implicit conversion loses integer precision: 'long' to 'int' [-Wshorten-64-to-32] int len = RARRAY_LEN(cmd); ~~~ ^~~~~~~~~~~~~~~ ./eventmachine.h:108:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] /Users/bbannier/src/homebrew/Cellar/ruby/2.4.1_1/include/ruby-2.4.0/ruby/ruby.h:1026:23: const unsigned long evma_watch_pid (int); ^~~~~~ note: expanded from macro 'RARRAY_LEN' #define RARRAY_LEN(a) rb_array_len(a) ^~~~~~~~~~~~~~~ rubymain.cpp:1020:42: warning: implicit conversion loses integer precision: 'VALUE' (aka 'unsigned long') to 'int' [-Wshorten-64-to-32] return INT2NUM (evma_set_rlimit_nofile (arg)); ~~~~~~~~~~~~~~~~~~~~~~ ^~~ /Users/bbannier/src/homebrew/Cellar/ruby/2.4.1_1/include/ruby-2.4.0/ruby/ruby.h:1538:31: note: expanded from macro 'INT2NUM' #define INT2NUM(x) RB_INT2NUM(x) ^ /Users/bbannier/src/homebrew/Cellar/ruby/2.4.1_1/include/ruby-2.4.0/ruby/ruby.h:1515:41: note: expanded from macro 'RB_INT2NUM' # define RB_INT2NUM(v) RB_INT2FIX((int)(v)) ^ /Users/bbannier/src/homebrew/Cellar/ruby/2.4.1_1/include/ruby-2.4.0/ruby/ruby.h:231:33: note: expanded from macro 'RB_INT2FIX' #define RB_INT2FIX(i) (((VALUE)(i))<<1 | RUBY_FIXNUM_FLAG) ^ em.cpp:75:2:pipe.cpp:160:11: warning: implicit conversion loses integer precision: 'ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32] int r = read (sd, readbuffer, sizeof(readbuffer) - 1); ~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ pipe.cpp:217:36: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32] int len = sizeof(output_buffer) - nbytes; ~~~ ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~ pipe.cpp:231:22: warning: implicit conversion loses integer precision: 'ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32] int bytes_written = write (GetSocket(), output_buffer, nbytes); ~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ pipe.cpp:236:21: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32] int len = nbytes - bytes_written; ~~~ ~~~~~~~^~~~~~~~~~~~~~~ warning: field 'LoopBreakerWriter' will be initialized after field 'NumCloseScheduled' [-Wreorder] LoopBreakerWriter (-1), ^ In file included from ssl.cpp:23: In file included from ./project.h:154: ./eventmachine.h:46:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_install_oneshot_timer (int seconds); ^~~~~~ ./eventmachine.h:47:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] em.cpp:265:14: warning: implicit conversion loses integer precision: 'rlim_t' (aka 'unsigned long long') to 'int' [-Wshorten-64-to-32] const unsigned long evma_connect_to_server (const char *bind_addr, int bind_port, const char *server, int port); ^~~~~~ return rlim.rlim_cur; ~~~~~~ ~~~~~^~~~~~~~ ./eventmachine.h:48:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const 
unsigned long evma_connect_to_unix_server (const char *server); ^~~~~~ ./eventmachine.h:50:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_attach_fd (int file_descriptor, int watch_mode); ^~~~~~ ./eventmachine.h:65:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_create_tcp_server (const char *address, int port); ^~~~~~ ./eventmachine.h:66:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_create_unix_domain_server (const char *filename); ^~~~~~ ./eventmachine.h:67:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_open_datagram_socket (const char *server, int port); ^~~~~~ ./eventmachine.h:68:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_open_keyboard(); ^~~~~~ ./eventmachine.h:103:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_popen (char * const*cmd_strings); ^~~~~~ ./eventmachine.h:105:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_watch_filename (const char *fname); ^~~~~~ ./eventmachine.h:108:2: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long evma_watch_pid (int); ^~~~~~ ed.cpp:297:39: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32] ProxyTarget->SendOutboundData(buf, proxied); ~~~~~~~~~~~~~~~~ ^~~~~~~ ed.cpp:303:17: warning: comparison of integers of different signs: 'unsigned long' and 'int' [-Wsign-compare] if (proxied < size) { ~~~~~~~ ^ ~~~~ em.cpp:736:29: warning: cmain.cpp:269:12: warning: implicit conversion loses integer precision: 'std::__1::vector >::size_type' (aka 'unsigned long') to 'int' [-Wshorten-64-to-32] int nSockets = Descriptors.size(); ~~~~~~~~ ~~~~~~~~~~~~^~~~~~ 'const' type qualifier on return type has no effect [-Wignored-qualifiers] extern """"C"""" const unsigned long evma_create_tcp_server (const char *address, int port) ^~~~~~ cmain.cpp:279:12: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] extern """"C"""" const unsigned long evma_create_unix_domain_server (const char *filename) ^~~~~~ cmain.cpp:289:12: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] extern """"C"""" const unsigned long evma_open_datagram_socket (const char *address, int port) ^~~~~~ cmain.cpp:299:12: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] extern """"C"""" const unsigned long evma_open_keyboard() ^~~~~~ cmain.cpp:309:12: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] extern """"C"""" const unsigned long evma_watch_filename (const char *fname) ^~~~~~ cmain.cpp:329:12: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] extern """"C"""" const unsigned long evma_watch_pid (int pid) ^~~~~~ em.cpp:827:9: error: use of undeclared identifier 'rb_thread_select'; did you mean 'rb_thread_fd_select'? 
return EmSelect (maxsocket+1, &fdreads, &fdwrites, &fderrors, &tv); ^~~~~~~~ rb_thread_fd_select ./em.h:25:20: note: expanded from macro 'EmSelect' #define EmSelect rb_thread_select ^ /Users/bbannier/src/homebrew/Cellar/ruby/2.4.1_1/include/ruby-2.4.0/ruby/intern.h:456:5: note: 'rb_thread_fd_select' declared here int rb_thread_fd_select(int, rb_fdset_t *, rb_fdset_t *, rb_fdset_t *, struct timeval *); ^ em.cpp:827:32: error: cannot initialize a parameter of type 'rb_fdset_t *' with an rvalue of type 'fd_set *' return EmSelect (maxsocket+1, &fdreads, &fdwrites, &fderrors, &tv); ^~~~~~~~ /Users/bbannier/src/homebrew/Cellar/ruby/2.4.1_1/include/ruby-2.4.0/ruby/intern.h:456:42: note: passing argument to parameter here int rb_thread_fd_select(int, rb_fdset_t *, rb_fdset_t *, rb_fdset_t *, struct timeval *); ^ ed.cpp:767:11: warning: implicit conversion loses integer precision: 'ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32] int r = read (sd, readbuffer, sizeof(readbuffer) - 1); ~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ cmain.cpp:678:12: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] extern """"C"""" const unsigned long evma_popen (char * const*cmd_strings) ^~~~~~ cmain.cpp:778:6: warning: implicit conversion loses integer precision: 'ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32] r = read (Fd, data, filesize); ~ ^~~~~~~~~~~~~~~~~~~~~~~~~ ed.cpp:980:29: warning: implicit conversion loses integer precision: 'std::__1::deque >::size_type' (aka 'unsigned long') to 'int' [-Wshorten-64-to-32] int iovcnt = OutboundPages.size(); ~~~~~~ ~~~~~~~~~~~~~~^~~~~~ ed.cpp:1029:22: warning: implicit conversion loses integer precision: 'ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32] int bytes_written = writev (GetSocket(), iov, iovcnt); ~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ em.cpp:946:6: error: use of undeclared identifier 'rb_thread_select'; did you mean 'rb_thread_fd_select'? EmSelect (0, NULL, NULL, NULL, &tv); ^~~~~~~~ rb_thread_fd_select ./em.h:25:20: note: expanded from macro 'EmSelect' #define EmSelect rb_thread_select ^ /Users/bbannier/src/homebrew/Cellar/ruby/2.4.1_1/include/ruby-2.4.0/ruby/intern.h:456:5: note: 'rb_thread_fd_select' declared here int rb_thread_fd_select(int, rb_fdset_t *, rb_fdset_t *, rb_fdset_t *, struct timeval *); ^ em.cpp:1027:1: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long EventMachine_t::InstallOneshotTimer (int milliseconds) ^~~~~~ em.cpp:1049:1: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long EventMachine_t::ConnectToServer (const char *bind_addr, int bind_port, const char *server, int port) ^~~~~~ em.cpp:1235:1: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long EventMachine_t::ConnectToUnixServer (const char *server) ^~~~~~ em.cpp:1308:1: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long EventMachine_t::AttachFD (int fd, bool watch_mode) ^~~~~~ 23 warnings generated. 
em.cpp:1480:1: warning: ed.cpp:1595:11: warning: implicit conversion loses integer precision: 'ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32] int r = recvfrom (sd, readbuffer, sizeof(readbuffer) - 1, 0, (struct sockaddr*)&sin, &slen); ~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long EventMachine_t::CreateTcpServer (const char *server, int port) ^~~~~~ em.cpp:1563:1: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long EventMachine_t::OpenDatagramSocket (const char *address, int port) ^~~~~~ ed.cpp:1665:11: warning: implicit conversion loses integer precision: 'ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32] int s = sendto (sd, (char*)op->Buffer, op->Length, 0, (struct sockaddr*)&(op->From), sizeof(op->From)); ~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ em.cpp:1826:1: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long EventMachine_t::CreateUnixDomainServer (const char *filename) ^~~~~~ ed.cpp:1782:24: warning: implicit conversion loses integer precision: 'unsigned long' to 'in_addr_t' (aka 'unsigned int') [-Wshorten-64-to-32] pin.sin_addr.s_addr = HostAddr; ~ ^~~~~~~~ em.cpp:1952:1: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long EventMachine_t::Socketpair (char * const*cmd_strings) ^~~~~~ em.cpp:2016:1: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long EventMachine_t::OpenKeyboard() ^~~~~~ em.cpp:2032:28: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32] return Descriptors.size() + NewDescriptors.size(); ~~~~~~ ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~ em.cpp:2040:1: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long EventMachine_t::WatchPid (int pid) ^~~~~~ em.cpp:2112:1: warning: 'const' type qualifier on return type has no effect [-Wignored-qualifiers] const unsigned long EventMachine_t::WatchFile (const char *fpath) ^~~~~~ In file included from em.cpp:23: In file included from ./project.h:150: ./em.h:189:12: warning: private field 'NextHeartbeatTime' is not used [-Wunused-private-field] uint64_t NextHeartbeatTime; ^ ./em.h:221:22: warning: private field 'inotify' is not used [-Wunused-private-field] InotifyDescriptor *inotify; // pollable descriptor for our inotify instance ^ 40 warnings and 3 errors generated. make: *** [em.o] Error 1 make: *** Waiting for unfinished jobs.... 23 warnings generated. 35 warnings generated. 23 warnings generated. 24 warnings generated. 25 warnings generated. 27 warnings generated. 31 warnings generated. make failed, exit code 2 Gem files will remain installed in /Users/bbannier/src/mesos/site/vendor/ruby/2.4.0/gems/eventmachine-1.0.3 for inspection. Results logged to /Users/bbannier/src/mesos/site/vendor/ruby/2.4.0/extensions/x86_64-darwin-16/2.4.0/eventmachine-1.0.3/gem_make.out An error occurred while installing eventmachine (1.0.3), and Bundler cannot continue. Make sure that `gem install eventmachine -v '1.0.3'` succeeds before bundling. 
In Gemfile: middleman-livereload was resolved to 3.1.0, which depends on em-websocket was resolved to 0.5.0, which depends on eventmachine ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7762","07/06/2017 00:11:44",1,"net::IP::Network not building on Windows ""Building master (well, 2c1be9ced) is currently broken on Windows. Repro: (Build instructions here: https://github.com/apache/mesos/blob/master/docs/windows.md) Get a bunch of compilation errors: """," git checkout 2c1be9ced mkdir build cd build cmake .. -DENABLE_LIBEVENT=1 -DHAS_AUTHENTICATION=0 -G """"Visual Studio 15 2017 Win64"""" -T """"host=x64"""" cmake --build . --target stout-tests """"C:\Users\andschwa\src\mesos-copy2\build\3rdparty\stout\tests\stout-tests.vcxproj"""" (default target) (1) -> (ClCompile target) -> C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\include\stout/windows/ip.hpp(31): error C2065: 'IPNetwork': undeclared identifier (compiling source file C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\tests\ip_tests.cpp) [C:\Users\ands chwa\src\mesos-copy2\build\3rdparty\stout\tests\stout-tests.vcxproj] C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\include\stout/windows/ip.hpp(31): error C2923: 'Result': 'IPNetwork' is not a valid template type argument for parameter 'T' (compiling source file C:\Users\andschwa\src\mesos-copy2\3rdpar ty\stout\tests\ip_tests.cpp) [C:\Users\andschwa\src\mesos-copy2\build\3rdparty\stout\tests\stout-tests.vcxproj] C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\include\stout/windows/ip.hpp(31): error C2653: 'IPNetwork': is not a class or namespace name (compiling source file C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\tests\ip_tests.cpp) [C: \Users\andschwa\src\mesos-copy2\build\3rdparty\stout\tests\stout-tests.vcxproj] C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\include\stout/windows/ip.hpp(34): error C2079: 'net::fromLinkDevice' uses undefined class 'Result' (compiling source file C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\tests\ip_tests.cp p) [C:\Users\andschwa\src\mesos-copy2\build\3rdparty\stout\tests\stout-tests.vcxproj] C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\include\stout/windows/ip.hpp(41): error C2440: 'return': cannot convert from 'Error' to 'Result' (compiling source file C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\tests\ip_tests.cpp) [C:\Users\andschwa\src\mesos-copy2\build\3rdparty\stout\tests\stout-tests.vcxproj] C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\include\stout/windows/ip.hpp(49): error C2440: 'return': cannot convert from 'WindowsError' to 'Result' (compiling source file C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\tests\ip_tes ts.cpp) [C:\Users\andschwa\src\mesos-copy2\build\3rdparty\stout\tests\stout-tests.vcxproj] C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\include\stout/windows/ip.hpp(58): error C2440: 'return': cannot convert from 'WindowsError' to 'Result' (compiling source file C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\tests\ip_tes ts.cpp) [C:\Users\andschwa\src\mesos-copy2\build\3rdparty\stout\tests\stout-tests.vcxproj] C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\include\stout/windows/ip.hpp(70): error C2065: 'IPNetwork': undeclared identifier (compiling source file C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\tests\ip_tests.cpp) [C:\Users\ands chwa\src\mesos-copy2\build\3rdparty\stout\tests\stout-tests.vcxproj] C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\include\stout/windows/ip.hpp(70): error C2923: 'Try': 'IPNetwork' is not a 
valid template type argument for parameter 'T' (compiling source file C:\Users\andschwa\src\mesos-copy2\3rdparty\ stout\tests\ip_tests.cpp) [C:\Users\andschwa\src\mesos-copy2\build\3rdparty\stout\tests\stout-tests.vcxproj] C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\include\stout/windows/ip.hpp(70): error C2653: 'IPNetwork': is not a class or namespace name (compiling source file C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\tests\ip_tests.cpp) [C: \Users\andschwa\src\mesos-copy2\build\3rdparty\stout\tests\stout-tests.vcxproj] C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\include\stout/windows/ip.hpp(70): error C3861: 'create': identifier not found (compiling source file C:\Users\andschwa\src\mesos-copy2\3rdparty\stout\tests\ip_tests.cpp) [C:\Users\andschwa \src\mesos-copy2\build\3rdparty\stout\tests\stout-tests.vcxproj] ... ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7769","07/07/2017 22:16:08",1,"libprocess initializes to bind to random port if --ip is not specified ""When running current [HEAD|https://github.com/apache/mesos/commit/c90bea80486c089e933bef64aca341e4cfaaef25], {noformat:title=without --ip} ./mesos-master.sh --work_dir=/tmp/mesos-test1 ... I0707 14:14:05.927870 5820 master.cpp:438] Master db2a2d26-a9a9-4e6f-9909-b9eca47a2862 () started on :36839 It would be great this is caught by tests/CI."""," ./mesos-master.sh --work_dir=/tmp/mesos-test1 ... I0707 14:14:05.927870 5820 master.cpp:438] Master db2a2d26-a9a9-4e6f-9909-b9eca47a2862 () started on :36839 ./mesos-master.sh --ip= --work_dir=/tmp/mesos-test1 I0707 14:09:56.851483 5729 master.cpp:438] Master 963e0f42-9767-4629-8e3d-02c6ab6ad225 () started on :5050 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7770","07/07/2017 22:40:47",3,"Persistent volume might not be mounted if there is a sandbox volume whose source is the same as the target of the persistent volume. ""This issue is only for Mesos Containerizer. If the source of a sandbox volume is a relative path, we'll create the directory in the sandbox in Isolator::prepare method: https://github.com/apache/mesos/blob/1.3.x/src/slave/containerizer/mesos/isolators/filesystem/linux.cpp#L480-L485 And then, we'll try to mount persistent volumes. However, because of this TODO in the code: https://github.com/apache/mesos/blob/1.3.x/src/slave/containerizer/mesos/isolators/filesystem/linux.cpp#L726-L739 We'll skip mounting the persistent volume. That will cause a silent failure. This is important because the workaround we suggest folks to solve MESOS-4016 is to use an additional sandbox volume.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7772","07/08/2017 00:01:49",1,"Copy-n-paste error in slave/main.cpp ""Coverity diagnosed a copy-n-paste error in {{slave/main.cpp}} (https://scan5.coverity.com/reports.htm#v10074/p10429/fileInstanceId=120155401&defectInstanceId=33592186&mergedDefectId=1414687+1+Comment), We check the incorrect IP for some value here (check on {{ip6}}, but use of {{ip}}), and it seems extremely likely we intended to use {{flags.ip6}}."""," 323 } else if (flags.ip6.isSome()) { CID 1414687 (#1 of 1): Copy-paste error (COPY_PASTE_ERROR) copy_paste_error: ip in flags.ip looks like a copy-paste error. Should it say ip6 instead? 
324 os::setenv(""""LIBPROCESS_IP6"""", flags.ip.get()); 325 } ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7777","07/11/2017 03:17:19",3,"Agent failed to recover due to mount namespace leakage in Docker 1.12/1.13 ""Docker changed its default mount propagation to """"shared"""" since 1.12 to enable persistent volume plugins. However, Docker has a known issue (https://github.com/moby/moby/issues/25718) that it sometimes leaks its mount namespace to other processes, which could make Mesos agents fail to remove Docker containers during recovery. The following shows the logs of such a faliure: """," I0615 09:39:11.083787 4573 docker.cpp:1002] Skipping recovery of executor 'kafka__7e49099d-7ab4-4435-a94a-1e849b8f2b70' of framework 44cbe3e9-984d-4073-b523-0023b427f54d-0011 because its executor is not marked as docker and the docker container doesn't exist Failed to perform recovery: Collect failed: Collect failed: Failed to run 'docker -H unix:///var/run/docker.sock rm -v 2de71c5383cb887f3ee49de5a517545b0522e1bbcb5df618c7ddb8583fd1d12d': exited with status 1; stderr='Error response from daemon: Driver overlay failed to remove root filesystem 2de71c5383cb887f3ee49de5a517545b0522e1bbcb5df618c7ddb8583fd1d12d: remove /var/lib/docker/overlay/221725ec545d60492b5431bb49380d868f7a949aaa3acff49f7ffb5bddeb3385/merged: device or resource busy ' To remedy this do as follows: Step 1: rm -f /var/lib/mesos/slave/meta/slaves/latest This ensures agent doesn't recover old live executors. Step 2: Restart the agent. ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7790","07/12/2017 22:24:04",8,"Design hierarchical quota allocation. ""When quota is assigned in the role hierarchy (see MESOS-6375), it's possible for there to be """"undelegated"""" quota for a role. For example: Here, the """"eng"""" role has 60 of its 90 cpus of quota delegated to its children, and 30 cpus remain undelegated. We need to design how to allocate these 30 cpus undelegated cpus. Are they allocated entirely to the """"eng"""" role? Are they allocated to the """"eng"""" role tree? If so, how do we determine how much is allocated to each role in the """"eng"""" tree (i.e. """"eng"""", """"eng/ads"""", """"eng/build"""")."""," ^ / \ / \ eng (90 cpus) sales (10 cpus) ^ / \ / \ ads (50 cpus) build (10 cpus) ",0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7791","07/13/2017 10:43:18",5,"subprocess' childMain using ABORT when encountering user errors ""In {{process/posix/subprocess.hpp}}'s {{childMain}} we exit with {{ABORT}} when there was a user error, We here abort instead of simply {{_exit}}'ing and letting the user know that we couldn't deal with the given arguments. Abort can potentially dump core, and since this abort is before the {{execvpe}}, the process image can potentially be large (e.g., >300 MB) which could quickly fill up a lot of disk space."""," ABORT: (/pkg/src/mesos/3rdparty/libprocess/include/process/posix/subprocess.hpp:195): Failed to os::execvpe on path '/SOME/PATH': Argument list too long ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7792","07/13/2017 14:27:46",5,"Add support for ECDH ciphers ""[Elliptic curve ciphers|https://wiki.openssl.org/index.php/Elliptic_Curve_Cryptography] are a family of ciphers supported by OpenSSL. 
They allow to have smaller keys, but require an extra configuration parameter, the actual curve to be used, which can't be done through libprocess as it is.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7796","07/14/2017 22:21:24",3,"LIBPROCESS_IP isn't passed on to the fetcher ""{{LIBPROCESS_IP}} is not passed on to the fetcher. The fetcher program uses libprocess, which depending on the DNS configuration might fail during initialization: """," I0710 17:40:36.866606 8157 fetcher.cpp:531] Fetcher Info: {""""cache_directory"""":""""\/tmp\/mesos\/fetch\/slaves\/1f5cf498-82db-4976-8ba1-f2be90af3496-S1\/nobody"""",""""items"""":[{""""action"""":""""BYPASS_CACHE"""",""""uri"""":{""""cache"""":false,""""executable"""":false,""""extract"""":true,""""value"""":""""REDACTED""""}},{""""action"""":""""BYPASS_CACHE"""",""""uri"""":{""""cache"""":false,""""executable"""":false,""""extract"""":true,""""value"""":""""REDACTED""""}},{""""action"""":""""BYPASS_CACHE"""",""""uri"""":{""""cache"""":false,""""executable"""":false,""""extract"""":true,""""value"""":""""REDACTED""""}}],""""sandbox_directory"""":""""\/var\/lib\/mesos\/slave\/slaves\/1f5cf498-82db-4976-8ba1-f2be90af3496-S1\/frameworks\/1f5cf498-82db-4976-8ba1-f2be90af3496-0000\/executors\/cassandra.e1e042ef-6596-11e7-adc9-a6ba31bb9f5f\/runs\/5728acc2-33fc-4c39-ba7c-55908cbe5862"""",""""user"""":""""nobody""""} I0710 17:40:36.869302 8157 fetcher.cpp:442] Fetching URI 'REDACTED' I0710 17:40:36.869319 8157 fetcher.cpp:283] Fetching directly into the sandbox directory I0710 17:40:36.869343 8157 fetcher.cpp:220] Fetching URI 'REDACTED' I0710 17:40:36.869359 8157 fetcher.cpp:163] Downloading resource from 'REDACTED' to '/var/lib/mesos/slave/slaves/1f5cf498-82db-4976-8ba1-f2be90af3496-S1/frameworks/1f5cf498-82db-4976-8ba1-f2be90af3496-0000/executors/cassandra.e1e042ef-6596-11e7-adc9-a6ba31bb9f5f/runs/5728acc2-33fc-4c39-ba7c-55908cbe5862/REDACTED' Failed to obtain the IP address for 'ip-10-0-5-78.us-west-2.compute.internal'; the DNS service may not be able to resolve it: Name or service not known ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0 +"MESOS-7803","07/18/2017 00:24:01",2,"fs::list drops path components on Windows ""fs::list(/foo/bar/*.txt) returns a.txt, b.txt, not /foo/bar/a.txt, /foo/bar/b.txt This breaks a ZooKeeper test on Windows.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7805","07/19/2017 16:36:14",1,"mesos-execute has incorrect example TaskInfo in help string ""{{mesos-execute}} documents that a task can be defined via JSON as If one actually uses that example task definition one gets Removing the resource role field allows the task to execute."""," { """"name"""": """"Name of the task"""", """"task_id"""": {""""value"""" : """"Id of the task""""}, """"agent_id"""": {""""value"""" : """"""""}, """"resources"""": [ { """"name"""": """"cpus"""", """"type"""": """"SCALAR"""", """"scalar"""": { """"value"""": 0.1 }, """"role"""": """"*"""" }, { """"name"""": """"mem"""", """"type"""": """"SCALAR"""", """"scalar"""": { """"value"""": 32 }, """"role"""": """"*"""" } ], """"command"""": { """"value"""": """"sleep 1000"""" } } % ./build/src/mesos-execute --master=127.0.0.1:5050 --task=task.json WARNING: Logging before InitGoogleLogging() is written to STDERR W0719 17:08:17.909696 3291313088 parse.hpp:114] Specifying an absolute filename to read a command line option out of 
without using 'file:// is deprecated and will be removed in a future release. Simply adding 'file://' to the beginning of the path should eliminate this warning. [warn] kq_init: detected broken kqueue; not using.: Undefined error: 0 I0719 17:08:17.919190 119246848 scheduler.cpp:184] Version: 1.4.0 I0719 17:08:17.923991 119783424 scheduler.cpp:470] New master detected at master@127.0.0.1:5050 Subscribed with ID bb0d36b4-fee0-4412-9cd9-1fa4e330355c-0000 F0719 17:08:18.137984 119783424 resources.cpp:1081] Check failed: !resource.has_role() *** Check failure stack trace: *** @ 0x101d65f5f google::LogMessageFatal::~LogMessageFatal() @ 0x101d62609 google::LogMessageFatal::~LogMessageFatal() @ 0x1016ef3a3 mesos::v1::Resources::isEmpty() @ 0x1016ed267 mesos::v1::Resources::add() @ 0x1016f05af mesos::v1::Resources::operator+=() @ 0x1016f08fb mesos::v1::Resources::Resources() @ 0x100c0d89f CommandScheduler::offers() @ 0x100c085e4 CommandScheduler::received() @ 0x100c0ae06 _ZZN7process8dispatchI16CommandSchedulerNSt3__15queueIN5mesos2v19scheduler5EventENS2_5dequeIS7_NS2_9allocatorIS7_EEEEEESC_EEvRKNS_3PIDIT_EEMSE_FvT0_ET1_ENKUlPNS_11ProcessBaseEE_clESN_ @ 0x101ce5a21 process::ProcessBase::visit() @ 0x101ce3747 process::ProcessManager::resume() @ 0x101d0e243 _ZNSt3__114__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN7process14ProcessManager12init_threadsEvE3$_0EEEEEPvSB_ @ 0x7fffbb5d693b _pthread_body @ 0x7fffbb5d6887 _pthread_start @ 0x7fffbb5d608d thread_start [1] 73521 abort ./build/src/mesos-execute --master=127.0.0.1:5050 --task=task.json ",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7806","07/19/2017 18:58:47",1,"Add copy assignment operator to `net::IP::Network` ""Currently, we can't extend the class `net::IP::Network` with out adding a copy assignment operator in the derived class, due to the use of `std::unique_ptr` in the base class. Hence, need to introduce a copy assignment operator into the base class.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7814","07/20/2017 09:08:08",3,"Improve the test frameworks. ""These improvements include three main points: * Adding a {{name}} flag to certain frameworks to distinguish between instances. * Cleaning up the code style of the frameworks. * For frameworks with custom executors, such as balloon framework, adding a {{executor_extra_uris}} flag containing URIs that will be passed to the {{command_info}} of the executor.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7816","07/20/2017 16:17:26",3,"Add HTTP connection handling to the resource provider driver ""The {{resource_provider::Driver}} is responsible for establishing a connection with an agent/master resource provider API and provide calls to the API, receive events from the API. This is done using HTTP and should be implemented similar to how it's done for schedulers and executors (see {{src/executor/executor.cpp, src/scheduler/scheduler.cpp}}).""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-7819","07/21/2017 00:24:13",5,"Libprocess internal state is not monitored by metrics. ""Libprocess does not expose its internal state via metrics. 
Active sockets, number of HTTP proxies, number of running actors, number of pending messages for all active sockets, etc — may be of interest when monitoring and debugging Mesos clusters.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7830","07/25/2017 20:15:22",3,"Sandbox_path volume does not have ownership set correctly. ""This issue was exposed when using sandbox_path volume to support shared volume for nested containers under one task group. Here is a scenario: The agent process runs as 'root' user, while the framework user is set as 'nobody'. No matter the commandinfo user is set or not, any non-root user cannot access the sandbox_path volume (e.g., a PARENT sandbox_path volume is not writable from a nested container). This is because the source path at the parent sandbox level is created by the agent process (aka root in this case). While the operator is responsible for guaranteeing a nested container should have permission to write to its sandbox path volume at its parent's sandbox, we should guarantee the source path created at parent's sandbox should be set as the same ownership as this sandbox's ownership.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7837","07/27/2017 15:08:56",3,"Propagate resource updates from local resource providers to master ""When a resource provider registers with a resource provider manager, the manager should sent a message to its subscribers informing them on the changed resources. For the first iteration where we add agent-specific, local resource providers, the agent would be subscribed to the manager. It should be changed to handle such a resource update by informing the master about its changed resources. In order to support master failovers, we should make sure to similarly inform the master on agent reregistration.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7840","07/28/2017 14:21:30",3,"Add Mesos CLI command to list active tasks ""We need to add a command to list all the tasks running in a Mesos cluster by checking the endpoint {{/tasks}} and reporting the results.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7849","08/02/2017 17:28:08",3,"The rlimits and linux/capabilities isolators should support nested containers ""The rlimits and linix/capabilities isolators don't support nesting. That means that the rlimits or capabilities set for tasks that launched by the DefaultExecutor are silently ignored.""","",0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7851","08/02/2017 18:12:43",3,"Master stores old resource format in the registry ""We intend for the master to store all internal resource representations in the new, post-reservation-refinement format. However, [when persisting registered agents to the registrar|https://github.com/apache/mesos/blob/498a000ac1bb8f51dc871f22aea265424a407a17/src/master/master.cpp#L5861-L5876], the master does not convert the resources; agents provide resources in the pre-reservation-refinement format, and these resources are stored as-is. This means that after recovery, any agents in the master's {{slaves.recovered}} map will have {{SlaveInfo.resources}} in the pre-reservation-refinement format. 
We should update the master to convert these resources before persisting them to the registry.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7853","08/02/2017 18:35:39",5,"Support shared PID namespace. ""Currently, with the 'namespaces/pid' isolator enabled, each container will have its own pid namespace. This does not meet the need for some scenarios. For example, under the same executor container, one task wants to reach out to another task which need to share the same pid namespace. We should support container pid namespace to be configurable. Users can choose one container to share its parent's pid namespace or not. User facing API: A new agent flag: --disallow_top_level_pid_ns_sharing (defaults to be: false) this is a security concern from operator's perspective. While some of the nested containers share the pid namespace from their parents, the top level containers always not share the pid ns from the agent."""," message LinuxInfo { ...... // True if it shares the pid namepace with its parent. If the // container is a top level container, it means share the pid // namespace with the agent. If the container is a nested // container, it means share the pid namespce with its parent // container. This field will be ignored if 'namespaces/pid' // isolator is not enabled. optional bool share_pid_namespace = 4; } ",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7858","08/04/2017 03:39:13",5,"Launching a nested container with namespace/pid isolation, with glibc < 2.25, may deadlock the LinuxLauncher and MesosContainerizer ""This bug in glibc (fixed in glibc 2.25) will sometimes cause a child process of a {{fork}} to {{assert}} incorrectly, if the parent enters a new pid namespace before forking: https://sourceware.org/bugzilla/show_bug.cgi?id=15392 https://sourceware.org/bugzilla/show_bug.cgi?id=21386 The LinuxLauncher code happens to do this when launching nested containers: * The MesosContainerizer process launches a subprocess, with a customized {{ns::clone}} function as an argument. The thread then basically waits for the launch to succeed and return a child PID: https://github.com/apache/mesos/blob/1.3.x/src/slave/containerizer/mesos/linux_launcher.cpp#L495 * A separate thread in the Mesos agent forks and then waits for the grandchild to report a PID: https://github.com/apache/mesos/blob/1.3.x/src/linux/ns.hpp#L453 * The child of the fork first enters the namespaces (including a pid namespace) and then forks a grandchild. The child then calls {{waitpid}} on the grandchild: https://github.com/apache/mesos/blob/1.3.x/src/linux/ns.hpp#L555 * Due to the glibc bug, the grandchild sometimes never returns from the {{fork}} here: https://github.com/apache/mesos/blob/1.3.x/src/linux/ns.hpp#L540 According to the glibc bug, we can work around this by: {quote} The obvious solution is just to use clone() after setns() and never use fork() - and one can certainly patch both programs to do so. Nevertheless it would be nice to see if fork() also worked after setns(), especially since there is no inherent reason for it not to. {quote}""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7863","08/05/2017 03:15:35",5,"Agent may drop pending kill task status updates. ""Currently there is an assumption that when a pending task is killed, the framework will still be stored in the agent. 
However, this assumption can be violated in two cases: # Another pending task was killed and we removed the framework in 'Slave::run' thinking it was idle, because pending tasks were empty (we remove from pending tasks when processing the kill). (MESOS-7783 is an example instance of this). # The last executor terminated without tasks to send terminal updates for, or the last terminated executor received its last acknowledgement. At this point, we remove the framework thinking there were no pending tasks if the task was killed (removed from pending).""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7865","08/08/2017 00:44:59",3,"Agent may process a kill task and still launch the task. ""Based on the investigation of MESOS-7744, the agent has a race in which """"queued"""" tasks can still be launched after the agent has processed a kill task for them. This race was introduced when {{Slave::statusUpdate}} was made asynchronous: (1) {{Slave::__run}} completes, task is now within {{Executor::queuedTasks}} (2) {{Slave::killTask}} locates the executor based on the task ID residing in queuedTasks, calls {{Slave::statusUpdate()}} with {{TASK_KILLED}} (3) {{Slave::___run}} assumes that killed tasks have been removed from {{Executor::queuedTasks}}, but this now occurs asynchronously in {{Slave::_statusUpdate}}. So, the executor still sees the queued task and delivers it and adds the task to {{Executor::launchedTasks}}. (3) {{Slave::_statusUpdate}} runs, removes the task from {{Executor::launchedTasks}} and adds it to {{Executor::terminatedTasks}}.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7871","08/09/2017 06:26:51",2,"Agent fails assertion during request to '/state' ""While processing requests to {{/state}}, the Mesos agent calls {{Framework::allocatedResources()}}, which in turn calls {{Slave::getExecutorInfo()}} on executors associated with the framework's pending tasks. In the case of tasks launched as part of task groups, this leads to the failure of the assertion [here|https://github.com/apache/mesos/blob/a31dd52ab71d2a529b55cd9111ec54acf7550ded/src/slave/slave.cpp#L4983-L4985]. This means that the check will fail if the agent processes a request to {{/state}} at a time when it has pending tasks launched as part of a task group. This assertion should be removed since this helper function is now used with task groups.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7872","08/09/2017 22:03:19",5,"Scheduler hang when registration fails. ""I'm finding that if framework registration fails, the mesos driver client will hang indefinitely with the following output: I'd have expected one or both of the following: - SchedulerDriver.run() should have exited with a failed Proto.Status of some form - Scheduler.error() should have been invoked when the """"Got error"""" occurred Steps to reproduce: - Launch a scheduler instance, have it register with a known-bad framework info. In this case a role containing slashes was used - Observe that the scheduler continues in a TASK_RUNNING state despite the failed registration. 
From all appearances it looks like the Scheduler implementation isn't invoked at all I'd guess that because this failure happens before framework registration, there's some error handling that isn't fully initialized at this point."""," I0809 20:04:22.479391 73 sched.cpp:1187] Got error ''FrameworkInfo.role' is not a valid role: Role '/test/role/slashes' cannot start with a slash' I0809 20:04:22.479658 73 sched.cpp:2055] Asked to abort the driver I0809 20:04:22.479843 73 sched.cpp:1233] Aborting framework ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7877","08/10/2017 11:15:48",2,"Audit test code for undefined behavior in accessing container elements ""We do not always make sure we never access elements from empty containers, e.g., we use patterns like the following While the intention here is to diagnose an empty {{offers}}, the code still exhibits undefined behavior in the element access if {{offers}} was indeed empty (compilers might aggressively exploit undefined behavior to e.g., remove """"impossible"""" code). Instead one should prevent accessing any elements of an empty container, e.g., We should audit and fix existing test code for such incorrect checks and variations involving e.g., {{EXPECT_NE}}."""," Future> offers; // Satisfy offers. EXPECT_FALSE(offers.empty()); const auto& offer = (*offers)[0]; ASSERT_FALSE(offers.empty()); // Prevent execution of rest of test body. ",0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7883","08/11/2017 18:16:51",1,"Quota heuristic check not accounting for mount volumes ""This may be expected but came as a surprise to us. We are unable to create a quota bigger than the root disk space on slaves. Given two clusters with the same number of slaves and root disk size, but one that also has mount volumes, is what the disk resources look like: In {{fin-fang-foom}}, I was able to create a quota for {{143490mb}} which is the total of available disk resources, root in this case, as reported by Mesos. For {{hydra}}, I am only able to create a quota for {{143489mb}}. This is equivalent to the total of root disks available in {{hydra}} rather than the total available disks reported by Mesos resources which is {{254084mb}}. With a modified Mesos that adds logging to {{quota_handler}}, we can see that only the {{disk(*)}} number increases in {{nonStaticClusterResources}} after every iteration. The final iteration is {{disk(*):143489}} which is the maximum quota I was able to create on {{hydra}}. 
We expected that quota heuristic check would also include resources such as {{disk(*)[MOUNT:/dcos/volume2]:7373}} """," [root@fin-fang-foom-master-1 ~]# curl -s master.mesos:5050/state | jq '.slaves[] .resources .disk' 28698 28699 28698 28698 28697 [root@hydra-master-1 ~]# curl -s master.mesos:5050/state | jq '.slaves[] .resources .disk' 50817 50817 50814 50819 50817 Aug 11 12:54:18 hydra-master-1 mesos-master[24896]: I0811 12:54:18.763764 24902 quota_handler.cpp:71] Performing capacity heuristic check for a set quota request Aug 11 12:54:18 hydra-master-1 mesos-master[24896]: I0811 12:54:18.763783 24902 quota_handler.cpp:87] heuristic: total quota 'disk(*):143489' Aug 11 12:54:18 hydra-master-1 mesos-master[24896]: I0811 12:54:18.763870 24902 quota_handler.cpp:111] heuristic: nonStaticAgentResources = 'ports(*):[1025-2180, 2182-3887, 3889-5049, 5052-8079, 8082-8180, 8182-32000]; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*):28698; cpus(*):4; mem(*):15023' Aug 11 12:54:18 hydra-master-1 mesos-master[24896]: I0811 12:54:18.763923 24902 quota_handler.cpp:113] heuristic: nonStaticClusterResources = 'ports(*):[1025-2180, 2182-3887, 3889-5049, 5052-8079, 8082-8180, 8182-32000]; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*):28698; cpus(*):4; mem(*):15023' Aug 11 12:54:18 hydra-master-1 mesos-master[24896]: I0811 12:54:18.763989 24902 quota_handler.cpp:111] heuristic: nonStaticAgentResources = 'ports(*):[1025-2180, 2182-3887, 3889-5049, 5052-8079, 8082-8180, 8182-32000]; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*):28698; cpus(*):4; mem(*):15023' Aug 11 12:54:18 hydra-master-1 mesos-master[24896]: I0811 12:54:18.764022 24902 quota_handler.cpp:113] heuristic: nonStaticClusterResources = 'ports(*):[1025-2180, 2182-3887, 3889-5049, 5052-8079, 8082-8180, 8182-32000]; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*):57396; cpus(*):8; mem(*):30046; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373' Aug 11 12:54:18 hydra-master-1 mesos-master[24896]: I0811 12:54:18.764077 24902 quota_handler.cpp:111] heuristic: nonStaticAgentResources = 'ports(*):[1025-2180, 2182-3887, 3889-5049, 5052-8079, 8082-8180, 8182-32000]; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*):28695; cpus(*):4; mem(*):15023' Aug 11 12:54:18 hydra-master-1 mesos-master[24896]: I0811 12:54:18.764119 24902 quota_handler.cpp:113] heuristic: nonStaticClusterResources = 'ports(*):[1025-2180, 2182-3887, 3889-5049, 5052-8079, 8082-8180, 8182-32000]; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*):86091; cpus(*):12; mem(*):45069; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373' Aug 11 12:54:18 hydra-master-1 mesos-master[24896]: I0811 12:54:18.764225 24902 quota_handler.cpp:111] heuristic: nonStaticAgentResources = 'ports(*):[1025-2180, 2182-3887, 3889-5049, 5052-8079, 8082-8180, 8182-32000]; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*):28700; cpus(*):4; mem(*):15023' Aug 11 
12:54:18 hydra-master-1 mesos-master[24896]: I0811 12:54:18.764307 24902 quota_handler.cpp:113] heuristic: nonStaticClusterResources = 'ports(*):[1025-2180, 2182-3887, 3889-5049, 5052-8079, 8082-8180, 8182-32000]; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*):114791; cpus(*):16; mem(*):60092; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373' Aug 11 12:54:18 hydra-master-1 mesos-master[24896]: I0811 12:54:18.764434 24902 quota_handler.cpp:111] heuristic: nonStaticAgentResources = 'ports(*):[1025-2180, 2182-3887, 3889-5049, 5052-8079, 8082-8180, 8182-32000]; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*):28698; cpus(*):4; mem(*):15023' Aug 11 12:54:18 hydra-master-1 mesos-master[24896]: I0811 12:54:18.764492 24902 quota_handler.cpp:113] heuristic: nonStaticClusterResources = 'ports(*):[1025-2180, 2182-3887, 3889-5049, 5052-8079, 8082-8180, 8182-32000]; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*):143489; cpus(*):20; mem(*):75115; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373; disk(*)[MOUNT:/dcos/volume0]:7373; disk(*)[MOUNT:/dcos/volume1]:7373; disk(*)[MOUNT:/dcos/volume2]:7373' Aug 11 12:54:18 hydra-master-1 mesos-master[24896]: I0811 12:54:18.764562 24902 quota_handler.cpp:118] heuristic: nonStaticClusterResources.contains(totalQuota) ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7892","08/15/2017 12:47:21",3,"Filter results of `/state` on agent by role. ""The results returned by {{/state}} include data about resource reservations per each role, which should be filtered for certain users, particularly in a multi-tenancy scenario. The kind of leaked data includes specific role names and their specific reservations.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7916","08/25/2017 00:05:16",5,"Improve the test coverage of the DefaultExecutor. ""We should write tests for the {{DefaultExecutor}} to cover the following common scenarios: # -Start a task that uses a GPU, and make sure that it is made available to the task.- # -Launch a Docker task with a health check.- # -Launch two tasks and verify that they can access a volume owned by the Executor via {{sandbox_path}} volumes.- # -Launch two tasks, each one in its own task group, and verify that they can access a volume owned by the Executor via {{sandbox_path}} volumes.- # -Launch a task that uses an env secret, make sure that it is accessible.- # -Launch a task using a URI and make sure that the artifact is accessible.- # -Launch a task using a Docker image + URIs, make sure that the fetched artifact is accessible.- # Launch one task and ensure that (health) checks can read from a persistent volume. 
# -Ensure that the executor's env is NOT inherited by the nested tasks.-""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-7917","08/25/2017 18:55:37",3,"Docker statistics not reported on Windows. ""On Windows, the JSON information provided by the agent at the /container API does not contain the expected {{statistics}} object for Docker containers on Windows. This breaks the dcos-metrics tool, required for DC/OS integration on Windows.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7921","08/28/2017 19:02:50",8,"ProcessManager::resume sometimes crashes accessing EventQueue. ""The following segfault is found on [ASF|https://builds.apache.org/job/Mesos-Buildbot/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A14.04,label_exp=(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)/4159/] in {{MesosContainerizerSlaveRecoveryTest.ResourceStatistics}} but it's flaky and shows up in other tests and environments (with or without --enable-lock-free-event-queue) as well. {noformat: title=Configuration} ./bootstrap '&&' ./configure --verbose '&&' make -j6 distcheck A builds@mesos.apache.org query shows many such instances: https://lists.apache.org/list.html?builds@mesos.apache.org:lte=1M:process%3A%3AEventQueue%3A%3AConsumer%3A%3Aempty"""," ./bootstrap '&&' ./configure --verbose '&&' make -j6 distcheck *** Aborted at 1503937885 (unix time) try """"date -d @1503937885"""" if you are using GNU date *** PC: @ 0x2b9e2581caa0 process::EventQueue::Consumer::empty() *** SIGSEGV (@0x8) received by PID 751 (TID 0x2b9e31978700) from PID 8; stack trace: *** @ 0x2b9e29d26330 (unknown) @ 0x2b9e2581caa0 process::EventQueue::Consumer::empty() @ 0x2b9e25800a40 process::ProcessManager::resume() @ 0x2b9e2580f891 process::ProcessManager::init_threads()::$_9::operator()() @ 0x2b9e2580f7d5 _ZNSt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvE3$_9vEE9_M_invokeIJEEEvSt12_Index_tupleIJXspT_EEE @ 0x2b9e2580f7a5 std::_Bind_simple<>::operator()() @ 0x2b9e2580f77c std::thread::_Impl<>::_M_run() @ 0x2b9e29fe5a60 (unknown) @ 0x2b9e29d1e184 start_thread @ 0x2b9e2a851ffd (unknown) make[3]: *** [CMakeFiles/check] Segmentation fault (core dumped) ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7922","08/28/2017 19:30:43",2,"Fix communication between old masters and new agents. ""For re-registration, agents currently send the resources in tasks and executors to the master in the """"post-reservation-refinement"""" format, which is incompatible for pre-1.4 masters. We should change the agent such that it always downgrades the resources to the """"pre-reservation-refinement"""" format, and the master unconditionally upgrade the resources to """"post-reservation-refinement"""" format.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7923","08/28/2017 23:00:14",1,"Make args optional in mesos port mapper plugin ""Current implementation of the mesos-port-mapper plugin fails if the args field is absent in the cni config which makes it very specific to mesos. Instead, if args could be optional then this plugin could be used in a more generic environment. ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0 +"MESOS-7924","08/29/2017 01:39:57",5,"Add a javascript linter to the webui. 
""As far as I can tell, javascript linters (e.g. ESLint) help catch some functional errors as well, for example, we've made some """"strict"""" mistakes a few times that ESLint can catch: MESOS-6624, MESOS-7912.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7934","09/02/2017 00:53:24",5,"OOM due to LibeventSSLSocket send incorrectly returning 0 after shutdown. ""LibeventSSLSocket can return 0 from send incorrectly, which leads the caller to send the data twice! See here: https://github.com/apache/mesos/blob/1.3.1/3rdparty/libprocess/src/libevent_ssl_socket.cpp#L396-L398 In some particular cases, it's possible that the caller keeps getting back 0 and loops infinitely, blowing up the memory and OOMing the process. One example is when a send occurs after a shutdown: """," TEST_F(SSLTest, ShutdownThenSend) { Clock::pause(); Try server = setup_server({ {""""LIBPROCESS_SSL_ENABLED"""", """"true""""}, {""""LIBPROCESS_SSL_KEY_FILE"""", key_path().string()}, {""""LIBPROCESS_SSL_CERT_FILE"""", certificate_path().string()}}); ASSERT_SOME(server); ASSERT_SOME(server.get().address()); ASSERT_SOME(server.get().address().get().hostname()); Future socket = server.get().accept(); Clock::settle(); EXPECT_TRUE(socket.isPending()); Try client = Socket::create(SocketImpl::Kind::SSL); ASSERT_SOME(client); AWAIT_ASSERT_READY(client->connect(server->address().get())); AWAIT_ASSERT_READY(socket); EXPECT_SOME(Socket(socket.get()).shutdown()); // This loops forever! AWAIT_FAILED(Socket(socket.get()).send(""""Hello World"""")); } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7945","09/07/2017 19:56:03",2,"MasterAPITest.EventAuthorizationFiltering is flaky. "" The above commit introduced the test {{MasterAPITest.EventAuthorizationFiltering}} which is flaky."""," commit e4d56bcb65f7bf9805eff18e6a9249eb7512f745 Author: Quinn Leng quinn.leng.666@gmail.com Date: Tue Aug 29 13:13:19 2017 -0700 Added authorization for V1 events. Added authorization filtering for the master V1 operator event stream. Subscribers will only receive events that their principal is authorized to see. The new test 'MasterAPITest.EventAuthorizationFiltering' verifies this behavior. Review: https://reviews.apache.org/r/61189/ ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7946","09/07/2017 20:02:17",2,"DefaultExecutorTest.SigkillExecutor test fails on Windows "" The above commit introduced the test {{MesosContainerizer/DefaultExecutorTest.SigkillExecutor}} which fails on Windows. At a rough glance, if this is dependent on the {{SIGKILL}} signal, it may not be applicable on Windows and just needs to be disabled."""," commit 9ca2cc8ae751905f078b056b265fb4511ea8e5f4 Author: Gilbert Song songzihao1990@gmail.com Date: Tue Aug 29 17:31:35 2017 -0700 Added unit test for killing the default executor process. 
Review: https://reviews.apache.org/r/61981 ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-7947","09/07/2017 20:30:02",8,"Add GC capability to nested containers ""We should extend the existing API or add a new API for nested containers for an executor to tell the Mesos agent that a nested container is no longer needed and can be scheduled for GC.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-7951","09/09/2017 00:26:02",8,"Design Doc for Extended KillPolicy ""After introducing the {{KillPolicy}} in MESOS-4909, some interactions with framework developers have led to the suggestion of a couple possible improvements to this interface. Namely, * Allowing the framework to specify a command to be run to initiate termination, rather than a signal to be sent, would allow some developers to avoid wrapping their application in a signal handler. This is useful because a signal handler wrapper modifies the application's process tree, which may make introspection and debugging more difficult in the case of well-known services with standard debugging procedures. * In the case of terminations which do begin with a signal, it would be useful to allow the framework to specify the signal to be sent, rather than assuming SIGTERM. PostgreSQL, for example, permits several shutdown types, each initiated with a [different signal|https://www.postgresql.org/docs/9.3/static/server-shutdown.html].""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-7964","09/11/2017 20:47:35",2,"Heavy-duty GC makes the agent unresponsive ""An agent is observed to performe heavy-duty GC every half an hour: Each GC activity took 5+ minutes. During the period, the agent became unresponsive, the health check timed out, and no endpoint responded as well. When a disk-usage GC is trigged, around 300 pruning actors would be generated (https://github.com/apache/mesos/blob/master/src/slave/gc.cpp#L229). My hypothesis is that these actors would used all of the worker threads, and some of them took a long time to finish (possibly due to many files to delete, or too many fs operations at once, etc)."""," Sep 07 18:15:56 int-infinityagentm42xl6-soak110.us-east-1a.mesosphere.com mesos-agent[16040]: I0907 18:15:56.900282 16054 slave.cpp:5920] Current disk usage 93.61%. Max allowed age: 0ns Sep 07 18:15:56 int-infinityagentm42xl6-soak110.us-east-1a.mesosphere.com mesos-agent[16040]: I0907 18:15:56.900476 16054 gc.cpp:218] Pruning directories with remaining removal time 1.99022105972148days ... Sep 07 18:22:08 int-infinityagentm42xl6-soak110.us-east-1a.mesosphere.com mesos-agent[16040]: I0907 18:22:08.173645 16050 gc.cpp:178] Deleted '/var/lib/mesos/slave/meta/slaves/9d4b2f2b-a759-4458-bebf-7d3507a6f0ca-S20/frameworks/9750f9be-89d9-4e02-80d3-bdced653e9c3-0258/executors/node__f33065c9-eb42-44a7-9013-25bafc306bd5' ... Sep 07 18:41:08 int-infinityagentm42xl6-soak110.us-east-1a.mesosphere.com mesos-agent[16040]: I0907 18:41:08.195329 16051 slave.cpp:5920] Current disk usage 90.85%. Max allowed age: 0ns Sep 07 18:41:08 int-infinityagentm42xl6-soak110.us-east-1a.mesosphere.com mesos-agent[16040]: I0907 18:41:08.195503 16051 gc.cpp:218] Pruning directories with remaining removal time 1.99028708946667days ... 
Sep 07 18:49:01 int-infinityagentm42xl6-soak110.us-east-1a.mesosphere.com mesos-agent[16040]: I0907 18:49:01.253906 16049 gc.cpp:178] Deleted '/var/lib/mesos/slave/meta/slaves/9d4b2f2b-a759-4458-bebf-7d3507a6f0ca-S20/frameworks/9750f9be-89d9-4e02-80d3-bdced653e9c3-0258/executors/node__014b451a-30de-41ee-b0b1-3733c790382c/runs/c5b922e8-eee0-4793-8637-7abbd7f8507e' ... Sep 07 19:08:01 int-infinityagentm42xl6-soak110.us-east-1a.mesosphere.com mesos-agent[16040]: I0907 19:08:01.291092 16048 slave.cpp:5920] Current disk usage 91.39%. Max allowed age: 0ns Sep 07 19:08:01 int-infinityagentm42xl6-soak110.us-east-1a.mesosphere.com mesos-agent[16040]: I0907 19:08:01.291285 16048 gc.cpp:218] Pruning directories with remaining removal time 1.99028598086815days ... Sep 07 19:14:50 int-infinityagentm42xl6-soak110.us-east-1a.mesosphere.com mesos-agent[16040]: W0907 19:14:50.737226 16050 gc.cpp:174] Failed to delete '/var/lib/mesos/slave/meta/slaves/9d4b2f2b-a759-4458-bebf-7d3507a6f0ca-S20/frameworks/9750f9be-89d9-4e02-80d3-bdced653e9c3-0258/executors/node__4139bf2e-e33b-4743-8527-f8f50ac49280/runs/b1991e28-7ff8-476f-8122-1a483e431ff2': No such file or directory ... Sep 07 19:33:50 int-infinityagentm42xl6-soak110.us-east-1a.mesosphere.com mesos-agent[16040]: I0907 19:33:50.758191 16052 slave.cpp:5920] Current disk usage 91.39%. Max allowed age: 0ns Sep 07 19:33:50 int-infinityagentm42xl6-soak110.us-east-1a.mesosphere.com mesos-agent[16040]: I0907 19:33:50.758872 16047 gc.cpp:218] Pruning directories with remaining removal time 1.99028057238519days ... Sep 07 19:39:43 int-infinityagentm42xl6-soak110.us-east-1a.mesosphere.com mesos-agent[16040]: I0907 19:39:43.081485 16052 gc.cpp:178] Deleted '/var/lib/mesos/slave/meta/slaves/9d4b2f2b-a759-4458-bebf-7d3507a6f0ca-S20/frameworks/9750f9be-89d9-4e02-80d3-bdced653e9c3-0258/executors/node__d89dce1f-609b-4cf8-957a-5ba198be7828' ... Sep 07 19:59:43 int-infinityagentm42xl6-soak110.us-east-1a.mesosphere.com mesos-agent[16040]: I0907 19:59:43.150535 16048 slave.cpp:5920] Current disk usage 94.56%. Max allowed age: 0ns Sep 07 19:59:43 int-infinityagentm42xl6-soak110.us-east-1a.mesosphere.com mesos-agent[16040]: I0907 19:59:43.150869 16054 gc.cpp:218] Pruning directories with remaining removal time 1.98959316198222days ... Sep 07 20:06:16 int-infinityagentm42xl6-soak110.us-east-1a.mesosphere.com mesos-agent[16040]: I0907 20:06:16.251552 16051 gc.cpp:178] Deleted '/var/lib/mesos/slave/slaves/9d4b2f2b-a759-4458-bebf-7d3507a6f0ca-S20/frameworks/9750f9be-89d9-4e02-80d3-bdced653e9c3-0259/executors/data__45283e7d-9a5e-4d4b-9901-b7f1e096cd54/runs/5cfc5e3e-3975-41aa-846b-c125eb529fbe' ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7966","09/12/2017 17:36:52",5,"check for maintenance on agent causes fatal error ""We interact with the maintenance API frequently to orchestrate gracefully draining agents of tasks without impacting service availability. Occasionally we seem to trigger a fatal error in Mesos when interacting with the api. This happens relatively frequently, and impacts us when downstream frameworks (marathon) react badly to leader elections. Here is the log line that we see when the master dies: It's quite possibly we're using the maintenance API in the wrong way. We're happy to provide any other logs you need - please let me know what would be useful for debugging. 
Thanks."""," F0911 12:18:49.543401 123748 hierarchical.cpp:872] Check failed: slaves[slaveId].maintenance.isSome() ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7970","09/12/2017 19:10:12",2,"Adding process::Executor::execute() ""It would be easier to use {{process::Executor}} if we can add an {{execute()}} interface that runs a function asynchronously and returns a {{Future}}, so we do the following: """," process::Executor executor; executor.execute(f, a0 a1) .then(executor.defer(g)); ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7972","09/12/2017 19:52:39",1,"SlaveTest.HTTPSchedulerSlaveRestart test is flaky. ""Saw this on ASF CI when testing 1.4.0-rc5 """," [ RUN ] SlaveTest.HTTPSchedulerSlaveRestart I0912 05:40:15.280185 32547 cluster.cpp:162] Creating default 'local' authorizer I0912 05:40:15.282783 32554 master.cpp:442] Master c23ff8cf-cb2f-40d0-8f18-871a41f128cf (b909d5e22907) started on 172.17.0.2:58922 I0912 05:40:15.282804 32554 master.cpp:444] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/he1E9j/credentials"""" --filter_gpu_resources=""""true"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_unreachable_tasks_per_framework=""""1000"""" --port=""""5050"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""in_memory"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/he1E9j/master"""" --zk_session_timeout=""""10secs"""" I0912 05:40:15.283092 32554 master.cpp:494] Master only allowing authenticated frameworks to register I0912 05:40:15.283110 32554 master.cpp:508] Master only allowing authenticated agents to register I0912 05:40:15.283118 32554 master.cpp:521] Master only allowing authenticated HTTP frameworks to register I0912 05:40:15.283123 32554 credentials.hpp:37] Loading credentials for authentication from '/tmp/he1E9j/credentials' I0912 05:40:15.283394 32554 master.cpp:566] Using default 'crammd5' authenticator I0912 05:40:15.283543 32554 http.cpp:1026] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readonly' I0912 05:40:15.283731 32554 http.cpp:1026] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' I0912 05:40:15.283887 32554 http.cpp:1026] Creating default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' I0912 05:40:15.284021 32554 master.cpp:646] 
Authorization enabled I0912 05:40:15.284293 32552 whitelist_watcher.cpp:77] No whitelist given I0912 05:40:15.284335 32550 hierarchical.cpp:171] Initialized hierarchical allocator process I0912 05:40:15.287078 32561 master.cpp:2163] Elected as the leading master! I0912 05:40:15.287103 32561 master.cpp:1702] Recovering from registrar I0912 05:40:15.287214 32557 registrar.cpp:347] Recovering registrar I0912 05:40:15.287703 32557 registrar.cpp:391] Successfully fetched the registry (0B) in 455936ns I0912 05:40:15.287791 32557 registrar.cpp:495] Applied 1 operations in 24179ns; attempting to update the registry I0912 05:40:15.288317 32557 registrar.cpp:552] Successfully updated the registry in 473088ns I0912 05:40:15.288435 32557 registrar.cpp:424] Successfully recovered registrar I0912 05:40:15.288789 32548 master.cpp:1801] Recovered 0 agents from the registry (129B); allowing 10mins for agents to re-register I0912 05:40:15.288822 32559 hierarchical.cpp:209] Skipping recovery of hierarchical allocator: nothing to recover I0912 05:40:15.292457 32547 containerizer.cpp:246] Using isolation: posix/cpu,posix/mem,filesystem/posix,network/cni,environment_secret W0912 05:40:15.293053 32547 backend.cpp:76] Failed to create 'aufs' backend: AufsBackend requires root privileges W0912 05:40:15.293184 32547 backend.cpp:76] Failed to create 'bind' backend: BindBackend requires root privileges I0912 05:40:15.293220 32547 provisioner.cpp:255] Using default backend 'copy' W0912 05:40:15.297993 32547 process.cpp:3196] Attempted to spawn already running process files@172.17.0.2:58922 I0912 05:40:15.298338 32547 cluster.cpp:448] Creating default 'local' authorizer I0912 05:40:15.300554 32551 slave.cpp:250] Mesos agent started on (198)@172.17.0.2:58922 I0912 05:40:15.300576 32551 slave.cpp:251] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x/store/appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x/credential"""" --default_role=""""*"""" --disallow_sharing_agent_pid_namespace=""""false"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_reregistration_timeout=""""2: secs"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_command_executor=""""false"""" 
--http_credentials=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x/http_credentials"""" --http_heartbeat_interval=""""30secs"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/mesos/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --port=""""5051"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V"""" I0912 05:40:15.301059 32551 credentials.hpp:86] Loading credential for authentication from '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x/credential' W0912 05:40:15.301174 32547 process.cpp:3196] Attempted to spawn already running process version@172.17.0.2:58922 I0912 05:40:15.301239 32551 slave.cpp:283] Agent using credential for: test-principal I0912 05:40:15.301256 32551 credentials.hpp:37] Loading credentials for authentication from '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x/http_credentials' I0912 05:40:15.301512 32551 http.cpp:1026] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' I0912 05:40:15.301681 32551 http.cpp:1026] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' I0912 05:40:15.301935 32547 sched.cpp:232] Version: 1.4.0 I0912 05:40:15.302479 32557 sched.cpp:336] New master detected at master@172.17.0.2:58922 I0912 05:40:15.302592 32557 sched.cpp:407] Authenticating with master master@172.17.0.2:58922 I0912 05:40:15.302614 32557 sched.cpp:414] Using default CRAM-MD5 authenticatee I0912 05:40:15.302922 32553 authenticatee.cpp:121] Creating new client SASL connection I0912 05:40:15.303220 32562 master.cpp:7832] Authenticating scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 I0912 05:40:15.303400 32556 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(406)@172.17.0.2:58922 I0912 05:40:15.303673 32554 authenticator.cpp:98] Creating new server SASL connection I0912 05:40:15.303473 32551 slave.cpp:565] Agent resources: [{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":2.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] I0912 05:40:15.303707 32551 slave.cpp:573] Agent attributes: [ ] I0912 05:40:15.303717 32551 slave.cpp:582] Agent hostname: b909d5e22907 I0912 05:40:15.303900 32559 status_update_manager.cpp:177] Pausing sending status updates I0912 05:40:15.304033 32548 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I0912 05:40:15.304070 32548 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' 
I0912 05:40:15.304189 32548 authenticator.cpp:204] Received SASL authentication start I0912 05:40:15.304265 32548 authenticator.cpp:326] Authentication requires more steps I0912 05:40:15.304404 32561 authenticatee.cpp:259] Received SASL authentication step I0912 05:40:15.304566 32549 authenticator.cpp:232] Received SASL authentication step I0912 05:40:15.304603 32549 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'b909d5e22907' server FQDN: 'b909d5e22907' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0912 05:40:15.304615 32549 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I0912 05:40:15.304647 32549 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0912 05:40:15.304671 32549 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'b909d5e22907' server FQDN: 'b909d5e22907' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0912 05:40:15.304682 32549 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0912 05:40:15.304697 32549 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0912 05:40:15.304715 32549 authenticator.cpp:318] Authentication success I0912 05:40:15.304852 32563 authenticatee.cpp:299] Authentication success I0912 05:40:15.304916 32552 master.cpp:7862] Successfully authenticated principal 'test-principal' at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 I0912 05:40:15.305004 32557 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(406)@172.17.0.2:58922 I0912 05:40:15.305253 32549 sched.cpp:513] Successfully authenticated with master master@172.17.0.2:58922 I0912 05:40:15.305269 32549 sched.cpp:836] Sending SUBSCRIBE call to master@172.17.0.2:58922 I0912 05:40:15.305433 32549 sched.cpp:869] Will retry registration in 237.896638ms if necessary I0912 05:40:15.305629 32555 state.cpp:64] Recovering state from '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta' I0912 05:40:15.305652 32559 master.cpp:2894] Received SUBSCRIBE call for framework 'default' at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 I0912 05:40:15.305742 32559 master.cpp:2228] Authorizing framework principal 'test-principal' to receive offers for roles '{ * }' I0912 05:40:15.305963 32560 status_update_manager.cpp:203] Recovering status update manager I0912 05:40:15.306152 32550 containerizer.cpp:609] Recovering containerizer I0912 05:40:15.306252 32553 master.cpp:2974] Subscribing framework default with checkpointing enabled and capabilities [ RESERVATION_REFINEMENT ] I0912 05:40:15.306928 32559 sched.cpp:759] Framework registered with c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.307013 32559 sched.cpp:773] Scheduler::registered took 58136ns I0912 05:40:15.307162 32552 hierarchical.cpp:303] Added framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.307384 32552 hierarchical.cpp:1925] No allocations performed I0912 05:40:15.307423 32552 hierarchical.cpp:2015] No inverse offers to send out! 
I0912 05:40:15.307464 32552 hierarchical.cpp:1468] Performed allocation for 0 agents in 124365ns I0912 05:40:15.308010 32557 provisioner.cpp:416] Provisioner recovery complete I0912 05:40:15.308349 32556 slave.cpp:6295] Finished recovery I0912 05:40:15.308863 32556 slave.cpp:6477] Querying resource estimator for oversubscribable resources I0912 05:40:15.309139 32562 slave.cpp:6491] Received oversubscribable resources {} from the resource estimator I0912 05:40:15.309347 32562 slave.cpp:971] New master detected at master@172.17.0.2:58922 I0912 05:40:15.309409 32550 status_update_manager.cpp:177] Pausing sending status updates I0912 05:40:15.309500 32562 slave.cpp:1006] Detecting new master I0912 05:40:15.311897 32559 slave.cpp:1033] Authenticating with master master@172.17.0.2:58922 I0912 05:40:15.311975 32559 slave.cpp:1044] Using default CRAM-MD5 authenticatee I0912 05:40:15.312253 32560 authenticatee.cpp:121] Creating new client SASL connection I0912 05:40:15.312513 32560 master.cpp:7832] Authenticating slave(198)@172.17.0.2:58922 I0912 05:40:15.312654 32548 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(407)@172.17.0.2:58922 I0912 05:40:15.312940 32558 authenticator.cpp:98] Creating new server SASL connection I0912 05:40:15.313187 32552 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I0912 05:40:15.313213 32552 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I0912 05:40:15.313313 32552 authenticator.cpp:204] Received SASL authentication start I0912 05:40:15.313364 32552 authenticator.cpp:326] Authentication requires more steps I0912 05:40:15.313478 32551 authenticatee.cpp:259] Received SASL authentication step I0912 05:40:15.313613 32553 authenticator.cpp:232] Received SASL authentication step I0912 05:40:15.313649 32553 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'b909d5e22907' server FQDN: 'b909d5e22907' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0912 05:40:15.313673 32553 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I0912 05:40:15.313743 32553 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0912 05:40:15.313788 32553 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'b909d5e22907' server FQDN: 'b909d5e22907' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0912 05:40:15.313808 32553 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0912 05:40:15.313817 32553 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0912 05:40:15.313833 32553 authenticator.cpp:318] Authentication success I0912 05:40:15.313931 32557 authenticatee.cpp:299] Authentication success I0912 05:40:15.314019 32554 master.cpp:7862] Successfully authenticated principal 'test-principal' at slave(198)@172.17.0.2:58922 I0912 05:40:15.314079 32553 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(407)@172.17.0.2:58922 I0912 05:40:15.314239 32555 slave.cpp:1128] Successfully authenticated with master master@172.17.0.2:58922 I0912 05:40:15.314457 32555 slave.cpp:1607] Will retry registration in 9.221574ms if necessary I0912 05:40:15.314672 32561 master.cpp:5714] Received register agent message from slave(198)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.314810 32561 master.cpp:3803] Authorizing agent 
with principal 'test-principal' I0912 05:40:15.315261 32548 master.cpp:5774] Authorized registration of agent at slave(198)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.315383 32548 master.cpp:5867] Registering agent at slave(198)@172.17.0.2:58922 (b909d5e22907) with id c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 I0912 05:40:15.315827 32558 registrar.cpp:495] Applied 1 operations in 55999ns; attempting to update the registry I0912 05:40:15.316412 32558 registrar.cpp:552] Successfully updated the registry in 528896ns I0912 05:40:15.316654 32557 master.cpp:5914] Admitted agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(198)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.317286 32554 slave.cpp:4970] Received ping from slave-observer(191)@172.17.0.2:58922 I0912 05:40:15.317461 32554 slave.cpp:1174] Registered with master master@172.17.0.2:58922; given agent ID c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 I0912 05:40:15.317587 32553 status_update_manager.cpp:184] Resuming sending status updates I0912 05:40:15.317279 32557 master.cpp:5945] Registered agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(198)@172.17.0.2:58922 (b909d5e22907) with [{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":2.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] I0912 05:40:15.317819 32562 hierarchical.cpp:593] Added agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 (b909d5e22907) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] (allocated: {}) I0912 05:40:15.317857 32554 slave.cpp:1194] Checkpointing SlaveInfo to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/slave.info' I0912 05:40:15.318280 32554 slave.cpp:1243] Forwarding total oversubscribed resources {} I0912 05:40:15.318450 32554 master.cpp:6683] Received update of agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(198)@172.17.0.2:58922 (b909d5e22907) with total oversubscribed resources {} I0912 05:40:15.319030 32562 hierarchical.cpp:2015] No inverse offers to send out! 
I0912 05:40:15.319090 32562 hierarchical.cpp:1468] Performed allocation for 1 agents in 1.101144ms I0912 05:40:15.319267 32562 hierarchical.cpp:660] Agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 (b909d5e22907) updated with total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000] I0912 05:40:15.319643 32555 master.cpp:7662] Sending 1 offers to framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (default) at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 I0912 05:40:15.320127 32561 sched.cpp:933] Scheduler::resourceOffers took 109341ns I0912 05:40:15.322115 32550 master.cpp:9159] Removing offer c23ff8cf-cb2f-40d0-8f18-871a41f128cf-O0 I0912 05:40:15.322265 32550 master.cpp:4153] Processing ACCEPT call for offers: [ c23ff8cf-cb2f-40d0-8f18-871a41f128cf-O0 ] on agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(198)@172.17.0.2:58922 (b909d5e22907) for framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (default) at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 I0912 05:40:15.322368 32550 master.cpp:3530] Authorizing framework principal 'test-principal' to launch task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec I0912 05:40:15.324560 32550 master.cpp:9719] Adding task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec with resources [{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""cpus"""",""""scalar"""":{""""value"""":2.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""mem"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""disk"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] on agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(198)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.325297 32550 master.cpp:4816] Launching task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (default) at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 with resources [{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""cpus"""",""""scalar"""":{""""value"""":2.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""mem"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""disk"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] on agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(198)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.327203 32560 slave.cpp:1736] Got assigned task '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' for framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.327380 32560 slave.cpp:7175] Checkpointing FrameworkInfo to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/framework.info' I0912 05:40:15.327888 32560 slave.cpp:7186] Checkpointing framework pid '@0.0.0.0:0' to 
'/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/framework.pid' I0912 05:40:15.327944 32550 hierarchical.cpp:887] Updated allocation of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 on agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 from cpus(allocated: *):2; mem(allocated: *):1024; disk(allocated: *):1024; ports(allocated: *):[31000-32000] to cpus(allocated: *):2; mem(allocated: *):1024; disk(allocated: *):1024; ports(allocated: *):[31000-32000] I0912 05:40:15.328968 32560 slave.cpp:2003] Authorizing task '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' for framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.329071 32560 slave.cpp:6794] Authorizing framework principal 'test-principal' to launch task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec I0912 05:40:15.330121 32553 slave.cpp:2171] Launching task '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' for framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.330823 32553 paths.cpp:578] Trying to chown '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/executors/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec/runs/69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5' to user 'mesos' I0912 05:40:15.331084 32553 slave.cpp:7757] Checkpointing ExecutorInfo to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/executors/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec/executor.info' I0912 05:40:15.331904 32553 slave.cpp:7256] Launching executor '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 with resources [{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""cpus"""",""""scalar"""":{""""value"""":0.1},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""mem"""",""""scalar"""":{""""value"""":32.0},""""type"""":""""SCALAR""""}] in work directory '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/executors/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec/runs/69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5' I0912 05:40:15.332718 32553 slave.cpp:2858] Launching container 69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5 for executor '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.333190 32554 containerizer.cpp:1083] Starting container 69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5 I0912 05:40:15.333230 32553 slave.cpp:7800] Checkpointing TaskInfo to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/executors/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec/runs/69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5/tasks/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec/task.info' I0912 05:40:15.333696 32554 containerizer.cpp:2712] Transitioning the state of container 69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5 from PROVISIONING to PREPARING I0912 05:40:15.333937 32553 slave.cpp:2400] Queued task '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' for executor '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.334064 32553 slave.cpp:924] Successfully attached file 
'/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/executors/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec/runs/69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5' I0912 05:40:15.334168 32553 slave.cpp:924] Successfully attached file '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/executors/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec/runs/69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5' I0912 05:40:15.338408 32556 containerizer.cpp:1681] Launching 'mesos-containerizer' with flags '--help=""""false"""" --launch_info=""""{""""command"""":{""""arguments"""":[""""mesos-executor"""",""""--launcher_dir=\/mesos\/build\/src""""],""""shell"""":false,""""value"""":""""\/mesos\/build\/src\/mesos-executor""""},""""environment"""":{""""variables"""":[{""""name"""":""""LIBPROCESS_PORT"""",""""type"""":""""VALUE"""",""""value"""":""""0""""},{""""name"""":""""MESOS_AGENT_ENDPOINT"""",""""type"""":""""VALUE"""",""""value"""":""""172.17.0.2:58922""""},{""""name"""":""""MESOS_CHECKPOINT"""",""""type"""":""""VALUE"""",""""value"""":""""1""""},{""""name"""":""""MESOS_DIRECTORY"""",""""type"""":""""VALUE"""",""""value"""":""""\/tmp\/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V\/slaves\/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0\/frameworks\/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000\/executors\/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec\/runs\/69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5""""},{""""name"""":""""MESOS_EXECUTOR_ID"""",""""type"""":""""VALUE"""",""""value"""":""""8fc99bc8-a2b6-498b-8bb2-af5d92e78cec""""},{""""name"""":""""MESOS_EXECUTOR_SHUTDOWN_GRACE_PERIOD"""",""""type"""":""""VALUE"""",""""value"""":""""5secs""""},{""""name"""":""""MESOS_FRAMEWORK_ID"""",""""type"""":""""VALUE"""",""""value"""":""""c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000""""},{""""name"""":""""MESOS_HTTP_COMMAND_EXECUTOR"""",""""type"""":""""VALUE"""",""""value"""":""""0""""},{""""name"""":""""MESOS_RECOVERY_TIMEOUT"""",""""type"""":""""VALUE"""",""""value"""":""""15mins""""},{""""name"""":""""MESOS_SLAVE_ID"""",""""type"""":""""VALUE"""",""""value"""":""""c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0""""},{""""name"""":""""MESOS_SLAVE_PID"""",""""type"""":""""VALUE"""",""""value"""":""""slave(198)@172.17.0.2:58922""""},{""""name"""":""""MESOS_SUBSCRIPTION_BACKOFF_MAX"""",""""type"""":""""VALUE"""",""""value"""":""""2secs""""},{""""name"""":""""MESOS_SANDBOX"""",""""type"""":""""VALUE"""",""""value"""":""""\/tmp\/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V\/slaves\/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0\/frameworks\/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000\/executors\/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec\/runs\/69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5""""}]},""""task_environment"""":{},""""user"""":""""mesos"""",""""working_directory"""":""""\/tmp\/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V\/slaves\/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0\/frameworks\/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000\/executors\/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec\/runs\/69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5""""}"""" --pipe_read=""""6"""" --pipe_write=""""7"""" --runtime_directory=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x/containers/69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5"""" --unshare_namespace_mnt=""""false""""' I0912 05:40:15.340767 32556 launcher.cpp:140] Forked child with pid '1772' for container '69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5' I0912 05:40:15.340893 32556 containerizer.cpp:1773] Checkpointing container's 
forked pid 1772 to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/executors/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec/runs/69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5/pids/forked.pid' I0912 05:40:15.341821 32556 containerizer.cpp:2712] Transitioning the state of container 69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5 from PREPARING to ISOLATING I0912 05:40:15.343189 32558 containerizer.cpp:2712] Transitioning the state of container 69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5 from ISOLATING to FETCHING I0912 05:40:15.343369 32560 fetcher.cpp:377] Starting to fetch URIs for container: 69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5, directory: /tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/executors/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec/runs/69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5 I0912 05:40:15.344462 32549 containerizer.cpp:2712] Transitioning the state of container 69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5 from FETCHING to RUNNING I0912 05:40:15.504098 1787 exec.cpp:162] Version: 1.4.0 I0912 05:40:15.510535 32550 slave.cpp:3935] Got registration for executor '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 from executor(1)@172.17.0.2:33722 I0912 05:40:15.511157 32550 slave.cpp:4021] Checkpointing executor pid 'executor(1)@172.17.0.2:33722' to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/executors/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec/runs/69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5/pids/libprocess.pid' I0912 05:40:15.513628 32552 slave.cpp:2605] Sending queued task '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' to executor '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 at executor(1)@172.17.0.2:33722 I0912 05:40:15.517511 1780 exec.cpp:237] Executor registered on agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 I0912 05:40:15.521653 1774 executor.cpp:171] Received SUBSCRIBED event I0912 05:40:15.522095 1774 executor.cpp:175] Subscribed executor on b909d5e22907 I0912 05:40:15.522334 1774 executor.cpp:171] Received LAUNCH event I0912 05:40:15.522544 1774 executor.cpp:633] Starting task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec I0912 05:40:15.528475 1774 executor.cpp:477] Running '/mesos/build/src/mesos-containerizer launch ' I0912 05:40:15.531814 1774 executor.cpp:646] Forked command at 1791 I0912 05:40:15.538535 32556 slave.cpp:4399] Handling status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 from executor(1)@172.17.0.2:33722 I0912 05:40:15.540377 32548 status_update_manager.cpp:323] Received status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.540426 32548 status_update_manager.cpp:500] Creating StatusUpdate stream for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.541287 32548 status_update_manager.cpp:834] Checkpointing UPDATE for status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.541561 32548 
status_update_manager.cpp:377] Forwarding update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 to the agent I0912 05:40:15.541859 32559 slave.cpp:4880] Forwarding the update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 to master@172.17.0.2:58922 I0912 05:40:15.542114 32559 slave.cpp:4774] Status update manager successfully handled status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.542174 32559 slave.cpp:4790] Sending acknowledgement for status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 to executor(1)@172.17.0.2:33722 I0912 05:40:15.542295 32552 master.cpp:6841] Status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 from agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(198)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.542371 32552 master.cpp:6903] Forwarding status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.542628 32552 master.cpp:8928] Updating the state of task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0912 05:40:15.542891 32563 sched.cpp:1041] Scheduler::statusUpdate took 114540ns I0912 05:40:15.543287 32550 master.cpp:5479] Processing ACKNOWLEDGE call b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (default) at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 on agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 I0912 05:40:15.543305 32547 slave.cpp:843] Agent terminating I0912 05:40:15.543632 32550 master.cpp:1318] Agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(198)@172.17.0.2:58922 (b909d5e22907) disconnected I0912 05:40:15.543651 32550 master.cpp:3301] Disconnecting agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(198)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.543767 32547 containerizer.cpp:246] Using isolation: posix/cpu,posix/mem,filesystem/posix,network/cni,environment_secret I0912 05:40:15.543817 32550 master.cpp:3320] Deactivating agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(198)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.543967 32560 hierarchical.cpp:690] Agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 deactivated W0912 05:40:15.544199 32547 backend.cpp:76] Failed to create 'aufs' backend: AufsBackend requires root privileges W0912 05:40:15.544307 32547 backend.cpp:76] Failed to create 'bind' backend: BindBackend requires root privileges I0912 05:40:15.544339 32547 provisioner.cpp:255] Using default backend 'copy' W0912 05:40:15.551013 32547 process.cpp:3196] Attempted to spawn already running process files@172.17.0.2:58922 I0912 05:40:15.551386 32547 cluster.cpp:448] Creating default 'local' authorizer I0912 05:40:15.554386 32555 slave.cpp:250] Mesos agent started on (199)@172.17.0.2:58922 I0912 
05:40:15.554404 32555 slave.cpp:251] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x/store/appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x/credential"""" --default_role=""""*"""" --disallow_sharing_agent_pid_namespace=""""false"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_reregistration_timeout=""""2secs"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_command_executor=""""false"""" --http_credentials=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x/http_credentials"""" --http_heartbeat_interval=""""30secs"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/mesos/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --port=""""5051"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V"""" I0912 05:40:15.554872 32555 credentials.hpp:86] Loading credential for authentication from '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x/credential' I0912 05:40:15.555035 32555 slave.cpp:283] Agent using credential for: test-principal I0912 05:40:15.555052 32555 credentials.hpp:37] Loading credentials for authentication from '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_n3xE7x/http_credentials' I0912 05:40:15.555235 32555 http.cpp:1026] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' I0912 05:40:15.555388 32555 http.cpp:1026] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' I0912 
05:40:15.556735 32555 slave.cpp:565] Agent resources: [{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":2.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] I0912 05:40:15.556988 32555 slave.cpp:573] Agent attributes: [ ] I0912 05:40:15.557003 32555 slave.cpp:582] Agent hostname: b909d5e22907 I0912 05:40:15.557221 32560 status_update_manager.cpp:177] Pausing sending status updates I0912 05:40:15.558465 32558 state.cpp:64] Recovering state from '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta' I0912 05:40:15.558528 32558 state.cpp:722] No committed checkpointed resources found at '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta/resources/resources.info' I0912 05:40:15.561717 32551 slave.cpp:6386] Recovering framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.561810 32551 slave.cpp:7335] Recovering executor '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.562430 32552 status_update_manager.cpp:203] Recovering status update manager I0912 05:40:15.562449 32552 status_update_manager.cpp:211] Recovering executor '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.562503 32552 status_update_manager.cpp:500] Creating StatusUpdate stream for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.562918 32552 status_update_manager.cpp:810] Replaying status update stream for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec I0912 05:40:15.563284 32556 containerizer.cpp:609] Recovering containerizer I0912 05:40:15.563344 32556 containerizer.cpp:665] Recovering container 69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5 for executor '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.565598 32551 provisioner.cpp:416] Provisioner recovery complete I0912 05:40:15.566550 32553 slave.cpp:6179] Sending reconnect request to executor '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 at executor(1)@172.17.0.2:33722 I0912 05:40:15.567891 1775 exec.cpp:283] Received reconnect request from agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 I0912 05:40:15.568752 32558 slave.cpp:4327] Cleaning up un-reregistered executors I0912 05:40:15.568778 32558 slave.cpp:4345] Killing un-reregistered executor '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 at executor(1)@172.17.0.2:33722 I0912 05:40:15.568904 32551 containerizer.cpp:2166] Destroying container 69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5 in RUNNING state I0912 05:40:15.568922 32559 hierarchical.cpp:1925] No allocations performed I0912 05:40:15.569078 32559 hierarchical.cpp:2015] No inverse offers to send out! 
I0912 05:40:15.568987 32551 containerizer.cpp:2712] Transitioning the state of container 69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5 from RUNNING to DESTROYING I0912 05:40:15.569145 32559 hierarchical.cpp:1468] Performed allocation for 1 agents in 332649ns I0912 05:40:15.569416 32551 launcher.cpp:156] Asked to destroy container 69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5 I0912 05:40:15.568934 32558 slave.cpp:6295] Finished recovery I0912 05:40:15.572386 32558 slave.cpp:6477] Querying resource estimator for oversubscribable resources I0912 05:40:15.572738 32558 slave.cpp:4109] Received re-registration message from executor '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 W0912 05:40:15.572798 32558 slave.cpp:4161] Shutting down executor '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 at executor(1)@172.17.0.2:33722 because it is in unexpected state TERMINATING I0912 05:40:15.573163 32558 slave.cpp:971] New master detected at master@172.17.0.2:58922 I0912 05:40:15.573194 32562 status_update_manager.cpp:177] Pausing sending status updates I0912 05:40:15.573314 32558 slave.cpp:1006] Detecting new master I0912 05:40:15.573434 32558 slave.cpp:6491] Received oversubscribable resources {} from the resource estimator I0912 05:40:15.573761 1789 exec.cpp:435] Executor asked to shutdown I0912 05:40:15.574031 1782 executor.cpp:171] Received SHUTDOWN event I0912 05:40:15.574048 1782 executor.cpp:743] Shutting down I0912 05:40:15.574089 1782 executor.cpp:850] Sending SIGTERM to process tree at pid 1791 I0912 05:40:15.580627 32553 slave.cpp:1033] Authenticating with master master@172.17.0.2:58922 I0912 05:40:15.580713 32553 slave.cpp:1044] Using default CRAM-MD5 authenticatee I0912 05:40:15.581008 32556 authenticatee.cpp:121] Creating new client SASL connection I0912 05:40:15.581377 32555 master.cpp:7832] Authenticating slave(199)@172.17.0.2:58922 I0912 05:40:15.581524 32561 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(408)@172.17.0.2:58922 I0912 05:40:15.581822 32563 authenticator.cpp:98] Creating new server SASL connection I0912 05:40:15.582089 32554 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I0912 05:40:15.582123 32554 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I0912 05:40:15.582270 32549 authenticator.cpp:204] Received SASL authentication start I0912 05:40:15.582330 32549 authenticator.cpp:326] Authentication requires more steps I0912 05:40:15.582463 32549 authenticatee.cpp:259] Received SASL authentication step I0912 05:40:15.582597 32560 authenticator.cpp:232] Received SASL authentication step I0912 05:40:15.582625 32560 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'b909d5e22907' server FQDN: 'b909d5e22907' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I0912 05:40:15.582641 32560 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I0912 05:40:15.582676 32560 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I0912 05:40:15.582695 32560 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'b909d5e22907' server FQDN: 'b909d5e22907' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I0912 05:40:15.582702 32560 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I0912 05:40:15.582707 32560 
auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I0912 05:40:15.582720 32560 authenticator.cpp:318] Authentication success I0912 05:40:15.582815 32562 authenticatee.cpp:299] Authentication success I0912 05:40:15.582855 32552 master.cpp:7862] Successfully authenticated principal 'test-principal' at slave(199)@172.17.0.2:58922 I0912 05:40:15.582882 32558 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(408)@172.17.0.2:58922 I0912 05:40:15.583106 32562 slave.cpp:1128] Successfully authenticated with master master@172.17.0.2:58922 I0912 05:40:15.583451 32562 slave.cpp:1607] Will retry registration in 949976ns if necessary I0912 05:40:15.583799 32561 master.cpp:6014] Received re-register agent message from agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(199)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.584031 32561 master.cpp:3803] Authorizing agent with principal 'test-principal' I0912 05:40:15.584475 32554 master.cpp:6083] Authorized re-registration of agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(199)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.584573 32554 master.cpp:6148] Re-registering agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(198)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.584985 32557 hierarchical.cpp:678] Agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 reactivated I0912 05:40:15.584985 32554 master.cpp:6581] Sending updated checkpointed resources {} to agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(199)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.585088 32559 slave.cpp:1607] Will retry registration in 30.237322ms if necessary I0912 05:40:15.585434 32559 slave.cpp:1286] Re-registered with master master@172.17.0.2:58922 I0912 05:40:15.585535 32559 slave.cpp:1323] Forwarding total oversubscribed resources {} I0912 05:40:15.585543 32556 status_update_manager.cpp:184] Resuming sending status updates I0912 05:40:15.585543 32554 master.cpp:6014] Received re-register agent message from agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(199)@172.17.0.2:58922 (b909d5e22907) W0912 05:40:15.585662 32556 status_update_manager.cpp:191] Resending status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.585732 32556 status_update_manager.cpp:377] Forwarding update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 to the agent I0912 05:40:15.585903 32554 master.cpp:3803] Authorizing agent with principal 'test-principal' I0912 05:40:15.585963 32559 slave.cpp:3430] Ignoring new checkpointed resources identical to the current version: {} I0912 05:40:15.586246 32554 master.cpp:6683] Received update of agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(199)@172.17.0.2:58922 (b909d5e22907) with total oversubscribed resources {} I0912 05:40:15.586230 32559 slave.cpp:4880] Forwarding the update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 to master@172.17.0.2:58922 I0912 05:40:15.586472 32554 master.cpp:6083] Authorized re-registration of agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(199)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.586551 32556 hierarchical.cpp:660] Agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 
(b909d5e22907) updated with total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000] I0912 05:40:15.586566 32554 master.cpp:6148] Re-registering agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(199)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.586849 32554 master.cpp:6581] Sending updated checkpointed resources {} to agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(199)@172.17.0.2:58922 (b909d5e22907) W0912 05:40:15.586864 32563 slave.cpp:1304] Already re-registered with master master@172.17.0.2:58922 I0912 05:40:15.586884 32563 slave.cpp:1323] Forwarding total oversubscribed resources {} I0912 05:40:15.587103 32563 slave.cpp:3366] Updating info for framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.587175 32563 slave.cpp:7175] Checkpointing FrameworkInfo to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/framework.info' I0912 05:40:15.587147 32554 master.cpp:6841] Status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 from agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(199)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.587221 32554 master.cpp:6903] Forwarding status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.587436 32554 master.cpp:8928] Updating the state of task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0912 05:40:15.587570 32554 master.cpp:6683] Received update of agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(199)@172.17.0.2:58922 (b909d5e22907) with total oversubscribed resources {} I0912 05:40:15.587620 32557 sched.cpp:1041] Scheduler::statusUpdate took 30617ns I0912 05:40:15.587770 32563 slave.cpp:7186] Checkpointing framework pid '@0.0.0.0:0' to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/framework.pid' I0912 05:40:15.587935 32554 master.cpp:5479] Processing ACKNOWLEDGE call b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (default) at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 on agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 I0912 05:40:15.587941 32560 hierarchical.cpp:660] Agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 (b909d5e22907) updated with total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000] I0912 05:40:15.588253 32553 status_update_manager.cpp:184] Resuming sending status updates W0912 05:40:15.588287 32553 status_update_manager.cpp:191] Resending status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.588327 32563 slave.cpp:3366] Updating info for framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 with pid updated to scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 I0912 05:40:15.588326 32553 status_update_manager.cpp:377] Forwarding update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework 
c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 to the agent I0912 05:40:15.588404 32563 slave.cpp:7175] Checkpointing FrameworkInfo to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/framework.info' I0912 05:40:15.588801 32563 slave.cpp:7186] Checkpointing framework pid 'scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922' to '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/framework.pid' I0912 05:40:15.589190 32563 slave.cpp:3430] Ignoring new checkpointed resources identical to the current version: {} I0912 05:40:15.589197 32552 status_update_manager.cpp:184] Resuming sending status updates W0912 05:40:15.589220 32552 status_update_manager.cpp:191] Resending status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.589243 32552 status_update_manager.cpp:377] Forwarding update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 to the agent I0912 05:40:15.589283 32563 slave.cpp:4948] Sending message for framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 to scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 I0912 05:40:15.589493 32548 sched.cpp:1177] Scheduler::frameworkMessage took 29470ns I0912 05:40:15.589572 32551 status_update_manager.cpp:395] Received status update acknowledgement (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.589622 32563 slave.cpp:4880] Forwarding the update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 to master@172.17.0.2:58922 I0912 05:40:15.589694 32551 status_update_manager.cpp:834] Checkpointing ACK for status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.589879 32563 slave.cpp:4880] Forwarding the update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 to master@172.17.0.2:58922 I0912 05:40:15.589992 32555 master.cpp:6841] Status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 from agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(199)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.590051 32563 slave.cpp:3663] Status update manager successfully handled status update acknowledgement (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.590047 32555 master.cpp:6903] Forwarding status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.590234 32555 master.cpp:8928] Updating the state of task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 
(latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0912 05:40:15.590378 32559 sched.cpp:1041] Scheduler::statusUpdate took 14489ns I0912 05:40:15.590432 32555 master.cpp:6841] Status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 from agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(199)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.590487 32555 master.cpp:6903] Forwarding status update TASK_RUNNING (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.590644 32555 master.cpp:8928] Updating the state of task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) I0912 05:40:15.590761 32556 sched.cpp:1041] Scheduler::statusUpdate took 13603ns I0912 05:40:15.590807 32555 master.cpp:5479] Processing ACKNOWLEDGE call b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (default) at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 on agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 I0912 05:40:15.591056 32555 master.cpp:5479] Processing ACKNOWLEDGE call b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (default) at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 on agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 I0912 05:40:15.591071 32562 status_update_manager.cpp:395] Received status update acknowledgement (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.591341 32550 status_update_manager.cpp:395] Received status update acknowledgement (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 E0912 05:40:15.591368 32561 slave.cpp:3656] Failed to handle status update acknowledgement (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000: Unexpected status update acknowledgment (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 E0912 05:40:15.591498 32549 slave.cpp:3656] Failed to handle status update acknowledgement (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000: Unexpected status update acknowledgment (UUID: b4d60b7e-a8d0-448e-aaf6-4f83dbcb642e) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.669543 32554 containerizer.cpp:2612] Container 69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5 has exited I0912 05:40:15.671640 32560 provisioner.cpp:490] Ignoring destroy request for unknown container 69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5 I0912 05:40:15.672369 32563 slave.cpp:5412] Executor '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 terminated with signal Killed I0912 05:40:15.672497 32563 slave.cpp:4399] Handling status update TASK_LOST (UUID: 1bc4622e-9dd3-49ef-9bfd-a875fb36bcd9) for 
task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 from @0.0.0.0:0 W0912 05:40:15.673275 32550 containerizer.cpp:1976] Ignoring update for unknown container 69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5 I0912 05:40:15.673615 32557 status_update_manager.cpp:323] Received status update TASK_LOST (UUID: 1bc4622e-9dd3-49ef-9bfd-a875fb36bcd9) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.673672 32557 status_update_manager.cpp:834] Checkpointing UPDATE for status update TASK_LOST (UUID: 1bc4622e-9dd3-49ef-9bfd-a875fb36bcd9) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.673820 32557 status_update_manager.cpp:377] Forwarding update TASK_LOST (UUID: 1bc4622e-9dd3-49ef-9bfd-a875fb36bcd9) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 to the agent I0912 05:40:15.674010 32552 slave.cpp:4880] Forwarding the update TASK_LOST (UUID: 1bc4622e-9dd3-49ef-9bfd-a875fb36bcd9) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 to master@172.17.0.2:58922 I0912 05:40:15.674181 32552 slave.cpp:4774] Status update manager successfully handled status update TASK_LOST (UUID: 1bc4622e-9dd3-49ef-9bfd-a875fb36bcd9) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.674340 32549 master.cpp:6841] Status update TASK_LOST (UUID: 1bc4622e-9dd3-49ef-9bfd-a875fb36bcd9) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 from agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(199)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.674394 32549 master.cpp:6903] Forwarding status update TASK_LOST (UUID: 1bc4622e-9dd3-49ef-9bfd-a875fb36bcd9) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.674536 32549 master.cpp:8928] Updating the state of task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (latest state: TASK_LOST, status update state: TASK_LOST) I0912 05:40:15.674707 32553 sched.cpp:1041] Scheduler::statusUpdate took 23659ns I0912 05:40:15.675499 32549 master.cpp:5479] Processing ACKNOWLEDGE call 1bc4622e-9dd3-49ef-9bfd-a875fb36bcd9 for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (default) at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 on agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 I0912 05:40:15.675812 32548 hierarchical.cpp:1152] Recovered cpus(allocated: *):2; mem(allocated: *):1024; disk(allocated: *):1024; ports(allocated: *):[31000-32000] (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000], allocated: {}) on agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 from framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.675567 32549 master.cpp:9022] Removing task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec with resources 
[{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""cpus"""",""""scalar"""":{""""value"""":2.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""mem"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""disk"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 on agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 at slave(199)@172.17.0.2:58922 (b909d5e22907) I0912 05:40:15.676257 32560 status_update_manager.cpp:395] Received status update acknowledgement (UUID: 1bc4622e-9dd3-49ef-9bfd-a875fb36bcd9) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.676344 32560 status_update_manager.cpp:834] Checkpointing ACK for status update TASK_LOST (UUID: 1bc4622e-9dd3-49ef-9bfd-a875fb36bcd9) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.676432 32560 status_update_manager.cpp:531] Cleaning up status update stream for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.676857 32558 slave.cpp:3663] Status update manager successfully handled status update acknowledgement (UUID: 1bc4622e-9dd3-49ef-9bfd-a875fb36bcd9) for task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.676908 32558 slave.cpp:7738] Completing task 8fc99bc8-a2b6-498b-8bb2-af5d92e78cec I0912 05:40:15.676949 32558 slave.cpp:5516] Cleaning up executor '8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' of framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 at executor(1)@172.17.0.2:33722 I0912 05:40:15.677388 32555 gc.cpp:59] Scheduling '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/executors/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec/runs/69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5' for gc 6.99999216112296days in the future I0912 05:40:15.677546 32555 gc.cpp:59] Scheduling '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/executors/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' for gc 6.99999215893333days in the future I0912 05:40:15.677711 32555 gc.cpp:59] Scheduling '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/executors/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec/runs/69e9c3b3-65c9-4c04-b38d-ef2266c2cdf5' for gc 6.99999215732444days in the future I0912 05:40:15.677775 32558 slave.cpp:5612] Cleaning up framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.677812 32555 gc.cpp:59] Scheduling '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000/executors/8fc99bc8-a2b6-498b-8bb2-af5d92e78cec' for gc 6.99999215597333days in the future I0912 05:40:15.677863 32559 status_update_manager.cpp:285] Closing status update streams for framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:15.677960 32555 gc.cpp:59] Scheduling 
'/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000' for gc 6.99999215403852days in the future I0912 05:40:15.678086 32555 gc.cpp:59] Scheduling '/tmp/SlaveTest_HTTPSchedulerSlaveRestart_68fE8V/meta/slaves/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0/frameworks/c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000' for gc 6.99999215262519days in the future I0912 05:40:16.571491 32549 hierarchical.cpp:2015] No inverse offers to send out! I0912 05:40:16.571547 32549 hierarchical.cpp:1468] Performed allocation for 1 agents in 1.096296ms I0912 05:40:16.572053 32554 master.cpp:7662] Sending 1 offers to framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (default) at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 I0912 05:40:16.572510 32560 sched.cpp:933] Scheduler::resourceOffers took 24477ns I0912 05:40:17.573107 32562 hierarchical.cpp:1925] No allocations performed I0912 05:40:17.573153 32562 hierarchical.cpp:2015] No inverse offers to send out! I0912 05:40:17.573195 32562 hierarchical.cpp:1468] Performed allocation for 1 agents in 211498ns I0912 05:40:18.574553 32548 hierarchical.cpp:1925] No allocations performed I0912 05:40:18.574599 32548 hierarchical.cpp:2015] No inverse offers to send out! I0912 05:40:18.574641 32548 hierarchical.cpp:1468] Performed allocation for 1 agents in 180072ns I0912 05:40:19.576134 32562 hierarchical.cpp:1925] No allocations performed I0912 05:40:19.576189 32562 hierarchical.cpp:2015] No inverse offers to send out! I0912 05:40:19.576248 32562 hierarchical.cpp:1468] Performed allocation for 1 agents in 203826ns I0912 05:40:20.576812 32555 hierarchical.cpp:1925] No allocations performed I0912 05:40:20.576858 32555 hierarchical.cpp:2015] No inverse offers to send out! I0912 05:40:20.576900 32555 hierarchical.cpp:1468] Performed allocation for 1 agents in 180929ns I0912 05:40:21.577955 32560 hierarchical.cpp:1925] No allocations performed I0912 05:40:21.578001 32560 hierarchical.cpp:2015] No inverse offers to send out! I0912 05:40:21.578043 32560 hierarchical.cpp:1468] Performed allocation for 1 agents in 181224ns I0912 05:40:22.579715 32553 hierarchical.cpp:1925] No allocations performed I0912 05:40:22.579761 32553 hierarchical.cpp:2015] No inverse offers to send out! I0912 05:40:22.579803 32553 hierarchical.cpp:1468] Performed allocation for 1 agents in 186117ns I0912 05:40:23.581313 32561 hierarchical.cpp:1925] No allocations performed I0912 05:40:23.581360 32561 hierarchical.cpp:2015] No inverse offers to send out! I0912 05:40:23.581403 32561 hierarchical.cpp:1468] Performed allocation for 1 agents in 191563ns I0912 05:40:24.582953 32559 hierarchical.cpp:1925] No allocations performed I0912 05:40:24.583000 32559 hierarchical.cpp:2015] No inverse offers to send out! I0912 05:40:24.583042 32559 hierarchical.cpp:1468] Performed allocation for 1 agents in 180449ns I0912 05:40:25.584481 32551 hierarchical.cpp:1925] No allocations performed I0912 05:40:25.584528 32551 hierarchical.cpp:2015] No inverse offers to send out! I0912 05:40:25.584570 32551 hierarchical.cpp:1468] Performed allocation for 1 agents in 186677ns I0912 05:40:26.585813 32552 hierarchical.cpp:1925] No allocations performed I0912 05:40:26.585860 32552 hierarchical.cpp:2015] No inverse offers to send out! 
I0912 05:40:26.585903 32552 hierarchical.cpp:1468] Performed allocation for 1 agents in 204077ns I0912 05:40:27.586802 32556 hierarchical.cpp:1925] No allocations performed I0912 05:40:27.586848 32556 hierarchical.cpp:2015] No inverse offers to send out! I0912 05:40:27.586890 32556 hierarchical.cpp:1468] Performed allocation for 1 agents in 189229ns I0912 05:40:28.588395 32559 hierarchical.cpp:1925] No allocations performed I0912 05:40:28.588441 32559 hierarchical.cpp:2015] No inverse offers to send out! I0912 05:40:28.588484 32559 hierarchical.cpp:1468] Performed allocation for 1 agents in 178527ns I0912 05:40:29.590046 32551 hierarchical.cpp:1925] No allocations performed I0912 05:40:29.590092 32551 hierarchical.cpp:2015] No inverse offers to send out! I0912 05:40:29.590134 32551 hierarchical.cpp:1468] Performed allocation for 1 agents in 179207ns I0912 05:40:30.574189 32550 slave.cpp:6477] Querying resource estimator for oversubscribable resources I0912 05:40:30.574484 32557 slave.cpp:6491] Received oversubscribable resources {} from the resource estimator /mesos/src/tests/slave_tests.cpp:5501: Failure Failed to wait 15secs for executorToFrameworkMessage1 I0912 05:40:30.590958 32549 master.cpp:1432] Framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (default) at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 disconnected I0912 05:40:30.591058 32549 master.cpp:3264] Deactivating framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (default) at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 I0912 05:40:30.591497 32555 hierarchical.cpp:412] Deactivated framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:30.591819 32555 hierarchical.cpp:1925] No allocations performed I0912 05:40:30.591877 32555 hierarchical.cpp:2015] No inverse offers to send out! 
I0912 05:40:30.592000 32555 hierarchical.cpp:1468] Performed allocation for 1 agents in 359046ns I0912 05:40:30.592222 32549 master.cpp:9159] Removing offer c23ff8cf-cb2f-40d0-8f18-871a41f128cf-O1 I0912 05:40:30.592332 32549 master.cpp:3241] Disconnecting framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (default) at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 I0912 05:40:30.592635 32549 master.cpp:1447] Giving framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (default) at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 0ns to failover I0912 05:40:30.593114 32562 slave.cpp:843] Agent terminating I0912 05:40:30.593736 32555 hierarchical.cpp:1152] Recovered cpus(allocated: *):2; mem(allocated: *):1024; disk(allocated: *):1024; ports(allocated: *):[31000-32000] (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000], allocated: {}) on agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 from framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:30.594408 32560 master.cpp:7494] Framework failover timeout, removing framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (default) at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 I0912 05:40:30.594449 32560 master.cpp:8355] Removing framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 (default) at scheduler-228abf3d-36ea-4900-94a1-8b18d253716c@172.17.0.2:58922 I0912 05:40:30.595404 32548 hierarchical.cpp:355] Removed framework c23ff8cf-cb2f-40d0-8f18-871a41f128cf-0000 I0912 05:40:30.600559 32547 master.cpp:1160] Master terminating I0912 05:40:30.601553 32559 hierarchical.cpp:626] Removed agent c23ff8cf-cb2f-40d0-8f18-871a41f128cf-S0 /mesos/3rdparty/libprocess/include/process/gmock.hpp:467: Failure Actual function call count doesn't match EXPECT_CALL(filter->mock, filter(testing::A()))... Expected args: message matcher (8-byte object <48-04 06-D4 37-2B 00-00>, 1, 1) Expected: to be called once Actual: never called - unsatisfied and active /mesos/3rdparty/libprocess/include/process/gmock.hpp:467: Failure Actual function call count doesn't match EXPECT_CALL(filter->mock, filter(testing::A()))... Expected args: message matcher (8-byte object <48-04 06-D4 37-2B 00-00>, 1, 1-byte object <28>) Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] SlaveTest.HTTPSchedulerSlaveRestart (15325 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7978","09/14/2017 16:23:17",1,"Lint javascript files to enable linting. ""To enable the linting of our javascript codebase, the javascript files should first be linted so that new commits will not have to include fixes for current issues.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7980","09/15/2017 18:40:18",2,"Stout fails to compile with libc >= 2.26. ""Glibc 2.26 removes """"xlocale.h"""" [1] which makes stout fail to compile. Stout should be using 'locale.h' instead. [1]: https://sourceware.org/glibc/wiki/Release/2.26#Removal_of_.27xlocale.h.27""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-7998","09/21/2017 15:45:11",2,"PersistentVolumeEndpointsTest.UnreserveVolumeResources is flaky. 
""Observed a failure on the internal CI: Full log attached."""," ../../src/tests/persistent_volume_endpoints_tests.cpp:450 Value of: (response).get().status Actual: """"409 Conflict"""" Expected: Accepted().status Which is: """"202 Accepted"""" ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8001","09/21/2017 16:18:35",2,"PersistentVolumeEndpointsTest.NoAuthentication is flaky. ""Observed a failure on internal CI: Full log attached."""," ../../src/tests/persistent_volume_endpoints_tests.cpp:1385 Value of: (response).get().status Actual: """"409 Conflict"""" Expected: Accepted().status Which is: """"202 Accepted"""" ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8003","09/21/2017 18:04:05",2,"PersistentVolumeEndpointsTest.SlavesEndpointFullResources is flaky. ""Observed on internal CI: Full log attached."""," ../../src/tests/persistent_volume_endpoints_tests.cpp:1952 Value of: (response).get().status Actual: """"409 Conflict"""" Expected: Accepted().status Which is: """"202 Accepted"""" ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8012","09/25/2017 14:05:55",3,"Support Znode paths for masters in the new CLI ""Right now the new Mesos CLI only works in single master mode with a single master IP and port. We should add support for finding the mesos leader in HA mode by hitting a set of zk instances similar to how {{mesos-resolve}} works.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8013","09/25/2017 15:26:58",3,"Add test for blkio statistics ""In [MESOS-6162|https://issues.apache.org/jira/browse/MESOS-6162], we have added the support for cgroups blkio statistics. In this ticket, we'd like to add a test to verify the cgroups blkio statistics can be correctly retrieved via Mesos containerizer's {{usage()}} method.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8015","09/25/2017 22:35:47",2,"Design a scheduler (V1) HTTP API authenticatee mechanism. ""Provide a design proposal for a scheduler HTTP API authenticatee module.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0 +"MESOS-8016","09/26/2017 12:27:32",2,"Introduce modularized HTTP authenticatee. ""Define the implementation of a modularized interface for the scheduler library authentication, providing the means of an authenticatee. This interface will allow consumers of HTTP APIs to use replaceable authentication mechanisms via a defined interface.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0 +"MESOS-8017","09/26/2017 12:33:00",2,"Introduce a basic HTTP authenticatee. ""Refactor the hardcoded basic HTTP authentication code from within the scheduler library into the (modularized) interface provided by MESOS-8016""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8021","09/26/2017 21:58:24",3,"Update HTTP scheduler library to allow for modularized authenticatee. 
""Allow the scheduler library to load an HTTP authenticatee module providing custom mechanisms for authentication.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0 +"MESOS-8030","09/27/2017 23:16:45",13,"A resource provider for supporting local storage through CSI ""The Storage Local Resource Provider (SLRP) is a resource provider component in Mesos to manage persistent local storage on agents. SLRP should support the following MVP functions: * Registering to the RP manager (P0) * Reporting available disk resources through a CSI controller plugin. (P0) * Processing resource converting operations (CREATE_BLOCK, CREATE_VOLUME, DESTROY_BLOCK, DESTROY_VOLUME) issued by frameworks to convert RAW disk resources to mount or block volumes through a CSI controller plugin (P0) * Publish/unpublish a disk resource through CSI controller/node plugins for a task (P0) * Support storage profiles through modules (P1) * Tracking and checkpointing resources and reservations (P1) ""","",1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8031","09/27/2017 23:36:11",3,"SLRP Configuration ""A typical SLRP configuration could look like the following: The {{csi_plugins}} field lists the configurations to launch standalone containers for CSI plugins. The plugins are specified through a map, then we use the {{controller_plugin_name}} and {{node_plugin_name}} fields to refer to the corresponding plugin. With this design, we can support both headless and split-component deployment for CSI."""," { """"type"""": """"org.apache.mesos.rp.local.storage"""", """"name"""": """"local-volume"""", """"storage"""": { """"csi_plugins"""": [ { """"name"""": """"plugin_1"""", """"command"""": {...}, """"resources"""": [...], """"container"""": {...} }, { """"name"""": """"plugin_2"""", """"command"""": {...}, """"resources"""": [...], """"container"""": {...} } ], """"controller_plugin_name"""": """"plugin_1"""", """"node_plugin_name"""": """"plugin_2"""" } } ",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8032","09/27/2017 23:45:08",5,"Launch CSI plugins in storage local resource provider. ""Launching a CSI plugin requires the following steps: 1. Verify the configuration. 2. Prepare a directory in the work directory of the resource provider where the socket file should be placed, and construct the path of the socket file. 3. If the socket file already exists and the plugin is already running, we should not launch another plugin instance. 4. Otherwise, launch a standalone container to run the plugin and connect to it through the socket file.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8046","10/02/2017 14:17:38",2,"MasterTestPrePostReservationRefinement.ReserveAndUnreserveResourcesV1 is flaky. ""As seen on our internal CI. 
Error Message Log: """," ../../src/tests/master_tests.cpp:8682 Value of: (v1UnreserveResourcesResponse).get().status Actual: """"409 Conflict"""" Expected: Accepted().status Which is: """"202 Accepted"""" 00:33:08 [ RUN ] bool/MasterTestPrePostReservationRefinement.ReserveAndUnreserveResourcesV1/0 00:33:08 I0929 17:33:08.670744 2067726336 cluster.cpp:162] Creating default 'local' authorizer 00:33:08 I0929 17:33:08.672592 3211264 master.cpp:445] Master 71fce4a3-01f6-43a7-b512-28980b04e51f (10.0.49.4) started on 10.0.49.4:54887 00:33:08 I0929 17:33:08.672621 3211264 master.cpp:447] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/private/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/YdqFmR/credentials"""" --filter_gpu_resources=""""true"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_unreachable_tasks_per_framework=""""1000"""" --port=""""5050"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""in_memory"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/private/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/YdqFmR/master"""" --zk_session_timeout=""""10secs"""" 00:33:08 I0929 17:33:08.672792 3211264 master.cpp:497] Master only allowing authenticated frameworks to register 00:33:08 I0929 17:33:08.672804 3211264 master.cpp:511] Master only allowing authenticated agents to register 00:33:08 I0929 17:33:08.672821 3211264 master.cpp:524] Master only allowing authenticated HTTP frameworks to register 00:33:08 I0929 17:33:08.672829 3211264 credentials.hpp:37] Loading credentials for authentication from '/private/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/YdqFmR/credentials' 00:33:08 I0929 17:33:08.672997 3211264 master.cpp:569] Using default 'crammd5' authenticator 00:33:08 I0929 17:33:08.673053 3211264 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readonly' 00:33:08 I0929 17:33:08.673136 3211264 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' 00:33:08 I0929 17:33:08.673174 3211264 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' 00:33:08 I0929 17:33:08.673226 3211264 master.cpp:649] Authorization enabled 00:33:08 I0929 17:33:08.673306 2674688 hierarchical.cpp:171] Initialized hierarchical allocator process 00:33:08 I0929 17:33:08.673326 1601536 whitelist_watcher.cpp:77] No whitelist given 00:33:08 I0929 17:33:08.674684 
1601536 master.cpp:2216] Elected as the leading master! 00:33:08 I0929 17:33:08.674708 1601536 master.cpp:1705] Recovering from registrar 00:33:08 I0929 17:33:08.674787 2674688 registrar.cpp:347] Recovering registrar 00:33:08 I0929 17:33:08.674944 2674688 registrar.cpp:391] Successfully fetched the registry (0B) in 134912ns 00:33:08 I0929 17:33:08.675014 2674688 registrar.cpp:495] Applied 1 operations in 17us; attempting to update the registry 00:33:08 I0929 17:33:08.675209 2674688 registrar.cpp:552] Successfully updated the registry in 157184ns 00:33:08 I0929 17:33:08.675252 2674688 registrar.cpp:424] Successfully recovered registrar 00:33:08 I0929 17:33:08.675377 2138112 master.cpp:1809] Recovered 0 agents from the registry (121B); allowing 10mins for agents to re-register 00:33:08 I0929 17:33:08.675418 528384 hierarchical.cpp:209] Skipping recovery of hierarchical allocator: nothing to recover 00:33:08 W0929 17:33:08.678066 2067726336 process.cpp:3194] Attempted to spawn already running process files@10.0.49.4:54887 00:33:08 I0929 17:33:08.678484 2067726336 containerizer.cpp:292] Using isolation { environment_secret, filesystem/posix, posix/cpu, posix/mem } 00:33:08 I0929 17:33:08.678678 2067726336 provisioner.cpp:255] Using default backend 'copy' 00:33:08 I0929 17:33:08.679306 2067726336 cluster.cpp:448] Creating default 'local' authorizer 00:33:08 I0929 17:33:08.680037 3747840 slave.cpp:254] Mesos agent started on (751)@10.0.49.4:54887 00:33:08 I0929 17:33:08.680061 3747840 slave.cpp:255] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/bool_MasterTestPrePostReservationRefinement_ReserveAndUnreserveResourcesV1_0_b1Sm7Z/store/appc"""" --authenticate_http_executors=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/bool_MasterTestPrePostReservationRefinement_ReserveAndUnreserveResourcesV1_0_b1Sm7Z/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/bool_MasterTestPrePostReservationRefinement_ReserveAndUnreserveResourcesV1_0_b1Sm7Z/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_reregistration_timeout=""""2secs"""" --executor_secret_key=""""/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/bool_MasterTestPrePostReservationRefinement_ReserveAndUnreserveResourcesV1_0_b1Sm7Z/executor_secret_key"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/bool_MasterTestPrePostReservationRefinement_ReserveAndUnreserveResourcesV1_0_b1Sm7Z/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_command_executor=""""false"""" 
--http_credentials=""""/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/bool_MasterTestPrePostReservationRefinement_ReserveAndUnreserveResourcesV1_0_b1Sm7Z/http_credentials"""" --http_heartbeat_interval=""""30secs"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/Users/jenkins/workspace/workspace/mesos/Mesos_CI-build/FLAG/SSL/label/mac/mesos/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --oversubscribed_resources_interval=""""15secs"""" --port=""""5051"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --runtime_dir=""""/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/bool_MasterTestPrePostReservationRefinement_ReserveAndUnreserveResourcesV1_0_b1Sm7Z"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --version=""""false"""" --work_dir=""""/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/bool_MasterTestPrePostReservationRefinement_ReserveAndUnreserveResourcesV1_0_rx4DfE"""" --zk_session_timeout=""""10secs"""" 00:33:08 I0929 17:33:08.680240 3747840 credentials.hpp:86] Loading credential for authentication from '/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/bool_MasterTestPrePostReservationRefinement_ReserveAndUnreserveResourcesV1_0_b1Sm7Z/credential' 00:33:08 I0929 17:33:08.680353 3747840 slave.cpp:287] Agent using credential for: test-principal 00:33:08 I0929 17:33:08.680364 3747840 credentials.hpp:37] Loading credentials for authentication from '/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/bool_MasterTestPrePostReservationRefinement_ReserveAndUnreserveResourcesV1_0_b1Sm7Z/http_credentials' 00:33:08 I0929 17:33:08.680552 3747840 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-executor' 00:33:08 I0929 17:33:08.680621 3747840 http.cpp:1066] Creating default 'jwt' HTTP authenticator for realm 'mesos-agent-executor' 00:33:08 I0929 17:33:08.680690 3747840 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' 00:33:08 I0929 17:33:08.680723 3747840 http.cpp:1066] Creating default 'jwt' HTTP authenticator for realm 'mesos-agent-readonly' 00:33:08 I0929 17:33:08.680783 3747840 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' 00:33:08 I0929 17:33:08.680831 3747840 http.cpp:1066] Creating default 'jwt' HTTP authenticator for realm 'mesos-agent-readwrite' 00:33:08 I0929 17:33:08.681946 3747840 slave.cpp:585] Agent resources: [{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":2.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] 00:33:08 I0929 17:33:08.682111 3747840 slave.cpp:593] Agent attributes: [ ] 00:33:08 I0929 17:33:08.682119 3747840 slave.cpp:602] Agent hostname: 10.0.49.4 00:33:08 I0929 17:33:08.682204 3211264 status_update_manager.cpp:177] Pausing sending status updates 00:33:08 I0929 17:33:08.682720 2138112 state.cpp:64] Recovering state from 
'/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/bool_MasterTestPrePostReservationRefinement_ReserveAndUnreserveResourcesV1_0_rx4DfE/meta' 00:33:08 I0929 17:33:08.682868 3211264 status_update_manager.cpp:203] Recovering status update manager 00:33:08 I0929 17:33:08.682950 4284416 containerizer.cpp:648] Recovering containerizer 00:33:08 I0929 17:33:08.683574 528384 provisioner.cpp:416] Provisioner recovery complete 00:33:08 I0929 17:33:08.683806 1064960 slave.cpp:6313] Finished recovery 00:33:08 I0929 17:33:08.684301 1064960 slave.cpp:6495] Querying resource estimator for oversubscribable resources 00:33:08 I0929 17:33:08.684377 2674688 slave.cpp:6509] Received oversubscribable resources {} from the resource estimator 00:33:08 I0929 17:33:08.684469 3747840 status_update_manager.cpp:177] Pausing sending status updates 00:33:08 I0929 17:33:08.684484 2674688 slave.cpp:993] New master detected at master@10.0.49.4:54887 00:33:08 I0929 17:33:08.684527 2674688 slave.cpp:1028] Detecting new master 00:33:08 I0929 17:33:08.688560 528384 slave.cpp:1055] Authenticating with master master@10.0.49.4:54887 00:33:08 I0929 17:33:08.688627 528384 slave.cpp:1066] Using default CRAM-MD5 authenticatee 00:33:08 I0929 17:33:08.688720 4284416 authenticatee.cpp:121] Creating new client SASL connection 00:33:08 I0929 17:33:08.688834 1064960 master.cpp:7915] Authenticating slave(751)@10.0.49.4:54887 00:33:08 I0929 17:33:08.688892 3211264 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(1388)@10.0.49.4:54887 00:33:08 I0929 17:33:08.688968 1601536 authenticator.cpp:98] Creating new server SASL connection 00:33:08 I0929 17:33:08.689050 3747840 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 00:33:08 I0929 17:33:08.689090 3747840 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' 00:33:08 I0929 17:33:08.689149 2674688 authenticator.cpp:204] Received SASL authentication start 00:33:08 I0929 17:33:08.689199 2674688 authenticator.cpp:326] Authentication requires more steps 00:33:08 I0929 17:33:08.689280 2138112 authenticatee.cpp:259] Received SASL authentication step 00:33:08 I0929 17:33:08.689344 528384 authenticator.cpp:232] Received SASL authentication step 00:33:08 I0929 17:33:08.689388 528384 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'Jenkinss-Mac-mini.local' server FQDN: 'Jenkinss-Mac-mini.local' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 00:33:08 I0929 17:33:08.689400 528384 auxprop.cpp:181] Looking up auxiliary property '*userPassword' 00:33:08 I0929 17:33:08.689424 528384 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' 00:33:08 I0929 17:33:08.689436 528384 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'Jenkinss-Mac-mini.local' server FQDN: 'Jenkinss-Mac-mini.local' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 00:33:08 I0929 17:33:08.689445 528384 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true 00:33:08 I0929 17:33:08.689450 528384 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true 00:33:08 I0929 17:33:08.689458 528384 authenticator.cpp:318] Authentication success 00:33:08 I0929 17:33:08.689518 4284416 authenticatee.cpp:299] Authentication success 00:33:08 I0929 17:33:08.689532 1064960 master.cpp:7945] Successfully authenticated 
principal 'test-principal' at slave(751)@10.0.49.4:54887 00:33:08 I0929 17:33:08.689558 3211264 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(1388)@10.0.49.4:54887 00:33:08 I0929 17:33:08.689643 1601536 slave.cpp:1150] Successfully authenticated with master master@10.0.49.4:54887 00:33:08 I0929 17:33:08.689721 1601536 slave.cpp:1629] Will retry registration in 8.587419ms if necessary 00:33:08 I0929 17:33:08.689782 528384 master.cpp:5819] Received register agent message from slave(751)@10.0.49.4:54887 (10.0.49.4) 00:33:08 I0929 17:33:08.689800 528384 master.cpp:3856] Authorizing agent with principal 'test-principal' 00:33:08 I0929 17:33:08.689968 4284416 master.cpp:5879] Authorized registration of agent at slave(751)@10.0.49.4:54887 (10.0.49.4) 00:33:08 I0929 17:33:08.690037 4284416 master.cpp:5972] Registering agent at slave(751)@10.0.49.4:54887 (10.0.49.4) with id 71fce4a3-01f6-43a7-b512-28980b04e51f-S0 00:33:08 I0929 17:33:08.690204 3747840 registrar.cpp:495] Applied 1 operations in 43us; attempting to update the registry 00:33:08 I0929 17:33:08.690408 528384 registrar.cpp:552] Successfully updated the registry in 168960ns 00:33:08 I0929 17:33:08.690484 1064960 master.cpp:6019] Admitted agent 71fce4a3-01f6-43a7-b512-28980b04e51f-S0 at slave(751)@10.0.49.4:54887 (10.0.49.4) 00:33:08 I0929 17:33:08.690712 2674688 slave.cpp:4969] Received ping from slave-observer(678)@10.0.49.4:54887 00:33:08 I0929 17:33:08.690850 2674688 slave.cpp:1196] Registered with master master@10.0.49.4:54887; given agent ID 71fce4a3-01f6-43a7-b512-28980b04e51f-S0 00:33:08 I0929 17:33:08.690891 3747840 hierarchical.cpp:593] Added agent 71fce4a3-01f6-43a7-b512-28980b04e51f-S0 (10.0.49.4) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] (allocated: {}) 00:33:08 I0929 17:33:08.690696 1064960 master.cpp:6050] Registered agent 71fce4a3-01f6-43a7-b512-28980b04e51f-S0 at slave(751)@10.0.49.4:54887 (10.0.49.4) with [{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":2.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] 00:33:08 I0929 17:33:08.690943 528384 status_update_manager.cpp:184] Resuming sending status updates 00:33:08 I0929 17:33:08.691010 3747840 hierarchical.cpp:1943] No allocations performed 00:33:08 I0929 17:33:08.691033 3747840 hierarchical.cpp:1486] Performed allocation for 1 agents in 74us 00:33:08 I0929 17:33:08.692587 4284416 process.cpp:3929] Handling HTTP event for process 'master' with path: '/master/api/v1' 00:33:08 I0929 17:33:08.693039 3211264 http.cpp:1185] HTTP POST for /master/api/v1 from 10.0.49.4:57137 00:33:08 I0929 17:33:08.693147 2674688 slave.cpp:1216] Checkpointing SlaveInfo to '/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/bool_MasterTestPrePostReservationRefinement_ReserveAndUnreserveResourcesV1_0_rx4DfE/meta/slaves/71fce4a3-01f6-43a7-b512-28980b04e51f-S0/slave.info' 00:33:08 I0929 17:33:08.693150 3211264 http.cpp:673] Processing call RESERVE_RESOURCES 00:33:08 I0929 17:33:08.693291 3211264 master.cpp:3641] Authorizing principal 'test-principal' to reserve resources 
'[{""""name"""":""""cpus"""",""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""default-role"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":1.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""default-role"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":512.0},""""type"""":""""SCALAR""""}]' 00:33:08 I0929 17:33:08.693671 2674688 slave.cpp:1265] Forwarding total oversubscribed resources {} 00:33:08 I0929 17:33:08.693742 1064960 master.cpp:6814] Received update of agent 71fce4a3-01f6-43a7-b512-28980b04e51f-S0 at slave(751)@10.0.49.4:54887 (10.0.49.4) with total oversubscribed resources {} 00:33:08 I0929 17:33:08.694505 528384 master.cpp:9314] Sending updated checkpointed resources cpus(reservations: [(DYNAMIC,default-role,test-principal)]):1; mem(reservations: [(DYNAMIC,default-role,test-principal)]):512 to agent 71fce4a3-01f6-43a7-b512-28980b04e51f-S0 at slave(751)@10.0.49.4:54887 (10.0.49.4) 00:33:08 I0929 17:33:08.694834 2138112 hierarchical.cpp:660] Agent 71fce4a3-01f6-43a7-b512-28980b04e51f-S0 (10.0.49.4) updated with total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000] 00:33:08 I0929 17:33:08.694988 2138112 hierarchical.cpp:1943] No allocations performed 00:33:08 I0929 17:33:08.695013 2138112 hierarchical.cpp:1486] Performed allocation for 1 agents in 80us 00:33:08 I0929 17:33:08.695297 3211264 slave.cpp:3522] Updated checkpointed resources from {} to cpus(reservations: [(DYNAMIC,default-role,test-principal)]):1; mem(reservations: [(DYNAMIC,default-role,test-principal)]):512 00:33:08 I0929 17:33:08.696081 1601536 process.cpp:3929] Handling HTTP event for process 'master' with path: '/master/api/v1' 00:33:08 I0929 17:33:08.696606 528384 http.cpp:1185] HTTP POST for /master/api/v1 from 10.0.49.4:57138 00:33:08 I0929 17:33:08.696708 528384 http.cpp:673] Processing call UNRESERVE_RESOURCES 00:33:08 I0929 17:33:08.696887 528384 master.cpp:3709] Authorizing principal 'test-principal' to unreserve resources '[{""""name"""":""""cpus"""",""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""default-role"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":1.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""default-role"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":512.0},""""type"""":""""SCALAR""""}]' 00:33:08 ../../src/tests/master_tests.cpp:8682: Failure 00:33:08 Value of: (v1UnreserveResourcesResponse).get().status 00:33:08 Actual: """"409 Conflict"""" 00:33:08 Expected: Accepted().status 00:33:08 Which is: """"202 Accepted"""" 00:33:08 I0929 17:33:08.698737 2067726336 slave.cpp:869] Agent terminating 00:33:08 I0929 17:33:08.698833 1601536 master.cpp:1321] Agent 71fce4a3-01f6-43a7-b512-28980b04e51f-S0 at slave(751)@10.0.49.4:54887 (10.0.49.4) disconnected 00:33:08 I0929 17:33:08.698854 1601536 master.cpp:3354] Disconnecting agent 71fce4a3-01f6-43a7-b512-28980b04e51f-S0 at slave(751)@10.0.49.4:54887 (10.0.49.4) 00:33:08 I0929 17:33:08.698876 1601536 master.cpp:3373] Deactivating agent 71fce4a3-01f6-43a7-b512-28980b04e51f-S0 at slave(751)@10.0.49.4:54887 (10.0.49.4) 00:33:08 I0929 17:33:08.698952 2138112 hierarchical.cpp:690] Agent 71fce4a3-01f6-43a7-b512-28980b04e51f-S0 deactivated 00:33:08 I0929 17:33:08.701202 2067726336 master.cpp:1163] Master terminating 00:33:08 I0929 
17:33:08.701479 1601536 hierarchical.cpp:626] Removed agent 71fce4a3-01f6-43a7-b512-28980b04e51f-S0 00:33:08 [ FAILED ] bool/MasterTestPrePostReservationRefinement.ReserveAndUnreserveResourcesV1/0, where GetParam() = true (39 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8048","10/02/2017 14:56:46",2,"ReservationEndpointsTest.GoodReserveAndUnreserveACL is flaky. ""As just observed on our internal CI; Error Message Log: """," ../../src/tests/reservation_endpoints_tests.cpp:1026 Value of: (response).get().status Actual: """"409 Conflict"""" Expected: Accepted().status Which is: """"202 Accepted"""" 00:42:35 [ RUN ] ReservationEndpointsTest.GoodReserveAndUnreserveACL 00:42:35 I0930 00:42:35.517658 7413 cluster.cpp:162] Creating default 'local' authorizer 00:42:35 I0930 00:42:35.518507 7433 master.cpp:445] Master 938119f3-8007-4d6f-a45b-d49bf76a0590 (ip-172-16-10-96.ec2.internal) started on 172.16.10.96:46227 00:42:35 I0930 00:42:35.518523 7433 master.cpp:447] Flags at startup: --acls=""""reserve_resources { 00:42:35 principals { 00:42:35 values: """"test-principal"""" 00:42:35 } 00:42:35 roles { 00:42:35 type: ANY 00:42:35 } 00:42:35 } 00:42:35 unreserve_resources { 00:42:35 principals { 00:42:35 values: """"test-principal"""" 00:42:35 } 00:42:35 reserver_principals { 00:42:35 values: """"test-principal"""" 00:42:35 } 00:42:35 } 00:42:35 """" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""50ms"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/zFIYus/credentials"""" --filter_gpu_resources=""""true"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_unreachable_tasks_per_framework=""""1000"""" --port=""""5050"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""in_memory"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --roles=""""role"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/zFIYus/master"""" --zk_session_timeout=""""10secs"""" 00:42:35 I0930 00:42:35.518672 7433 master.cpp:497] Master only allowing authenticated frameworks to register 00:42:35 I0930 00:42:35.518681 7433 master.cpp:511] Master only allowing authenticated agents to register 00:42:35 I0930 00:42:35.518685 7433 master.cpp:524] Master only allowing authenticated HTTP frameworks to register 00:42:35 I0930 00:42:35.518689 7433 credentials.hpp:37] Loading credentials for authentication from '/tmp/zFIYus/credentials' 00:42:35 I0930 00:42:35.518784 7433 master.cpp:569] Using default 'crammd5' authenticator 00:42:35 I0930 00:42:35.518823 7433 
http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readonly' 00:42:35 I0930 00:42:35.518853 7433 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' 00:42:35 I0930 00:42:35.518877 7433 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' 00:42:35 I0930 00:42:35.518898 7433 master.cpp:649] Authorization enabled 00:42:35 W0930 00:42:35.518905 7433 master.cpp:712] The '--roles' flag is deprecated. This flag will be removed in the future. See the Mesos 0.27 upgrade notes for more information 00:42:35 I0930 00:42:35.519016 7438 whitelist_watcher.cpp:77] No whitelist given 00:42:35 I0930 00:42:35.519018 7439 hierarchical.cpp:171] Initialized hierarchical allocator process 00:42:35 I0930 00:42:35.519625 7433 master.cpp:2216] Elected as the leading master! 00:42:35 I0930 00:42:35.519640 7433 master.cpp:1705] Recovering from registrar 00:42:35 I0930 00:42:35.519677 7433 registrar.cpp:347] Recovering registrar 00:42:35 I0930 00:42:35.519762 7438 registrar.cpp:391] Successfully fetched the registry (0B) in 70144ns 00:42:35 I0930 00:42:35.519783 7438 registrar.cpp:495] Applied 1 operations in 3246ns; attempting to update the registry 00:42:35 I0930 00:42:35.519870 7439 registrar.cpp:552] Successfully updated the registry in 78080ns 00:42:35 I0930 00:42:35.519899 7439 registrar.cpp:424] Successfully recovered registrar 00:42:35 I0930 00:42:35.519975 7439 master.cpp:1809] Recovered 0 agents from the registry (168B); allowing 10mins for agents to re-register 00:42:35 I0930 00:42:35.520007 7435 hierarchical.cpp:209] Skipping recovery of hierarchical allocator: nothing to recover 00:42:35 W0930 00:42:35.521775 7413 process.cpp:3194] Attempted to spawn already running process files@172.16.10.96:46227 00:42:35 I0930 00:42:35.522099 7413 containerizer.cpp:292] Using isolation { environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni } 00:42:35 I0930 00:42:35.527375 7413 linux_launcher.cpp:146] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher 00:42:35 I0930 00:42:35.527729 7413 provisioner.cpp:255] Using default backend 'overlay' 00:42:35 I0930 00:42:35.528136 7413 cluster.cpp:448] Creating default 'local' authorizer 00:42:35 I0930 00:42:35.528524 7439 slave.cpp:254] Mesos agent started on (409)@172.16.10.96:46227 00:42:35 I0930 00:42:35.528540 7439 slave.cpp:255] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/ReservationEndpointsTest_GoodReserveAndUnreserveACL_o9Veil/store/appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/ReservationEndpointsTest_GoodReserveAndUnreserveACL_o9Veil/credential"""" --default_role=""""*"""" --disallow_sharing_agent_pid_namespace=""""false"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" 
--docker_store_dir=""""/tmp/ReservationEndpointsTest_GoodReserveAndUnreserveACL_o9Veil/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_reregistration_timeout=""""2secs"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/ReservationEndpointsTest_GoodReserveAndUnreserveACL_o9Veil/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_command_executor=""""false"""" --http_credentials=""""/tmp/ReservationEndpointsTest_GoodReserveAndUnreserveACL_o9Veil/http_credentials"""" --http_heartbeat_interval=""""30secs"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher=""""linux"""" --launcher_dir=""""/home/centos/workspace/mesos/Mesos_CI-build/FLAG/Plain/label/mesos-ec2-centos-7/mesos/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --port=""""5051"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:1;mem:512"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/tmp/ReservationEndpointsTest_GoodReserveAndUnreserveACL_o9Veil"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/ReservationEndpointsTest_GoodReserveAndUnreserveACL_wtxBTT"""" --zk_session_timeout=""""10secs"""" 00:42:35 I0930 00:42:35.528676 7439 credentials.hpp:86] Loading credential for authentication from '/tmp/ReservationEndpointsTest_GoodReserveAndUnreserveACL_o9Veil/credential' 00:42:35 I0930 00:42:35.528724 7439 slave.cpp:287] Agent using credential for: test-principal 00:42:35 I0930 00:42:35.528730 7439 credentials.hpp:37] Loading credentials for authentication from '/tmp/ReservationEndpointsTest_GoodReserveAndUnreserveACL_o9Veil/http_credentials' 00:42:35 I0930 00:42:35.528782 7439 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' 00:42:35 I0930 00:42:35.528818 7439 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' 00:42:35 I0930 00:42:35.529382 7439 slave.cpp:585] Agent resources: [{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":1.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""scalar"""":{""""value"""":512.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""scalar"""":{""""value"""":35823.0},""""type"""":""""SCALAR""""},{""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] 00:42:35 I0930 00:42:35.529448 7439 slave.cpp:593] Agent attributes: [ ] 00:42:35 I0930 00:42:35.529454 7439 slave.cpp:602] Agent hostname: ip-172-16-10-96.ec2.internal 00:42:35 I0930 00:42:35.529676 7435 status_update_manager.cpp:177] Pausing sending status updates 00:42:35 I0930 00:42:35.529700 7437 state.cpp:64] Recovering state from 
'/tmp/ReservationEndpointsTest_GoodReserveAndUnreserveACL_wtxBTT/meta' 00:42:35 I0930 00:42:35.529767 7437 status_update_manager.cpp:203] Recovering status update manager 00:42:35 I0930 00:42:35.529822 7437 containerizer.cpp:648] Recovering containerizer 00:42:35 I0930 00:42:35.530825 7437 provisioner.cpp:416] Provisioner recovery complete 00:42:35 I0930 00:42:35.530910 7437 slave.cpp:6313] Finished recovery 00:42:35 I0930 00:42:35.531136 7437 slave.cpp:6495] Querying resource estimator for oversubscribable resources 00:42:35 I0930 00:42:35.531191 7439 status_update_manager.cpp:177] Pausing sending status updates 00:42:35 I0930 00:42:35.531201 7436 slave.cpp:993] New master detected at master@172.16.10.96:46227 00:42:35 I0930 00:42:35.531231 7436 slave.cpp:1028] Detecting new master 00:42:35 I0930 00:42:35.531263 7436 slave.cpp:6509] Received oversubscribable resources {} from the resource estimator 00:42:35 I0930 00:42:35.539541 7435 slave.cpp:1055] Authenticating with master master@172.16.10.96:46227 00:42:35 I0930 00:42:35.539571 7435 slave.cpp:1066] Using default CRAM-MD5 authenticatee 00:42:35 I0930 00:42:35.539618 7435 authenticatee.cpp:121] Creating new client SASL connection 00:42:35 I0930 00:42:35.540117 7435 master.cpp:7915] Authenticating slave(409)@172.16.10.96:46227 00:42:35 I0930 00:42:35.540163 7435 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(861)@172.16.10.96:46227 00:42:35 I0930 00:42:35.540216 7435 authenticator.cpp:98] Creating new server SASL connection 00:42:35 I0930 00:42:35.540619 7435 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 00:42:35 I0930 00:42:35.540637 7435 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' 00:42:35 I0930 00:42:35.540665 7435 authenticator.cpp:204] Received SASL authentication start 00:42:35 I0930 00:42:35.540694 7435 authenticator.cpp:326] Authentication requires more steps 00:42:35 I0930 00:42:35.540724 7435 authenticatee.cpp:259] Received SASL authentication step 00:42:35 I0930 00:42:35.540766 7435 authenticator.cpp:232] Received SASL authentication step 00:42:35 I0930 00:42:35.540782 7435 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-96.ec2.internal' server FQDN: 'ip-172-16-10-96.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 00:42:35 I0930 00:42:35.540791 7435 auxprop.cpp:181] Looking up auxiliary property '*userPassword' 00:42:35 I0930 00:42:35.540804 7435 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' 00:42:35 I0930 00:42:35.540813 7435 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-96.ec2.internal' server FQDN: 'ip-172-16-10-96.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 00:42:35 I0930 00:42:35.540819 7435 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true 00:42:35 I0930 00:42:35.540824 7435 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true 00:42:35 I0930 00:42:35.540835 7435 authenticator.cpp:318] Authentication success 00:42:35 I0930 00:42:35.540881 7436 authenticatee.cpp:299] Authentication success 00:42:35 I0930 00:42:35.540894 7435 master.cpp:7945] Successfully authenticated principal 'test-principal' at slave(409)@172.16.10.96:46227 00:42:35 I0930 00:42:35.540925 7432 authenticator.cpp:432] 
Authentication session cleanup for crammd5-authenticatee(861)@172.16.10.96:46227 00:42:35 I0930 00:42:35.540977 7436 slave.cpp:1150] Successfully authenticated with master master@172.16.10.96:46227 00:42:35 I0930 00:42:35.541028 7436 slave.cpp:1629] Will retry registration in 11.469653ms if necessary 00:42:35 I0930 00:42:35.541095 7438 master.cpp:5819] Received register agent message from slave(409)@172.16.10.96:46227 (ip-172-16-10-96.ec2.internal) 00:42:35 I0930 00:42:35.541121 7438 master.cpp:3856] Authorizing agent with principal 'test-principal' 00:42:35 I0930 00:42:35.541203 7432 master.cpp:5879] Authorized registration of agent at slave(409)@172.16.10.96:46227 (ip-172-16-10-96.ec2.internal) 00:42:35 I0930 00:42:35.541246 7432 master.cpp:5972] Registering agent at slave(409)@172.16.10.96:46227 (ip-172-16-10-96.ec2.internal) with id 938119f3-8007-4d6f-a45b-d49bf76a0590-S0 00:42:35 I0930 00:42:35.541350 7432 registrar.cpp:495] Applied 1 operations in 10060ns; attempting to update the registry 00:42:35 I0930 00:42:35.541492 7432 registrar.cpp:552] Successfully updated the registry in 120064ns 00:42:35 I0930 00:42:35.541545 7437 master.cpp:6019] Admitted agent 938119f3-8007-4d6f-a45b-d49bf76a0590-S0 at slave(409)@172.16.10.96:46227 (ip-172-16-10-96.ec2.internal) 00:42:35 I0930 00:42:35.541666 7437 master.cpp:6050] Registered agent 938119f3-8007-4d6f-a45b-d49bf76a0590-S0 at slave(409)@172.16.10.96:46227 (ip-172-16-10-96.ec2.internal) with [{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":1.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""scalar"""":{""""value"""":512.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""scalar"""":{""""value"""":35823.0},""""type"""":""""SCALAR""""},{""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] 00:42:35 I0930 00:42:35.541733 7439 hierarchical.cpp:593] Added agent 938119f3-8007-4d6f-a45b-d49bf76a0590-S0 (ip-172-16-10-96.ec2.internal) with cpus:1; mem:512; disk:35823; ports:[31000-32000] (allocated: {}) 00:42:35 I0930 00:42:35.541798 7437 slave.cpp:1196] Registered with master master@172.16.10.96:46227; given agent ID 938119f3-8007-4d6f-a45b-d49bf76a0590-S0 00:42:35 I0930 00:42:35.541828 7439 hierarchical.cpp:1943] No allocations performed 00:42:35 I0930 00:42:35.541842 7439 hierarchical.cpp:1486] Performed allocation for 1 agents in 32679ns 00:42:35 I0930 00:42:35.541879 7439 status_update_manager.cpp:184] Resuming sending status updates 00:42:35 I0930 00:42:35.542570 7433 process.cpp:3929] Handling HTTP event for process 'master' with path: '/master/reserve' 00:42:35 I0930 00:42:35.542874 7438 http.cpp:1185] HTTP POST for /master/reserve from 172.16.10.96:54256 00:42:35 I0930 00:42:35.543103 7437 slave.cpp:1216] Checkpointing SlaveInfo to '/tmp/ReservationEndpointsTest_GoodReserveAndUnreserveACL_wtxBTT/meta/slaves/938119f3-8007-4d6f-a45b-d49bf76a0590-S0/slave.info' 00:42:35 I0930 00:42:35.543090 7438 master.cpp:3641] Authorizing principal 'test-principal' to reserve resources '[{""""name"""":""""cpus"""",""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""role"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":1.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""role"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":512.0},""""type"""":""""SCALAR""""}]' 00:42:35 I0930 00:42:35.543285 7437 
slave.cpp:1265] Forwarding total oversubscribed resources {} 00:42:35 I0930 00:42:35.543319 7437 slave.cpp:4969] Received ping from slave-observer(413)@172.16.10.96:46227 00:42:35 I0930 00:42:35.543632 7438 master.cpp:6814] Received update of agent 938119f3-8007-4d6f-a45b-d49bf76a0590-S0 at slave(409)@172.16.10.96:46227 (ip-172-16-10-96.ec2.internal) with total oversubscribed resources {} 00:42:35 I0930 00:42:35.543754 7438 master.cpp:9314] Sending updated checkpointed resources cpus(reservations: [(DYNAMIC,role,test-principal)]):1; mem(reservations: [(DYNAMIC,role,test-principal)]):512 to agent 938119f3-8007-4d6f-a45b-d49bf76a0590-S0 at slave(409)@172.16.10.96:46227 (ip-172-16-10-96.ec2.internal) 00:42:35 I0930 00:42:35.543889 7437 hierarchical.cpp:660] Agent 938119f3-8007-4d6f-a45b-d49bf76a0590-S0 (ip-172-16-10-96.ec2.internal) updated with total resources cpus:1; mem:512; disk:35823; ports:[31000-32000] 00:42:35 I0930 00:42:35.543952 7437 hierarchical.cpp:1943] No allocations performed 00:42:35 I0930 00:42:35.543967 7437 hierarchical.cpp:1486] Performed allocation for 1 agents in 29057ns 00:42:35 I0930 00:42:35.544109 7438 slave.cpp:3522] Updated checkpointed resources from {} to cpus(reservations: [(DYNAMIC,role,test-principal)]):1; mem(reservations: [(DYNAMIC,role,test-principal)]):512 00:42:35 I0930 00:42:35.544886 7439 process.cpp:3929] Handling HTTP event for process 'master' with path: '/master/unreserve' 00:42:35 I0930 00:42:35.545197 7437 http.cpp:1185] HTTP POST for /master/unreserve from 172.16.10.96:54258 00:42:35 I0930 00:42:35.545383 7437 master.cpp:3709] Authorizing principal 'test-principal' to unreserve resources '[{""""name"""":""""cpus"""",""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""role"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":1.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""role"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":512.0},""""type"""":""""SCALAR""""}]' 00:42:35 ../../src/tests/reservation_endpoints_tests.cpp:1026: Failure 00:42:35 Value of: (response).get().status 00:42:35 Actual: """"409 Conflict"""" 00:42:35 Expected: Accepted().status 00:42:35 Which is: """"202 Accepted"""" 00:42:35 I0930 00:42:35.546277 7413 slave.cpp:869] Agent terminating 00:42:35 I0930 00:42:35.546371 7439 master.cpp:1321] Agent 938119f3-8007-4d6f-a45b-d49bf76a0590-S0 at slave(409)@172.16.10.96:46227 (ip-172-16-10-96.ec2.internal) disconnected 00:42:35 I0930 00:42:35.546391 7439 master.cpp:3354] Disconnecting agent 938119f3-8007-4d6f-a45b-d49bf76a0590-S0 at slave(409)@172.16.10.96:46227 (ip-172-16-10-96.ec2.internal) 00:42:35 I0930 00:42:35.546404 7439 master.cpp:3373] Deactivating agent 938119f3-8007-4d6f-a45b-d49bf76a0590-S0 at slave(409)@172.16.10.96:46227 (ip-172-16-10-96.ec2.internal) 00:42:35 I0930 00:42:35.546520 7438 hierarchical.cpp:690] Agent 938119f3-8007-4d6f-a45b-d49bf76a0590-S0 deactivated 00:42:35 I0930 00:42:35.547936 7413 master.cpp:1163] Master terminating 00:42:35 I0930 00:42:35.548065 7439 hierarchical.cpp:626] Removed agent 938119f3-8007-4d6f-a45b-d49bf76a0590-S0 00:42:35 [ FAILED ] ReservationEndpointsTest.GoodReserveAndUnreserveACL (33 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8051","10/04/2017 17:01:04",2,"Killing TASK_GROUP fail to kill some tasks ""When starting following pod definition via marathon: mesos will successfully 
kill all {{ct2}} containers but fail to kill all/some of the {{ct1}} containers. I've attached both master and agent logs. The interesting part starts after marathon issues 6 kills: All {{.ct1}} tasks fail eventually (~30s) where {{.ct2}} are successfully killed."""," { """"id"""": """"/simple-pod"""", """"scaling"""": { """"kind"""": """"fixed"""", """"instances"""": 3 }, """"environment"""": { """"PING"""": """"PONG"""" }, """"containers"""": [ { """"name"""": """"ct1"""", """"resources"""": { """"cpus"""": 0.1, """"mem"""": 32 }, """"image"""": { """"kind"""": """"MESOS"""", """"id"""": """"busybox"""" }, """"exec"""": { """"command"""": { """"shell"""": """"while true; do echo the current time is $(date) > ./test-v1/clock; sleep 1; done"""" } }, """"volumeMounts"""": [ { """"name"""": """"v1"""", """"mountPath"""": """"test-v1"""" } ] }, { """"name"""": """"ct2"""", """"resources"""": { """"cpus"""": 0.1, """"mem"""": 32 }, """"exec"""": { """"command"""": { """"shell"""": """"while true; do echo -n $PING ' '; cat ./etc/clock; sleep 1; done"""" } }, """"volumeMounts"""": [ { """"name"""": """"v1"""", """"mountPath"""": """"etc"""" }, { """"name"""": """"v2"""", """"mountPath"""": """"docker"""" } ] } ], """"networks"""": [ { """"mode"""": """"host"""" } ], """"volumes"""": [ { """"name"""": """"v1"""" }, { """"name"""": """"v2"""", """"host"""": """"/var/lib/docker"""" } ] } Oct 04 14:58:25 ip-10-0-5-229.eu-central-1.compute.internal mesos-master[4708]: I1004 14:58:25.209966 4746 master.cpp:5297] Processing KILL call for task 'simple-pod.instance-3c1098e5-a914-11e7-bcd5-e63c853d bf20.ct1' of framework bae11d5d-20c2-4d66-9ec3-773d1d717e58-0001 (marathon) at scheduler-c61c493c-728f-4bd9-be60-7373574749af@10.0.5.229:15101 Oct 04 14:58:25 ip-10-0-5-229.eu-central-1.compute.internal mesos-master[4708]: I1004 14:58:25.210033 4746 master.cpp:5371] Telling agent bae11d5d-20c2-4d66-9ec3-773d1d717e58-S1 at slave(1)@10.0.1.207:5051 ( 10.0.1.207) to kill task simple-pod.instance-3c1098e5-a914-11e7-bcd5-e63c853dbf20.ct1 of framework bae11d5d-20c2-4d66-9ec3-773d1d717e58-0001 (marathon) at scheduler-c61c493c-728f-4bd9-be60-7373574749af@10.0.5 .229:15101 Oct 04 14:58:25 ip-10-0-5-229.eu-central-1.compute.internal mesos-master[4708]: I1004 14:58:25.210471 4748 master.cpp:5297] Processing KILL call for task 'simple-pod.instance-3c1098e5-a914-11e7-bcd5-e63c853d bf20.ct2' of framework bae11d5d-20c2-4d66-9ec3-773d1d717e58-0001 (marathon) at scheduler-c61c493c-728f-4bd9-be60-7373574749af@10.0.5.229:15101 Oct 04 14:58:25 ip-10-0-5-229.eu-central-1.compute.internal mesos-master[4708]: I1004 14:58:25.210518 4748 master.cpp:5371] Telling agent bae11d5d-20c2-4d66-9ec3-773d1d717e58-S1 at slave(1)@10.0.1.207:5051 ( 10.0.1.207) to kill task simple-pod.instance-3c1098e5-a914-11e7-bcd5-e63c853dbf20.ct2 of framework bae11d5d-20c2-4d66-9ec3-773d1d717e58-0001 (marathon) at scheduler-c61c493c-728f-4bd9-be60-7373574749af@10.0.5 .229:15101 Oct 04 14:58:25 ip-10-0-5-229.eu-central-1.compute.internal mesos-master[4708]: I1004 14:58:25.210602 4748 master.cpp:5297] Processing KILL call for task 'simple-pod.instance-3c0ffca4-a914-11e7-bcd5-e63c853d bf20.ct1' of framework bae11d5d-20c2-4d66-9ec3-773d1d717e58-0001 (marathon) at scheduler-c61c493c-728f-4bd9-be60-7373574749af@10.0.5.229:15101 Oct 04 14:58:25 ip-10-0-5-229.eu-central-1.compute.internal mesos-master[4708]: I1004 14:58:25.210639 4748 master.cpp:5371] Telling agent bae11d5d-20c2-4d66-9ec3-773d1d717e58-S1 at slave(1)@10.0.1.207:5051 ( 10.0.1.207) to kill task 
simple-pod.instance-3c0ffca4-a914-11e7-bcd5-e63c853dbf20.ct1 of framework bae11d5d-20c2-4d66-9ec3-773d1d717e58-0001 (marathon) at scheduler-c61c493c-728f-4bd9-be60-7373574749af@10.0.5 .229:15101 Oct 04 14:58:25 ip-10-0-5-229.eu-central-1.compute.internal mesos-master[4708]: I1004 14:58:25.210932 4753 master.cpp:5297] Processing KILL call for task 'simple-pod.instance-3c0ffca4-a914-11e7-bcd5-e63c853d bf20.ct2' of framework bae11d5d-20c2-4d66-9ec3-773d1d717e58-0001 (marathon) at scheduler-c61c493c-728f-4bd9-be60-7373574749af@10.0.5.229:15101 Oct 04 14:58:25 ip-10-0-5-229.eu-central-1.compute.internal mesos-master[4708]: I1004 14:58:25.210968 4753 master.cpp:5371] Telling agent bae11d5d-20c2-4d66-9ec3-773d1d717e58-S1 at slave(1)@10.0.1.207:5051 ( 10.0.1.207) to kill task simple-pod.instance-3c0ffca4-a914-11e7-bcd5-e63c853dbf20.ct2 of framework bae11d5d-20c2-4d66-9ec3-773d1d717e58-0001 (marathon) at scheduler-c61c493c-728f-4bd9-be60-7373574749af@10.0.5 .229:15101 Oct 04 14:58:25 ip-10-0-5-229.eu-central-1.compute.internal mesos-master[4708]: I1004 14:58:25.211210 4747 master.cpp:5297] Processing KILL call for task 'simple-pod.instance-328cd633-a914-11e7-bcd5-e63c853d bf20.ct1' of framework bae11d5d-20c2-4d66-9ec3-773d1d717e58-0001 (marathon) at scheduler-c61c493c-728f-4bd9-be60-7373574749af@10.0.5.229:15101 Oct 04 14:58:25 ip-10-0-5-229.eu-central-1.compute.internal mesos-master[4708]: I1004 14:58:25.211251 4747 master.cpp:5371] Telling agent bae11d5d-20c2-4d66-9ec3-773d1d717e58-S1 at slave(1)@10.0.1.207:5051 ( 10.0.1.207) to kill task simple-pod.instance-328cd633-a914-11e7-bcd5-e63c853dbf20.ct1 of framework bae11d5d-20c2-4d66-9ec3-773d1d717e58-0001 (marathon) at scheduler-c61c493c-728f-4bd9-be60-7373574749af@10.0.5 .229:15101 Oct 04 14:58:25 ip-10-0-5-229.eu-central-1.compute.internal mesos-master[4708]: I1004 14:58:25.211474 4746 master.cpp:5297] Processing KILL call for task 'simple-pod.instance-328cd633-a914-11e7-bcd5-e63c853d bf20.ct2' of framework bae11d5d-20c2-4d66-9ec3-773d1d717e58-0001 (marathon) at scheduler-c61c493c-728f-4bd9-be60-7373574749af@10.0.5.229:15101 Oct 04 14:58:25 ip-10-0-5-229.eu-central-1.compute.internal mesos-master[4708]: I1004 14:58:25.211514 4746 master.cpp:5371] Telling agent bae11d5d-20c2-4d66-9ec3-773d1d717e58-S1 at slave(1)@10.0.1.207:5051 ( 10.0.1.207) to kill task simple-pod.instance-328cd633-a914-11e7-bcd5-e63c853dbf20.ct2 of framework bae11d5d-20c2-4d66-9ec3-773d1d717e58-0001 (marathon) at scheduler-c61c493c-728f-4bd9-be60-7373574749af@10.0.5 .229:15101 ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-8052","10/04/2017 19:16:55",2,"""protoc"" not found when running ""make -j4 check"" directly in stout ""If we run {{make -j4 check}} without running {{make}} first, we will get the following error message: """," 3rdparty/protobuf-3.3.0/src/protoc -I../tests --cpp_out=. ../tests/protobuf_tests.proto /bin/bash: 3rdparty/protobuf-3.3.0/src/protoc: No such file or directory Makefile:1934: recipe for target 'protobuf_tests.pb.cc' failed make: *** [protobuf_tests.pb.cc] Error 127 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8058","10/06/2017 16:45:42",2,"Agent and master can race when updating agent state. ""In {{2af9a5b07dc80151154264e974d03f56a1c25838}} we introduce the use of {{UpdateSlaveMessage}} for the agent to inform the master about its current total resources. Currently we trigger this message only on agent registration and reregistration. 
This can race with operations applied in the master and communicated via {{CheckpointResourcesMessage}}. Example: 1. Agent ({{cpus:4(\*)}} registers. 2. Master is triggered to apply an operation to the agent's resources, e.g., a reservation: {{cpus:4(\*) -> cpus:4(A)}}. The master applies the operation to its current view of the agent's resources and sends the agent a {{CheckpointResourcesMessage}} so the agent can persist the result. 3. The agent sends the master an {{UpdateSlaveMessage}}, e.g., {{cpus:4(\*)}} since it hasn't received the {{CheckpointResourcesMessage}} yet. 4. The master processes the {{UpdateSlaveMessage}} and updates its view of the agent's resources to be {{cpus:4(\*)}}. 5. The agent processes the {{CheckpointResourcesMessage}} and updates its view of its resources to be {{cpus:4(A)}}. 6. The agent and the master have an inconsistent view of the agent's resources.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8075","10/11/2017 21:47:29",5,"Add ReadWriteLock to libprocess. ""We want to add a new {{ReadWriteLock}} similar to {{Mutex}}, which can provide better concurrecy protection for mutual exclusive actions, but allow high concurrency for actions which can be performed at the same time. One use case is image garbage collection: the new API {{provisioner::pruneImages}} needs to be mutually exclusive from {{provisioner::provision}}, but multiple {{{provisioner::provision}} can concurrently run safely.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8076","10/12/2017 01:04:05",2,"PersistentVolumeTest.SharedPersistentVolumeRescindOnDestroy is flaky. ""I'm observing {{ROOT_MountDiskResource/PersistentVolumeTest.SharedPersistentVolumeRescindOnDestroy/0}} being flaky on our internal CI. From what I see in the logs, when {{framework1}} accepts an offer, creates volumes, launches a task, and kills it right after, the executor might manage to register in-between and hence an unexpected {{TASK_RUNNING}} status update is sent. To fix this, one approach is to explicitly wait for {{TASK_RUNNING}} before attempting to kill the task.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8078","10/12/2017 16:45:51",2,"Some fields went missing with no replacement in api/v1. ""Hi friends, These fields are available via the state.json but went missing in the v1 of the API: -leader_info- -> available via GET_MASTER which should always return leading master info start_time elected_time As we're showing them on the Overview page of the DC/OS UI, yet would like not be using state.json, it would be great to have them somewhere in V1.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8079","10/12/2017 16:54:46",5,"Checkpoint and recover layers used to provision rootfs in provisioner ""This information will be necessary for {{provisioner}} to determine all layers of active containers, which we need to retain when image gc happens.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0 +"MESOS-8082","10/12/2017 21:23:19",5,"updateAvailable races with a periodic allocation and leads to flaky tests. 
""When an operator requests a resource modification (reserve resources, create a persitent volume and so on), a corresponding endpoint handler can request allocator state modification twice: recover resources from rescinded offers and for update applied operation. These operations should happen atomically, i.e., no other allocator change can happen in-between. This is however not the case: a periodic allocation can kick in. Solutions to this race might be: moving offer management to the allocator, coupling operations in the allocator, pausing allocator. While this race does not necessarily lead to bugs in production—as long as operators and tooling can handle failures and retry—, it makes some tests using resource modification flaky, because in tests we do not plan for failures and retries.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8093","10/14/2017 07:02:20",2,"Some tests miss subscribed event because expectation is set after event fires. ""Tests all have the same problem. They initiate a scheduler subscribe call in reaction to {{connected}} event. However, an expectation for {{subscribed}} event is created _afterwards_, which might lead to an uninteresting mock function call for {{subscribed}} followed by a failure to wait for {{subscribed}}, see attached log excerpt for more details. Problematic code is here: https://github.com/apache/mesos/blob/1c51c98638bb9ea0e8ec6a3f284b33d6c1a4e8ef/src/tests/containerizer/runtime_isolator_tests.cpp#L593-L615 A possible solution is to await for {{subscribed}} only, without {{connected}}, setting the expectation before a connection is attempted, see https://github.com/apache/mesos/blob/1c51c98638bb9ea0e8ec6a3f284b33d6c1a4e8ef/src/tests/default_executor_tests.cpp#L139-L159."""," CgroupsIsolatorTest.ROOT_CGROUPS_LimitSwap DefaultExecutorCniTest.ROOT_VerifyContainerIP DockerRuntimeIsolatorTest.ROOT_INTERNET_CURL_NestedSimpleCommand DockerRuntimeIsolatorTest.ROOT_NestedDockerDefaultCmdLocalPuller DockerRuntimeIsolatorTest.ROOT_NestedDockerDefaultEntryptLocalPuller ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8095","10/14/2017 22:57:32",2,"ResourceProviderRegistrarTest.AgentRegistrar is flaky. ""Observed it in internal CI. Test log attached.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8097","10/16/2017 19:23:43",2,"Add filesystem layout for local resource providers. ""We need to add a checkpoint directory for local resource providers. The checkpoints should be tied to the slave ID, otherwise resources with the same ID appearing on different agents (due to agent failover and registering with a new ID) may confuse frameworks.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8100","10/16/2017 23:46:18",5,"Authorize standalone container calls from local resource providers. ""We need to add authorization for a local resource provider to call the standalone container API to prevent the provider from manipulating arbitrary containers. We can use the same JWT-based authN/authZ mechanism for executors, where the agent will create a auth token for each local resource provider instance: """," class LecalResourceProvider { public: static Try> create( const process::http::URL& url, const std::string& workDir, const mesos::ResourceProviderInfo& info, const Option& authToken); ... 
}; ",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8101","10/17/2017 01:49:58",5,"Import resources from CSI plugins in storage local resource provider. ""The following lists the steps to import resources from a CSI plugin: 1. Launch the node plugin 1.1 GetSupportedVersions 1.2 GetPluginInfo 1.3 ProbeNode 1.4 GetNodeCapabilities 2. Launch the controller plugin 2.1 GetSuportedVersions 2.2 GetPluginInfo 2.3 GetControllerCapabilities 3. GetCapacity 4. ListVolumes 5. Report to the resource provider through UPDATE_TOTAL_RESOURCES""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8102","10/17/2017 01:58:06",5,"Add a test CSI plugin for storage local resource provider. ""We need a dummy CSI plugin for testing storage local resoure providers. The test CSI plugin would just create subdirectories under its working directories to mimic the behavior of creating volumes, then bind-mount those volumes to mimic publish.""","",0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8106","10/17/2017 19:27:11",3,"Docker fetcher plugin unsupported scheme failure message is not accurate. ""https://github.com/apache/mesos/blob/1.4.0/src/uri/fetchers/docker.cpp#L843 This failure message is not accurate. For such a case, if the user/operator give a wrong credential to communicate to a BASIC auth based docker private registry. The authentication failed but the log is still saying: """"Unsupported auth-scheme: BASIC"""" ""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8108","10/18/2017 02:03:14",3,"Process offer operations in storage local resource provider ""The storage local resource provider receives offer operations for reservations and resource conversions, and invoke proper CSI calls to implement these operations.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8119","10/20/2017 19:34:07",3,"ROOT_DOCKER_DockerHealthyTask segfaults in debian 8. ""This test consistently cannot recover the agent on two debian 8 builds: with SSL and CMake based. The error is always the same (full logs attached): """," 19:40:59 E1019 19:40:58.581372 16873 slave.cpp:6301] EXIT with status 1: Failed to perform recovery: Failed to run 'docker -H unix:///var/run/docker.sock ps -a': exited with status 1; stderr='error during connect: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.31/containers/json?all=1: read unix @->/var/run/docker.sock: read: connection reset by peer ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8121","10/20/2017 22:20:53",3,"Unified Containerizer Auto backend should check xfs ftype for overlayfs backend. ""when using xfs as the backing filesystem in unified containerizer, the `ftype` has to be equal to 1 if we are using the overlay fs backend. we should add the detection in auto backend logic because some OS (like centos 7.2) has xfs ftype=0 by default. https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8123","10/21/2017 06:23:18",1,"GPU tests are failing due to TASK_STARTING. 
""For instance: NvidiaGpuTest.ROOT_CGROUPS_NVIDIA_GPU_VerifyDeviceAccess """," I1020 22:18:46.180371 1480 exec.cpp:237] Executor registered on agent ca0e7b44-c621-4442-a62e-15f7bf02064b-S0 I1020 22:18:46.185027 1486 executor.cpp:171] Received SUBSCRIBED event I1020 22:18:46.186005 1486 executor.cpp:175] Subscribed executor on core-dev I1020 22:18:46.186189 1486 executor.cpp:171] Received LAUNCH event I1020 22:18:46.188908 1486 executor.cpp:637] Starting task 3c08cf78-575d-4813-82b6-3ace272db35e I1020 22:18:46.192939 1316 slave.cpp:4407] Handling status update TASK_STARTING (UUID: 87cee290-b2fe-4459-9b75-b9f03aab6492) for task 3c08cf78-575d-4813-82b6-3ace272db35e of fra mework ca0e7b44-c621-4442-a62e-15f7bf02064b-0000 from executor(1)@10.0.49.2:42711 I1020 22:18:46.196228 1330 status_update_manager.cpp:323] Received status update TASK_STARTING (UUID: 87cee290-b2fe-4459-9b75-b9f03aab6492) for task 3c08cf78-575d-4813-82b6-3ace 272db35e of framework ca0e7b44-c621-4442-a62e-15f7bf02064b-0000 I1020 22:18:46.197510 1329 slave.cpp:4888] Forwarding the update TASK_STARTING (UUID: 87cee290-b2fe-4459-9b75-b9f03aab6492) for task 3c08cf78-575d-4813-82b6-3ace272db35e of fram ework ca0e7b44-c621-4442-a62e-15f7bf02064b-0000 to master@10.0.49.2:34819 I1020 22:18:46.197927 1329 slave.cpp:4798] Sending acknowledgement for status update TASK_STARTING (UUID: 87cee290-b2fe-4459-9b75-b9f03aab6492) for task 3c08cf78-575d-4813-82b6- 3ace272db35e of framework ca0e7b44-c621-4442-a62e-15f7bf02064b-0000 to executor(1)@10.0.49.2:42711 I1020 22:18:46.198098 1332 master.cpp:6998] Status update TASK_STARTING (UUID: 87cee290-b2fe-4459-9b75-b9f03aab6492) for task 3c08cf78-575d-4813-82b6-3ace272db35e of framework c a0e7b44-c621-4442-a62e-15f7bf02064b-0000 from agent ca0e7b44-c621-4442-a62e-15f7bf02064b-S0 at slave(1)@10.0.49.2:34819 (core-dev) I1020 22:18:46.198187 1332 master.cpp:7060] Forwarding status update TASK_STARTING (UUID: 87cee290-b2fe-4459-9b75-b9f03aab6492) for task 3c08cf78-575d-4813-82b6-3ace272db35e of framework ca0e7b44-c621-4442-a62e-15f7bf02064b-0000 I1020 22:18:46.198463 1332 master.cpp:9162] Updating the state of task 3c08cf78-575d-4813-82b6-3ace272db35e of framework ca0e7b44-c621-4442-a62e-15f7bf02064b-0000 (latest state: TASK_STARTING, status update state: TASK_STARTING) I1020 22:18:46.199198 1331 master.cpp:5566] Processing ACKNOWLEDGE call 87cee290-b2fe-4459-9b75-b9f03aab6492 for task 3c08cf78-575d-4813-82b6-3ace272db35e of framework ca0e7b44- c621-4442-a62e-15f7bf02064b-0000 (default) at scheduler-f2b66689-382a-4b8c-bdc9-978cff922409@10.0.49.2:34819 on agent ca0e7b44-c621-4442-a62e-15f7bf02064b-S0 /home/jie/workspace/mesos/src/tests/containerizer/nvidia_gpu_isolator_tests.cpp:142: Failure Expected: TASK_RUNNING To be equal to: statusRunning1->state() Which is: TASK_STARTING ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8125","10/24/2017 00:44:27",2,"Agent should properly handle recovering an executor when its pid is reused ""Here's how to reproduce this issue: # Start a task using the Docker containerizer (the same will probably happen with the command executor). # Stop the corresponding Mesos agent while the task is running. # Change the executor's checkpointed forked pid, which is located in the meta directory, e.g., {{/var/lib/mesos/slave/meta/slaves/latest/frameworks/19faf6e0-3917-48ab-8b8e-97ec4f9ed41e-0001/executors/foo.13faee90-b5f0-11e7-8032-e607d2b4348c/runs/latest/pids/forked.pid}}. 
I used pid 2, which is normally used by {{kthreadd}}. # Reboot the host""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8128","10/24/2017 22:44:49",3,"Make os::pipe file descriptors O_CLOEXEC. ""File descriptors from {{os::pipe}} will be inherited across exec. On Linux we can use [pipe2|http://man7.org/linux/man-pages/man2/pipe.2.html] to atomically make the pipe {{O_CLOEXEC}}.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8130","10/25/2017 18:35:32",2,"Add placeholder handlers for offer operation feedback ""In order to sketch out the flow of messages necessary to facilitate offer operation feedback, we should add some empty placeholder handlers to the master and agent as detailed in the [offer operation feedback design doc|https://docs.google.com/document/d/1GGh14SbPTItjiweSZfann4GZ6PCteNrn-1y4pxOjgcI/edit#].""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8141","10/27/2017 23:14:50",3,"Add filesystem layout for storage resource providers. ""We need directories for placing mount points and checkpoint CSI volume state for storage resource providers. Unlike resource checkpoints, CSI volume states should persist across agents since otherwise the CSI plugin might not work properly.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8143","10/28/2017 01:16:11",3,"Publish and unpublish storage local resources through CSI plugins. ""Storage local resource provider needs to call the following CSI API to publish CSI volumes for tasks to use: 1. ControllerPublishVolume (optional) 2. NodePublishVolume Although we don't need to unpublish CSI volumes after tasks are completed, we still needs to unpublish them for DESTROY_VOLUME or DESTROY_BLOCK: 1. NodeUnpublishVolume 2. ControllerUnpublishVolume (optional)""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8171","11/04/2017 07:23:52",2,"Using a failoverTimeout of 0 with Mesos native scheduler client can result in infinite subscribe loop ""Over the past year, the Marathon team has been plagued with an issue that hits our CI builds periodically in which the scheduler driver enters a tight loop, sending 10,000s of SUBSCRIBE calls to the master per second. I turned on debug logging for the client and the server, and it pointed to an issue with the {{doReliableRegistration}} method in sched.cpp. Here's the logs: In Marathon, when we are running our tests, we set the failoverTimeout to 0 in order to cause the Mesos master to immediately forget about a framework when it disconnects. On line 860 of sched.cpp, the retry-delay is set to 1/10th the failoverTimeout, which provides the best explanation for why the value is 0: Reading through the code, it seems that once this value is 0, it will always be zero, since backoff is multiplicative (0 * 2 == 0), and the failover_timeout / 10 limit is applied each time. 
To make matters worse, failoverTimeout of {{0}} is the default: I've confirmed that when using 1.4.0 of the Mesos client java jar, this default is used if failoverTimeout is not set: """," WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.099815 13397 process.cpp:1383] libprocess is initialized on 127.0.1.1:60957 with 8 worker threads WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.118237 13397 logging.cpp:199] Logging to STDERR WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.128921 13416 sched.cpp:232] Version: 1.4.0 WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.151785 13791 group.cpp:341] Group process (zookeeper-group(1)@127.0.1.1:60957) connected to ZooKeeper WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.151823 13791 group.cpp:831] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0) WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.151837 13791 group.cpp:419] Trying to create path '/mesos' in ZooKeeper WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.152586 13791 group.cpp:758] Found non-sequence node 'log_replicas' at '/mesos' in ZooKeeper WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.152662 13791 detector.cpp:152] Detected a new leader: (id='0') WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.152762 13791 group.cpp:700] Trying to get '/mesos/json.info_0000000000' in ZooKeeper WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.157148 13791 zookeeper.cpp:262] A new leading master (UPID=master@172.16.10.95:32856) is detected WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.157347 13787 sched.cpp:336] New master detected at master@172.16.10.95:32856 WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.157557 13787 sched.cpp:352] No credentials provided. 
Attempting to register without authentication WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.157565 13787 sched.cpp:836] Sending SUBSCRIBE call to master@172.16.10.95:32856 WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.157635 13787 sched.cpp:869] Will retry registration in 0ns if necessary WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.158979 13785 sched.cpp:836] Sending SUBSCRIBE call to master@172.16.10.95:32856 WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.159029 13785 sched.cpp:869] Will retry registration in 0ns if necessary WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.159265 13790 sched.cpp:836] Sending SUBSCRIBE call to master@172.16.10.95:32856 WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.159303 13790 sched.cpp:869] Will retry registration in 0ns if necessary WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.159479 13786 sched.cpp:836] Sending SUBSCRIBE call to master@172.16.10.95:32856 WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.159521 13786 sched.cpp:869] Will retry registration in 0ns if necessary WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.159622 13788 sched.cpp:836] Sending SUBSCRIBE call to master@172.16.10.95:32856 WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.159658 13788 sched.cpp:869] Will retry registration in 0ns if necessary WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.159749 13789 sched.cpp:836] Sending SUBSCRIBE call to master@172.16.10.95:32856 WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.159785 13789 sched.cpp:869] Will retry registration in 0ns if necessary WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.159878 13792 sched.cpp:836] Sending SUBSCRIBE call to master@172.16.10.95:32856 WARN [05:39:39 EventsIntegrationTest-LocalMarathon-32858] I1104 05:39:39.159916 13792 sched.cpp:869] Will retry registration in 0ns if necessary ./mesos/src/sched/sched.cpp 818 | void doReliableRegistration(Duration maxBackoff) 819 | { ... 851 | // Bound the maximum backoff by 'REGISTRATION_RETRY_INTERVAL_MAX'. 852 | maxBackoff = 853 | std::min(maxBackoff, scheduler::REGISTRATION_RETRY_INTERVAL_MAX); 854 | 855 | // If failover timeout is present, bound the maximum backoff 856 | // by 1/10th of the failover timeout. 857 | if (framework.has_failover_timeout()) { 858 | Try duration = Duration::create(framework.failover_timeout()); 859 | if (duration.isSome()) { 860 | maxBackoff = std::min(maxBackoff, duration.get() / 10); 861 | } 862 | } 863 | 864 | // Determine the delay for next attempt by picking a random 865 | // duration between 0 and 'maxBackoff'. 866 | // TODO(vinod): Use random numbers from header. 867 | Duration delay = maxBackoff * ((double) os::random() / RAND_MAX); 868 | 869 | VLOG(1) << """"Will retry registration in """" << delay << """" if necessary""""; 870 | 871 | // Backoff. 872 | frameworkRegistrationTimer = process::delay( 873 | delay, self(), &Self::doReliableRegistration, maxBackoff * 2); 874 | } 875 | ./mesos/include/mesos/mesos.proto 238 | // The amount of time (in seconds) that the master will wait for the 239 | // scheduler to failover before it tears down the framework by 240 | // killing all its tasks/executors. 
This should be non-zero if a 241 | // framework expects to reconnect after a failure and not lose its 242 | // tasks/executors. 243 | // 244 | // NOTE: To avoid accidental destruction of tasks, production 245 | // frameworks typically set this to a large value (e.g., 1 week). 246 | optional double failover_timeout = 4 [default = 0.0]; 247 | @ import $ivy.`org.apache.mesos:mesos:1.4.0` import $ivy.$ @ import org.apache.mesos.Protos import org.apache.mesos.Protos @ Protos.FrameworkInfo.newBuilder.setName(""""test"""").setUser(""""user"""").build.getFailoverTimeout res3: Double = 0.0 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8179","11/07/2017 18:05:35",1,"Scheduler library has incorrect assumptions about connections. ""Scheduler library assumes that a connection cannot be interrupted between continuations, for example {{send()}} and {{_send()}}: [https://github.com/apache/mesos/blob/509a1ab3226bbec7c369f431656f4ec692da00ba/src/scheduler/scheduler.cpp#L553]. This is not true, {{detected()}} can fire in-between, leading to disconnection: The bug has been introduced in https://reviews.apache.org/r/62594"""," I1107 18:50:57.154796 2138112 scheduler.cpp:496] New master detected at master@192.168.9.40:59063 ... I1107 18:50:57.160935 2138112 scheduler.cpp:505] Waiting for 0ns before initiating a re-(connection) attempt with the master I1107 18:50:57.161245 1064960 clock.cpp:435] Clock of __collect__(7)@192.168.9.40:59063 updated to 2017-11-07 17:50:57.159954176+00:00 I1107 18:50:57.161285 1898086400 clock.cpp:361] Clock resumed at 2017-11-07 17:50:57.159954176+00:00 I1107 18:50:57.161602 1064960 scheduler.cpp:387] Connected with the master at http://192.168.9.40:59063/master/api/v1/scheduler I1107 18:50:57.161779 2138112 scheduler.cpp:249] Sending SUBSCRIBE call to http://192.168.9.40:59063/master/api/v1/scheduler I1107 18:50:57.162037 2138112 scheduler.cpp:496] New master detected at master@192.168.9.40:59063 I1107 18:50:57.162055 2138112 scheduler.cpp:505] Waiting for 0ns before initiating a re-(connection) attempt with the master I1107 18:50:57.162164 4820992 process.cpp:3167] Dropping event for process __http_connection__(14)@192.168.9.40:59063 F1107 18:50:57.162214 2138112 scheduler.cpp:553] CHECK_SOME(connections): is NONE *** Check failure stack trace: *** E1107 18:50:57.162240 4820992 process.cpp:2576] Failed to shutdown socket with fd 9, address 192.168.9.40:59063: Socket is not connected @ 0x10ed262b4 google::LogMessage::Flush() @ 0x10ed2a21f google::LogMessageFatal::~LogMessageFatal() @ 0x10ed26ef9 google::LogMessageFatal::~LogMessageFatal() E1107 18:50:57.162304 4820992 process.cpp:2576] Failed to shutdown socket with fd 10, address 192.168.9.40:59063: Socket is not connected @ 0x1078efaea _CheckFatal::~_CheckFatal() @ 0x1078ea675 _CheckFatal::~_CheckFatal() @ 0x109dfcabf mesos::v1::scheduler::MesosProcess::_send() @ 0x109e07438 _ZZN7process8dispatchIN5mesos2v19scheduler12MesosProcessERKNS3_4CallERKNS_6FutureINS_4http7RequestEEES7_SD_EEvRKNS_3PIDIT_EEMSF_FvT0_T1_EOT2_OT3_ENKUlRS5_RSB_PNS_11ProcessBaseEE_clESR_SS_SU_ @ 0x109e072b7 _ZNSt3__128__invoke_void_return_wrapperIvE6__callIJRNS_6__bindIZN7process8dispatchIN5mesos2v19scheduler12MesosProcessERKNS8_4CallERKNS4_6FutureINS4_4http7RequestEEESC_SI_EEvRKNS4_3PIDIT_EEMSK_FvT0_T1_EOT2_OT3_EUlRSA_RSG_PNS4_11ProcessBaseEE_JSC_SI_RNS_12placeholders4__phILi1EEEEEESZ_EEEvDpOT_ @ 0x109e06ba9 
_ZNSt3__110__function6__funcINS_6__bindIZN7process8dispatchIN5mesos2v19scheduler12MesosProcessERKNS7_4CallERKNS3_6FutureINS3_4http7RequestEEESB_SH_EEvRKNS3_3PIDIT_EEMSJ_FvT0_T1_EOT2_OT3_EUlRS9_RSF_PNS3_11ProcessBaseEE_JSB_SH_RNS_12placeholders4__phILi1EEEEEENS_9allocatorIS14_EEFvSY_EEclEOSY_ @ 0x10de77d3a std::__1::function<>::operator()() @ 0x10e307abc process::ProcessBase::visit() @ 0x10e3b804e process::DispatchEvent::visit() @ 0x107a4b991 process::ProcessBase::serve() @ 0x10e300191 process::ProcessManager::resume() @ 0x10e42d27d process::ProcessManager::init_threads()::$_2::operator()() @ 0x10e42ce12 _ZNSt3__114__thread_proxyINS_5tupleIJZN7process14ProcessManager12init_threadsEvE3$_2EEEEEPvS6_ @ 0x7fff8591499d _pthread_body @ 0x7fff8591491a _pthread_start @ 0x7fff85912351 thread_start zsh: abort GLOG_v=2 GTEST_FILTER=""""*SchedulerTest.MasterFailover*"""" ./bin/mesos-tests.sh ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8183","11/08/2017 19:54:58",3,"Add a container daemon to monitor a long-running standalone container. ""The `ContanierDaemon` class is responsible to monitor if a long-running service running in a standalone container is ready to serve, and restart the service container if not. It does not manage the lifecycle of the contanier it monitors, so the container persists across `ContainerDaemon`s.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8207","11/10/2017 15:37:32",2,"Reconcile offer operations between resource providers, agents, and master ""We need to implement reconciliation of pending or unacknowledged offer operations between resource providers and agent, and agents and master. ""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8211","11/13/2017 10:48:33",5,"Handle agent local resources in offer operation handler ""The master will send {{ApplyOfferOperationMessage}} instead of {{CheckpointResourcesMessage}} when an agent has the 'RESOURCE_PROVIDER' capability set. The agent handler for the message needs to be updated to support operations on agent resources.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8219","11/14/2017 13:39:59",1,"Validate that any offer operation is only applied on resources from a single provider ""Offer operations can only be applied to resources from one single resource provider. A number of places in the implementation assume that the provider ID obtained from any {Resource} in an offer operation is equivalent to the one from any other resource. We should update the master to validate that invariant and reject malformed operations.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8221","11/14/2017 18:08:42",5,"Use protobuf reflection to simplify downgrading of resources. ""We currently have a {{downgradeResources}} function which is called on every {{repeated Resource}} field in every message that we checkpoint. We should leverage protobuf reflection to automatically downgrade any instances of {{Resource}} within any protobuf message.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8222","11/14/2017 18:50:15",3,"Add resource versions to RunTaskMessage ""To support speculative application of certain offer operations we have added resource versions to offer operation messages. 
This permits checking compatibility of master and agent state before applying operations. Launch operations are not modelled with offer operation messages, but instead with {{RunTaskMessage}}. In order to provide the same consistency guarantees we need to add resource versions to {{RunTaskMessage}} as well. Otherwise we would only rely on resource containment checks in the agent to catch inconsistencies; these can be unreliable as there is no guarantee that the matched agent resource is unique (e.g., with two {{RESERVE}} operations on similar resorces triggered on the same agent and one of these failing, the other succeeding, we would end up potentially sending one framework a success status and the other a failed one, but would not do anything the make sure the speculative operation application matches the resources belonging to the sent offer operation status update).""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8224","11/14/2017 19:31:31",1,"mesos.interface 1.4.0 cannot be installed with pip ""This breaks some framework development tooling. WIth latest pip: This works fine for previous releases: But it does not for 1.4.0: Verbose output shows that pip skips the 1.4.0 distribution: """," $ python -m pip -V pip 9.0.1 from /Users/wfarner/code/aurora/build-support/python/pycharm.venv/lib/python2.7/site-packages (python 2.7) $ python -m pip install mesos.interface==1.3.0 Collecting mesos.interface==1.3.0 ... Installing collected packages: mesos.interface Successfully installed mesos.interface-1.3.0 $ python -m pip install mesos.interface==1.4.0 Collecting mesos.interface==1.4.0 Could not find a version that satisfies the requirement mesos.interface==1.4.0 (from versions: 0.21.2.linux-x86_64, 0.22.1.2.linux-x86_64, 0.22.2.linux-x86_64, 0.23.1.linux-x86_64, 0.24.1.linux-x86_64, 0.24.2.linux-x86_64, 0.25.0.linux-x86_64, 0.25.1.linux-x86_64, 0.26.1.linux-x86_64, 0.27.0.linux-x86_64, 0.27.1.linux-x86_64, 0.27.2.linux-x86_64, 0.28.0.linux-x86_64, 0.28.1.linux-x86_64, 0.28.2.linux-x86_64, 1.0.0.linux-x86_64, 1.0.1.linux-x86_64, 1.1.0.linux-x86_64, 1.2.0.linux-x86_64, 1.3.0.linux-x86_64, 0.20.0, 0.20.1, 0.21.0, 0.21.1, 0.21.2, 0.22.0, 0.22.1.2, 0.22.2, 0.23.0, 0.23.1, 0.24.0, 0.24.1, 0.24.2, 0.25.0, 0.25.1, 0.26.0, 0.26.1, 0.27.0, 0.27.1, 0.27.2, 0.28.0, 0.28.1, 0.28.2, 1.0.0, 1.0.1, 1.1.0, 1.2.0, 1.3.0) No matching distribution found for mesos.interface==1.4.0 $ python -m pip install -v mesos.interface==1.4.0 | grep 1.4.0 Collecting mesos.interface==1.4.0 Skipping link https://pypi.python.org/packages/ef/1b/d5b0c1456f755ad42477eaa9667e22d1f5fd8e2fce0f9b26937f93743f6c/mesos.interface-1.4.0-py2.7.egg#md5=32113860961d49c31f69f7b13a9bc063 (from https://pypi.python.org/simple/mesos-interface/); unsupported archive format: .egg ",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"MESOS-8240","11/15/2017 23:12:55",8,"Add an option to build the new CLI and run unit tests. ""An update of the discarded [https://reviews.apache.org/r/52543/] Also needs to be available for CMake.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8245","11/17/2017 09:58:40",3,"SlaveRecoveryTest/0.ReconnectExecutor is flaky. ""Observed it today in our CI. Logs attached.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8249","11/20/2017 20:38:40",13,"Support image prune in mesos containerizer and provisioner. 
""Implement image prune in containerizer and the provisioner, by using mark and sweep to garbage collect unused layers.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0 +"MESOS-8258","11/23/2017 12:04:01",2,"Mesos.DockerContainerizerTest.ROOT_DOCKER_SlaveRecoveryTaskContainer is flaky. "" Full log attached."""," /home/ubuntu/workspace/mesos/Mesos_CI-build/FLAG/CMake/label/mesos-ec2-ubuntu-17.04/mesos/src/tests/containerizer/docker_containerizer_tests.cpp:2772 Expected: 1 To be equal to: reregister.updates_size() Which is: 2 ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8263","11/23/2017 15:15:47",2,"ResourceProviderManagerHttpApiTest.ConvertResources is flaky ""From a ASF CI run: """," 3: [ OK ] ContentType/ResourceProviderManagerHttpApiTest.ConvertResources/0 (1048 ms) 3: [ RUN ] ContentType/ResourceProviderManagerHttpApiTest.ConvertResources/1 3: I1123 08:06:04.233137 20036 cluster.cpp:162] Creating default 'local' authorizer 3: I1123 08:06:04.237293 20060 master.cpp:448] Master 7c9d8e8c-3fb3-44c5-8505-488ada3e848e (dce3e4c418cb) started on 172.17.0.2:35090 3: I1123 08:06:04.237325 20060 master.cpp:450] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/EpiTO7/credentials"""" --filter_gpu_resources=""""true"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_unreachable_tasks_per_framework=""""1000"""" --port=""""5050"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""in_memory"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/EpiTO7/master"""" --zk_session_timeout=""""10secs"""" 3: I1123 08:06:04.237727 20060 master.cpp:499] Master only allowing authenticated frameworks to register 3: I1123 08:06:04.237743 20060 master.cpp:505] Master only allowing authenticated agents to register 3: I1123 08:06:04.237753 20060 master.cpp:511] Master only allowing authenticated HTTP frameworks to register 3: I1123 08:06:04.237764 20060 credentials.hpp:37] Loading credentials for authentication from '/tmp/EpiTO7/credentials' 3: I1123 08:06:04.238149 20060 master.cpp:555] Using default 'crammd5' authenticator 3: I1123 08:06:04.238358 20060 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readonly' 3: I1123 08:06:04.238575 20060 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' 3: 
I1123 08:06:04.238764 20060 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' 3: I1123 08:06:04.238939 20060 master.cpp:634] Authorization enabled 3: I1123 08:06:04.239159 20043 whitelist_watcher.cpp:77] No whitelist given 3: I1123 08:06:04.239187 20045 hierarchical.cpp:173] Initialized hierarchical allocator process 3: I1123 08:06:04.242822 20041 master.cpp:2215] Elected as the leading master! 3: I1123 08:06:04.242857 20041 master.cpp:1695] Recovering from registrar 3: I1123 08:06:04.243067 20052 registrar.cpp:347] Recovering registrar 3: I1123 08:06:04.243808 20052 registrar.cpp:391] Successfully fetched the registry (0B) in 690944ns 3: I1123 08:06:04.243953 20052 registrar.cpp:495] Applied 1 operations in 37370ns; attempting to update the registry 3: I1123 08:06:04.244638 20052 registrar.cpp:552] Successfully updated the registry in 620032ns 3: I1123 08:06:04.244798 20052 registrar.cpp:424] Successfully recovered registrar 3: I1123 08:06:04.245352 20058 hierarchical.cpp:211] Skipping recovery of hierarchical allocator: nothing to recover 3: I1123 08:06:04.245358 20057 master.cpp:1808] Recovered 0 agents from the registry (129B); allowing 10mins for agents to re-register 3: W1123 08:06:04.251852 20036 process.cpp:2756] Attempted to spawn already running process files@172.17.0.2:35090 3: I1123 08:06:04.253250 20036 containerizer.cpp:301] Using isolation { environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni } 3: W1123 08:06:04.253965 20036 backend.cpp:76] Failed to create 'aufs' backend: AufsBackend requires root privileges 3: W1123 08:06:04.254109 20036 backend.cpp:76] Failed to create 'bind' backend: BindBackend requires root privileges 3: I1123 08:06:04.254148 20036 provisioner.cpp:259] Using default backend 'copy' 3: I1123 08:06:04.256542 20036 cluster.cpp:448] Creating default 'local' authorizer 3: I1123 08:06:04.260066 20057 slave.cpp:262] Mesos agent started on (784)@172.17.0.2:35090 3: I1123 08:06:04.260093 20057 slave.cpp:263] Flags at startup: --acls="""""""" --agent_features=""""capabilities { 3: type: MULTI_ROLE 3: } 3: capabilities { 3: type: HIERARCHICAL_ROLE 3: } 3: capabilities { 3: type: RESERVATION_REFINEMENT 3: } 3: capabilities { 3: type: RESOURCE_PROVIDER 3: } 3: """" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/ContentType_ResourceProviderManagerHttpApiTest_ConvertResources_1_Vr92Vg/store/appc"""" --authenticate_http_executors=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""false"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/ContentType_ResourceProviderManagerHttpApiTest_ConvertResources_1_Vr92Vg/credential"""" --default_role=""""*"""" --disallow_sharing_agent_pid_namespace=""""false"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/ContentType_ResourceProviderManagerHttpApiTest_ConvertResources_1_Vr92Vg/store/docker"""" 
--docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_reregistration_timeout=""""2secs"""" --executor_secret_key=""""/tmp/ContentType_ResourceProviderManagerHttpApiTest_ConvertResources_1_Vr92Vg/executor_secret_key"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/ContentType_ResourceProviderManagerHttpApiTest_ConvertResources_1_Vr92Vg/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_command_executor=""""false"""" --http_credentials=""""/tmp/ContentType_ResourceProviderManagerHttpApiTest_ConvertResources_1_Vr92Vg/http_credentials"""" --http_heartbeat_interval=""""30secs"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/mesos/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --port=""""5051"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/tmp/ContentType_ResourceProviderManagerHttpApiTest_ConvertResources_1_Vr92Vg"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/ContentType_ResourceProviderManagerHttpApiTest_ConvertResources_1_ycwsnc"""" --zk_session_timeout=""""10secs"""" 3: I1123 08:06:04.260721 20057 credentials.hpp:86] Loading credential for authentication from '/tmp/ContentType_ResourceProviderManagerHttpApiTest_ConvertResources_1_Vr92Vg/credential' 3: I1123 08:06:04.260936 20057 slave.cpp:295] Agent using credential for: test-principal 3: I1123 08:06:04.260975 20057 credentials.hpp:37] Loading credentials for authentication from '/tmp/ContentType_ResourceProviderManagerHttpApiTest_ConvertResources_1_Vr92Vg/http_credentials' 3: I1123 08:06:04.261390 20057 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-executor' 3: I1123 08:06:04.261538 20057 http.cpp:1066] Creating default 'jwt' HTTP authenticator for realm 'mesos-agent-executor' 3: I1123 08:06:04.261780 20057 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' 3: I1123 08:06:04.261898 20057 http.cpp:1066] Creating default 'jwt' HTTP authenticator for realm 'mesos-agent-readonly' 3: I1123 08:06:04.263828 20057 slave.cpp:593] Agent resources: [{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":2.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] 3: I1123 08:06:04.264111 20057 slave.cpp:601] Agent attributes: [ ] 3: I1123 08:06:04.264124 20057 slave.cpp:610] 
Agent hostname: dce3e4c418cb 3: I1123 08:06:04.264286 20054 task_status_update_manager.cpp:181] Pausing sending task status updates 3: I1123 08:06:04.266237 20052 state.cpp:64] Recovering state from '/tmp/ContentType_ResourceProviderManagerHttpApiTest_ConvertResources_1_ycwsnc/meta' 3: I1123 08:06:04.266593 20052 task_status_update_manager.cpp:207] Recovering task status update manager 3: I1123 08:06:04.266840 20045 containerizer.cpp:668] Recovering containerizer 3: I1123 08:06:04.268791 20048 provisioner.cpp:455] Provisioner recovery complete 3: I1123 08:06:04.269258 20054 slave.cpp:6493] Finished recovery 3: I1123 08:06:04.270414 20057 task_status_update_manager.cpp:181] Pausing sending task status updates 3: I1123 08:06:04.270411 20055 slave.cpp:1007] New master detected at master@172.17.0.2:35090 3: I1123 08:06:04.270539 20055 slave.cpp:1042] Detecting new master 3: I1123 08:06:04.283004 20040 slave.cpp:1069] Authenticating with master master@172.17.0.2:35090 3: I1123 08:06:04.283128 20040 slave.cpp:1078] Using default CRAM-MD5 authenticatee 3: I1123 08:06:04.283475 20042 authenticatee.cpp:121] Creating new client SASL connection 3: I1123 08:06:04.283859 20052 master.cpp:8312] Authenticating slave(784)@172.17.0.2:35090 3: I1123 08:06:04.284049 20038 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(1421)@172.17.0.2:35090 3: I1123 08:06:04.284358 20043 authenticator.cpp:98] Creating new server SASL connection 3: I1123 08:06:04.284647 20049 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 3: I1123 08:06:04.284683 20049 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' 3: I1123 08:06:04.284826 20060 authenticator.cpp:204] Received SASL authentication start 3: I1123 08:06:04.284900 20060 authenticator.cpp:326] Authentication requires more steps 3: I1123 08:06:04.285058 20044 authenticatee.cpp:259] Received SASL authentication step 3: I1123 08:06:04.285233 20044 authenticator.cpp:232] Received SASL authentication step 3: I1123 08:06:04.285275 20044 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'dce3e4c418cb' server FQDN: 'dce3e4c418cb' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 3: I1123 08:06:04.285287 20044 auxprop.cpp:181] Looking up auxiliary property '*userPassword' 3: I1123 08:06:04.285326 20044 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' 3: I1123 08:06:04.285348 20044 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'dce3e4c418cb' server FQDN: 'dce3e4c418cb' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 3: I1123 08:06:04.285357 20044 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true 3: I1123 08:06:04.285363 20044 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true 3: I1123 08:06:04.285380 20044 authenticator.cpp:318] Authentication success 3: I1123 08:06:04.285506 20059 authenticatee.cpp:299] Authentication success 3: I1123 08:06:04.285557 20045 master.cpp:8342] Successfully authenticated principal 'test-principal' at slave(784)@172.17.0.2:35090 3: I1123 08:06:04.285658 20047 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(1421)@172.17.0.2:35090 3: I1123 08:06:04.285847 20044 slave.cpp:1161] Successfully authenticated with master master@172.17.0.2:35090 3: I1123 08:06:04.286111 20044 
slave.cpp:1685] Will retry registration in 2.233007ms if necessary 3: I1123 08:06:04.286402 20046 master.cpp:6036] Received register agent message from slave(784)@172.17.0.2:35090 (dce3e4c418cb) 3: I1123 08:06:04.286550 20046 master.cpp:3872] Authorizing agent with principal 'test-principal' 3: I1123 08:06:04.287089 20056 master.cpp:6098] Authorized registration of agent at slave(784)@172.17.0.2:35090 (dce3e4c418cb) 3: I1123 08:06:04.287220 20056 master.cpp:6191] Registering agent at slave(784)@172.17.0.2:35090 (dce3e4c418cb) with id 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 3: I1123 08:06:04.287748 20053 registrar.cpp:495] Applied 1 operations in 63340ns; attempting to update the registry 3: I1123 08:06:04.288362 20053 registrar.cpp:552] Successfully updated the registry in 548864ns 3: I1123 08:06:04.288583 20042 master.cpp:6240] Admitted agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 at slave(784)@172.17.0.2:35090 (dce3e4c418cb) 3: I1123 08:06:04.289449 20042 master.cpp:6276] Registered agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 at slave(784)@172.17.0.2:35090 (dce3e4c418cb) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] 3: I1123 08:06:04.289716 20038 slave.cpp:1685] Will retry registration in 35.387012ms if necessary 3: I1123 08:06:04.289844 20043 hierarchical.cpp:600] Added agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 (dce3e4c418cb) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] (allocated: {}) 3: I1123 08:06:04.289922 20038 slave.cpp:1207] Registered with master master@172.17.0.2:35090; given agent ID 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 3: I1123 08:06:04.289942 20060 master.cpp:6036] Received register agent message from slave(784)@172.17.0.2:35090 (dce3e4c418cb) 3: I1123 08:06:04.290046 20060 master.cpp:3872] Authorizing agent with principal 'test-principal' 3: I1123 08:06:04.290077 20050 task_status_update_manager.cpp:188] Resuming sending task status updates 3: I1123 08:06:04.290155 20043 hierarchical.cpp:1457] Performed allocation for 1 agents in 152178ns 3: I1123 08:06:04.290329 20038 slave.cpp:1227] Checkpointing SlaveInfo to '/tmp/ContentType_ResourceProviderManagerHttpApiTest_ConvertResources_1_ycwsnc/meta/slaves/7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0/slave.info' 3: I1123 08:06:04.290479 20059 master.cpp:6098] Authorized registration of agent at slave(784)@172.17.0.2:35090 (dce3e4c418cb) 3: I1123 08:06:04.290560 20059 master.cpp:6169] Agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 at slave(784)@172.17.0.2:35090 (dce3e4c418cb) already registered, resending acknowledgement 3: I1123 08:06:04.290829 20038 slave.cpp:1288] Forwarding total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000] 3: I1123 08:06:04.290917 20038 slave.cpp:1298] Forwarding total oversubscribed resources {} 3: W1123 08:06:04.291487 20038 slave.cpp:1265] Already registered with master master@172.17.0.2:35090 3: I1123 08:06:04.291539 20038 slave.cpp:1288] Forwarding total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000] 3: I1123 08:06:04.291553 20037 master.cpp:7078] Received update of agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 at slave(784)@172.17.0.2:35090 (dce3e4c418cb) with total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000] 3: I1123 08:06:04.291610 20037 master.cpp:7091] Received update of agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 at slave(784)@172.17.0.2:35090 (dce3e4c418cb) with total oversubscribed resources {} 3: I1123 08:06:04.291649 20038 slave.cpp:1298] Forwarding total oversubscribed resources {} 3: I1123 08:06:04.291822 20037 master.cpp:7109] 
Ignoring update on agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 at slave(784)@172.17.0.2:35090 (dce3e4c418cb) as it reports no changes 3: I1123 08:06:04.292245 20037 master.cpp:7078] Received update of agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 at slave(784)@172.17.0.2:35090 (dce3e4c418cb) with total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000] 3: I1123 08:06:04.292304 20037 master.cpp:7091] Received update of agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 at slave(784)@172.17.0.2:35090 (dce3e4c418cb) with total oversubscribed resources {} 3: I1123 08:06:04.292381 20044 http_connection.hpp:221] New endpoint detected at http://172.17.0.2:35090/slave(784)/api/v1/resource_provider 3: I1123 08:06:04.292474 20037 master.cpp:7109] Ignoring update on agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 at slave(784)@172.17.0.2:35090 (dce3e4c418cb) as it reports no changes 3: I1123 08:06:04.292973 20036 scheduler.cpp:188] Version: 1.5.0 3: I1123 08:06:04.293253 20055 scheduler.cpp:311] Using default 'basic' HTTP authenticatee 3: I1123 08:06:04.293665 20049 scheduler.cpp:494] New master detected at master@172.17.0.2:35090 3: I1123 08:06:04.293694 20049 scheduler.cpp:503] Waiting for 0ns before initiating a re-(connection) attempt with the master 3: I1123 08:06:04.294288 20045 http_connection.hpp:277] Connected with the remote endpoint at http://172.17.0.2:35090/slave(784)/api/v1/resource_provider 3: I1123 08:06:04.294934 20046 http_connection.hpp:129] Sending 1 call to http://172.17.0.2:35090/slave(784)/api/v1/resource_provider 3: I1123 08:06:04.295969 20055 scheduler.cpp:385] Connected with the master at http://172.17.0.2:35090/master/api/v1/scheduler 3: I1123 08:06:04.296830 20059 process.cpp:3503] Handling HTTP event for process 'slave(784)' with path: '/slave(784)/api/v1/resource_provider' 3: I1123 08:06:04.297196 20054 scheduler.cpp:247] Sending SUBSCRIBE call to http://172.17.0.2:35090/master/api/v1/scheduler 3: I1123 08:06:04.298148 20052 http.cpp:1185] HTTP POST for /slave(784)/api/v1/resource_provider from 172.17.0.2:38204 3: I1123 08:06:04.298343 20052 process.cpp:3503] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler' 3: I1123 08:06:04.298552 20056 manager.cpp:386] Subscribing resource provider {""""name"""":""""test"""",""""type"""":""""org.apache.mesos.rp.test""""} 3: I1123 08:06:04.299580 20057 http.cpp:1185] HTTP POST for /master/api/v1/scheduler from 172.17.0.2:38208 3: I1123 08:06:04.299873 20057 master.cpp:2615] Received subscription request for HTTP framework 'default' 3: I1123 08:06:04.300007 20057 master.cpp:2280] Authorizing framework principal 'test-principal' to receive offers for roles '{ role }' 3: I1123 08:06:04.300477 20050 master.cpp:2750] Subscribing framework 'default' with checkpointing disabled and capabilities [ RESERVATION_REFINEMENT ] 3: I1123 08:06:04.300882 20056 http_connection.hpp:129] Sending 3 call to http://172.17.0.2:35090/slave(784)/api/v1/resource_provider 3: I1123 08:06:04.301465 20047 hierarchical.cpp:306] Added framework 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-0000 3: I1123 08:06:04.301954 20057 scheduler.cpp:739] Enqueuing event SUBSCRIBED received from http://172.17.0.2:35090/master/api/v1/scheduler 3: I1123 08:06:04.302021 20052 process.cpp:3503] Handling HTTP event for process 'slave(784)' with path: '/slave(784)/api/v1/resource_provider' 3: I1123 08:06:04.302435 20057 scheduler.cpp:739] Enqueuing event HEARTBEAT received from http://172.17.0.2:35090/master/api/v1/scheduler 3: I1123 08:06:04.303020 20055 
http.cpp:1185] HTTP POST for /slave(784)/api/v1/resource_provider from 172.17.0.2:38202 3: I1123 08:06:04.303211 20047 hierarchical.cpp:1457] Performed allocation for 1 agents in 1.549319ms 3: I1123 08:06:04.303767 20060 master.cpp:8142] Sending 1 offers to framework 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-0000 (default) 3: I1123 08:06:04.303828 20050 slave.cpp:6780] Handling resource provider message 'UPDATE_TOTAL_RESOURCES: 582a6138-5ac1-4c38-b407-576be39d0a82 disk[RAW]:200' 3: I1123 08:06:04.303941 20050 slave.cpp:6825] Forwarding new total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000]; disk[RAW]:200 3: I1123 08:06:04.304744 20060 master.cpp:7078] Received update of agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 at slave(784)@172.17.0.2:35090 (dce3e4c418cb) with total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000]; disk[RAW]:200 3: I1123 08:06:04.305366 20051 scheduler.cpp:739] Enqueuing event OFFERS received from http://172.17.0.2:35090/master/api/v1/scheduler 3: I1123 08:06:04.305364 20060 master.cpp:7141] Removing offer 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-O0 with resources cpus(allocated: role):2; mem(allocated: role):1024; disk(allocated: role):1024; ports(allocated: role):[31000-32000] on agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 at slave(784)@172.17.0.2:35090 (dce3e4c418cb) 3: I1123 08:06:04.306197 20060 master.cpp:10063] Removing offer 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-O0 3: I1123 08:06:04.306350 20058 hierarchical.cpp:667] Agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 (dce3e4c418cb) updated with total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000]; disk[RAW]:200 3: /mesos/src/tests/resource_provider_manager_tests.cpp:713: Failure 3: Value of: resources.empty() 3: Actual: true 3: Expected: false 3: I1123 08:06:04.307636 20058 hierarchical.cpp:1132] Recovered cpus(allocated: role):2; mem(allocated: role):1024; disk(allocated: role):1024; ports(allocated: role):[31000-32000] (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000]; disk[RAW]:200, allocated: {}) on agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 from framework 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-0000 3: I1123 08:06:04.308148 20051 master.cpp:1425] Framework 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-0000 (default) disconnected 3: I1123 08:06:04.308176 20051 master.cpp:3333] Deactivating framework 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-0000 (default) 3: I1123 08:06:04.308253 20051 master.cpp:3310] Disconnecting framework 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-0000 (default) 3: I1123 08:06:04.308284 20051 master.cpp:1440] Giving framework 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-0000 (default) 0ns to failover 3: I1123 08:06:04.308549 20055 master.cpp:7974] Framework failover timeout, removing framework 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-0000 (default) 3: I1123 08:06:04.308575 20055 master.cpp:8831] Removing framework 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-0000 (default) 3: I1123 08:06:04.308782 20038 slave.cpp:3270] Asked to shut down framework 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-0000 by master@172.17.0.2:35090 3: I1123 08:06:04.308811 20038 slave.cpp:3285] Cannot shut down unknown framework 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-0000 3: W1123 08:06:04.309376 20037 master.cpp:7988] Master returning resources offered to framework 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-0000 because the framework has terminated or is inactive 3: I1123 08:06:04.309382 20058 hierarchical.cpp:1457] Performed allocation for 1 agents in 1.597792ms 3: I1123 08:06:04.309464 20058 hierarchical.cpp:419] Deactivated 
framework 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-0000 3: I1123 08:06:04.310418 20058 hierarchical.cpp:358] Removed framework 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-0000 3: I1123 08:06:04.310714 20058 hierarchical.cpp:1132] Recovered cpus(allocated: role):2; mem(allocated: role):1024; disk(allocated: role):1024; ports(allocated: role):[31000-32000]; disk(allocated: role)[RAW]:200 (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000]; disk[RAW]:200, allocated: {}) on agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 from framework 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-0000 3: I1123 08:06:04.311563 20054 slave.cpp:883] Agent terminating 3: I1123 08:06:04.311776 20038 master.cpp:1311] Agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 at slave(784)@172.17.0.2:35090 (dce3e4c418cb) disconnected 3: I1123 08:06:04.311800 20038 master.cpp:3370] Disconnecting agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 at slave(784)@172.17.0.2:35090 (dce3e4c418cb) 3: I1123 08:06:04.311849 20038 master.cpp:3389] Deactivating agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 at slave(784)@172.17.0.2:35090 (dce3e4c418cb) 3: I1123 08:06:04.311949 20058 hierarchical.cpp:697] Agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 deactivated 3: I1123 08:06:04.318100 20036 master.cpp:1153] Master terminating 3: I1123 08:06:04.318877 20047 hierarchical.cpp:633] Removed agent 7c9d8e8c-3fb3-44c5-8505-488ada3e848e-S0 3: [ FAILED ] ContentType/ResourceProviderManagerHttpApiTest.ConvertResources/1, where ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8267","11/27/2017 10:32:24",2,"NestedMesosContainerizerTest.ROOT_CGROUPS_RecoverLauncherOrphans is flaky. "" Full log attached."""," ../../src/tests/containerizer/nested_mesos_containerizer_tests.cpp:1902 wait.get() is NONE ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8270","11/28/2017 07:57:04",3,"Add an agent endpoint to list all active resource providers ""Operators/Frameworks might need information about all resource providers currently running on an agent. An API endpoint should provide that information and include resource provider name and type.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8280","11/30/2017 00:04:03",3,"Mesos Containerizer GC should set 'layers' after checkpointing layer ids in provisioner. "" Please neglect the debugging logs like '111111'. To reproduce this issue, just continuously trigger image gc. The log above was from a scenario that we launch two nested containers. One sleeps 1 second, another sleep forever. This is related to this patch: https://github.com/apache/mesos/commit/e273efe6976434858edb85bbcf367a02e963a467#diff-a3593ed0ebd2b205775f7f04d9b5afe7 The root cause is that we did not set the 'layers' after we checkpoint the layer ids in provisioner. 
The log below is the prove: """," 11111 222222 333333 444444 11111 222222 333333 444444 I1129 23:24:45.469543 6592 registry_puller.cpp:395] Extracting layer tar ball '/tmp/mesos/store/docker/staging/MVgVC7/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 to rootfs '/tmp/mesos/store/docker/staging/MVgVC7/38135e3743e6dcb66bd1394b633053714333c00007b7cf930bfeebfda660c06e/rootfs.overlay' I1129 23:24:45.473287 6592 registry_puller.cpp:395] Extracting layer tar ball '/tmp/mesos/store/docker/staging/MVgVC7/sha256:b56ae66c29370df48e7377c8f9baa744a3958058a766793f821dadcb144a4647 to rootfs '/tmp/mesos/store/docker/staging/MVgVC7/b5815a31a59b66c909dbf6c670de78690d4b52649b8e283fc2bfd2594f61cca3/rootfs.overlay' I1129 23:24:45.582002 6594 registry_puller.cpp:395] Extracting layer tar ball '/tmp/mesos/store/docker/staging/6Zbc17/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 to rootfs '/tmp/mesos/store/docker/staging/6Zbc17/e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6/rootfs.overlay' I1129 23:24:45.589404 6595 metadata_manager.cpp:167] Successfully cached image 'alpine' I1129 23:24:45.590204 6594 registry_puller.cpp:395] Extracting layer tar ball '/tmp/mesos/store/docker/staging/6Zbc17/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 to rootfs '/tmp/mesos/store/docker/staging/6Zbc17/be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e/rootfs.overlay' I1129 23:24:45.595190 6594 registry_puller.cpp:395] Extracting layer tar ball '/tmp/mesos/store/docker/staging/6Zbc17/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 to rootfs '/tmp/mesos/store/docker/staging/6Zbc17/53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f/rootfs.overlay' I1129 23:24:45.599500 6594 registry_puller.cpp:395] Extracting layer tar ball '/tmp/mesos/store/docker/staging/6Zbc17/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 to rootfs '/tmp/mesos/store/docker/staging/6Zbc17/a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721/rootfs.overlay' I1129 23:24:45.602047 6597 provisioner.cpp:506] Provisioning image rootfs '/tmp/provisioner/containers/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/01de09c5-d8e9-412e-8825-a592d2c875e5/backends/overlay/rootfses/b5d48445-848d-4274-a4f8-e909351ebc35' for container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 using overlay backend I1129 23:24:45.602751 6594 registry_puller.cpp:395] Extracting layer tar ball '/tmp/mesos/store/docker/staging/6Zbc17/sha256:1db09adb5ddd7f1a07b6d585a7db747a51c7bd17418d47e91f901bdf420abd66 to rootfs '/tmp/mesos/store/docker/staging/6Zbc17/120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16/rootfs.overlay' I1129 23:24:45.603054 6596 overlay.cpp:168] Created symlink '/tmp/provisioner/containers/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/01de09c5-d8e9-412e-8825-a592d2c875e5/backends/overlay/scratch/b5d48445-848d-4274-a4f8-e909351ebc35/links' -> '/tmp/xAWQ8y' I1129 23:24:45.604398 6596 overlay.cpp:196] Provisioning image rootfs with overlayfs: 'lowerdir=/tmp/xAWQ8y/1:/tmp/xAWQ8y/0,upperdir=/tmp/provisioner/containers/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/01de09c5-d8e9-412e-8825-a592d2c875e5/backends/overlay/scratch/b5d48445-848d-4274-a4f8-e909351ebc35/upperdir,workdir=/tmp/provisioner/containers/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/01de09c5-d8e9-412e-8825-a592d2c875e5/backends/overlay/scratch/b5d48445-848d-4274-a4f8-e909351ebc35/workdir' 
I1129 23:24:45.607802 6594 registry_puller.cpp:395] Extracting layer tar ball '/tmp/mesos/store/docker/staging/6Zbc17/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 to rootfs '/tmp/mesos/store/docker/staging/6Zbc17/42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229/rootfs.overlay' I1129 23:24:45.612139 6594 registry_puller.cpp:395] Extracting layer tar ball '/tmp/mesos/store/docker/staging/6Zbc17/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 to rootfs '/tmp/mesos/store/docker/staging/6Zbc17/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/rootfs.overlay' I1129 23:24:45.612253 6593 containerizer.cpp:1369] Checkpointed ContainerConfig at '/var/run/mesos/containers/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/01de09c5-d8e9-412e-8825-a592d2c875e5/config' I1129 23:24:45.612298 6593 containerizer.cpp:2926] Transitioning the state of container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 from PROVISIONING to PREPARING I1129 23:24:45.625658 6596 containerizer.cpp:1838] Launching 'mesos-containerizer' with flags '--help=""""false"""" --launch_info=""""{""""clone_namespaces"""":[131072],""""command"""":{""""shell"""":true,""""value"""":""""sleep 1""""},""""environment"""":{""""variables"""":[{""""name"""":""""MESOS_SANDBOX"""",""""type"""":""""VALUE"""",""""value"""":""""\/mnt\/mesos\/sandbox""""},{""""name"""":""""PATH"""",""""type"""":""""VALUE"""",""""value"""":""""\/usr\/local\/sbin:\/usr\/local\/bin:\/usr\/sbin:\/usr\/bin:\/sbin:\/bin""""},{""""name"""":""""MESOS_CONTAINER_IP"""",""""type"""":""""VALUE"""",""""value"""":""""10.0.2.15""""}]},""""pre_exec_commands"""":[{""""arguments"""":[""""mesos-containerizer"""",""""mount"""",""""--help=false"""",""""--operation=make-rslave"""",""""--path=\/""""],""""shell"""":false,""""value"""":""""\/vagrant\/mesos\/build\/src\/mesos-containerizer""""},{""""arguments"""":[""""mount"""",""""-n"""",""""--rbind"""",""""\/tmp\/slaves\/65fa58c5-48c6-4998-b336-ceb9bcd2ec43-S0\/frameworks\/55b825d4-c922-46bb-ab8e-4e01abf1a756-0000\/executors\/default-executor\/runs\/3bbc3fd1-0138-43a9-94ba-d017d813daac\/containers\/01de09c5-d8e9-412e-8825-a592d2c875e5"""",""""\/tmp\/provisioner\/containers\/3bbc3fd1-0138-43a9-94ba-d017d813daac\/containers\/01de09c5-d8e9-412e-8825-a592d2c875e5\/backends\/overlay\/rootfses\/b5d48445-848d-4274-a4f8-e909351ebc35\/mnt\/mesos\/sandbox""""],""""shell"""":false,""""value"""":""""mount""""}],""""rootfs"""":""""\/tmp\/provisioner\/containers\/3bbc3fd1-0138-43a9-94ba-d017d813daac\/containers\/01de09c5-d8e9-412e-8825-a592d2c875e5\/backends\/overlay\/rootfses\/b5d48445-848d-4274-a4f8-e909351ebc35"""",""""task_environment"""":{},""""user"""":""""root"""",""""working_directory"""":""""\/mnt\/mesos\/sandbox""""}"""" --pipe_read=""""12"""" --pipe_write=""""15"""" --runtime_directory=""""/var/run/mesos/containers/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/01de09c5-d8e9-412e-8825-a592d2c875e5"""" --unshare_namespace_mnt=""""false""""' I1129 23:24:45.626317 6598 linux_launcher.cpp:438] Launching nested container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 and cloning with namespaces CLONE_NEWNS I1129 23:24:45.633211 6598 systemd.cpp:96] Assigned child process '6745' to 'mesos_executors.slice' I1129 23:24:45.636270 6596 containerizer.cpp:2926] Transitioning the state of container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 from PREPARING to ISOLATING I1129 23:24:45.691830 6597 
metadata_manager.cpp:167] Successfully cached image 'mesosphere/inky' I1129 23:24:45.694399 6594 provisioner.cpp:506] Provisioning image rootfs '/tmp/provisioner/containers/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/7be37efd-7ec4-459a-a51a-97f6be99c3c3/backends/overlay/rootfses/1187cc83-a23a-4390-9c28-092a7b7690b5' for container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 using overlay backend I1129 23:24:45.694919 6596 overlay.cpp:168] Created symlink '/tmp/provisioner/containers/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/7be37efd-7ec4-459a-a51a-97f6be99c3c3/backends/overlay/scratch/1187cc83-a23a-4390-9c28-092a7b7690b5/links' -> '/tmp/GXhXiT' I1129 23:24:45.695103 6596 overlay.cpp:196] Provisioning image rootfs with overlayfs: 'lowerdir=/tmp/GXhXiT/6:/tmp/GXhXiT/5:/tmp/GXhXiT/4:/tmp/GXhXiT/3:/tmp/GXhXiT/2:/tmp/GXhXiT/1:/tmp/GXhXiT/0,upperdir=/tmp/provisioner/containers/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/7be37efd-7ec4-459a-a51a-97f6be99c3c3/backends/overlay/scratch/1187cc83-a23a-4390-9c28-092a7b7690b5/upperdir,workdir=/tmp/provisioner/containers/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/7be37efd-7ec4-459a-a51a-97f6be99c3c3/backends/overlay/scratch/1187cc83-a23a-4390-9c28-092a7b7690b5/workdir' I1129 23:24:45.696255 6596 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 has no checkpointed layers I1129 23:24:45.696349 6596 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 has no checkpointed layers I1129 23:24:45.696449 6593 containerizer.cpp:1369] Checkpointed ContainerConfig at '/var/run/mesos/containers/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/7be37efd-7ec4-459a-a51a-97f6be99c3c3/config' I1129 23:24:45.696506 6593 containerizer.cpp:2926] Transitioning the state of container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 from PROVISIONING to PREPARING I1129 23:24:45.697865 6595 store.cpp:531] Layer 'be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e' is retained by image store cache I1129 23:24:45.697918 6595 store.cpp:531] Layer '511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158' is retained by image store cache I1129 23:24:45.697968 6595 store.cpp:531] Layer '53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f' is retained by image store cache I1129 23:24:45.697999 6595 store.cpp:531] Layer 'e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6' is retained by image store cache I1129 23:24:45.698025 6595 store.cpp:531] Layer '120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16' is retained by image store cache I1129 23:24:45.698050 6595 store.cpp:531] Layer '38135e3743e6dcb66bd1394b633053714333c00007b7cf930bfeebfda660c06e' is retained by image store cache I1129 23:24:45.698076 6595 store.cpp:531] Layer 'a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721' is retained by image store cache I1129 23:24:45.698104 6595 store.cpp:531] Layer '42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229' is retained by image store cache I1129 23:24:45.698129 6595 store.cpp:531] Layer 'b5815a31a59b66c909dbf6c670de78690d4b52649b8e283fc2bfd2594f61cca3' is retained by image store cache I1129 23:24:45.698894 6596 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 has no checkpointed layers I1129 23:24:45.698966 6596 provisioner.cpp:714] Container 
3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 has no checkpointed layers I1129 23:24:45.700333 6596 store.cpp:531] Layer 'be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e' is retained by image store cache I1129 23:24:45.700394 6596 store.cpp:531] Layer '511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158' is retained by image store cache I1129 23:24:45.700412 6596 store.cpp:531] Layer '53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f' is retained by image store cache I1129 23:24:45.700428 6596 store.cpp:531] Layer 'e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6' is retained by image store cache I1129 23:24:45.700441 6596 store.cpp:531] Layer '120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16' is retained by image store cache I1129 23:24:45.700454 6596 store.cpp:531] Layer '38135e3743e6dcb66bd1394b633053714333c00007b7cf930bfeebfda660c06e' is retained by image store cache I1129 23:24:45.700467 6596 store.cpp:531] Layer 'a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721' is retained by image store cache I1129 23:24:45.700480 6596 store.cpp:531] Layer '42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229' is retained by image store cache I1129 23:24:45.700495 6596 store.cpp:531] Layer 'b5815a31a59b66c909dbf6c670de78690d4b52649b8e283fc2bfd2594f61cca3' is retained by image store cache I1129 23:24:45.702491 6594 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 has no checkpointed layers I1129 23:24:45.702554 6594 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 has no checkpointed layers I1129 23:24:45.703707 6592 store.cpp:531] Layer 'be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e' is retained by image store cache I1129 23:24:45.703783 6592 store.cpp:531] Layer '511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158' is retained by image store cache I1129 23:24:45.703812 6592 store.cpp:531] Layer '53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f' is retained by image store cache I1129 23:24:45.703816 6592 store.cpp:531] Layer 'e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6' is retained by image store cache I1129 23:24:45.703816 6592 store.cpp:531] Layer '120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16' is retained by image store cache I1129 23:24:45.704164 6592 store.cpp:531] Layer '38135e3743e6dcb66bd1394b633053714333c00007b7cf930bfeebfda660c06e' is retained by image store cache I1129 23:24:45.704208 6592 store.cpp:531] Layer 'a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721' is retained by image store cache I1129 23:24:45.704255 6592 store.cpp:531] Layer '42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229' is retained by image store cache I1129 23:24:45.704285 6592 store.cpp:531] Layer 'b5815a31a59b66c909dbf6c670de78690d4b52649b8e283fc2bfd2594f61cca3' is retained by image store cache I1129 23:24:45.704814 6595 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 has no checkpointed layers I1129 23:24:45.704861 6595 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 has no checkpointed layers I1129 23:24:45.708112 6592 store.cpp:531] Layer 'be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e' is retained by image store cache I1129 
23:24:45.708204 6592 store.cpp:531] Layer '511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158' is retained by image store cache I1129 23:24:45.708238 6592 store.cpp:531] Layer '53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f' is retained by image store cache I1129 23:24:45.708264 6592 store.cpp:531] Layer 'e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6' is retained by image store cache I1129 23:24:45.708375 6592 store.cpp:531] Layer '120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16' is retained by image store cache I1129 23:24:45.708407 6592 store.cpp:531] Layer '38135e3743e6dcb66bd1394b633053714333c00007b7cf930bfeebfda660c06e' is retained by image store cache I1129 23:24:45.708472 6592 store.cpp:531] Layer 'a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721' is retained by image store cache I1129 23:24:45.708514 6592 store.cpp:531] Layer '42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229' is retained by image store cache I1129 23:24:45.708545 6592 store.cpp:531] Layer 'b5815a31a59b66c909dbf6c670de78690d4b52649b8e283fc2bfd2594f61cca3' is retained by image store cache I1129 23:24:45.709048 6596 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 has no checkpointed layers I1129 23:24:45.709161 6596 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 has no checkpointed layers I1129 23:24:45.710321 6594 store.cpp:531] Layer 'be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e' is retained by image store cache I1129 23:24:45.710382 6594 store.cpp:531] Layer '511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158' is retained by image store cache I1129 23:24:45.710412 6594 store.cpp:531] Layer '53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f' is retained by image store cache I1129 23:24:45.710436 6594 store.cpp:531] Layer 'e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6' is retained by image store cache I1129 23:24:45.710458 6594 store.cpp:531] Layer '120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16' is retained by image store cache I1129 23:24:45.710480 6594 store.cpp:531] Layer '38135e3743e6dcb66bd1394b633053714333c00007b7cf930bfeebfda660c06e' is retained by image store cache I1129 23:24:45.710522 6594 store.cpp:531] Layer 'a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721' is retained by image store cache I1129 23:24:45.710551 6594 store.cpp:531] Layer '42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229' is retained by image store cache I1129 23:24:45.710590 6594 store.cpp:531] Layer 'b5815a31a59b66c909dbf6c670de78690d4b52649b8e283fc2bfd2594f61cca3' is retained by image store cache I1129 23:24:45.713176 6594 containerizer.cpp:1838] Launching 'mesos-containerizer' with flags '--help=""""false"""" --launch_info=""""{""""clone_namespaces"""":[131072],""""command"""":{""""shell"""":true,""""value"""":""""sleep 
100000""""},""""environment"""":{""""variables"""":[{""""name"""":""""MESOS_SANDBOX"""",""""type"""":""""VALUE"""",""""value"""":""""\/mnt\/mesos\/sandbox""""},{""""name"""":""""HOME"""",""""type"""":""""VALUE"""",""""value"""":""""\/""""},{""""name"""":""""PATH"""",""""type"""":""""VALUE"""",""""value"""":""""\/usr\/local\/sbin:\/usr\/local\/bin:\/usr\/sbin:\/usr\/bin:\/sbin:\/bin""""},{""""name"""":""""MESOS_CONTAINER_IP"""",""""type"""":""""VALUE"""",""""value"""":""""10.0.2.15""""}]},""""pre_exec_commands"""":[{""""arguments"""":[""""mesos-containerizer"""",""""mount"""",""""--help=false"""",""""--operation=make-rslave"""",""""--path=\/""""],""""shell"""":false,""""value"""":""""\/vagrant\/mesos\/build\/src\/mesos-containerizer""""},{""""arguments"""":[""""mount"""",""""-n"""",""""--rbind"""",""""\/tmp\/slaves\/65fa58c5-48c6-4998-b336-ceb9bcd2ec43-S0\/frameworks\/55b825d4-c922-46bb-ab8e-4e01abf1a756-0000\/executors\/default-executor\/runs\/3bbc3fd1-0138-43a9-94ba-d017d813daac\/containers\/7be37efd-7ec4-459a-a51a-97f6be99c3c3"""",""""\/tmp\/provisioner\/containers\/3bbc3fd1-0138-43a9-94ba-d017d813daac\/containers\/7be37efd-7ec4-459a-a51a-97f6be99c3c3\/backends\/overlay\/rootfses\/1187cc83-a23a-4390-9c28-092a7b7690b5\/mnt\/mesos\/sandbox""""],""""shell"""":false,""""value"""":""""mount""""}],""""rootfs"""":""""\/tmp\/provisioner\/containers\/3bbc3fd1-0138-43a9-94ba-d017d813daac\/containers\/7be37efd-7ec4-459a-a51a-97f6be99c3c3\/backends\/overlay\/rootfses\/1187cc83-a23a-4390-9c28-092a7b7690b5"""",""""task_environment"""":{},""""user"""":""""root"""",""""working_directory"""":""""\/mnt\/mesos\/sandbox""""}"""" --pipe_read=""""13"""" --pipe_write=""""14"""" --runtime_directory=""""/var/run/mesos/containers/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/7be37efd-7ec4-459a-a51a-97f6be99c3c3"""" --unshare_namespace_mnt=""""false""""' I1129 23:24:45.713954 6597 linux_launcher.cpp:438] Launching nested container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 and cloning with namespaces CLONE_NEWNS I1129 23:24:45.721781 6597 systemd.cpp:96] Assigned child process '6775' to 'mesos_executors.slice' I1129 23:24:45.725494 6594 containerizer.cpp:2926] Transitioning the state of container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 from PREPARING to ISOLATING I1129 23:24:45.791635 6595 containerizer.cpp:2926] Transitioning the state of container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 from ISOLATING to FETCHING I1129 23:24:45.791880 6591 fetcher.cpp:379] Starting to fetch URIs for container: 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5, directory: /tmp/slaves/65fa58c5-48c6-4998-b336-ceb9bcd2ec43-S0/frameworks/55b825d4-c922-46bb-ab8e-4e01abf1a756-0000/executors/default-executor/runs/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/01de09c5-d8e9-412e-8825-a592d2c875e5 I1129 23:24:45.792626 6591 containerizer.cpp:2926] Transitioning the state of container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 from FETCHING to RUNNING 11111 222222 333333 444444 I1129 23:24:45.807262 6592 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 has no checkpointed layers I1129 23:24:45.807375 6592 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 has no checkpointed layers I1129 23:24:45.808658 6591 store.cpp:531] Layer 
'be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e' is retained by image store cache I1129 23:24:45.808843 6591 store.cpp:531] Layer '511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158' is retained by image store cache I1129 23:24:45.808869 6591 store.cpp:531] Layer '53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f' is retained by image store cache I1129 23:24:45.808897 6591 store.cpp:531] Layer 'e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6' is retained by image store cache I1129 23:24:45.808962 6591 store.cpp:531] Layer '120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16' is retained by image store cache I1129 23:24:45.808990 6591 store.cpp:531] Layer '38135e3743e6dcb66bd1394b633053714333c00007b7cf930bfeebfda660c06e' is retained by image store cache I1129 23:24:45.809012 6591 store.cpp:531] Layer 'a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721' is retained by image store cache I1129 23:24:45.809036 6591 store.cpp:531] Layer '42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229' is retained by image store cache I1129 23:24:45.809057 6591 store.cpp:531] Layer 'b5815a31a59b66c909dbf6c670de78690d4b52649b8e283fc2bfd2594f61cca3' is retained by image store cache I1129 23:24:45.893280 6596 containerizer.cpp:2926] Transitioning the state of container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 from ISOLATING to FETCHING I1129 23:24:45.893523 6596 fetcher.cpp:379] Starting to fetch URIs for container: 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3, directory: /tmp/slaves/65fa58c5-48c6-4998-b336-ceb9bcd2ec43-S0/frameworks/55b825d4-c922-46bb-ab8e-4e01abf1a756-0000/executors/default-executor/runs/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/7be37efd-7ec4-459a-a51a-97f6be99c3c3 I1129 23:24:45.894335 6594 containerizer.cpp:2926] Transitioning the state of container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 from FETCHING to RUNNING I1129 23:24:45.902606 6598 process.cpp:3503] Handling HTTP event for process 'slave(1)' with path: '/slave(1)/api/v1/executor' I1129 23:24:45.903908 6597 process.cpp:3503] Handling HTTP event for process 'slave(1)' with path: '/slave(1)/api/v1' I1129 23:24:45.904618 6597 process.cpp:3503] Handling HTTP event for process 'slave(1)' with path: '/slave(1)/api/v1' I1129 23:24:45.908681 6597 http.cpp:1185] HTTP POST for /slave(1)/api/v1 from 10.0.2.15:57620 I1129 23:24:45.909113 6597 http.cpp:1185] HTTP POST for /slave(1)/api/v1 from 10.0.2.15:57622 I1129 23:24:45.909708 6597 http.cpp:2589] Processing WAIT_NESTED_CONTAINER call for container '3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5' I1129 23:24:45.910148 6597 http.cpp:2589] Processing WAIT_NESTED_CONTAINER call for container '3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3' I1129 23:24:45.938350 6596 process.cpp:3503] Handling HTTP event for process 'slave(1)' with path: '/slave(1)/api/v1/executor' I1129 23:24:45.938781 6596 http.cpp:1185] HTTP POST for /slave(1)/api/v1/executor from 10.0.2.15:57564 I1129 23:24:45.939141 6596 slave.cpp:4584] Handling status update TASK_RUNNING (UUID: 6ba2641b-ff6a-4ac2-9123-23579534147d) for task 1 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 I1129 23:24:45.940809 6591 http.cpp:1185] HTTP POST for /slave(1)/api/v1/executor from 10.0.2.15:57564 I1129 23:24:45.941130 6591 slave.cpp:4584] Handling status update TASK_RUNNING (UUID: 
f22fe2c5-2a2e-43c9-9f22-a7da351bab08) for task 2 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 I1129 23:24:45.942906 6591 task_status_update_manager.cpp:328] Received task status update TASK_RUNNING (UUID: 6ba2641b-ff6a-4ac2-9123-23579534147d) for task 1 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 I1129 23:24:45.943076 6591 task_status_update_manager.cpp:383] Forwarding task status update TASK_RUNNING (UUID: 6ba2641b-ff6a-4ac2-9123-23579534147d) for task 1 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 to the agent I1129 23:24:45.943322 6595 slave.cpp:5067] Forwarding the update TASK_RUNNING (UUID: 6ba2641b-ff6a-4ac2-9123-23579534147d) for task 1 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 to master@127.0.0.1:5050 I1129 23:24:45.943332 6591 task_status_update_manager.cpp:328] Received task status update TASK_RUNNING (UUID: f22fe2c5-2a2e-43c9-9f22-a7da351bab08) for task 2 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 I1129 23:24:45.943506 6591 task_status_update_manager.cpp:383] Forwarding task status update TASK_RUNNING (UUID: f22fe2c5-2a2e-43c9-9f22-a7da351bab08) for task 2 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 to the agent I1129 23:24:45.943717 6595 slave.cpp:4960] Task status update manager successfully handled status update TASK_RUNNING (UUID: 6ba2641b-ff6a-4ac2-9123-23579534147d) for task 1 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 I1129 23:24:45.943905 6595 slave.cpp:5067] Forwarding the update TASK_RUNNING (UUID: f22fe2c5-2a2e-43c9-9f22-a7da351bab08) for task 2 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 to master@127.0.0.1:5050 I1129 23:24:45.943905 6595 slave.cpp:4960] Task status update manager successfully handled status update TASK_RUNNING (UUID: f22fe2c5-2a2e-43c9-9f22-a7da351bab08) for task 2 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 I1129 23:24:45.992866 6591 task_status_update_manager.cpp:401] Received task status update acknowledgement (UUID: 6ba2641b-ff6a-4ac2-9123-23579534147d) for task 1 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 I1129 23:24:45.993261 6595 slave.cpp:3868] Task status update manager successfully handled status update acknowledgement (UUID: 6ba2641b-ff6a-4ac2-9123-23579534147d) for task 1 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 I1129 23:24:45.993968 6593 task_status_update_manager.cpp:401] Received task status update acknowledgement (UUID: f22fe2c5-2a2e-43c9-9f22-a7da351bab08) for task 2 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 I1129 23:24:45.994295 6598 slave.cpp:3868] Task status update manager successfully handled status update acknowledgement (UUID: f22fe2c5-2a2e-43c9-9f22-a7da351bab08) for task 2 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 11111 222222 333333 444444 I1129 23:24:46.808684 6592 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 has no checkpointed layers I1129 23:24:46.808876 6592 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 has no checkpointed layers I1129 23:24:46.810683 6593 store.cpp:531] Layer 'be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e' is retained by image store cache I1129 23:24:46.810751 6593 store.cpp:531] Layer '511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158' is retained by image store cache I1129 23:24:46.810781 6593 store.cpp:531] Layer '53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f' is retained by 
image store cache I1129 23:24:46.810808 6593 store.cpp:531] Layer 'e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6' is retained by image store cache I1129 23:24:46.810834 6593 store.cpp:531] Layer '120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16' is retained by image store cache I1129 23:24:46.810860 6593 store.cpp:531] Layer '38135e3743e6dcb66bd1394b633053714333c00007b7cf930bfeebfda660c06e' is retained by image store cache I1129 23:24:46.810885 6593 store.cpp:531] Layer 'a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721' is retained by image store cache I1129 23:24:46.810911 6593 store.cpp:531] Layer '42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229' is retained by image store cache I1129 23:24:46.810937 6593 store.cpp:531] Layer 'b5815a31a59b66c909dbf6c670de78690d4b52649b8e283fc2bfd2594f61cca3' is retained by image store cache I1129 23:24:47.057590 6596 containerizer.cpp:2775] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 has exited I1129 23:24:47.057667 6596 containerizer.cpp:2324] Destroying container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 in RUNNING state I1129 23:24:47.057695 6596 containerizer.cpp:2926] Transitioning the state of container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 from RUNNING to DESTROYING I1129 23:24:47.058082 6596 linux_launcher.cpp:514] Asked to destroy container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 I1129 23:24:47.059027 6596 linux_launcher.cpp:560] Using freezer to destroy cgroup mesos/3bbc3fd1-0138-43a9-94ba-d017d813daac/mesos/01de09c5-d8e9-412e-8825-a592d2c875e5 I1129 23:24:47.060667 6596 cgroups.cpp:3058] Freezing cgroup /sys/fs/cgroup/freezer/mesos/3bbc3fd1-0138-43a9-94ba-d017d813daac/mesos/01de09c5-d8e9-412e-8825-a592d2c875e5 I1129 23:24:47.062700 6597 cgroups.cpp:1413] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/3bbc3fd1-0138-43a9-94ba-d017d813daac/mesos/01de09c5-d8e9-412e-8825-a592d2c875e5 after 1.838336ms I1129 23:24:47.064627 6592 cgroups.cpp:3076] Thawing cgroup /sys/fs/cgroup/freezer/mesos/3bbc3fd1-0138-43a9-94ba-d017d813daac/mesos/01de09c5-d8e9-412e-8825-a592d2c875e5 I1129 23:24:47.066498 6598 cgroups.cpp:1442] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/3bbc3fd1-0138-43a9-94ba-d017d813daac/mesos/01de09c5-d8e9-412e-8825-a592d2c875e5 after 1.642752ms I1129 23:24:47.071521 6592 provisioner.cpp:648] Destroying container rootfs at '/tmp/provisioner/containers/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/01de09c5-d8e9-412e-8825-a592d2c875e5/backends/overlay/rootfses/b5d48445-848d-4274-a4f8-e909351ebc35' for container 3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 I1129 23:24:47.098203 6596 overlay.cpp:296] Removed temporary directory '/tmp/xAWQ8y' pointed by '/tmp/provisioner/containers/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/01de09c5-d8e9-412e-8825-a592d2c875e5/backends/overlay/scratch/b5d48445-848d-4274-a4f8-e909351ebc35/links' I1129 23:24:47.100265 6591 containerizer.cpp:2613] Checkpointing termination state to nested container's runtime directory '/var/run/mesos/containers/3bbc3fd1-0138-43a9-94ba-d017d813daac/containers/01de09c5-d8e9-412e-8825-a592d2c875e5/termination' I1129 23:24:47.107206 6594 process.cpp:3503] Handling HTTP event for process 'slave(1)' with path: '/slave(1)/api/v1/executor' I1129 23:24:47.151911 6594 http.cpp:1185] HTTP POST for /slave(1)/api/v1/executor from 
10.0.2.15:57564 I1129 23:24:47.152243 6594 slave.cpp:4584] Handling status update TASK_FINISHED (UUID: 288dbdf5-3dde-48fa-b2d4-bfd6c098ea44) for task 1 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 I1129 23:24:47.154391 6594 task_status_update_manager.cpp:328] Received task status update TASK_FINISHED (UUID: 288dbdf5-3dde-48fa-b2d4-bfd6c098ea44) for task 1 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 I1129 23:24:47.154578 6594 task_status_update_manager.cpp:383] Forwarding task status update TASK_FINISHED (UUID: 288dbdf5-3dde-48fa-b2d4-bfd6c098ea44) for task 1 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 to the agent I1129 23:24:47.154810 6593 slave.cpp:5067] Forwarding the update TASK_FINISHED (UUID: 288dbdf5-3dde-48fa-b2d4-bfd6c098ea44) for task 1 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 to master@127.0.0.1:5050 I1129 23:24:47.157249 6593 slave.cpp:4960] Task status update manager successfully handled status update TASK_FINISHED (UUID: 288dbdf5-3dde-48fa-b2d4-bfd6c098ea44) for task 1 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 I1129 23:24:47.385977 6592 task_status_update_manager.cpp:401] Received task status update acknowledgement (UUID: 288dbdf5-3dde-48fa-b2d4-bfd6c098ea44) for task 1 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 I1129 23:24:47.386334 6592 task_status_update_manager.cpp:538] Cleaning up status update stream for task 1 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 I1129 23:24:47.387459 6592 slave.cpp:3868] Task status update manager successfully handled status update acknowledgement (UUID: 288dbdf5-3dde-48fa-b2d4-bfd6c098ea44) for task 1 of framework 55b825d4-c922-46bb-ab8e-4e01abf1a756-0000 I1129 23:24:47.387568 6592 slave.cpp:8457] Completing task 1 11111 222222 333333 444444 I1129 23:24:47.818768 6595 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 has no checkpointed layers I1129 23:24:47.824591 6598 store.cpp:531] Layer 'be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e' is retained by image store cache I1129 23:24:47.824724 6598 store.cpp:531] Layer '511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158' is retained by image store cache I1129 23:24:47.824753 6598 store.cpp:531] Layer '53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f' is retained by image store cache I1129 23:24:47.824767 6598 store.cpp:531] Layer 'e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6' is retained by image store cache I1129 23:24:47.824782 6598 store.cpp:531] Layer '120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16' is retained by image store cache I1129 23:24:47.824861 6598 store.cpp:550] Marking layer '38135e3743e6dcb66bd1394b633053714333c00007b7cf930bfeebfda660c06e' to gc by renaming '/tmp/mesos/store/docker/layers/38135e3743e6dcb66bd1394b633053714333c00007b7cf930bfeebfda660c06e' to '/tmp/mesos/store/docker/gc/38135e3743e6dcb66bd1394b633053714333c00007b7cf930bfeebfda660c06e.1511997887824822784' I1129 23:24:47.824918 6598 store.cpp:531] Layer 'a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721' is retained by image store cache I1129 23:24:47.824960 6598 store.cpp:531] Layer '42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229' is retained by image store cache I1129 23:24:47.825088 6598 store.cpp:550] Marking layer 'b5815a31a59b66c909dbf6c670de78690d4b52649b8e283fc2bfd2594f61cca3' to gc by renaming 
'/tmp/mesos/store/docker/layers/b5815a31a59b66c909dbf6c670de78690d4b52649b8e283fc2bfd2594f61cca3' to '/tmp/mesos/store/docker/gc/b5815a31a59b66c909dbf6c670de78690d4b52649b8e283fc2bfd2594f61cca3.1511997887825015040' I1129 23:24:47.825389 6598 store.cpp:577] Deleting path '/tmp/mesos/store/docker/gc/b5815a31a59b66c909dbf6c670de78690d4b52649b8e283fc2bfd2594f61cca3.1511997887825015040' I1129 23:24:47.829903 6598 store.cpp:584] Deleted '/tmp/mesos/store/docker/gc/b5815a31a59b66c909dbf6c670de78690d4b52649b8e283fc2bfd2594f61cca3.1511997887825015040' I1129 23:24:47.829980 6598 store.cpp:577] Deleting path '/tmp/mesos/store/docker/gc/38135e3743e6dcb66bd1394b633053714333c00007b7cf930bfeebfda660c06e.1511997887824822784' I1129 23:24:47.830047 6598 store.cpp:584] Deleted '/tmp/mesos/store/docker/gc/38135e3743e6dcb66bd1394b633053714333c00007b7cf930bfeebfda660c06e.1511997887824822784' 11111 222222 333333 444444 I1129 23:24:48.829519 6595 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 has no checkpointed layers I1129 23:24:48.831161 6598 store.cpp:531] Layer 'be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e' is retained by image store cache I1129 23:24:48.831225 6598 store.cpp:531] Layer '511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158' is retained by image store cache I1129 23:24:48.831248 6598 store.cpp:531] Layer '53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f' is retained by image store cache I1129 23:24:48.831266 6598 store.cpp:531] Layer 'e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6' is retained by image store cache I1129 23:24:48.831284 6598 store.cpp:531] Layer '120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16' is retained by image store cache I1129 23:24:48.831302 6598 store.cpp:531] Layer 'a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721' is retained by image store cache I1129 23:24:48.831321 6598 store.cpp:531] Layer '42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229' is retained by image store cache 11111 222222 333333 444444 I1129 23:24:49.830904 6597 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 has no checkpointed layers I1129 23:24:49.832487 6597 store.cpp:531] Layer 'be4ce2753831b8952a5b797cf45b2230e1befead6f5db0630bcb24a5f554255e' is retained by image store cache I1129 23:24:49.832584 6597 store.cpp:531] Layer '511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158' is retained by image store cache I1129 23:24:49.833329 6597 store.cpp:531] Layer '53b5066c5a7dff5d6f6ef0c1945572d6578c083d550d2a3d575b4cdf7460306f' is retained by image store cache I1129 23:24:49.833367 6597 store.cpp:531] Layer 'e28617c6dd2169bfe2b10017dfaa04bd7183ff840c4f78ebe73fca2a89effeb6' is retained by image store cache I1129 23:24:49.833387 6597 store.cpp:531] Layer '120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16' is retained by image store cache I1129 23:24:49.833406 6597 store.cpp:531] Layer 'a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721' is retained by image store cache I1129 23:24:49.833425 6597 store.cpp:531] Layer '42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229' is retained by image store cache I1129 23:24:45.698894 6596 provisioner.cpp:714] Container 3bbc3fd1-0138-43a9-94ba-d017d813daac.7be37efd-7ec4-459a-a51a-97f6be99c3c3 has no checkpointed layers I1129 23:24:45.698966 6596 provisioner.cpp:714] Container 
3bbc3fd1-0138-43a9-94ba-d017d813daac.01de09c5-d8e9-412e-8825-a592d2c875e5 has no checkpointed layers ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0 +"MESOS-8288","12/01/2017 13:15:36",3,"SlaveTest.IgnoreV0ExecutorIfItReregistersWithoutReconnect is flaky. "" Full log attached."""," ../../src/tests/slave_tests.cpp:7888 Actual function call count doesn't match EXPECT_CALL(exec, shutdown(_))... Expected: to be called once Actual: never called - unsatisfied and active ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8289","12/01/2017 14:04:03",3,"ReservationTest.MasterFailover is flaky when run with `RESOURCE_PROVIDER` capability. ""On a system under load, {{ResourceProviderCapability/ReservationTest.MasterFailover/1}} can fail. {{GLOG_v=2}} of the failure: """," [ RUN ] ResourceProviderCapability/ReservationTest.MasterFailover/1 I1201 14:52:47.324741 122806272 process.cpp:2730] Dropping event for process hierarchical-allocator(34)@172.18.8.37:57116 I1201 14:52:47.324816 122806272 process.cpp:2730] Dropping event for process slave(17)@172.18.8.37:57116 I1201 14:52:47.324859 2720961344 clock.cpp:331] Clock paused at 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.326314 2720961344 clock.cpp:435] Clock of files@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.326371 2720961344 clock.cpp:435] Clock of hierarchical-allocator(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.326539 2720961344 cluster.cpp:170] Creating default 'local' authorizer I1201 14:52:47.326568 2720961344 clock.cpp:435] Clock of local-authorizer(52)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.326671 2720961344 clock.cpp:435] Clock of standalone-master-detector(52)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.326709 2720961344 clock.cpp:435] Clock of in-memory-storage(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.326884 2720961344 clock.cpp:435] Clock of registrar(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.327579 2720961344 clock.cpp:435] Clock of master@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.330301 119050240 master.cpp:454] Master 209387ca-a9c3-4717-9769-a59d9fe927f1 (172.18.8.37) started on 172.18.8.37:57116 I1201 14:52:47.330329 119050240 master.cpp:456] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""5ms"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/private/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/z44iHn/credentials"""" --filter_gpu_resources=""""true"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" 
--max_unreachable_tasks_per_framework=""""1000"""" --port=""""5050"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""in_memory"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --roles=""""role"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/private/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/z44iHn/master"""" --zk_session_timeout=""""10secs"""" I1201 14:52:47.330628 119050240 master.cpp:505] Master only allowing authenticated frameworks to register I1201 14:52:47.330638 119050240 master.cpp:511] Master only allowing authenticated agents to register I1201 14:52:47.330644 119050240 master.cpp:517] Master only allowing authenticated HTTP frameworks to register I1201 14:52:47.330652 119050240 credentials.hpp:37] Loading credentials for authentication from '/private/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/z44iHn/credentials' I1201 14:52:47.330873 119050240 master.cpp:561] Using default 'crammd5' authenticator I1201 14:52:47.330927 119050240 clock.cpp:435] Clock of crammd5-authenticator(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.330963 119050240 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readonly' I1201 14:52:47.330993 119050240 clock.cpp:435] Clock of __basic_authenticator__(137)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.331056 119050240 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' I1201 14:52:47.331082 119050240 clock.cpp:435] Clock of __basic_authenticator__(138)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.331151 119050240 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' I1201 14:52:47.331176 119050240 clock.cpp:435] Clock of __basic_authenticator__(139)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.331228 119050240 master.cpp:640] Authorization enabled W1201 14:52:47.331238 119050240 master.cpp:703] The '--roles' flag is deprecated. This flag will be removed in the future. See the Mesos 0.27 upgrade notes for more information I1201 14:52:47.331326 119050240 clock.cpp:435] Clock of whitelist(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.331521 118513664 hierarchical.cpp:173] Initialized hierarchical allocator process I1201 14:52:47.331560 122269696 whitelist_watcher.cpp:77] No whitelist given I1201 14:52:47.333322 119050240 master.cpp:2221] Elected as the leading master! 
I1201 14:52:47.333339 119050240 master.cpp:1701] Recovering from registrar I1201 14:52:47.333454 120659968 registrar.cpp:347] Recovering registrar I1201 14:52:47.333526 120659968 clock.cpp:435] Clock of __latch__(154)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.333732 120659968 registrar.cpp:391] Successfully fetched the registry (0B) in 0ns I1201 14:52:47.333809 120659968 registrar.cpp:495] Applied 1 operations in 29366ns; attempting to update the registry I1201 14:52:47.333927 120659968 clock.cpp:435] Clock of __latch__(155)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.334156 118513664 registrar.cpp:552] Successfully updated the registry in 0ns I1201 14:52:47.334218 118513664 registrar.cpp:424] Successfully recovered registrar I1201 14:52:47.334426 119050240 master.cpp:1814] Recovered 0 agents from the registry (131B); allowing 10mins for agents to re-register I1201 14:52:47.334503 119586816 hierarchical.cpp:211] Skipping recovery of hierarchical allocator: nothing to recover I1201 14:52:47.337247 2720961344 clock.cpp:435] Clock of standalone-master-detector(53)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.338493 2720961344 clock.cpp:435] Clock of files@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 W1201 14:52:47.338521 2720961344 process.cpp:2756] Attempted to spawn already running process files@172.18.8.37:57116 I1201 14:52:47.338845 2720961344 clock.cpp:435] Clock of fetcher(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.339238 2720961344 containerizer.cpp:304] Using isolation { environment_secret, filesystem/posix, posix/mem, posix/cpu } I1201 14:52:47.339455 2720961344 clock.cpp:435] Clock of copy-provisioner-backend(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.339504 2720961344 provisioner.cpp:297] Using default backend 'copy' I1201 14:52:47.339524 2720961344 clock.cpp:435] Clock of mesos-provisioner(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.339607 2720961344 clock.cpp:435] Clock of environment-secret-isolator(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.339789 2720961344 clock.cpp:435] Clock of posix-filesystem-isolator(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.339973 2720961344 clock.cpp:435] Clock of posix-mem-isolator(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.340020 2720961344 clock.cpp:435] Clock of posix-cpu-isolator(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.340062 2720961344 clock.cpp:435] Clock of sandbox-logger(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.340103 2720961344 clock.cpp:435] Clock of (18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.340281 2720961344 clock.cpp:435] Clock of mesos-containerizer(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.340481 2720961344 cluster.cpp:458] Creating default 'local' authorizer I1201 14:52:47.340513 2720961344 clock.cpp:435] Clock of local-authorizer(53)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.340608 2720961344 clock.cpp:435] Clock of agent-garbage-collector(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.340742 2720961344 clock.cpp:435] Clock 
of __executor__(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.340839 2720961344 clock.cpp:435] Clock of task-status-update-manager(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.341027 2720961344 clock.cpp:435] Clock of slave(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.341190 2720961344 clock.cpp:435] Clock of __limiter__(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.341687 2720961344 clock.cpp:435] Clock of resource-provider-manager(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.341840 121196544 slave.cpp:253] Mesos agent started on (18)@172.18.8.37:57116 I1201 14:52:47.341935 2720961344 clock.cpp:381] Clock advanced (10ms) to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.342027 122806272 clock.cpp:435] Clock of hierarchical-allocator(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.839857088+00:00 I1201 14:52:47.342051 122806272 clock.cpp:435] Clock of __reaper__(1)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.842657088+00:00 I1201 14:52:47.341878 121196544 slave.cpp:254] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/ResourceProviderCapability_ReservationTest_MasterFailover_1_DhNV1G/store/appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/ResourceProviderCapability_ReservationTest_MasterFailover_1_DhNV1G/credential"""" --default_role=""""*"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/ResourceProviderCapability_ReservationTest_MasterFailover_1_DhNV1G/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_reregistration_timeout=""""2secs"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/ResourceProviderCapability_ReservationTest_MasterFailover_1_DhNV1G/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_command_executor=""""false"""" --http_credentials=""""/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/ResourceProviderCapability_ReservationTest_MasterFailover_1_DhNV1G/http_credentials"""" --http_heartbeat_interval=""""30secs"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/Users/jan/Documents/mesos/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --oversubscribed_resources_interval=""""15secs"""" --port=""""5051"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" 
--recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:8;mem:2048"""" --runtime_dir=""""/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/ResourceProviderCapability_ReservationTest_MasterFailover_1_DhNV1G"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --version=""""false"""" --work_dir=""""/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/ResourceProviderCapability_ReservationTest_MasterFailover_1_wjKAB3"""" --zk_session_timeout=""""10secs"""" I1201 14:52:47.342082 122806272 clock.cpp:435] Clock of hierarchical-allocator(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.342172 121196544 credentials.hpp:86] Loading credential for authentication from '/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/ResourceProviderCapability_ReservationTest_MasterFailover_1_DhNV1G/credential' I1201 14:52:47.342216 122806272 clock.cpp:435] Clock of __reaper__(1)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.342301 121196544 slave.cpp:286] Agent using credential for: test-principal I1201 14:52:47.342316 121196544 credentials.hpp:37] Loading credentials for authentication from '/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/ResourceProviderCapability_ReservationTest_MasterFailover_1_DhNV1G/http_credentials' I1201 14:52:47.342391 120659968 hierarchical.cpp:1890] No allocations performed I1201 14:52:47.342411 120659968 hierarchical.cpp:1431] Performed allocation for 0 agents in 45088ns I1201 14:52:47.342497 121196544 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' I1201 14:52:47.342541 121196544 clock.cpp:435] Clock of __basic_authenticator__(140)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.342600 121196544 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' I1201 14:52:47.342628 121196544 clock.cpp:435] Clock of __basic_authenticator__(141)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.342682 121196544 clock.cpp:435] Clock of noop-resource-estimator(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.342725 121196544 clock.cpp:435] Clock of qos-noop-controller(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.342804 121196544 clock.cpp:435] Clock of local-resource-provider-daemon(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.343531 121196544 slave.cpp:585] Agent resources: [{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":8.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""scalar"""":{""""value"""":2048.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""scalar"""":{""""value"""":470537.0},""""type"""":""""SCALAR""""},{""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] I1201 14:52:47.343757 121196544 slave.cpp:593] Agent attributes: [ ] I1201 14:52:47.343766 121196544 slave.cpp:602] Agent hostname: 172.18.8.37 I1201 14:52:47.343842 119050240 task_status_update_manager.cpp:181] Pausing sending task status updates I1201 14:52:47.344367 121196544 clock.cpp:435] Clock of __async_executor__(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.344763 122269696 state.cpp:64] Recovering state from 
'/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/ResourceProviderCapability_ReservationTest_MasterFailover_1_wjKAB3/meta' I1201 14:52:47.344969 120123392 task_status_update_manager.cpp:207] Recovering task status update manager I1201 14:52:47.345077 119050240 containerizer.cpp:672] Recovering containerizer I1201 14:52:47.345255 119050240 clock.cpp:435] Clock of __collect__(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.345836 119050240 clock.cpp:435] Clock of __collect__(36)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.834857088+00:00 I1201 14:52:47.345984 119586816 provisioner.cpp:493] Provisioner recovery complete I1201 14:52:47.346258 121196544 slave.cpp:6513] Finished recovery I1201 14:52:47.353896 121196544 slave.cpp:6701] Querying resource estimator for oversubscribable resources I1201 14:52:47.354270 120123392 slave.cpp:999] New master detected at master@172.18.8.37:57116 I1201 14:52:47.354349 120123392 slave.cpp:1034] Detecting new master I1201 14:52:47.354396 120123392 slave.cpp:6715] Received oversubscribable resources {} from the resource estimator I1201 14:52:47.354427 119586816 task_status_update_manager.cpp:181] Pausing sending task status updates I1201 14:52:47.354418 122806272 clock.cpp:435] Clock of slave(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844058370+00:00 I1201 14:52:47.354485 122806272 clock.cpp:435] Clock of slave(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.354533 122269696 slave.cpp:1061] Authenticating with master master@172.18.8.37:57116 I1201 14:52:47.354569 122269696 slave.cpp:1070] Using default CRAM-MD5 authenticatee I1201 14:52:47.354616 122269696 clock.cpp:435] Clock of crammd5-authenticatee(69)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.354712 121733120 authenticatee.cpp:121] Creating new client SASL connection I1201 14:52:47.354758 121733120 clock.cpp:435] Clock of master@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.354904 121196544 master.cpp:8589] Authenticating slave(18)@172.18.8.37:57116 I1201 14:52:47.354948 121196544 clock.cpp:435] Clock of crammd5-authenticator(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.355032 120659968 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(69)@172.18.8.37:57116 I1201 14:52:47.355062 120659968 clock.cpp:435] Clock of crammd5-authenticator-session(69)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.355201 118513664 authenticator.cpp:98] Creating new server SASL connection I1201 14:52:47.355330 119586816 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1201 14:52:47.355350 119586816 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1201 14:52:47.355409 119050240 authenticator.cpp:204] Received SASL authentication start I1201 14:52:47.355455 119050240 authenticator.cpp:326] Authentication requires more steps I1201 14:52:47.355516 120123392 authenticatee.cpp:259] Received SASL authentication step I1201 14:52:47.355592 122269696 authenticator.cpp:232] Received SASL authentication step I1201 14:52:47.355625 122269696 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'nfnt.local' server FQDN: 'nfnt.local' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1201 14:52:47.355651 122269696 auxprop.cpp:181] Looking up auxiliary property 
'*userPassword' I1201 14:52:47.355687 122269696 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1201 14:52:47.355713 122269696 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'nfnt.local' server FQDN: 'nfnt.local' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1201 14:52:47.355736 122269696 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1201 14:52:47.355758 122269696 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1201 14:52:47.355783 122269696 authenticator.cpp:318] Authentication success I1201 14:52:47.355907 121733120 master.cpp:8619] Successfully authenticated principal 'test-principal' at slave(18)@172.18.8.37:57116 I1201 14:52:47.355965 121196544 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(69)@172.18.8.37:57116 I1201 14:52:47.355998 120659968 authenticatee.cpp:299] Authentication success I1201 14:52:47.356014 121196544 clock.cpp:435] Clock of help@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.356144 119050240 slave.cpp:1153] Successfully authenticated with master master@172.18.8.37:57116 I1201 14:52:47.356294 119050240 slave.cpp:1696] Will retry registration in 2.9532ms if necessary I1201 14:52:47.356449 121733120 master.cpp:6042] Received register agent message from slave(18)@172.18.8.37:57116 (172.18.8.37) I1201 14:52:47.356472 121733120 master.cpp:3878] Authorizing agent with principal 'test-principal' I1201 14:52:47.356495 121733120 clock.cpp:435] Clock of local-authorizer(52)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.356734 120659968 master.cpp:6104] Authorized registration of agent at slave(18)@172.18.8.37:57116 (172.18.8.37) I1201 14:52:47.356797 120659968 master.cpp:6197] Registering agent at slave(18)@172.18.8.37:57116 (172.18.8.37) with id 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 I1201 14:52:47.356842 120659968 clock.cpp:435] Clock of registrar(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.357019 121196544 registrar.cpp:495] Applied 1 operations in 51305ns; attempting to update the registry I1201 14:52:47.357128 121196544 clock.cpp:435] Clock of in-memory-storage(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.357218 121196544 clock.cpp:435] Clock of __latch__(156)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.357378 121196544 registrar.cpp:552] Successfully updated the registry in 0ns I1201 14:52:47.357507 119050240 master.cpp:6246] Admitted agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 at slave(18)@172.18.8.37:57116 (172.18.8.37) I1201 14:52:47.357703 119050240 clock.cpp:435] Clock of slave-observer(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.357873 120659968 slave.cpp:5160] Received ping from slave-observer(35)@172.18.8.37:57116 I1201 14:52:47.357844 119050240 master.cpp:6282] Registered agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 at slave(18)@172.18.8.37:57116 (172.18.8.37) with cpus:8; mem:2048; disk:470537; ports:[31000-32000] I1201 14:52:47.357964 120659968 slave.cpp:1199] Registered with master master@172.18.8.37:57116; given agent ID 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 I1201 14:52:47.357992 120659968 clock.cpp:435] Clock of task-status-update-manager(18)@172.18.8.37:57116 updated to 2017-12-01 
13:53:04.844857088+00:00 I1201 14:52:47.358054 122269696 task_status_update_manager.cpp:188] Resuming sending task status updates I1201 14:52:47.358053 118513664 hierarchical.cpp:553] Added agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 (172.18.8.37) with cpus:8; mem:2048; disk:470537; ports:[31000-32000] (allocated: {}) I1201 14:52:47.358266 118513664 hierarchical.cpp:1890] No allocations performed I1201 14:52:47.358304 118513664 hierarchical.cpp:1431] Performed allocation for 1 agents in 118185ns I1201 14:52:47.358395 120659968 slave.cpp:1219] Checkpointing SlaveInfo to '/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/ResourceProviderCapability_ReservationTest_MasterFailover_1_wjKAB3/meta/slaves/209387ca-a9c3-4717-9769-a59d9fe927f1-S0/slave.info' I1201 14:52:47.358955 120659968 clock.cpp:435] Clock of local-resource-provider-daemon(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.359042 120659968 slave.cpp:1281] Forwarding total resources cpus:8; mem:2048; disk:470537; ports:[31000-32000] I1201 14:52:47.359097 120659968 slave.cpp:1298] Forwarding total oversubscribed resources {} I1201 14:52:47.359454 121196544 master.cpp:7036] Received update of agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 at slave(18)@172.18.8.37:57116 (172.18.8.37) with total resources cpus:8; mem:2048; disk:470537; ports:[31000-32000] I1201 14:52:47.359508 121196544 master.cpp:7049] Received update of agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 at slave(18)@172.18.8.37:57116 (172.18.8.37) with total oversubscribed resources {} I1201 14:52:47.359946 121733120 hierarchical.cpp:620] Agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 (172.18.8.37) updated with total resources cpus:8; mem:2048; disk:470537; ports:[31000-32000] I1201 14:52:47.361738 2720961344 clock.cpp:435] Clock of version@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 W1201 14:52:47.361764 2720961344 process.cpp:2756] Attempted to spawn already running process version@172.18.8.37:57116 I1201 14:52:47.361790 2720961344 clock.cpp:435] Clock of __latch__(157)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.366130 2720961344 clock.cpp:435] Clock of scheduler-430784ca-9418-4424-b431-5ce1a96a6fee@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.366179 2720961344 clock.cpp:435] Clock of metrics@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.366233 2720961344 sched.cpp:232] Version: 1.5.0 I1201 14:52:47.366262 2720961344 clock.cpp:381] Clock advanced (5ms) to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.366351 122806272 clock.cpp:435] Clock of hierarchical-allocator(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.366603 118513664 clock.cpp:435] Clock of standalone-master-detector(53)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.366729 122269696 hierarchical.cpp:1890] No allocations performed I1201 14:52:47.366744 120123392 sched.cpp:336] New master detected at master@172.18.8.37:57116 I1201 14:52:47.366755 122269696 hierarchical.cpp:1431] Performed allocation for 1 agents in 84509ns I1201 14:52:47.366794 120123392 sched.cpp:396] Authenticating with master master@172.18.8.37:57116 I1201 14:52:47.366806 120123392 sched.cpp:403] Using default CRAM-MD5 authenticatee I1201 14:52:47.366844 120123392 clock.cpp:435] Clock of crammd5-authenticatee(70)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.366993 121196544 
authenticatee.cpp:121] Creating new client SASL connection I1201 14:52:47.367127 119050240 master.cpp:8589] Authenticating scheduler-430784ca-9418-4424-b431-5ce1a96a6fee@172.18.8.37:57116 I1201 14:52:47.367199 119586816 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(70)@172.18.8.37:57116 I1201 14:52:47.367228 119586816 clock.cpp:435] Clock of crammd5-authenticator-session(70)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.844857088+00:00 I1201 14:52:47.367327 118513664 authenticator.cpp:98] Creating new server SASL connection I1201 14:52:47.367401 120659968 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1201 14:52:47.367424 120659968 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1201 14:52:47.367471 122269696 authenticator.cpp:204] Received SASL authentication start I1201 14:52:47.367512 122269696 authenticator.cpp:326] Authentication requires more steps I1201 14:52:47.367568 120123392 authenticatee.cpp:259] Received SASL authentication step I1201 14:52:47.367624 121733120 authenticator.cpp:232] Received SASL authentication step I1201 14:52:47.367642 121733120 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'nfnt.local' server FQDN: 'nfnt.local' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1201 14:52:47.367652 121733120 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I1201 14:52:47.367668 121733120 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1201 14:52:47.367678 121733120 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'nfnt.local' server FQDN: 'nfnt.local' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1201 14:52:47.367688 121733120 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1201 14:52:47.367694 121733120 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1201 14:52:47.367704 121733120 authenticator.cpp:318] Authentication success I1201 14:52:47.367748 121196544 authenticatee.cpp:299] Authentication success I1201 14:52:47.367775 119586816 master.cpp:8619] Successfully authenticated principal 'test-principal' at scheduler-430784ca-9418-4424-b431-5ce1a96a6fee@172.18.8.37:57116 I1201 14:52:47.367806 118513664 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(70)@172.18.8.37:57116 I1201 14:52:47.367884 119050240 sched.cpp:502] Successfully authenticated with master master@172.18.8.37:57116 I1201 14:52:47.367899 119050240 sched.cpp:824] Sending SUBSCRIBE call to master@172.18.8.37:57116 I1201 14:52:47.367959 119050240 sched.cpp:857] Will retry registration in 1.762124339secs if necessary I1201 14:52:47.368059 121733120 master.cpp:2969] Received SUBSCRIBE call for framework 'default' at scheduler-430784ca-9418-4424-b431-5ce1a96a6fee@172.18.8.37:57116 I1201 14:52:47.368075 121733120 master.cpp:2286] Authorizing framework principal 'test-principal' to receive offers for roles '{ role }' I1201 14:52:47.368321 119586816 master.cpp:3049] Subscribing framework default with checkpointing disabled and capabilities [ RESERVATION_REFINEMENT ] I1201 14:52:47.368604 120659968 clock.cpp:435] Clock of metrics@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.368692 120659968 hierarchical.cpp:293] Added framework 
209387ca-a9c3-4717-9769-a59d9fe927f1-0000 I1201 14:52:47.368717 120123392 sched.cpp:751] Framework registered with 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 I1201 14:52:47.368772 120123392 sched.cpp:765] Scheduler::registered took 45597ns I1201 14:52:47.368937 120659968 hierarchical.cpp:1860] Allocating cpus:8; mem:2048; disk:470537; ports:[31000-32000] on agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 to role role of framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 I1201 14:52:47.369276 120659968 clock.cpp:435] Clock of master@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.369351 120659968 hierarchical.cpp:1990] No inverse offers to send out! I1201 14:52:47.369380 120659968 hierarchical.cpp:1431] Performed allocation for 1 agents in 643932ns I1201 14:52:47.369691 119050240 master.cpp:8419] Sending 1 offers to framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 (default) at scheduler-430784ca-9418-4424-b431-5ce1a96a6fee@172.18.8.37:57116 I1201 14:52:47.369753 119050240 clock.cpp:435] Clock of scheduler-430784ca-9418-4424-b431-5ce1a96a6fee@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.369946 121733120 sched.cpp:897] Received 1 offers I1201 14:52:47.370076 121733120 sched.cpp:921] Scheduler::resourceOffers took 82264ns I1201 14:52:47.372522 119586816 master.cpp:10331] Removing offer 209387ca-a9c3-4717-9769-a59d9fe927f1-O0 I1201 14:52:47.372608 119586816 master.cpp:4236] Processing ACCEPT call for offers: [ 209387ca-a9c3-4717-9769-a59d9fe927f1-O0 ] on agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 at slave(18)@172.18.8.37:57116 (172.18.8.37) for framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 (default) at scheduler-430784ca-9418-4424-b431-5ce1a96a6fee@172.18.8.37:57116 I1201 14:52:47.372666 119586816 clock.cpp:435] Clock of local-authorizer(52)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.372692 119586816 master.cpp:3663] Authorizing principal 'test-principal' to reserve resources '[{""""allocation_info"""":{""""role"""":""""role""""},""""name"""":""""cpus"""",""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""role"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":8.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""role""""},""""name"""":""""mem"""",""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""role"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":2048.0},""""type"""":""""SCALAR""""}]' I1201 14:52:47.373001 119586816 clock.cpp:435] Clock of __await__(52)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.373103 119586816 clock.cpp:435] Clock of __await__(53)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.373265 118513664 clock.cpp:435] Clock of help@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.373839 119050240 master.cpp:4569] Applying RESERVE operation for resources 
[{""""allocation_info"""":{""""role"""":""""role""""},""""name"""":""""cpus"""",""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""role"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":8.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""role""""},""""name"""":""""mem"""",""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""role"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":2048.0},""""type"""":""""SCALAR""""}] from framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 (default) at scheduler-430784ca-9418-4424-b431-5ce1a96a6fee@172.18.8.37:57116 to agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 at slave(18)@172.18.8.37:57116 (172.18.8.37) I1201 14:52:47.374579 119050240 master.cpp:10228] Sending offer operation '' (uuid: 06473b15-dc7c-4e97-8a36-11798b6ad430) to agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 at slave(18)@172.18.8.37:57116 (172.18.8.37) I1201 14:52:47.374627 119050240 clock.cpp:435] Clock of slave(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.376063 121196544 slave.cpp:3605] Updated checkpointed resources from {} to cpus(reservations: [(DYNAMIC,role,test-principal)]):8; mem(reservations: [(DYNAMIC,role,test-principal)]):2048 I1201 14:52:47.376122 121196544 slave.cpp:7079] Updating the state of offer operation '' (uuid: 06473b15-dc7c-4e97-8a36-11798b6ad430) of framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 (latest state: OFFER_OPERATION_FINISHED, status update state: OFFER_OPERATION_FINISHED) I1201 14:52:47.376248 118513664 master.cpp:10015] Updating the state of offer operation '' (uuid: 06473b15-dc7c-4e97-8a36-11798b6ad430) of framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 (latest state: OFFER_OPERATION_FINISHED, status update state: OFFER_OPERATION_FINISHED) I1201 14:52:47.376736 122269696 hierarchical.cpp:830] Updated allocation of framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 on agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 from cpus(allocated: role):8; mem(allocated: role):2048; disk(allocated: role):470537; ports(allocated: role):[31000-32000] to ports(allocated: role):[31000-32000]; cpus(allocated: role)(reservations: [(DYNAMIC,role,test-principal)]):8; disk(allocated: role):470537; mem(allocated: role)(reservations: [(DYNAMIC,role,test-principal)]):2048 I1201 14:52:47.377378 122269696 hierarchical.cpp:1106] Recovered ports(allocated: role):[31000-32000]; cpus(allocated: role)(reservations: [(DYNAMIC,role,test-principal)]):8; disk(allocated: role):470537; mem(allocated: role)(reservations: [(DYNAMIC,role,test-principal)]):2048 (total: ports:[31000-32000]; cpus(reservations: [(DYNAMIC,role,test-principal)]):8; disk:470537; mem(reservations: [(DYNAMIC,role,test-principal)]):2048, allocated: {}) on agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 from framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 I1201 14:52:47.377565 2720961344 clock.cpp:435] Clock of __authentication_router__(1)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.377607 2720961344 master.cpp:1159] Master terminating I1201 14:52:47.377635 120659968 clock.cpp:435] Clock of __basic_authenticator__(137)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.377753 2720961344 clock.cpp:435] Clock of slave-observer(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.377820 120659968 clock.cpp:435] Clock of __basic_authenticator__(138)@172.18.8.37:57116 
updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.377976 2720961344 clock.cpp:435] Clock of whitelist(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.378032 2720961344 clock.cpp:435] Clock of crammd5-authenticator(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.378087 121733120 hierarchical.cpp:586] Removed agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 I1201 14:52:47.378192 121733120 hierarchical.cpp:345] Removed framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 I1201 14:52:47.378202 120123392 slave.cpp:5202] Got exited event for master@172.18.8.37:57116 W1201 14:52:47.378216 120123392 slave.cpp:5207] Master disconnected! Waiting for a new master to be elected I1201 14:52:47.378964 2720961344 clock.cpp:435] Clock of registrar(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.379283 2720961344 clock.cpp:435] Clock of in-memory-storage(35)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.379336 2720961344 clock.cpp:435] Clock of standalone-master-detector(52)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.379380 2720961344 process.cpp:2730] Dropping event for process master@172.18.8.37:57116 I1201 14:52:47.379434 2720961344 process.cpp:2730] Dropping event for process master@172.18.8.37:57116 I1201 14:52:47.379631 2720961344 clock.cpp:435] Clock of files@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.379984 2720961344 clock.cpp:435] Clock of files@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.380062 2720961344 clock.cpp:435] Clock of hierarchical-allocator(36)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.380225 2720961344 cluster.cpp:170] Creating default 'local' authorizer I1201 14:52:47.380254 2720961344 clock.cpp:435] Clock of local-authorizer(54)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.380331 2720961344 clock.cpp:435] Clock of standalone-master-detector(54)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.380364 2720961344 clock.cpp:435] Clock of in-memory-storage(36)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.380398 2720961344 clock.cpp:435] Clock of registrar(36)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.380887 2720961344 clock.cpp:435] Clock of master@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.382493 121733120 master.cpp:454] Master 2cadeec8-e39c-486b-bcd9-c24721e36b5a (172.18.8.37) started on 172.18.8.37:57116 I1201 14:52:47.382514 121733120 master.cpp:456] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""5ms"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/private/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/z44iHn/credentials"""" --filter_gpu_resources=""""true"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" 
--initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_unreachable_tasks_per_framework=""""1000"""" --port=""""5050"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""in_memory"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --roles=""""role"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/private/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/z44iHn/master"""" --zk_session_timeout=""""10secs"""" I1201 14:52:47.382732 121733120 master.cpp:505] Master only allowing authenticated frameworks to register I1201 14:52:47.382743 121733120 master.cpp:511] Master only allowing authenticated agents to register I1201 14:52:47.382750 121733120 master.cpp:517] Master only allowing authenticated HTTP frameworks to register I1201 14:52:47.382756 121733120 credentials.hpp:37] Loading credentials for authentication from '/private/var/folders/0b/srgwj7vd2037pygpz1fpyqgm0000gn/T/z44iHn/credentials' I1201 14:52:47.382990 121733120 master.cpp:561] Using default 'crammd5' authenticator I1201 14:52:47.383054 121733120 clock.cpp:435] Clock of crammd5-authenticator(36)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.383100 121733120 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readonly' I1201 14:52:47.383141 121733120 clock.cpp:435] Clock of __basic_authenticator__(142)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.383208 121733120 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' I1201 14:52:47.383245 121733120 clock.cpp:435] Clock of __basic_authenticator__(143)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.383299 121733120 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' I1201 14:52:47.383329 121733120 clock.cpp:435] Clock of __basic_authenticator__(144)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.383373 121733120 master.cpp:640] Authorization enabled W1201 14:52:47.383383 121733120 master.cpp:703] The '--roles' flag is deprecated. This flag will be removed in the future. See the Mesos 0.27 upgrade notes for more information I1201 14:52:47.383452 122269696 clock.cpp:435] Clock of __basic_authenticator__(139)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.383494 121733120 clock.cpp:435] Clock of whitelist(36)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.383536 119050240 hierarchical.cpp:173] Initialized hierarchical allocator process I1201 14:52:47.383571 119586816 whitelist_watcher.cpp:77] No whitelist given I1201 14:52:47.385560 119050240 master.cpp:2221] Elected as the leading master! 
I1201 14:52:47.385589 119050240 master.cpp:1701] Recovering from registrar I1201 14:52:47.385844 119586816 registrar.cpp:347] Recovering registrar I1201 14:52:47.385941 119586816 clock.cpp:435] Clock of __latch__(158)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.386288 119050240 registrar.cpp:391] Successfully fetched the registry (0B) in 0ns I1201 14:52:47.386355 119050240 registrar.cpp:495] Applied 1 operations in 25658ns; attempting to update the registry I1201 14:52:47.386454 119050240 clock.cpp:435] Clock of __latch__(159)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.386924 120123392 registrar.cpp:552] Successfully updated the registry in 0ns I1201 14:52:47.387053 120123392 registrar.cpp:424] Successfully recovered registrar I1201 14:52:47.387351 118513664 master.cpp:1814] Recovered 0 agents from the registry (131B); allowing 10mins for agents to re-register I1201 14:52:47.387539 119050240 hierarchical.cpp:211] Skipping recovery of hierarchical allocator: nothing to recover I1201 14:52:47.389255 2720961344 clock.cpp:435] Clock of standalone-master-detector(53)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.389283 2720961344 clock.cpp:381] Clock advanced (1secs) to 2017-12-01 13:53:05.849857088+00:00 I1201 14:52:47.389328 2720961344 clock.cpp:381] Clock advanced (10ms) to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.389621 122806272 clock.cpp:435] Clock of hierarchical-allocator(36)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.854857088+00:00 I1201 14:52:47.389695 122806272 clock.cpp:435] Clock of __reaper__(1)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.944857088+00:00 I1201 14:52:47.389721 122806272 clock.cpp:435] Clock of master@172.18.8.37:57116 updated to 2017-12-01 13:53:05.121836864+00:00 I1201 14:52:47.389757 122806272 clock.cpp:435] Clock of master@172.18.8.37:57116 updated to 2017-12-01 13:53:05.223129856+00:00 I1201 14:52:47.389801 122806272 process.cpp:2730] Dropping event for process hierarchical-allocator(35)@172.18.8.37:57116 I1201 14:52:47.389827 122269696 sched.cpp:330] Scheduler::disconnected took 26933ns I1201 14:52:47.389847 122269696 sched.cpp:336] New master detected at master@172.18.8.37:57116 I1201 14:52:47.389847 119586816 clock.cpp:435] Clock of task-status-update-manager(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 14:52:47.389878 122806272 clock.cpp:435] Clock of hierarchical-allocator(36)@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.389904 119586816 slave.cpp:999] New master detected at master@172.18.8.37:57116 I1201 14:52:47.389920 122806272 clock.cpp:435] Clock of __reaper__(1)@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.389943 118513664 task_status_update_manager.cpp:181] Pausing sending task status updates I1201 14:52:47.389977 122269696 sched.cpp:396] Authenticating with master master@172.18.8.37:57116 I1201 14:52:47.389991 122269696 sched.cpp:403] Using default CRAM-MD5 authenticatee I1201 14:52:47.390002 119586816 slave.cpp:1034] Detecting new master I1201 14:52:47.390007 122806272 clock.cpp:435] Clock of master@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.390041 122806272 process.cpp:2730] Dropping event for process slave(13)@172.18.8.37:57116 I1201 14:52:47.390142 122269696 clock.cpp:435] Clock of crammd5-authenticatee(71)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.849857088+00:00 I1201 
14:52:47.390215 122806272 process.cpp:2730] Dropping event for process slave-observer(8)@172.18.8.37:57116 I1201 14:52:47.390233 122806272 process.cpp:2730] Dropping event for process slave(14)@172.18.8.37:57116 I1201 14:52:47.390305 122806272 process.cpp:2730] Dropping event for process scheduler-43385646-c98e-467a-85bf-bf2dde2554e3@172.18.8.37:57116 I1201 14:52:47.390333 119586816 authenticatee.cpp:121] Creating new client SASL connection I1201 14:52:47.390377 120123392 hierarchical.cpp:1890] No allocations performed I1201 14:52:47.390394 122806272 process.cpp:2730] Dropping event for process scheduler-43385646-c98e-467a-85bf-bf2dde2554e3@172.18.8.37:57116 I1201 14:52:47.390410 122806272 process.cpp:2730] Dropping event for process slave(5)@172.18.8.37:57116 I1201 14:52:47.390422 122806272 process.cpp:2730] Dropping event for process slave-observer(9)@172.18.8.37:57116 I1201 14:52:47.390434 122806272 process.cpp:2730] Dropping event for process scheduler-adf2755a-1205-4709-b112-20b68b416d29@172.18.8.37:57116 I1201 14:52:47.390486 120659968 master.cpp:8589] Authenticating scheduler-430784ca-9418-4424-b431-5ce1a96a6fee@172.18.8.37:57116 I1201 14:52:47.390480 120123392 hierarchical.cpp:1431] Performed allocation for 0 agents in 143584ns I1201 14:52:47.390514 120659968 clock.cpp:435] Clock of crammd5-authenticator(36)@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.390588 118513664 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(71)@172.18.8.37:57116 I1201 14:52:47.390600 122806272 clock.cpp:435] Clock of slave(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:04.856267893+00:00 I1201 14:52:47.390712 118513664 clock.cpp:435] Clock of crammd5-authenticator-session(71)@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.390739 122806272 clock.cpp:435] Clock of slave(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.390795 121733120 slave.cpp:1061] Authenticating with master master@172.18.8.37:57116 I1201 14:52:47.390843 121196544 authenticator.cpp:98] Creating new server SASL connection I1201 14:52:47.390856 121733120 slave.cpp:1070] Using default CRAM-MD5 authenticatee I1201 14:52:47.390898 121733120 clock.cpp:435] Clock of crammd5-authenticatee(72)@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.390928 121196544 clock.cpp:435] Clock of crammd5-authenticatee(71)@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.391010 119586816 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1201 14:52:47.391033 119586816 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1201 14:52:47.391055 122269696 authenticatee.cpp:121] Creating new client SASL connection I1201 14:52:47.391096 119050240 authenticator.cpp:204] Received SASL authentication start I1201 14:52:47.391152 119050240 authenticator.cpp:326] Authentication requires more steps I1201 14:52:47.391176 120123392 master.cpp:8589] Authenticating slave(18)@172.18.8.37:57116 I1201 14:52:47.391217 120659968 authenticatee.cpp:259] Received SASL authentication step I1201 14:52:47.391259 118513664 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(72)@172.18.8.37:57116 I1201 14:52:47.391302 121196544 authenticator.cpp:232] Received SASL authentication step I1201 14:52:47.391297 118513664 clock.cpp:435] Clock of crammd5-authenticator-session(72)@172.18.8.37:57116 updated to 2017-12-01 
13:53:05.859857088+00:00 I1201 14:52:47.391327 121196544 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'nfnt.local' server FQDN: 'nfnt.local' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1201 14:52:47.391341 121196544 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I1201 14:52:47.391372 121196544 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' I1201 14:52:47.391384 121196544 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'nfnt.local' server FQDN: 'nfnt.local' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1201 14:52:47.391394 121196544 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1201 14:52:47.391400 121196544 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1201 14:52:47.391412 121196544 authenticator.cpp:318] Authentication success I1201 14:52:47.391432 121733120 authenticator.cpp:98] Creating new server SASL connection I1201 14:52:47.391477 119586816 authenticatee.cpp:299] Authentication success I1201 14:52:47.391510 119586816 clock.cpp:435] Clock of scheduler-430784ca-9418-4424-b431-5ce1a96a6fee@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.391515 122269696 master.cpp:8619] Successfully authenticated principal 'test-principal' at scheduler-430784ca-9418-4424-b431-5ce1a96a6fee@172.18.8.37:57116 I1201 14:52:47.391571 119050240 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(71)@172.18.8.37:57116 I1201 14:52:47.391602 120123392 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 I1201 14:52:47.391620 120123392 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' I1201 14:52:47.391645 119050240 clock.cpp:435] Clock of help@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.391826 120659968 sched.cpp:502] Successfully authenticated with master master@172.18.8.37:57116 I1201 14:52:47.391840 120659968 sched.cpp:824] Sending SUBSCRIBE call to master@172.18.8.37:57116 I1201 14:52:47.391840 119586816 authenticator.cpp:204] Received SASL authentication start I1201 14:52:47.391886 119586816 authenticator.cpp:326] Authentication requires more steps I1201 14:52:47.391918 120659968 sched.cpp:857] Will retry registration in 863.906836ms if necessary I1201 14:52:47.391944 118513664 authenticatee.cpp:259] Received SASL authentication step I1201 14:52:47.392076 121196544 master.cpp:2969] Received SUBSCRIBE call for framework 'default' at scheduler-430784ca-9418-4424-b431-5ce1a96a6fee@172.18.8.37:57116 I1201 14:52:47.392094 121196544 master.cpp:2286] Authorizing framework principal 'test-principal' to receive offers for roles '{ role }' I1201 14:52:47.392139 121196544 clock.cpp:435] Clock of local-authorizer(54)@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.392231 121733120 authenticator.cpp:232] Received SASL authentication step I1201 14:52:47.392251 121733120 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'nfnt.local' server FQDN: 'nfnt.local' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false I1201 14:52:47.392261 121733120 auxprop.cpp:181] Looking up auxiliary property '*userPassword' I1201 14:52:47.392278 121733120 auxprop.cpp:181] Looking 
up auxiliary property '*cmusaslsecretCRAM-MD5' I1201 14:52:47.392292 121733120 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'nfnt.local' server FQDN: 'nfnt.local' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true I1201 14:52:47.392300 121733120 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true I1201 14:52:47.392307 121733120 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true I1201 14:52:47.392318 121733120 authenticator.cpp:318] Authentication success I1201 14:52:47.392449 122269696 authenticatee.cpp:299] Authentication success I1201 14:52:47.392474 119050240 master.cpp:8619] Successfully authenticated principal 'test-principal' at slave(18)@172.18.8.37:57116 I1201 14:52:47.392523 120123392 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(72)@172.18.8.37:57116 I1201 14:52:47.392602 118513664 master.cpp:3049] Subscribing framework default with checkpointing disabled and capabilities [ RESERVATION_REFINEMENT ] I1201 14:52:47.392652 120659968 slave.cpp:1153] Successfully authenticated with master master@172.18.8.37:57116 I1201 14:52:47.392724 118513664 clock.cpp:435] Clock of metrics@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.392774 118513664 master.cpp:6950] Updating info for framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 I1201 14:52:47.392865 120659968 slave.cpp:1696] Will retry registration in 12.391929ms if necessary I1201 14:52:47.392976 119050240 hierarchical.cpp:293] Added framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 I1201 14:52:47.392999 119050240 hierarchical.cpp:406] Deactivated framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 I1201 14:52:47.393084 120123392 sched.cpp:751] Framework registered with 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 I1201 14:52:47.393128 120123392 sched.cpp:765] Scheduler::registered took 31032ns I1201 14:52:47.393184 119050240 hierarchical.cpp:372] Activated framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 I1201 14:52:47.393191 118513664 master.cpp:6371] Received re-register agent message from agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 at slave(18)@172.18.8.37:57116 (172.18.8.37) I1201 14:52:47.393234 118513664 master.cpp:3878] Authorizing agent with principal 'test-principal' I1201 14:52:47.393254 119050240 hierarchical.cpp:1890] No allocations performed I1201 14:52:47.393266 119050240 hierarchical.cpp:1990] No inverse offers to send out! 
I1201 14:52:47.393278 119050240 hierarchical.cpp:1431] Performed allocation for 0 agents in 45973ns I1201 14:52:47.393479 121733120 master.cpp:6442] Authorized re-registration of agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 at slave(18)@172.18.8.37:57116 (172.18.8.37) I1201 14:52:47.393527 121733120 master.cpp:6624] Re-registering agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 at slave(18)@172.18.8.37:57116 (172.18.8.37) I1201 14:52:47.393579 121733120 clock.cpp:435] Clock of registrar(36)@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 W1201 14:52:47.393762 119586816 registry_operations.cpp:133] Allowing UNKNOWN agent to reregister: hostname: """"172.18.8.37"""" resources { name: """"cpus"""" type: SCALAR scalar { value: 8 } role: """"*"""" } resources { name: """"mem"""" type: SCALAR scalar { value: 2048 } role: """"*"""" } resources { name: """"disk"""" type: SCALAR scalar { value: 470537 } role: """"*"""" } resources { name: """"ports"""" type: RANGES ranges { range { begin: 31000 end: 32000 } } role: """"*"""" } id { value: """"209387ca-a9c3-4717-9769-a59d9fe927f1-S0"""" } checkpoint: true port: 57116 I1201 14:52:47.393916 119586816 registrar.cpp:495] Applied 1 operations in 181543ns; attempting to update the registry I1201 14:52:47.393991 119586816 clock.cpp:435] Clock of in-memory-storage(36)@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.394055 119586816 clock.cpp:435] Clock of __latch__(160)@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.394348 120123392 registrar.cpp:552] Successfully updated the registry in 0ns I1201 14:52:47.394500 119050240 master.cpp:6696] Re-admitted agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 at slave(18)@172.18.8.37:57116 (172.18.8.37) I1201 14:52:47.394784 119050240 clock.cpp:435] Clock of slave-observer(36)@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.394906 119050240 master.cpp:6849] Re-registered agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 at slave(18)@172.18.8.37:57116 (172.18.8.37) with cpus:8; mem:2048; disk:470537; ports:[31000-32000] I1201 14:52:47.395256 122269696 hierarchical.cpp:553] Added agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 (172.18.8.37) with ports:[31000-32000]; cpus(reservations: [(DYNAMIC,role,test-principal)]):8; disk:470537; mem(reservations: [(DYNAMIC,role,test-principal)]):2048 (allocated: {}) I1201 14:52:47.395566 122269696 hierarchical.cpp:1860] Allocating ports:[31000-32000]; cpus(reservations: [(DYNAMIC,role,test-principal)]):8; disk:470537; mem(reservations: [(DYNAMIC,role,test-principal)]):2048 on agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 to role role of framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 I1201 14:52:47.395998 121733120 slave.cpp:1343] Re-registered with master master@172.18.8.37:57116 I1201 14:52:47.396021 121733120 clock.cpp:435] Clock of task-status-update-manager(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.396041 122269696 hierarchical.cpp:1990] No inverse offers to send out! 
I1201 14:52:47.396050 121733120 clock.cpp:435] Clock of local-resource-provider-daemon(18)@172.18.8.37:57116 updated to 2017-12-01 13:53:05.859857088+00:00 I1201 14:52:47.396112 122269696 hierarchical.cpp:1431] Performed allocation for 1 agents in 783438ns I1201 14:52:47.396169 121733120 slave.cpp:1394] Forwarding total resources ports:[31000-32000]; cpus(reservations: [(DYNAMIC,role,test-principal)]):8; disk:470537; mem(reservations: [(DYNAMIC,role,test-principal)]):2048 I1201 14:52:47.396220 121733120 slave.cpp:1411] Forwarding total oversubscribed resources {} I1201 14:52:47.396232 121196544 task_status_update_manager.cpp:188] Resuming sending task status updates I1201 14:52:47.396296 121733120 slave.cpp:5160] Received ping from slave-observer(36)@172.18.8.37:57116 I1201 14:52:47.396390 120659968 master.cpp:8419] Sending 1 offers to framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 (default) at scheduler-430784ca-9418-4424-b431-5ce1a96a6fee@172.18.8.37:57116 I1201 14:52:47.396756 120659968 master.cpp:7036] Received update of agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 at slave(18)@172.18.8.37:57116 (172.18.8.37) with total resources ports:[31000-32000]; cpus(reservations: [(DYNAMIC,role,test-principal)]):8; disk:470537; mem(reservations: [(DYNAMIC,role,test-principal)]):2048 I1201 14:52:47.396806 120659968 master.cpp:7049] Received update of agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 at slave(18)@172.18.8.37:57116 (172.18.8.37) with total oversubscribed resources {} I1201 14:52:47.396842 119050240 sched.cpp:897] Received 1 offers I1201 14:52:47.396957 119050240 sched.cpp:921] Scheduler::resourceOffers took 81196ns I1201 14:52:47.397416 120659968 master.cpp:7422] Removing offer 2cadeec8-e39c-486b-bcd9-c24721e36b5a-O0 with resources ports(allocated: role):[31000-32000]; cpus(allocated: role)(reservations: [(DYNAMIC,role,test-principal)]):8; disk(allocated: role):470537; mem(allocated: role)(reservations: [(DYNAMIC,role,test-principal)]):2048 on agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 at slave(18)@172.18.8.37:57116 (172.18.8.37) I1201 14:52:47.397509 119586816 hierarchical.cpp:620] Agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 (172.18.8.37) updated with total resources ports:[31000-32000]; cpus(reservations: [(DYNAMIC,role,test-principal)]):8; disk:470537; mem(reservations: [(DYNAMIC,role,test-principal)]):2048 I1201 14:52:47.397892 120659968 master.cpp:10331] Removing offer 2cadeec8-e39c-486b-bcd9-c24721e36b5a-O0 I1201 14:52:47.398036 122269696 sched.cpp:947] Rescinded offer 2cadeec8-e39c-486b-bcd9-c24721e36b5a-O0 GMOCK WARNING: Uninteresting mock function call - returning directly. Function call: offerRescinded(0x7ffee4a13878, @0x7f9f6aa4bfb8 2cadeec8-e39c-486b-bcd9-c24721e36b5a-O0) NOTE: You can safely ignore the above warning unless this call should not happen. Do not suppress it by blindly adding an EXPECT_CALL() if you don't mean to enforce the call. See https://github.com/google/googletest/blob/master/googlemock/docs/CookBook.md#knowing-when-to-expect for details. 
I1201 14:52:47.398121 122269696 sched.cpp:958] Scheduler::offerRescinded took 61218ns I1201 14:52:47.398532 119586816 hierarchical.cpp:1106] Recovered ports(allocated: role):[31000-32000]; cpus(allocated: role)(reservations: [(DYNAMIC,role,test-principal)]):8; disk(allocated: role):470537; mem(allocated: role)(reservations: [(DYNAMIC,role,test-principal)]):2048 (total: ports:[31000-32000]; cpus(reservations: [(DYNAMIC,role,test-principal)]):8; disk:470537; mem(reservations: [(DYNAMIC,role,test-principal)]):2048, allocated: {}) on agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 from framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 I1201 14:52:47.398988 2720961344 clock.cpp:381] Clock advanced (5ms) to 2017-12-01 13:53:05.864857088+00:00 I1201 14:52:47.399147 122806272 clock.cpp:435] Clock of hierarchical-allocator(36)@172.18.8.37:57116 updated to 2017-12-01 13:53:05.864857088+00:00 I1201 14:52:47.399618 121196544 hierarchical.cpp:1860] Allocating ports:[31000-32000]; cpus(reservations: [(DYNAMIC,role,test-principal)]):8; disk:470537; mem(reservations: [(DYNAMIC,role,test-principal)]):2048 on agent 209387ca-a9c3-4717-9769-a59d9fe927f1-S0 to role role of framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 I1201 14:52:47.400048 121196544 clock.cpp:435] Clock of master@172.18.8.37:57116 updated to 2017-12-01 13:53:05.864857088+00:00 I1201 14:52:47.400128 121196544 hierarchical.cpp:1990] No inverse offers to send out! I1201 14:52:47.400143 121196544 hierarchical.cpp:1431] Performed allocation for 1 agents in 774337ns I1201 14:52:47.400451 120123392 master.cpp:8419] Sending 1 offers to framework 209387ca-a9c3-4717-9769-a59d9fe927f1-0000 (default) at scheduler-430784ca-9418-4424-b431-5ce1a96a6fee@172.18.8.37:57116 I1201 14:52:47.400507 120123392 clock.cpp:435] Clock of scheduler-430784ca-9418-4424-b431-5ce1a96a6fee@172.18.8.37:57116 updated to 2017-12-01 13:53:05.864857088+00:00 I1201 14:52:47.400672 121733120 sched.cpp:897] Received 1 offers ../src/tests/reservation_tests.cpp:938: Failure Mock function called more times than expected - returning directly. Function call: resourceOffers(0x7ffee4a13878, @0x700007417160 { 160-byte object <68-FA A8-22 01-00 00-00 00-00 00-00 00-00 00-00 5F-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 04-00 00-00 04-00 00-00 C0-E5 C4-6D 9F-7F 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 ... 
50-C6 CB-6A 9F-7F 00-00 70-C6 CB-6A 9F-7F 00-00 E0-C6 CB-6A 9F-7F 00-00 40-DC C3-6D 9F-7F 00-00 B0-DC C3-6D 9F-7F 00-00 00-00 00-00 00-00 00-00 50-B4 C4-6D 9F-7F 00-00 00-00 00-00 00-00 00-00> }) Expected: to be called once Actual: called twice - over-saturated and active *** Aborted at 1512136367 (unix time) try """"date -d @1512136367"""" if you are using GNU date *** PC: @ 0x10d94677a testing::UnitTest::AddTestPartResult() *** SIGSEGV (@0x0) received by PID 53782 (TID 0x700007418000) stack trace: *** @ 0x7fff68f19f5a _sigtramp @ 0x2c0 (unknown) @ 0x10d945f3b testing::internal::AssertHelper::operator=() @ 0x10d9c187f testing::internal::GoogleTestFailureReporter::ReportFailure() @ 0x10b344315 testing::internal::Expect() @ 0x10d9bc9b5 testing::internal::UntypedFunctionMockerBase::UntypedInvokeWith() @ 0x10b59d6eb _ZN7testing8internal18FunctionMockerBaseIFvPN5mesos15SchedulerDriverERKNSt3__16vectorINS2_5OfferENS5_9allocatorIS7_EEEEEE10InvokeWithERKNS5_5tupleIJS4_SC_EEE @ 0x10b59d698 testing::internal::FunctionMocker<>::Invoke() @ 0x10b544c8d mesos::internal::tests::MockScheduler::resourceOffers() @ 0x117f1a432 mesos::internal::SchedulerProcess::resourceOffers() @ 0x117f2dad5 _ZN15ProtobufProcessIN5mesos8internal16SchedulerProcessEE8handlerNINS1_21ResourceOffersMessageEJRKN6google8protobuf16RepeatedPtrFieldINS0_5OfferEEERKNS8_INSt3__112basic_stringIcNSD_11char_traitsIcEENSD_9allocatorIcEEEEEEEJRKNSD_6vectorIS9_NSH_IS9_EEEERKNSN_ISJ_NSH_ISJ_EEEEEEEvPS2_MS2_FvRKN7process4UPIDEDpT1_ES10_RKSJ_DpMT_KFT0_vE @ 0x117f2fe32 _ZNSt3__128__invoke_void_return_wrapperIvE6__callIJRNS_6__bindIRFvPN5mesos8internal16SchedulerProcessEMS6_FvRKN7process4UPIDERKNS_6vectorINS4_5OfferENS_9allocatorISD_EEEERKNSC_INS_12basic_stringIcNS_11char_traitsIcEENSE_IcEEEENSE_ISN_EEEEESB_RKSN_MNS5_21ResourceOffersMessageEKFRKN6google8protobuf16RepeatedPtrFieldISD_EEvEMSW_KFRKNSZ_ISN_EEvEEJRS7_RST_RKNS_12placeholders4__phILi1EEERKNS1F_ILi2EEERS14_RS19_EEESB_SV_EEEvDpOT_ @ 0x117f2f79c _ZNSt3__110__function6__funcINS_6__bindIRFvPN5mesos8internal16SchedulerProcessEMS5_FvRKN7process4UPIDERKNS_6vectorINS3_5OfferENS_9allocatorISC_EEEERKNSB_INS_12basic_stringIcNS_11char_traitsIcEENSD_IcEEEENSD_ISM_EEEEESA_RKSM_MNS4_21ResourceOffersMessageEKFRKN6google8protobuf16RepeatedPtrFieldISC_EEvEMSV_KFRKNSY_ISM_EEvEEJRS6_RSS_RKNS_12placeholders4__phILi1EEERKNS1E_ILi2EEERS13_RS18_EEENSD_IS1N_EEFvSA_SU_EEclESA_SU_ @ 0x10b669322 std::__1::function<>::operator()() @ 0x117f0dff2 ProtobufProcess<>::visit() @ 0x112a19aae process::MessageEvent::visit() @ 0x1150a7e71 process::ProcessBase::serve() @ 0x11296d59b process::ProcessManager::resume() @ 0x112ac270b process::ProcessManager::init_threads()::$_1::operator()() @ 0x112ac2290 _ZNSt3__114__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN7process14ProcessManager12init_threadsEvE3$_1EEEEEPvSB_ @ 0x7fff68f236c1 _pthread_body @ 0x7fff68f2356d _pthread_start @ 0x7fff68f22c5d thread_start Segmentation fault: 11 ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8293","12/01/2017 23:44:44",3,"Reservation may not be allocated when the role has no quota. ""Reservations that belong to a role that has no quota may not be allocated even when the reserved resources are allocatable to the role. This is because in the current implementation the reserved resources may be counted towards the headroom left for unallocated quota limit in the second stage allocation. 
https://github.com/apache/mesos/blob/c844db9ac7c0cef59be87438c6781bfb71adcc42/src/master/allocator/mesos/hierarchical.cpp#L1764-L1767 Roles with quota do not have this issue because currently their reservations are taken care of in the first stage.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8294","12/02/2017 00:45:58",8,"Support container image basic auto gc. ""Add heuristic logic in the agent for basic auto image gc support. Please see this section for the new interface design: https://docs.google.com/document/d/1TSn7HOFLWpF3TLRVe4XyLpv6B__A1tk-tU16B1ZbsCI/edit#heading=h.iepp3ce9i22i ""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0 +"MESOS-8295","12/02/2017 00:51:19",2,"Add excluded image parameter to containerizer::pruneImages() interface. ""Add excluded image parameter to containerizer::pruneImages() interface.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0 +"MESOS-8297","12/04/2017 17:59:01",5,"Built-in driver-based executors ignore kill task if the task has not been launched. ""If docker executor receives a kill task request and the task has never been launch, the request is ignored. We now know that: the executor has never received the registration confirmation, hence has ignored the launch task request, hence the task has never started. And this is how the executor enters an idle state, waiting for registration and ignoring kill task requests.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-8310","12/07/2017 17:14:46",2,"Document container image garbage collection. ""Document container image garbage collection.""","",0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0 +"MESOS-8314","12/08/2017 09:58:35",3,"Add authorization to display of resource provider information in API calls and endpoints ""The {{GET_RESOURCE_PROVIDERS}} call is used to list all resource providers known to a Mesos agent. We akso display resource provider infos for the master's {{GET_AGENTS}} call. These call needs to be authorized.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8325","12/12/2017 23:50:17",2,"Mesos containerizer does not properly handle old running containers ""We were testing an upgrade scenario recently and encountered the following assertion failure: Looking into {{Slave::_launch}}, indeed we find an unguarded access to the parent container's {{ContainerConfig}} [here|https://github.com/apache/mesos/blob/c320ab3b2dc4a16de7e060b9e15e9865a73389b0/src/slave/containerizer/mesos/containerizer.cpp#L1716]. We recently [added checkpointing|https://github.com/apache/mesos/commit/03a2a4dfa47b1d47c5eb23e81f5ef8213e46d545] of {{ContainerConfig}} to the Mesos containerizer. 
It seems that we are not appropriately handling upgrades, when there may be old containers running for which we do not expect to recover a {{ContainerConfig}}."""," Dec 12 16:45:42 agent.hostname mesos-agent[20788]: I1212 16:45:42.693977 20810 http.cpp:3116] Processing LAUNCH_NESTED_CONTAINER_SESSION call for container 'a89b211a-4549-462d-9cc7-0ea2bac2f729.1c262420-7525-4fee-99c1-aff4f66996bd.check-a41362ae-13c6-4750-990e-a1a0b2792b5f' Dec 12 16:45:42 agent.hostname mesos-agent[20788]: I1212 16:45:42.695179 20807 containerizer.cpp:1169] Trying to chown '/var/lib/mesos/slave/slaves/aaf0a62f-a6eb-4c1d-80db-5fdd26fe8008-S12/frameworks/dcf5f8b5-86a8-44df-ac03-b39404239ad8-0377/executors/kafka__68baefd4-aa8c-4b97-a23e-eb6a73fa91f6/runs/a89b211a-4549-462d-9cc7-0ea2bac2f729/containers/1c262420-7525-4fee-99c1-aff4f66996bd/containers/check-a41362ae-13c6-4750-990e-a1a0b2792b5f' to user 'nobody' Dec 12 16:45:42 agent.hostname mesos-agent[20788]: W1212 16:45:42.695309 20807 containerizer.cpp:1198] Cannot determine executor_info for root container 'a89b211a-4549-462d-9cc7-0ea2bac2f729' which has no config recovered. Dec 12 16:45:42 agent.hostname mesos-agent[20788]: I1212 16:45:42.695327 20807 containerizer.cpp:1203] Starting container a89b211a-4549-462d-9cc7-0ea2bac2f729.1c262420-7525-4fee-99c1-aff4f66996bd.check-a41362ae-13c6-4750-990e-a1a0b2792b5f Dec 12 16:45:42 agent.hostname mesos-agent[20788]: I1212 16:45:42.695829 20807 containerizer.cpp:2932] Transitioning the state of container a89b211a-4549-462d-9cc7-0ea2bac2f729.1c262420-7525-4fee-99c1-aff4f66996bd.check-a41362ae-13c6-4750-990e-a1a0b2792b5f from PROVISIONING to PREPARING Dec 12 16:45:42 agent.hostname mesos-agent[20788]: I1212 16:45:42.700569 20811 systemd.cpp:98] Assigned child process '20941' to 'mesos_executors.slice' Dec 12 16:45:42 agent.hostname mesos-agent[20788]: I1212 16:45:42.702945 20811 systemd.cpp:98] Assigned child process '20942' to 'mesos_executors.slice' Dec 12 16:45:42 agent.hostname mesos-agent[20788]: I1212 16:45:42.706069 20806 switchboard.cpp:575] Created I/O switchboard server (pid: 20943) listening on socket file '/tmp/mesos-io-switchboard-74af71bb-2385-4dde-9762-94d0196124d3' for container a89b211a-4549-462d-9cc7-0ea2bac2f729.1c262420-7525-4fee-99c1-aff4f66996bd.check-a41362ae-13c6-4750-990e-a1a0b2792b5f Dec 12 16:45:42 agent.hostname mesos-agent[20788]: mesos-agent: /pkg/src/mesos/3rdparty/stout/include/stout/option.hpp:115: T& Option::get() & [with T = mesos::slave::ContainerConfig]: Assertion `isSome()' failed. 
Dec 12 16:45:42 agent.hostname mesos-agent[20788]: *** Aborted at 1513097142 (unix time) try """"date -d @1513097142"""" if you are using GNU date *** Dec 12 16:45:42 agent.hostname mesos-agent[20788]: PC: @ 0x7f472f2851f7 __GI_raise Dec 12 16:45:42 agent.hostname mesos-agent[20788]: *** SIGABRT (@0x5134) received by PID 20788 (TID 0x7f472a2bf700) from PID 20788; stack trace: *** Dec 12 16:45:42 agent.hostname mesos-agent[20788]: @ 0x7f472f6225e0 (unknown) Dec 12 16:45:42 agent.hostname mesos-agent[20788]: @ 0x7f472f2851f7 __GI_raise Dec 12 16:45:42 agent.hostname mesos-agent[20788]: @ 0x7f472f2868e8 __GI_abort Dec 12 16:45:42 agent.hostname mesos-agent[20788]: @ 0x7f472f27e266 __assert_fail_base Dec 12 16:45:42 agent.hostname mesos-agent[20788]: @ 0x7f472f27e312 __GI___assert_fail Dec 12 16:45:42 agent.hostname mesos-agent[20788]: @ 0x7f4731c481e3 _ZNR6OptionIN5mesos5slave15ContainerConfigEE3getEv.part.170 Dec 12 16:45:42 agent.hostname mesos-agent[20788]: @ 0x7f4731c61c2d mesos::internal::slave::MesosContainerizerProcess::_launch() Dec 12 16:45:42 agent.hostname mesos-agent[20788]: @ 0x7f4731c7f403 _ZN5cpp176invokeIZN7process8dispatchIN5mesos8internal5slave13Containerizer12LaunchResultENS5_25MesosContainerizerProcessERKNS3_11ContainerIDERK6OptionINS3_5slave11ContainerIOEERKSt3mapISsSsSt4lessISsESaISt4pairIKSsSsEEERKSC_ISsESB_SH_SR_SU_EENS1_6FutureIT_EERKNS1_3PIDIT0_EEMSZ_FSX_T1_T2_T3_T4_EOT5_OT6_OT7_OT8_EUlSt10unique_ptrINS1_7PromiseIS7_EESt14default_deleteIS1J_EEOS9_OSF_OSP_OSS_PNS1_11ProcessBaseEE_IS1M_S9_SF_SP_SS_S1S_EEEDTclcl7forwardISW_Efp_Espcl7forwardIT0_Efp0_EEEOSW_DpOS1U_ Dec 12 16:45:42 agent.hostname mesos-agent[20788]: @ 0x7f4731c7f4f1 _ZNO6lambda12CallableOnceIFvPN7process11ProcessBaseEEE10CallableFnINS_8internal7PartialIZNS1_8dispatchIN5mesos8internal5slave13Containerizer12LaunchResultENSC_25MesosContainerizerProcessERKNSA_11ContainerIDERK6OptionINSA_5slave11ContainerIOEERKSt3mapISsSsSt4lessISsESaISt4pairIKSsSsEEERKSJ_ISsESI_SO_SY_S11_EENS1_6FutureIT_EERKNS1_3PIDIT0_EEMS16_FS14_T1_T2_T3_T4_EOT5_OT6_OT7_OT8_EUlSt10unique_ptrINS1_7PromiseISE_EESt14default_deleteIS1Q_EEOSG_OSM_OSW_OSZ_S3_E_IS1T_SG_SM_SW_SZ_St12_PlaceholderILi1EEEEEEclEOS3_ Dec 12 16:45:42 agent.hostname mesos-agent[20788]: @ 0x7f47325dbb31 process::ProcessBase::consume() Dec 12 16:45:42 agent.hostname mesos-agent[20788]: @ 0x7f47325ea882 process::ProcessManager::resume() Dec 12 16:45:42 agent.hostname mesos-agent[20788]: @ 0x7f47325efcf6 _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv Dec 12 16:45:42 agent.hostname mesos-agent[20788]: @ 0x7f472fafa230 (unknown) Dec 12 16:45:42 agent.hostname mesos-agent[20788]: @ 0x7f472f61ae25 start_thread Dec 12 16:45:42 agent.hostname mesos-agent[20788]: @ 0x7f472f34834d __clone Dec 12 16:45:42 agent.hostname systemd[1]: dcos-mesos-slave.service: main process exited, code=killed, status=6/ABRT Dec 12 16:45:42 agent.hostname systemd[1]: Unit dcos-mesos-slave.service entered failed state. Dec 12 16:45:42 agent.hostname systemd[1]: dcos-mesos-slave.service failed. ",0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8327","12/13/2017 06:37:44",13,"Add container-specific CGroup FS mounts under /sys/fs/cgroup/* to Mesos containers ""Containers launched with Unified Containerizer do not include container-specific CGroup FS mounts under {{/sys/fs/cgroup}}, which are created by default by Docker (usually readonly for unprivileged containers). 
Let's honor the same convention for Mesos containers. For example, this is needed by Uber's [{{automaxprocs}}|https://github.com/uber-go/automaxprocs] patch for Go programs, which amends {{GOMAXPROCS}} per CPU quota and requires access to the CPU cgroup subsystem.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8339","12/15/2017 23:47:47",3,"Quota headroom may be insufficiently held when role has more reservation than quota. ""If a role has more reservation than its quota, the current quota headroom calculation is insufficient in guaranteeing quota allocation. Consider, role `A` with 100 (units of resource, same below) reservation and 10 quota and role `B` with no reservation and 90 quota. Let's say there is no allocation yet. The existing allocator would calculate that the required headroom is 100. And since unallocated quota role reserved resource is also 100, no additional resources would be held back for the headroom. While role `A` would have no problem getting its quota satisfied. Role `B` may have difficulty getting any resources because the """"headroom"""" can only be allocated to `A`. The solution is to calculate per-role headroom before aggregating the quantity. And unallocated reservations should not count towards quota headroom. In the above case. The headroom for role `A` should be zero, the headroom for role `B` should be 90. Thus the aggregated headroom will be `90`.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8344","12/19/2017 00:26:40",2,"Improve JSON v1 operator API performance. ""According to some user reports, a simple comparison of the v1 operator API (using the """"GET_TASKS"""" call) and the v0 /tasks HTTP endpoint shows that the v1 API suffers from an inefficient implementation: {noformat: title=Curl Timing} Operator HTTP API (GET_TASKS): 0.02s user 0.08s system 1% cpu 9.883 total Old /tasks API: /tasks: 0.00s user 0.00s system 1% cpu 0.222 total {noformat} Looking over the implementation, it suffers from the same issues we originally had with the JSON endpoints: * Excessive copying up the """"tree"""" of state building calls. * Building up the state object as opposed to directly serializing it."""," Operator HTTP API (GET_TASKS): 0.02s user 0.08s system 1% cpu 9.883 total Old /tasks API: /tasks: 0.00s user 0.00s system 1% cpu 0.222 total ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8350","12/20/2017 14:19:33",2,"Resource provider-capable agents not correctly synchronizing checkpointed agent resources on reregistration ""For resource provider-capable agents the master does not re-send checkpointed resources on agent reregistration; instead the checkpointed resources sent as part of the {{ReregisterSlaveMessage}} should be used. This is not what happens in reality. If e.g., checkpointing of an offer operation fails and the agent fails over the checkpointed resources would, as expected, not be reflected in the agent, but would still be assumed in the master. A workaround is to fail over the master which would lead to the newly elected master bootstrapping agent state from {{ReregisterSlaveMessage}}.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8352","12/21/2017 04:01:49",3,"Resources may get over allocated to some roles while fail to meet the quota of other roles. 
""In the quota role allocation stage, if a role gets some resources on an agent to meet its quota, it will also get all other resources on the same agent that it does not have quota for. This may starve roles behind it that have quotas set for those resources. To fix that, we need to track quota headroom in the quota role allocation stage. In that stage, if a role has no quota set for a scalar resource, it will get that resource only when two conditions are both met: - It got some other resources on the same agent to meet its quota; And - After allocating those resources, quota headroom is still above the required amount.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8361","12/27/2017 21:55:07",3,"Example frameworks to support launching mesos-local. ""The scheduler driver and library support implicit launching of mesos-local for a convenient test setup. Some of our example frameworks account for this in supporting implicit ACL rendering and more. We should unify the experience by documenting this behaviour and adding it to all example frameworks.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8365","12/28/2017 20:04:59",3,"Create AuthN support for prune images API ""We want to make sure there is a way to configure AuthZ for new API added in MESOS-8360.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0 +"MESOS-8373","01/03/2018 16:59:16",3,"Test reconciliation after operation is dropped en route to agent ""Since new code paths were added to handle operations on resources in 1.5, we should test that such operations are reconciled correctly after an operation is dropped on the way from the master to the agent.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8388","01/03/2018 22:37:07",2,"Show LRP resources in master and agent endpoints. ""Currently, only resource provider info is shown. We should also show the resources provided by resource providers.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8389","01/04/2018 00:40:59",5,"Notion of ""removable"" task in master code is inaccurate. ""In the past, the notion of a """"removable"""" task meant: the task is terminal and acknowledged. It appears now that a removable task is defined purely by its state (terminal or unreachable) but not whether the terminal update is acknowledged. As a result, the code that is calling this function ({{isRemovable}}) ends up being unintuitive. One example of a confusing piece of code is within {{updateTask}}. Here, we have logic which says, if the task is removable, recover the resources *but don't remove it*. This seems more intuitive if directly described as: """"if the task is no longer consuming resources, then (e.g. transitioned to terminal or unreachable) then recover the resources"""". If one looks up the documentation of {{isRemovable}}, it says """"When a task becomes removable, it is erased from the master's primary task data structures"""", but that isn't accurate since this function doesn't say whether the terminal task has been acknowledged, which is required for a task to be removable. I think an easy improvement here would be to move this notion of removable towards something like {{isTerminalOrUnreachable}}. 
We could also think about how to name this concept more generally, like {{canReleaseResources}} to describe whether the task's resources are considered allocated. If we do introduce a notion of {{isRemovable}}, it seems it should be saying whether the task could be removed from the master, which includes checking that terminal tasks have been acknowledged.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8390","01/04/2018 00:44:13",3,"Notion of ""transitioning"" agents in the master is now inaccurate. ""While [~xujyan] and I were discussing https://reviews.apache.org/r/57535/ we found a recent change that made the concept of """"in transition"""" agents confusing. See my comment here: https://reviews.apache.org/r/52083/#review170066 Given the new semantics described in the summary of https://reviews.apache.org/r/52083, the need for a separate method {{transitioning}} no longer exists because now it just wraps around a single variable {{unrecovered}} and gives it an alias which is less intuitive (because when reading the word transitioning one would think it has a more general meaning).""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8391","01/04/2018 02:27:07",3,"Mesos agent doesn't notice that a pod task exits or crashes after the agent restart ""h4. (1) Agent doesn't detect that a pod task exits/crashes # Create a Marathon pod with two containers which just do {{sleep 10000}}. # Restart the Mesos agent on the node the pod got launched. # Kill one of the pod tasks *Expected result*: The Mesos agent detects that one of the tasks got killed, and forwards {{TASK_FAILED}} status to Marathon. *Actual result*: The Mesos agent does nothing, and the Mesos master thinks that both tasks are running just fine. Marathon doesn't take any action because it doesn't receive any update from Mesos. h4. (2) After the agent restart, it detects that the task crashed, forwards the correct status update, but the other task stays in {{TASK_KILLING}} state forever # Perform steps in (1). # Restart the Mesos agent *Expected result*: The Mesos agent detects that one of the tasks got crashed, forwards the corresponding status update, and kills the other task too. *Actual result*: The Mesos agent detects that one of the tasks got crashed, forwards the corresponding status update, but the other task stays in `TASK_KILLING` state forever. Please note, that after another agent restart, the other tasks gets finally killed and the correct status updates get propagated all the way to Marathon.""","",0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-8402","01/05/2018 14:50:14",8,"Resource provider manager should persist resource provider information ""Currently, the resource provider manager used to abstract away resource provider subscription and state does not persist resource provider information. It has no notion of e.g., disconnected or forcibly removed resource providers. This makes it hard to implement a number of features, e.g., * removal of a resource provider and make it possible to garbage collect its cached state (e.g., in the resource provider manager, agent, or master), or * controlling resource provider resubscription, e.g., by observing and enforcing resubscription timeouts. 
We should extend the resource provider manager to persist the state of each resource provider (e.g., {{CONNECTED}}, {{DISCONNECTED}}, its resources and other attributes). This information should also be exposed in resource provider reconciliation, and be reflected in master or agent endpoints.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8403","01/05/2018 15:03:13",3,"Add agent HTTP API operator call to mark local resource providers as gone ""It is currently not possible to mark local resource providers as gone (e.g., after agent reconfiguration). As resource providers registered at earlier times could still be cached in a number of places, e.g., the agent or the master, the only way to e.g., prevent this cache from growing too large is to fail over caching components (to e.g., prevent an agent cache to update a fresh master cache during reconciliation). Showing unavailable and known to be gone resource providers in various endpoints is likely also confusing to users. We should add an operator call to mark resource providers as gone. While the entity managing resource provider subscription state is the resource provider manager, it still seems to make sense to add this operator call to the agent API as currently only local resource providers are supported. The agent would then forward the call to the resource provider manager which would transition its state for the affected resource provider, e.g., setting its state to {{GONE}} and removing it from the list of known resource providers, and then send out an update to its subscribers.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8411","01/08/2018 01:13:12",5,"Killing a queued task can lead to the command executor never terminating. ""If a task is killed while the executor is re-registering, we will remove it from queued tasks and shut down the executor if all the its initial tasks could not be delivered. However, there is a case (within {{Slave::___run}}) where we leave the executor running, the race is: # Command-executor task launched. # Command executor sends registration message. Agent tells containerizer to update the resources before it sends the tasks to the executor. # Kill arrives, and we synchronously remove the task from queued tasks. # Containerizer finishes updating the resources, and in {{Slave::___run}} the killed task is ignored. # Command executor stays running! Executors could have a timeout to handle this case, but it's not clear that all executors will implement this correctly. It would be better to have a defensive policy that will shut down an executor if all of its initial batch of tasks were killed prior to delivery. In order to implement this, one approach discussed with [~vinodkone] is to look at the running + terminated but unacked + completed tasks, and if empty, shut the executor down in the {{Slave::___run}} path. This will require us to check that the completed task cache size is set to at least 1, and this also assumes that the completed tasks are not cleared based on time or during agent recovery.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8416","01/08/2018 23:35:40",5,"CHECK failure if trying to recover nested containers but the framework checkpointing is not enabled. "" If the framework does not enable the checkpointing. It means there is no slave state checkpointed. 
But containers are still checkpointed at the runtime dir, which mean recovering a nested container would cause the CHECK failure due to its parent's sandbox dir is unknown."""," I0108 23:05:25.313344 31743 slave.cpp:620] Agent attributes: [ ] I0108 23:05:25.313832 31743 slave.cpp:629] Agent hostname: vagrant-ubuntu-wily-64 I0108 23:05:25.314916 31763 task_status_update_manager.cpp:181] Pausing sending task status updates I0108 23:05:25.323496 31766 state.cpp:66] Recovering state from '/var/lib/mesos/slave/meta' I0108 23:05:25.323639 31766 state.cpp:724] No committed checkpointed resources found at '/var/lib/mesos/slave/meta/resources/resources.info' I0108 23:05:25.326169 31760 task_status_update_manager.cpp:207] Recovering task status update manager I0108 23:05:25.326954 31759 containerizer.cpp:674] Recovering containerizer F0108 23:05:25.331529 31759 containerizer.cpp:919] CHECK_SOME(container->directory): is NONE *** Check failure stack trace: *** @ 0x7f769dbc98bd google::LogMessage::Fail() @ 0x7f769dbc8c8e google::LogMessage::SendToLog() @ 0x7f769dbc958d google::LogMessage::Flush() @ 0x7f769dbcca08 google::LogMessageFatal::~LogMessageFatal() @ 0x556cb4c2b937 _CheckFatal::~_CheckFatal() @ 0x7f769c5ac653 mesos::internal::slave::MesosContainerizerProcess::recover() ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8417","01/09/2018 02:55:03",1,"Mesos can get ""stuck"" when a Process throws an exception. ""When a {{Process}} throws an exception, we log it, terminate the throwing {{Process}}, and continue to run. However, currently there exists no known user-level code that I'm aware of that handles the unexpected termination due to an uncaught exception. Generally, this means that when an exception is thrown (e.g. a bad call to {{std::map::at}}), the {{Process}} terminates with a log message but things get """"stuck"""" and the user has to debug what is wrong / kill the process. Libprocess would likely need to provide some primitives to better support handling unexpected termination of a {{Process}} in order for us to provide a strategy where we continue running. In the short term, it would be prudent to abort libprocess if any {{Process}} throws an exception so that users can observe the issue and we can get it fixed.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8419","01/09/2018 09:32:46",1,"RP manager incorrectly setting framework ID leads to CHECK failure ""The resource provider manager [unconditionally sets the framework ID|https://github.com/apache/mesos/blob/3290b401d20f2db2933294470ea8a2356a47c305/src/resource_provider/manager.cpp#L637] when forwarding operation status updates to the agent. This is incorrect, for example, when the resource provider [generates OPERATION_DROPPED updates during reconciliation|https://github.com/apache/mesos/blob/3290b401d20f2db2933294470ea8a2356a47c305/src/resource_provider/storage/provider.cpp#L1653-L1657], and leads to protobuf errors in this case since the framework ID's required {{value}} field is left unset.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8422","01/10/2018 00:59:37",5,"Master's UpdateSlave handler not correctly updating terminated operations ""I created a test that verifies that operation status updates are resent to the master after being dropped en route to it (MESOS-8420). The test does the following: # Creates a volume from a RAW disk resource. 
# Drops the first `UpdateOperationStatusMessage` message from the agent to the master, so that it isn't acknowledged by the master. # Restarts the agent. # Verifies that the agent resends the operation status update. The good news are that the agent is resending the operation status update, the bad news are that it triggers a CHECK failure that crashes the master. Here are the relevant sections of the log produced by the test: We can see that once the SLRP reregisters with the agent, the following happens: # The agent will send an {{UpdateSlave}} message to the master including the converted resources and the {{CREATE_VOLUME}} operation with the status {{OPERATION_FINISHED}}. # The master will update the agent's resources, including the volume created by the operation. # The agent will resend the operation status update. # The master will try to apply the operation and crash, because it already updated the agent's resources on step #2."""," [ RUN ] StorageLocalResourceProviderTest.ROOT_RetryOperationStatusUpdateAfterRecovery [...] I0109 16:36:08.515882 24106 master.cpp:4284] Processing ACCEPT call for offers: [ 046b3f21-6e97-4a56-9a13-773f7d481efd-O0 ] on agent 046b3f21-6e97-4a56-9a13-773f7d481efd-S0 at slave(2)@10.0.49.2:40681 (core-dev) for framework 046b3f21-6e97-4a56-9a13-773f7d481efd-0000 (default) at scheduler-2a48a684-64b4-4b4d-a396-6491adb4f2b1@10.0.49.2:40681 I0109 16:36:08.516487 24106 master.cpp:5260] Processing CREATE_VOLUME operation with source disk(allocated: storage)(reservations: [(DYNAMIC,storage)])[RAW(,volume-default)]:4096 from framework 046b3f21-6e97-4a56-9a13-773f7d481efd-0000 (default) at scheduler-2a48a684-64b4-4b4d-a396-6491adb4f2b1@10.0.49.2:40681 to agent 046b3f21-6e97-4a56-9a13-773f7d481efd-S0 at slave(2)@10.0.49.2:40681 (core-dev) I0109 16:36:08.518704 24106 master.cpp:10622] Sending operation '' (uuid: 18b4c4a5-d162-4dcf-bb21-a13c6ee0f408) to agent 046b3f21-6e97-4a56-9a13-773f7d481efd-S0 at slave(2)@10.0.49.2:40681 (core-dev) I0109 16:36:08.521210 24130 provider.cpp:504] Received APPLY_OPERATION event I0109 16:36:08.521276 24130 provider.cpp:1368] Received CREATE_VOLUME operation '' (uuid: 18b4c4a5-d162-4dcf-bb21-a13c6ee0f408) I0109 16:36:08.523131 24432 test_csi_plugin.cpp:305] CreateVolumeRequest '{""""version"""":{""""minor"""":1},""""name"""":""""18b4c4a5-d162-4dcf-bb21-a13c6ee0f408"""",""""capacityRange"""":{""""requiredBytes"""":""""4294967296"""",""""limitBytes"""":""""4294967296""""},""""volumeCapabilities"""":[{""""mount"""":{},""""accessMode"""":{""""mode"""":""""SINGLE_NODE_WRITER""""}}]}' I0109 16:36:08.525806 24152 provider.cpp:2635] Applying conversion from 'disk(allocated: storage)(reservations: [(DYNAMIC,storage)])[RAW(,volume-default)]:4096' to 'disk(allocated: storage)(reservations: [(DYNAMIC,storage)])[MOUNT(18b4c4a5-d162-4dcf-bb21-a13c6ee0f408,volume-default):./csi/org.apache.mesos.csi.test/slrp_test/mounts/18b4c4a5-d162-4dcf-bb21-a13c6ee0f408]:4096' for operation (uuid: 18b4c4a5-d162-4dcf-bb21-a13c6ee0f408) I0109 16:36:08.528725 24134 status_update_manager_process.hpp:152] Received operation status update OPERATION_FINISHED (Status UUID: 0c79cdf2-b89d-453b-bb62-57766e968dd0) for operation UUID 18b4c4a5-d162-4dcf-bb21-a13c6ee0f408 of framework '046b3f21-6e97-4a56-9a13-773f7d481efd-0000' on agent 046b3f21-6e97-4a56-9a13-773f7d481efd-S0 I0109 16:36:08.529207 24134 status_update_manager_process.hpp:929] Checkpointing UPDATE for operation status update OPERATION_FINISHED (Status UUID: 0c79cdf2-b89d-453b-bb62-57766e968dd0) for operation UUID 
18b4c4a5-d162-4dcf-bb21-a13c6ee0f408 of framework '046b3f21-6e97-4a56-9a13-773f7d481efd-0000' on agent 046b3f21-6e97-4a56-9a13-773f7d481efd-S0 I0109 16:36:08.573177 24150 http.cpp:1185] HTTP POST for /slave(2)/api/v1/resource_provider from 10.0.49.2:53598 I0109 16:36:08.573974 24139 slave.cpp:7065] Handling resource provider message 'UPDATE_OPERATION_STATUS: (uuid: 18b4c4a5-d162-4dcf-bb21-a13c6ee0f408) for framework 046b3f21-6e97-4a56-9a13-773f7d481efd-0000 (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)' I0109 16:36:08.574154 24139 slave.cpp:7409] Updating the state of operation ' with no ID (uuid: 18b4c4a5-d162-4dcf-bb21-a13c6ee0f408) for framework 046b3f21-6e97-4a56-9a13-773f7d481efd-0000 (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED) I0109 16:36:08.574785 24139 slave.cpp:7249] Forwarding status update of operation with no ID (operation_uuid: 18b4c4a5-d162-4dcf-bb21-a13c6ee0f408) for framework 046b3f21-6e97-4a56-9a13-773f7d481efd-0000 I0109 16:36:08.583748 24084 slave.cpp:931] Agent terminating I0109 16:36:08.584115 24144 master.cpp:1305] Agent 046b3f21-6e97-4a56-9a13-773f7d481efd-S0 at slave(2)@10.0.49.2:40681 (core-dev) disconnected [...] I0109 16:36:08.655766 24140 slave.cpp:1378] Re-registered with master master@10.0.49.2:40681 I0109 16:36:08.655936 24117 task_status_update_manager.cpp:188] Resuming sending task status updates I0109 16:36:08.655995 24149 hierarchical.cpp:669] Agent 046b3f21-6e97-4a56-9a13-773f7d481efd-S0 (core-dev) updated with total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000] I0109 16:36:08.656008 24140 slave.cpp:1423] Forwarding agent update {""""operations"""":{},""""resource_version_uuid"""":{""""value"""":""""icuAKyO6TymMt2Y9vyF6Jg==""""},""""slave_id"""":{""""value"""":""""046b3f21-6e97-4a56-9a13-773f7d481efd-S0""""},""""update_oversubscribed_resources"""":true} I0109 16:36:08.656121 24149 hierarchical.cpp:754] Agent 046b3f21-6e97-4a56-9a13-773f7d481efd-S0 reactivated W0109 16:36:08.656481 24113 master.cpp:7277] !!!! update slave message: slave_id { value: """"046b3f21-6e97-4a56-9a13-773f7d481efd-S0"""" } update_oversubscribed_resources: true operations { } resource_version_uuid { value: """"\211\313\200+#\272O)\214\267f=\277!z&"""" } I0109 16:36:08.656637 24113 master.cpp:7320] Received update of agent 046b3f21-6e97-4a56-9a13-773f7d481efd-S0 at slave(3)@10.0.49.2:40681 (core-dev) with total oversubscribed resources {} W0109 16:36:08.657387 24113 master.cpp:7704] Performing explicit reconciliation with agent for known operation 18b4c4a5-d162-4dcf-bb21-a13c6ee0f408 since it was not present in original reconciliation message from agent I0109 16:36:08.657917 24133 hierarchical.cpp:669] Agent 046b3f21-6e97-4a56-9a13-773f7d481efd-S0 (core-dev) updated with total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000] W0109 16:36:08.658048 24125 manager.cpp:472] Dropping operation reconciliation message with operation_uuid 18b4c4a5-d162-4dcf-bb21-a13c6ee0f408 because resource provider 605b22f5-e39d-4d9f-950a-e7f44d202c01 is not subscribed I0109 16:36:08.658609 24143 container_daemon.cpp:119] Launching container 'org-apache-mesos-rp-local-storage-test--org-apache-mesos-csi-test-slrp_test--CONTROLLER_SERVICE-NODE_SERVICE' [...] 
I0109 16:36:08.689859 24130 provider.cpp:3066] Sending UPDATE_STATE call with resources 'disk(reservations: [(DYNAMIC,storage)])[MOUNT(18b4c4a5-d162-4dcf-bb21-a13c6ee0f408,volume-default):./csi/org.apache.mesos.csi.test/slrp_test/mounts/18b4c4a5-d162-4dcf-bb21-a13c6ee0f408]:4096' and 1 operations to agent 046b3f21-6e97-4a56-9a13-773f7d481efd-S0 I0109 16:36:08.690449 24130 provider.cpp:1042] Resource provider 605b22f5-e39d-4d9f-950a-e7f44d202c01 is in READY state I0109 16:36:08.690491 24105 status_update_manager_process.hpp:385] Resuming operation status update manager I0109 16:36:08.690640 24105 status_update_manager_process.hpp:394] Sending operation status update OPERATION_FINISHED (Status UUID: 0c79cdf2-b89d-453b-bb62-57766e968dd0) for operation UUID 18b4c4a5-d162-4dcf-bb21-a13c6ee0f408 of framework '046b3f21-6e97-4a56-9a13-773f7d481efd-0000' on agent 046b3f21-6e97-4a56-9a13-773f7d481efd-S0 I0109 16:36:08.693244 24131 http.cpp:1185] HTTP POST for /slave(3)/api/v1/resource_provider from 10.0.49.2:53606 I0109 16:36:08.693912 24140 http.cpp:1185] HTTP POST for /slave(3)/api/v1/resource_provider from 10.0.49.2:53606 I0109 16:36:08.693974 24115 manager.cpp:677] Received UPDATE_STATE call with resources '[{""""disk"""":{""""source"""":{""""id"""":""""18b4c4a5-d162-4dcf-bb21-a13c6ee0f408"""",""""metadata"""":{""""labels"""":[{""""key"""":""""path"""",""""value"""":""""\/tmp\/n5thZ3\/test\/4GB-18b4c4a5-d162-4dcf-bb21-a13c6ee0f408""""}]},""""mount"""":{""""root"""":"""".\/csi\/org.apache.mesos.csi.test\/slrp_test\/mounts\/18b4c4a5-d162-4dcf-bb21-a13c6ee0f408""""},""""profile"""":""""volume-default"""",""""type"""":""""MOUNT""""}},""""name"""":""""disk"""",""""provider_id"""":{""""value"""":""""605b22f5-e39d-4d9f-950a-e7f44d202c01""""},""""reservations"""":[{""""role"""":""""storage"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":4096.0},""""type"""":""""SCALAR""""}]' and 1 operations from resource provider 605b22f5-e39d-4d9f-950a-e7f44d202c01 I0109 16:36:08.694897 24144 slave.cpp:7065] Handling resource provider message 'UPDATE_STATE: 605b22f5-e39d-4d9f-950a-e7f44d202c01 disk(reservations: [(DYNAMIC,storage)])[MOUNT(18b4c4a5-d162-4dcf-bb21-a13c6ee0f408,volume-default):./csi/org.apache.mesos.csi.test/slrp_test/mounts/18b4c4a5-d162-4dcf-bb21-a13c6ee0f408]:4096' I0109 16:36:08.695184 24144 slave.cpp:7182] Forwarding new total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000]; disk(reservations: [(DYNAMIC,storage)])[MOUNT(18b4c4a5-d162-4dcf-bb21-a13c6ee0f408,volume-default):./csi/org.apache.mesos.csi.test/slrp_test/mounts/18b4c4a5-d162-4dcf-bb21-a13c6ee0f408]:4096 I0109 16:36:08.696467 24144 slave.cpp:7065] Handling resource provider message 'UPDATE_OPERATION_STATUS: (uuid: 18b4c4a5-d162-4dcf-bb21-a13c6ee0f408) for framework 046b3f21-6e97-4a56-9a13-773f7d481efd-0000 (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED)' I0109 16:36:08.696594 24144 slave.cpp:7409] Updating the state of operation ' with no ID (uuid: 18b4c4a5-d162-4dcf-bb21-a13c6ee0f408) for framework 046b3f21-6e97-4a56-9a13-773f7d481efd-0000 (latest state: OPERATION_FINISHED, status update state: OPERATION_FINISHED) I0109 16:36:08.696666 24144 slave.cpp:7249] Forwarding status update of operation with no ID (operation_uuid: 18b4c4a5-d162-4dcf-bb21-a13c6ee0f408) for framework 046b3f21-6e97-4a56-9a13-773f7d481efd-0000 W0109 16:36:08.697093 24142 master.cpp:7277] !!!! 
update slave message: slave_id { value: """"046b3f21-6e97-4a56-9a13-773f7d481efd-S0"""" } update_oversubscribed_resources: false operations { } resource_version_uuid { value: """"\211\313\200+#\272O)\214\267f=\277!z&"""" } resource_providers { providers { info { id { value: """"605b22f5-e39d-4d9f-950a-e7f44d202c01"""" } type: """"org.apache.mesos.rp.local.storage"""" name: """"test"""" default_reservations { role: """"storage"""" type: DYNAMIC } storage { plugin { type: """"org.apache.mesos.csi.test"""" name: """"slrp_test"""" containers { [...] } } } } total_resources { name: """"disk"""" type: SCALAR scalar { value: 4096 } disk { source { type: MOUNT mount { root: """"./csi/org.apache.mesos.csi.test/slrp_test/mounts/18b4c4a5-d162-4dcf-bb21-a13c6ee0f408"""" } id: """"18b4c4a5-d162-4dcf-bb21-a13c6ee0f408"""" metadata { labels { key: """"path"""" value: """"/tmp/n5thZ3/test/4GB-18b4c4a5-d162-4dcf-bb21-a13c6ee0f408"""" } } profile: """"volume-default"""" } } provider_id { value: """"605b22f5-e39d-4d9f-950a-e7f44d202c01"""" } reservations { role: """"storage"""" type: DYNAMIC } } operations { operations { framework_id { value: """"046b3f21-6e97-4a56-9a13-773f7d481efd-0000"""" } slave_id { value: """"046b3f21-6e97-4a56-9a13-773f7d481efd-S0"""" } info { type: CREATE_VOLUME create_volume { source { name: """"disk"""" type: SCALAR scalar { value: 4096 } disk { source { type: RAW profile: """"volume-default"""" } } allocation_info { role: """"storage"""" } provider_id { value: """"605b22f5-e39d-4d9f-950a-e7f44d202c01"""" } reservations { role: """"storage"""" type: DYNAMIC } } target_type: MOUNT } } latest_status { state: OPERATION_FINISHED converted_resources { name: """"disk"""" type: SCALAR scalar { value: 4096 } disk { source { type: MOUNT mount { root: """"./csi/org.apache.mesos.csi.test/slrp_test/mounts/18b4c4a5-d162-4dcf-bb21-a13c6ee0f408"""" } id: """"18b4c4a5-d162-4dcf-bb21-a13c6ee0f408"""" metadata { labels { key: """"path"""" value: """"/tmp/n5thZ3/test/4GB-18b4c4a5-d162-4dcf-bb21-a13c6ee0f408"""" } } profile: """"volume-default"""" } } allocation_info { role: """"storage"""" } provider_id { value: """"605b22f5-e39d-4d9f-950a-e7f44d202c01"""" } reservations { role: """"storage"""" type: DYNAMIC } } uuid { value: """"\014y\315\362\270\235E;\273bWvn\226\215\320"""" } } statuses { state: OPERATION_FINISHED converted_resources { name: """"disk"""" type: SCALAR scalar { value: 4096 } disk { source { type: MOUNT mount { root: """"./csi/org.apache.mesos.csi.test/slrp_test/mounts/18b4c4a5-d162-4dcf-bb21-a13c6ee0f408"""" } id: """"18b4c4a5-d162-4dcf-bb21-a13c6ee0f408"""" metadata { labels { key: """"path"""" value: """"/tmp/n5thZ3/test/4GB-18b4c4a5-d162-4dcf-bb21-a13c6ee0f408"""" } } profile: """"volume-default"""" } } allocation_info { role: """"storage"""" } provider_id { value: """"605b22f5-e39d-4d9f-950a-e7f44d202c01"""" } reservations { role: """"storage"""" type: DYNAMIC } } uuid { value: """"\014y\315\362\270\235E;\273bWvn\226\215\320"""" } } uuid { value: """"\030\264\304\245\321bM\317\273!\241/slaves//frameworks//executors//runs/ # //slaves//frameworks//executors/ # //slaves//frameworks/ For 1 and 2, the code to gc them is like this: So here {{then()}} is used which means we will only do the detach when the gc succeeds. But the problem is the order of 1, 2 and 3 deleted by gc can not be guaranteed, from my test, 3 will be deleted first for most of times. 
Since 3 is the parent directory of 1 and 2, so the gc for 1 and 2 will fail: So we will NOT do the detach for 1 and 2 which is a leak."""," garbageCollect(path) .then(defer(self(), &Self::detachFile, path)); I0111 00:19:33.001655 42889 gc.cpp:208] Deleting /home/qzhang/opt/mesos/slaves/9dea9207-5730-4f7a-b9a5-f772e035253b-S0/frameworks/c6f6659d-a402-41e3-891a-aaaa0c887a3b-0000 I0111 00:19:33.002576 42889 gc.cpp:218] Deleted '/home/qzhang/opt/mesos/slaves/9dea9207-5730-4f7a-b9a5-f772e035253b-S0/frameworks/c6f6659d-a402-41e3-891a-aaaa0c887a3b-0000' I0111 00:19:33.004551 42893 gc.cpp:208] Deleting /home/qzhang/opt/mesos/slaves/9dea9207-5730-4f7a-b9a5-f772e035253b-S0/frameworks/c6f6659d-a402-41e3-891a-aaaa0c887a3b-0000/executors/default-executor/runs/b067936a-f4c4-4091-b786-4dd4d4d6da15 W0111 00:19:33.004622 42893 gc.cpp:212] Failed to delete '/home/qzhang/opt/mesos/slaves/9dea9207-5730-4f7a-b9a5-f772e035253b-S0/frameworks/c6f6659d-a402-41e3-891a-aaaa0c887a3b-0000/executors/default-executor/runs/b067936a-f4c4-4091-b786-4dd4d4d6da15': No such file or directory I0111 00:19:33.006367 42923 gc.cpp:208] Deleting /home/qzhang/opt/mesos/slaves/9dea9207-5730-4f7a-b9a5-f772e035253b-S0/frameworks/c6f6659d-a402-41e3-891a-aaaa0c887a3b-0000/executors/default-executor W0111 00:19:33.006466 42923 gc.cpp:212] Failed to delete '/home/qzhang/opt/mesos/slaves/9dea9207-5730-4f7a-b9a5-f772e035253b-S0/frameworks/c6f6659d-a402-41e3-891a-aaaa0c887a3b-0000/executors/default-executor': No such file or directory ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8446","01/15/2018 13:45:39",1,"Agent miss to detach `virtualLatestPath` for the executor's sandbox during recovery ""In {{Framework::recoverExecutor()}}, we attach {{executor->directory}} to 3 virtual paths: (1) /agent_workdir/frameworks/FID/executors/EID/runs/CID (2) /agent_workdir/frameworks/FID/executors/EID/runs/latest (3) /frameworks/FID/executors/EID/runs/latest But in this method, when we find the executor completes, we only do detach for (1) and (2) but not (3). We should do detach for (3) too as what we do in {{Slave::removeExecutor}}, otherwise, it will be a leak.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8447","01/16/2018 09:42:24",1,"Incomplete output of apply-reviews.py --dry-run ""The script {{support/apply-reviews.py}} has a flag {{--dry-run}} which should dump the commands which would be performed. This flag is useful to e.g., reorder patch chains or to manually resolve intermediate conflicts while still being able to pull a full chain. The output looks like this Trying to replay that dry run leads to an error since the commands to create the commit message files are not printed. 
We should add these commands to the output."""," % ./support/apply-reviews.py -r 62447 -c -n -3 --dry-run wget --no-check-certificate --no-verbose -O 62160.patch https://reviews.apache.org/r/62160/diff/raw/ git apply --index 62160.patch --3way git commit --author """"Benno Evers """" -aF """"62160.message"""" wget --no-check-certificate --no-verbose -O 62161.patch https://reviews.apache.org/r/62161/diff/raw/ git apply --index 62161.patch --3way git commit --author """"Benno Evers """" -aF """"62161.message"""" wget --no-check-certificate --no-verbose -O 62444.patch https://reviews.apache.org/r/62444/diff/raw/ git apply --index 62444.patch --3way git commit --author """"Benno Evers """" -aF """"62444.message"""" wget --no-check-certificate --no-verbose -O 62445.patch https://reviews.apache.org/r/62445/diff/raw/ git apply --index 62445.patch --3way git commit --author """"Benno Evers """" -aF """"62445.message"""" wget --no-check-certificate --no-verbose -O 62447.patch https://reviews.apache.org/r/62447/diff/raw/ git apply --index 62447.patch --3way git commit --author """"Benno Evers """" -aF """"62447.message""""",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8453","01/17/2018 15:44:36",3,"ExecutorAuthorizationTest.RunTaskGroup segfaults. "" Full log attached."""," 14:32:50 *** Aborted at 1516199570 (unix time) try """"date -d @1516199570"""" if you are using GNU date *** 14:32:50 PC: @ 0x7f36ef13f8b0 std::_Hashtable<>::count() 14:32:50 *** SIGSEGV (@0x107c7f88978) received by PID 19547 (TID 0x7f36e2722700) from PID 18446744072769538424; stack trace: *** 14:32:50 @ 0x7f36dcc763fd (unknown) 14:32:50 @ 0x7f36dcc7b419 (unknown) 14:32:50 @ 0x7f36dcc6f918 (unknown) 14:32:50 @ 0x7f36eb99e330 (unknown) 14:32:50 @ 0x7f36ef13f8b0 std::_Hashtable<>::count() 14:32:50 @ 0x7f36ef12bd22 _ZZN7process11ProcessBase8_consumeERKNS0_12HttpEndpointERKSsRKNS_5OwnedINS_4http7RequestEEEENKUlRK6OptionINS7_14authentication20AuthenticationResultEEE0_clESH_ 14:32:50 @ 0x7f36ef12c834 _ZNO6lambda12CallableOnceIFN7process6FutureINS1_4http8ResponseEEEvEE10CallableFnINS_8internal7PartialIZNS1_11ProcessBase8_consumeERKNSB_12HttpEndpointERKSsRKNS1_5OwnedINS3_7RequestEEEEUlRK6OptionINS3_14authentication20AuthenticationResultEEE0_JSP_EEEEclEv 14:32:50 @ 0x7f36ee1c1e8a _ZNO6lambda12CallableOnceIFvPN7process11ProcessBaseEEE10CallableFnINS_8internal7PartialIZNS1_8internal8DispatchINS1_6FutureINS1_4http8ResponseEEEEclINS0_IFSE_vEEEEESE_RKNS1_4UPIDEOT_EUlSt10unique_ptrINS1_7PromiseISD_EESt14default_deleteISQ_EEOSI_S3_E_JST_SI_St12_PlaceholderILi1EEEEEEclEOS3_ 14:32:50 @ 0x7f36ef118711 process::ProcessBase::consume() 14:32:50 @ 0x7f36ef1309a2 process::ProcessManager::resume() 14:32:50 @ 0x7f36ef134216 _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv 14:32:50 @ 0x7f36ec15a5b0 (unknown) 14:32:50 @ 0x7f36eb996184 start_thread 14:32:50 @ 0x7f36eb6c2ffd (unknown) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8454","01/17/2018 17:26:02",3,"Add a download link for master and agent logs in WebUI ""Just like task sandboxes, it would be great for us to provide a download link for mesos and agent logs in the WebUI. 
Right now the the log link opens up the pailer, which is not really convenient to do `grep` and such while debugging.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8455","01/17/2018 19:03:51",3,"Avoid unnecessary copying of protobuf in the v1 API. ""Now that we have move support for protobufs, we can avoid the unnecessary copying of protobuf in the v1 API to improve the performance.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8456","01/17/2018 21:08:38",8,"Allocator should allow roles to burst above guarantees but below limits. ""Currently, allocator only allocates resources for quota roles up to their guarantee in the first allocation stage. The allocator should continue allocating resources to these roles in the second stage below their quota limit. In other words, allocator should allow roles to burst above their guarantee but below the limit.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8463","01/19/2018 09:07:39",3,"Test MasterAllocatorTest/1.SingleFramework is flaky ""Observed in our internal CI on a ubuntu-16 setup in a plain autotools build, """," ../../src/tests/master_allocator_tests.cpp:175 Mock function called more times than expected - taking default action specified at: ../../src/tests/allocator.hpp:273: Function call: addSlave(@0x7fe8dc03d0e8 1eb6ab2c-293d-4b99-b76b-87bd939a1a19-S1, @0x7fe8dc03d108 hostname: """"ip-172-16-10-65.ec2.internal"""" resources { name: """"cpus"""" type: SCALAR scalar { value: 2 } } resources { name: """"mem"""" type: SCALAR scalar { value: 1024 } } resources { name: """"ports"""" type: RANGES ranges { range { begin: 31000 end: 32000 } } } id { value: """"1eb6ab2c-293d-4b99-b76b-87bd939a1a19-S1"""" } checkpoint: true port: 40262 , @0x7fe8ffa276c0 { 32-byte object <48-94 7D-0E E9-7F 00-00 00-00 00-00 00-00 00-00 01-00 00-00 00-00 00-00 01-00 00-00 00-00 00-00>, 32-byte object <48-94 7D-0E E9-7F 00-00 00-00 00-00 00-00 00-00 01-00 00-00 00-00 00-00 02-00 00-00 00-00 00-00>, 32-byte object <48-94 7D-0E E9-7F 00-00 00-00 00-00 00-00 00-00 01-00 00-00 00-00 00-00 03-00 00-00 73-79 73-74> }, @0x7fe8ffa27720 48-byte object <01-00 00-00 E8-7F 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 08-7A A2-FF E8-7F 00-00 A0-32 24-7D 62-55 00-00 DE-3C 11-0A E9-7F 00-00>, @0x7fe8dc03d4c8 { cpus:2, mem:1024, ports:[31000-32000] }, @0x7fe8dc03d460 {}) Expected: to be called once Actual: called twice - over-saturated and active Stacktrace ../../src/tests/master_allocator_tests.cpp:175 Mock function called more times than expected - taking default action specified at: ../../src/tests/allocator.hpp:273: Function call: addSlave(@0x7fe8dc03d0e8 1eb6ab2c-293d-4b99-b76b-87bd939a1a19-S1, @0x7fe8dc03d108 hostname: """"ip-172-16-10-65.ec2.internal"""" resources { name: """"cpus"""" type: SCALAR scalar { value: 2 } } resources { name: """"mem"""" type: SCALAR scalar { value: 1024 } } resources { name: """"ports"""" type: RANGES ranges { range { begin: 31000 end: 32000 } } } id { value: """"1eb6ab2c-293d-4b99-b76b-87bd939a1a19-S1"""" } checkpoint: true port: 40262 , @0x7fe8ffa276c0 { 32-byte object <48-94 7D-0E E9-7F 00-00 00-00 00-00 00-00 00-00 01-00 00-00 00-00 00-00 01-00 00-00 00-00 00-00>, 32-byte object <48-94 7D-0E E9-7F 00-00 00-00 00-00 00-00 00-00 01-00 00-00 00-00 00-00 02-00 00-00 00-00 00-00>, 32-byte object <48-94 7D-0E E9-7F 00-00 00-00 00-00 00-00 00-00 01-00 00-00 00-00 
00-00 03-00 00-00 73-79 73-74> }, @0x7fe8ffa27720 48-byte object <01-00 00-00 E8-7F 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 08-7A A2-FF E8-7F 00-00 A0-32 24-7D 62-55 00-00 DE-3C 11-0A E9-7F 00-00>, @0x7fe8dc03d4c8 { cpus:2, mem:1024, ports:[31000-32000] }, @0x7fe8dc03d460 {}) Expected: to be called once Actual: called twice - over-saturated and active ",0,0,1,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8470","01/20/2018 02:12:34",2,"CHECK failure in DRFSorter due to invalid framework id. ""A framework registering with a custom {{FrameworkID}} containing slashes such as {{/foo/bar}} will trigger a CHECK failure at https://github.com/apache/mesos/blob/177a2221496a2caa5ad25e71c9982ca3eed02fd4/src/master/allocator/sorter/drf/sorter.cpp#L167: The sorter should be defensive with any {{FrameworkID}} containing slashes."""," master.cpp:6618] Updating info for framework /foo/bar sorter.cpp:167] Check failed: clientPath == current->clientPath() (/foo/bar vs. foo/bar) ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8473","01/22/2018 13:50:56",3,"Authorize `GET_OPERATIONS` calls. ""The {{GET_OPERATIONS}} call lists all known operations on a master or agent. Authorization has to be added to this call.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8474","01/22/2018 20:47:47",2,"Test StorageLocalResourceProviderTest.ROOT_ConvertPreExistingVolume is flaky ""Observed on our internal CI on ubuntu16.04 with SSL and GRPC enabled, """," ../../src/tests/storage_local_resource_provider_tests.cpp:1898 Expected: 2u Which is: 2 To be equal to: destroyed.size() Which is: 1 ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8477","01/23/2018 03:57:43",1,"Make clean fails without Python artifacts. ""Make clean may fail if there are no Python artifacts created by previous builds.     Triggered by [https://github.com/apache/mesos/blob/62d392704c499e06da0323e50dfd016cdac06f33/src/Makefile.am#L2218-L2219]"""," $ make clean [...] rm -rf java/target rm -f examples/java/*.class rm -f java/jni/org_apache_mesos*.h find python \( -name """"build"""" -o -name """"dist"""" -o -name """"*.pyc"""" \ -o -name """"*.egg-info"""" \) -exec rm -rf '{}' \+ find: ‘python’: No such file or directory make[1]: *** [clean-python] Error 1 make[1]: Leaving directory `/home/centos/workspace/mesos/build/src' make: *** [clean-recursive] Error 1",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8480","01/23/2018 21:33:57",2,"Mesos returns high resource usage when killing a Docker task. 
""The way we get resource statistics for Docker tasks is through getting the cgroup subsystem path through {{/proc//cgroup}} first (taking the {{cpuacct}} subsystem as an example): Then read {{/sys/fs/cgroup/cpuacct//docker/66fbe67b64ad3a86c6e080e18578bc9e540e55ee0bdcae09c2e131a4264a3a3b/cpuacct.stat}} to get the statistics: However, when a Docker container is being teared down, it seems that Docker or the operation system will first move the process to the root cgroup before actually killing it, making {{/proc//docker}} look like the following: This makes a racy call to [{{cgroup::internal::cgroup()}}|https://github.com/apache/mesos/blob/master/src/linux/cgroups.cpp#L1935] return a single '/', which in turn makes [{{DockerContainerizerProcess::cgroupsStatistics()}}|https://github.com/apache/mesos/blob/master/src/slave/containerizer/docker.cpp#L1991] read {{/sys/fs/cgroup/cpuacct///cpuacct.stat}}, which contains the statistics for the root cgroup: This can be reproduced by [^test.cpp] with the following command: """," 9:cpuacct,cpu:/docker/66fbe67b64ad3a86c6e080e18578bc9e540e55ee0bdcae09c2e131a4264a3a3b user 4 system 0 9:cpuacct,cpu:/ user 228058750 system 24506461 $ docker run --name sleep -d --rm alpine sleep 1000; ./test $(docker inspect sleep | jq .[].State.Pid) & sleep 1 && docker rm -f sleep ... Reading file '/proc/44224/cgroup' Reading file '/sys/fs/cgroup/cpuacct//docker/1d79a6c877e2af3081630aa57d23d853e6bd7d210dad28f897556bfea20bc9c1/cpuacct.stat' user 4 system 0 Reading file '/proc/44224/cgroup' Reading file '/sys/fs/cgroup/cpuacct///cpuacct.stat' user 228058750 system 24506461 Reading file '/proc/44224/cgroup' Reading file '/sys/fs/cgroup/cpuacct///cpuacct.stat' user 228058750 system 24506461 Failed to open file '/proc/44224/cgroup' sleep [2]- Exit 1 ./test $(docker inspect sleep | jq .[].State.Pid) ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8482","01/24/2018 12:41:05",1,"Signed/Unsigned comparisons in tests ""Many tests in mesos currently have comparisons between signed and unsigned integers, eg or comparisons between values of different enums, e.g. TaskState and v1::TaskState: Usually, the compiler would catch these and emit a warning, but these are currently silenced because gtest headers are included using the {{-isystem}} command line flag.""","     ASSERT_EQ(4, v1Response->read_file().size());   ASSERT_EQ(TASK_STARTING, startingUpdate->status().state()); ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8484","01/24/2018 18:07:38",1,"stout test NumifyTest.HexNumberTest fails. ""The current Mesos master shows the following on my machine: This problem disappears for me when reverting the latest boost upgrade."""," [ RUN ] NumifyTest.HexNumberTest ../../../3rdparty/stout/tests/numify_tests.cpp:57: Failure Value of: numify(""""0x10.9"""").isError() Actual: false Expected: true ../../../3rdparty/stout/tests/numify_tests.cpp:58: Failure Value of: numify(""""0x1p-5"""").isError() Actual: false Expected: true [ FAILED ] NumifyTest.HexNumberTest (0 ms) ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8485","01/24/2018 19:02:31",3,"MasterTest.RegistryGcByCount is flaky ""Observed this while testing Mesos 1.5.0-rc1 in ASF CI.   """," 3: [ RUN      ] MasterTest.RegistryGcByCount ..............snip........................... 
3: I0123 19:22:05.929347 15994 slave.cpp:1201] Detecting new master 3: I0123 19:22:05.931701 15988 slave.cpp:1228] Authenticating with master master@172.17.0.2:45634 3: I0123 19:22:05.931838 15988 slave.cpp:1237] Using default CRAM-MD5 authenticatee 3: I0123 19:22:05.932153 15999 authenticatee.cpp:121] Creating new client SASL connection 3: I0123 19:22:05.932580 15992 master.cpp:8958] Authenticating slave(442)@172.17.0.2:45634 3: I0123 19:22:05.932822 15990 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(870)@172.17.0.2:45634 3: I0123 19:22:05.933163 15989 authenticator.cpp:98] Creating new server SASL connection 3: I0123 19:22:05.933465 16001 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 3: I0123 19:22:05.933495 16001 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' 3: I0123 19:22:05.933631 15987 authenticator.cpp:204] Received SASL authentication start 3: I0123 19:22:05.933712 15987 authenticator.cpp:326] Authentication requires more steps 3: I0123 19:22:05.933851 15987 authenticatee.cpp:259] Received SASL authentication step 3: I0123 19:22:05.934006 15987 authenticator.cpp:232] Received SASL authentication step 3: I0123 19:22:05.934041 15987 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '455912973e2c' server FQDN: '455912973e2c' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 3: I0123 19:22:05.934095 15987 auxprop.cpp:181] Looking up auxiliary property '*userPassword' 3: I0123 19:22:05.934147 15987 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' 3: I0123 19:22:05.934279 15987 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '455912973e2c' server FQDN: '455912973e2c' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 3: I0123 19:22:05.934298 15987 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true 3: I0123 19:22:05.934307 15987 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true 3: I0123 19:22:05.934324 15987 authenticator.cpp:318] Authentication success 3: I0123 19:22:05.934463 15995 authenticatee.cpp:299] Authentication success 3: I0123 19:22:05.934563 16002 master.cpp:8988] Successfully authenticated principal 'test-principal' at slave(442)@172.17.0.2:45634 3: I0123 19:22:05.934708 15993 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(870)@172.17.0.2:45634 3: I0123 19:22:05.934891 15995 slave.cpp:1320] Successfully authenticated with master master@172.17.0.2:45634 3: I0123 19:22:05.935261 15995 slave.cpp:1764] Will retry registration in 2.234083ms if necessary 3: I0123 19:22:05.935436 15999 master.cpp:6061] Received register agent message from slave(442)@172.17.0.2:45634 (455912973e2c) 3: I0123 19:22:05.935662 15999 master.cpp:3867] Authorizing agent with principal 'test-principal' 3: I0123 19:22:05.936161 15992 master.cpp:6123] Authorized registration of agent at slave(442)@172.17.0.2:45634 (455912973e2c) 3: I0123 19:22:05.936261 15992 master.cpp:6234] Registering agent at slave(442)@172.17.0.2:45634 (455912973e2c) with id eef8ea11-9247-44f3-84cf-340b24df3a52-S0 3: I0123 19:22:05.936993 15989 registrar.cpp:495] Applied 1 operations in 227911ns; attempting to update the registry 3: I0123 19:22:05.937814 15989 registrar.cpp:552] Successfully updated the registry in 743168ns 3: I0123 
19:22:05.938057 15991 master.cpp:6282] Admitted agent eef8ea11-9247-44f3-84cf-340b24df3a52-S0 at slave(442)@172.17.0.2:45634 (455912973e2c) 3: I0123 19:22:05.938891 15991 master.cpp:6331] Registered agent eef8ea11-9247-44f3-84cf-340b24df3a52-S0 at slave(442)@172.17.0.2:45634 (455912973e2c) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] 3: I0123 19:22:05.939159 16002 slave.cpp:1764] Will retry registration in 26.332876ms if necessary 3: I0123 19:22:05.939349 15994 master.cpp:6061] Received register agent message from slave(442)@172.17.0.2:45634 (455912973e2c) 3: I0123 19:22:05.939347 15998 hierarchical.cpp:574] Added agent eef8ea11-9247-44f3-84cf-340b24df3a52-S0 (455912973e2c) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] (allocated: {}) 3: I0123 19:22:05.939574 15994 master.cpp:3867] Authorizing agent with principal 'test-principal' 3: I0123 19:22:05.939704 16002 slave.cpp:1366] Registered with master master@172.17.0.2:45634; given agent ID eef8ea11-9247-44f3-84cf-340b24df3a52-S0 3: I0123 19:22:05.939894 15999 task_status_update_manager.cpp:188] Resuming sending task status updates 3: I0123 19:22:05.940163 15998 hierarchical.cpp:1517] Performed allocation for 1 agents in 231470ns 3: I0123 19:22:05.940194 16001 master.cpp:6123] Authorized registration of agent at slave(442)@172.17.0.2:45634 (455912973e2c) 3: I0123 19:22:05.940263 16001 master.cpp:6213] Agent eef8ea11-9247-44f3-84cf-340b24df3a52-S0 at slave(442)@172.17.0.2:45634 (455912973e2c) already registered, resending acknowledgement 3: I0123 19:22:05.942983 15994 process.cpp:3515] Handling HTTP event for process 'master' with path: '/master/api/v1' 3: I0123 19:22:05.944905 15995 http.cpp:1185] HTTP POST for /master/api/v1 from 172.17.0.2:33442 3: I0123 19:22:05.945107 15995 http.cpp:682] Processing call MARK_AGENT_GONE 3: I0123 19:22:05.945749 16001 http.cpp:5363] Marking agent 'eef8ea11-9247-44f3-84cf-340b24df3a52-S0' as gone 3: I0123 19:22:05.946480 15997 registrar.cpp:495] Applied 1 operations in 186752ns; attempting to update the registry 3: I0123 19:22:05.947284 15997 registrar.cpp:552] Successfully updated the registry in 730112ns 3: I0123 19:22:05.948225 15988 hierarchical.cpp:609] Removed agent eef8ea11-9247-44f3-84cf-340b24df3a52-S0 3: I0123 19:22:05.952500 16002 slave.cpp:1386] Checkpointing SlaveInfo to '/tmp/MasterTest_RegistryGcByCount_HbzHl2/meta/slaves/eef8ea11-9247-44f3-84cf-340b24df3a52-S0/slave.info' 3: I0123 19:22:05.953299 16002 slave.cpp:1433] Forwarding agent update \{""""operations"""":{},""""resource_version_uuid"""":\{""""value"""":""""nekTyNfGT1S5DNQZxKJ72A==""""},""""slave_id"""":\{""""value"""":""""eef8ea11-9247-44f3-84cf-340b24df3a52-S0""""},""""update_oversubscribed_resources"""":true} 3: W0123 19:22:05.953675 16002 slave.cpp:1415] Already registered with master master@172.17.0.2:45634 3: I0123 19:22:05.953790 16002 slave.cpp:1433] Forwarding agent update \{""""operations"""":{},""""resource_version_uuid"""":\{""""value"""":""""nekTyNfGT1S5DNQZxKJ72A==""""},""""slave_id"""":\{""""value"""":""""eef8ea11-9247-44f3-84cf-340b24df3a52-S0""""},""""update_oversubscribed_resources"""":true} 3: I0123 19:22:05.954031 16002 slave.cpp:964] Agent asked to shut down by master@172.17.0.2:45634 because 'Agent has been marked gone' 3: I0123 19:22:05.954082 16002 slave.cpp:931] Agent terminating 3: W0123 19:22:05.954145 15993 master.cpp:7235] Ignoring update on removed agent eef8ea11-9247-44f3-84cf-340b24df3a52-S0 3: W0123 19:22:05.954636 15993 master.cpp:7235] Ignoring update on removed agent 
eef8ea11-9247-44f3-84cf-340b24df3a52-S0 3: W0123 19:22:05.955550 15986 process.cpp:2756] Attempted to spawn already running process files@172.17.0.2:45634 3: I0123 19:22:05.956634 15986 containerizer.cpp:304] Using isolation \{ environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni } 3: W0123 19:22:05.957228 15986 backend.cpp:76] Failed to create 'aufs' backend: AufsBackend requires root privileges 3: W0123 19:22:05.957363 15986 backend.cpp:76] Failed to create 'bind' backend: BindBackend requires root privileges 3: I0123 19:22:05.957401 15986 provisioner.cpp:299] Using default backend 'copy' 3: I0123 19:22:05.959393 15986 cluster.cpp:460] Creating default 'local' authorizer 3: I0123 19:22:05.961545 15998 slave.cpp:262] Mesos agent started on (443)@172.17.0.2:45634 3: I0123 19:22:05.961560 15998 slave.cpp:263] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/MasterTest_RegistryGcByCount_2Nh5JR/store/appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""false"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/MasterTest_RegistryGcByCount_2Nh5JR/credential"""" --default_role=""""*"""" --disallow_sharing_agent_pid_namespace=""""false"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/MasterTest_RegistryGcByCount_2Nh5JR/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_reregistration_timeout=""""2secs"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/MasterTest_RegistryGcByCount_2Nh5JR/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_command_executor=""""false"""" --http_credentials=""""/tmp/MasterTest_RegistryGcByCount_2Nh5JR/http_credentials"""" --http_heartbeat_interval=""""30secs"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/mesos/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --port=""""5051"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --reconfiguration_policy=""""equal"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/tmp/MasterTest_RegistryGcByCount_2Nh5JR"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" 
--systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/MasterTest_RegistryGcByCount_FyzIlU"""" --zk_session_timeout=""""10secs"""" 3: I0123 19:22:05.961917 15998 credentials.hpp:86] Loading credential for authentication from '/tmp/MasterTest_RegistryGcByCount_2Nh5JR/credential' 3: I0123 19:22:05.962070 15998 slave.cpp:295] Agent using credential for: test-principal 3: I0123 19:22:05.962090 15998 credentials.hpp:37] Loading credentials for authentication from '/tmp/MasterTest_RegistryGcByCount_2Nh5JR/http_credentials' 3: I0123 19:22:05.962347 15998 http.cpp:1045] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' 3: I0123 19:22:05.964292 15988 process.cpp:3515] Handling HTTP event for process 'master' with path: '/master/api/v1' 3: I0123 19:22:05.964375 15998 slave.cpp:612] Agent resources: [\{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":2.0},""""type"""":""""SCALAR""""},\{""""name"""":""""mem"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},\{""""name"""":""""disk"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},\{""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] 3: I0123 19:22:05.964591 15998 slave.cpp:620] Agent attributes: [  ] 3: I0123 19:22:05.964601 15998 slave.cpp:629] Agent hostname: 455912973e2c 3: I0123 19:22:05.964753 15995 task_status_update_manager.cpp:181] Pausing sending task status updates 3: I0123 19:22:05.966159 16002 http.cpp:1185] HTTP POST for /master/api/v1 from 172.17.0.2:33443 3: I0123 19:22:05.966327 16002 http.cpp:682] Processing call MARK_AGENT_GONE 3: I0123 19:22:05.966898 15990 http.cpp:5363] Marking agent 'eef8ea11-9247-44f3-84cf-340b24df3a52-S0' as gone 3: W0123 19:22:05.966939 15990 http.cpp:5366] Not marking agent 'eef8ea11-9247-44f3-84cf-340b24df3a52-S0' as gone because it has already transitioned to gone 3: I0123 19:22:05.966969 15992 state.cpp:66] Recovering state from '/tmp/MasterTest_RegistryGcByCount_FyzIlU/meta' 3: I0123 19:22:05.967445 15995 task_status_update_manager.cpp:207] Recovering task status update manager 3: I0123 19:22:05.967747 15991 containerizer.cpp:674] Recovering containerizer 3: I0123 19:22:05.969804 15999 provisioner.cpp:493] Provisioner recovery complete 3: I0123 19:22:05.970371 16002 slave.cpp:6822] Finished recovery 3: I0123 19:22:05.971616 16000 task_status_update_manager.cpp:181] Pausing sending task status updates 3: I0123 19:22:05.971608 15987 slave.cpp:1146] New master detected at master@172.17.0.2:45634 3: I0123 19:22:05.971737 15987 slave.cpp:1201] Detecting new master 3: I0123 19:22:05.971755 15990 hierarchical.cpp:1517] Performed allocation for 0 agents in 134021ns 3: I0123 19:22:05.972729 15999 slave.cpp:6360] Current disk usage 16.56%. 
Max allowed age: 5.140805523372986days 3: I0123 19:22:05.972985 15988 master.cpp:1878] Skipping periodic registry garbage collection: no agents qualify for removal 3: I0123 19:22:05.974480 16001 slave.cpp:1228] Authenticating with master master@172.17.0.2:45634 3: I0123 19:22:05.974581 16001 slave.cpp:1237] Using default CRAM-MD5 authenticatee 3: I0123 19:22:05.975023 16002 authenticatee.cpp:121] Creating new client SASL connection 3: I0123 19:22:05.975343 15998 master.cpp:8958] Authenticating slave(443)@172.17.0.2:45634 3: I0123 19:22:05.975476 16000 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(871)@172.17.0.2:45634 3: I0123 19:22:05.975735 15991 authenticator.cpp:98] Creating new server SASL connection 3: I0123 19:22:05.976027 15995 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 3: I0123 19:22:05.976143 15995 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' 3: I0123 19:22:05.976356 15989 authenticator.cpp:204] Received SASL authentication start 3: I0123 19:22:05.976420 15989 authenticator.cpp:326] Authentication requires more steps 3: I0123 19:22:05.976552 15990 authenticatee.cpp:259] Received SASL authentication step 3: I0123 19:22:05.976698 15987 authenticator.cpp:232] Received SASL authentication step 3: I0123 19:22:05.976750 15987 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '455912973e2c' server FQDN: '455912973e2c' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 3: I0123 19:22:05.976773 15987 auxprop.cpp:181] Looking up auxiliary property '*userPassword' 3: I0123 19:22:05.976821 15987 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' 3: I0123 19:22:05.976898 15987 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '455912973e2c' server FQDN: '455912973e2c' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 3: I0123 19:22:05.976924 15987 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true 3: I0123 19:22:05.976935 15987 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true 3: I0123 19:22:05.976953 15987 authenticator.cpp:318] Authentication success 3: I0123 19:22:05.977094 15996 authenticatee.cpp:299] Authentication success 3: I0123 19:22:05.977233 15994 master.cpp:8988] Successfully authenticated principal 'test-principal' at slave(443)@172.17.0.2:45634 3: I0123 19:22:05.977321 15987 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(871)@172.17.0.2:45634 3: I0123 19:22:05.977475 15996 slave.cpp:1320] Successfully authenticated with master master@172.17.0.2:45634 3: I0123 19:22:05.977953 15996 slave.cpp:1764] Will retry registration in 6.841446ms if necessary 3: I0123 19:22:05.978238 15992 master.cpp:6061] Received register agent message from slave(443)@172.17.0.2:45634 (455912973e2c) 3: I0123 19:22:05.978591 15992 master.cpp:3867] Authorizing agent with principal 'test-principal' 3: I0123 19:22:05.979161 16000 master.cpp:6123] Authorized registration of agent at slave(443)@172.17.0.2:45634 (455912973e2c) 3: I0123 19:22:05.979320 16000 master.cpp:6234] Registering agent at slave(443)@172.17.0.2:45634 (455912973e2c) with id eef8ea11-9247-44f3-84cf-340b24df3a52-S1 3: I0123 19:22:05.980505 15991 registrar.cpp:495] Applied 1 operations in 455955ns; attempting to update the registry 3: I0123 
19:22:05.981642 15991 registrar.cpp:552] Successfully updated the registry in 0ns 3: I0123 19:22:05.981912 15988 master.cpp:6282] Admitted agent eef8ea11-9247-44f3-84cf-340b24df3a52-S1 at slave(443)@172.17.0.2:45634 (455912973e2c) 3: I0123 19:22:05.982857 15988 master.cpp:6331] Registered agent eef8ea11-9247-44f3-84cf-340b24df3a52-S1 at slave(443)@172.17.0.2:45634 (455912973e2c) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] 3: I0123 19:22:05.982964 16001 slave.cpp:1366] Registered with master master@172.17.0.2:45634; given agent ID eef8ea11-9247-44f3-84cf-340b24df3a52-S1 3: I0123 19:22:05.983130 15996 task_status_update_manager.cpp:188] Resuming sending task status updates 3: I0123 19:22:05.983392 16001 slave.cpp:1386] Checkpointing SlaveInfo to '/tmp/MasterTest_RegistryGcByCount_FyzIlU/meta/slaves/eef8ea11-9247-44f3-84cf-340b24df3a52-S1/slave.info' 3: I0123 19:22:05.983423 15994 hierarchical.cpp:574] Added agent eef8ea11-9247-44f3-84cf-340b24df3a52-S1 (455912973e2c) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] (allocated: {}) 3: I0123 19:22:05.983815 15994 hierarchical.cpp:1517] Performed allocation for 1 agents in 171516ns 3: I0123 19:22:05.984135 16001 slave.cpp:1433] Forwarding agent update \{""""operations"""":{},""""resource_version_uuid"""":\{""""value"""":""""1HNo1ICkRY24eDUqFmb6+Q==""""},""""slave_id"""":\{""""value"""":""""eef8ea11-9247-44f3-84cf-340b24df3a52-S1""""},""""update_oversubscribed_resources"""":true} 3: I0123 19:22:05.984762 15997 master.cpp:7265] Received update of agent eef8ea11-9247-44f3-84cf-340b24df3a52-S1 at slave(443)@172.17.0.2:45634 (455912973e2c) with total oversubscribed resources {} 3: I0123 19:22:05.985123 15997 master.cpp:7359] Ignoring update on agent eef8ea11-9247-44f3-84cf-340b24df3a52-S1 at slave(443)@172.17.0.2:45634 (455912973e2c) as it reports no changes 3: W0123 19:22:05.985782 15986 process.cpp:2756] Attempted to spawn already running process version@172.17.0.2:45634 3: I0123 19:22:05.986744 15986 sched.cpp:232] Version: 1.5.0 3: I0123 19:22:05.987470 15990 sched.cpp:336] New master detected at master@172.17.0.2:45634 3: I0123 19:22:05.987567 15990 sched.cpp:396] Authenticating with master master@172.17.0.2:45634 3: I0123 19:22:05.987582 15990 sched.cpp:403] Using default CRAM-MD5 authenticatee 3: I0123 19:22:05.987869 15999 authenticatee.cpp:121] Creating new client SASL connection 3: I0123 19:22:05.988121 15991 master.cpp:8958] Authenticating scheduler-be5d74bf-c4bf-4504-9e67-6a4f8df93425@172.17.0.2:45634 3: I0123 19:22:05.988296 15987 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(872)@172.17.0.2:45634 3: I0123 19:22:05.988575 15988 authenticator.cpp:98] Creating new server SASL connection 3: I0123 19:22:05.988821 15996 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 3: I0123 19:22:05.988852 15996 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' 3: I0123 19:22:05.988967 15994 authenticator.cpp:204] Received SASL authentication start 3: I0123 19:22:05.989023 15994 authenticator.cpp:326] Authentication requires more steps 3: I0123 19:22:05.989145 15993 authenticatee.cpp:259] Received SASL authentication step 3: I0123 19:22:05.989271 16001 authenticator.cpp:232] Received SASL authentication step 3: I0123 19:22:05.989316 16001 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '455912973e2c' server FQDN: '455912973e2c' SASL_AUXPROP_VER: IFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
3: I0123 19:22:05.989334 16001 auxprop.cpp:181] Looking up auxiliary property '*userPassword' 3: I0123 19:22:05.989377 16001 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' 3: I0123 19:22:05.989400 16001 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '455912973e2c' server FQDN: '455912973e2c' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 3: I0123 19:22:05.989415 16001 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true 3: I0123 19:22:05.989423 16001 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true 3: I0123 19:22:05.989441 16001 authenticator.cpp:318] Authentication success 3: I0123 19:22:05.989531 15997 authenticatee.cpp:299] Authentication success 3: I0123 19:22:05.989719 15992 master.cpp:8988] Successfully authenticated principal 'test-principal' at scheduler-be5d74bf-c4bf-4504-9e67-6a4f8df93425@172.17.0.2:45634 3: I0123 19:22:05.989751 16000 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(872)@172.17.0.2:45634 3: I0123 19:22:05.989894 15998 sched.cpp:502] Successfully authenticated with master master@172.17.0.2:45634 3: I0123 19:22:05.989914 15998 sched.cpp:824] Sending SUBSCRIBE call to master@172.17.0.2:45634 3: I0123 19:22:05.990039 15998 sched.cpp:857] Will retry registration in 1.379182754secs if necessary 3: I0123 19:22:05.990229 15991 master.cpp:2958] Received SUBSCRIBE call for framework 'default' at scheduler-be5d74bf-c4bf-4504-9e67-6a4f8df93425@172.17.0.2:45634 3: I0123 19:22:05.990311 15991 master.cpp:2275] Authorizing framework principal 'test-principal' to receive offers for roles '\{ * }' 3: I0123 19:22:05.990876 15987 master.cpp:3038] Subscribing framework default with checkpointing disabled and capabilities [ MULTI_ROLE, RESERVATION_REFINEMENT, PARTITION_AWARE ] 3: I0123 19:22:05.991102 15987 master.cpp:9179] Adding framework eef8ea11-9247-44f3-84cf-340b24df3a52-0000 (default) at scheduler-be5d74bf-c4bf-4504-9e67-6a4f8df93425@172.17.0.2:45634 with roles {  } suppressed 3: I0123 19:22:05.991621 15994 sched.cpp:751] Framework registered with eef8ea11-9247-44f3-84cf-340b24df3a52-0000 3: I0123 19:22:05.991731 15988 hierarchical.cpp:297] Added framework eef8ea11-9247-44f3-84cf-340b24df3a52-0000 3: I0123 19:22:05.991816 15994 sched.cpp:765] Scheduler::registered took 25842ns 3: I0123 19:22:05.993170 15988 hierarchical.cpp:1517] Performed allocation for 1 agents in 1.236264ms 3: I0123 19:22:05.993611 15997 master.cpp:8788] Sending 1 offers to framework eef8ea11-9247-44f3-84cf-340b24df3a52-0000 (default) at scheduler-be5d74bf-c4bf-4504-9e67-6a4f8df93425@172.17.0.2:45634 3: 3: GMOCK WARNING: 3: Uninteresting mock function call - returning directly. 3:     Function call: resourceOffers(0x7ffcbfa4e140, @0x7f82fc814860 \{ 160-byte object <10-22 C5-0C 83-7F 00-00 00-00 00-00 00-00 00-00 5F-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 04-00 00-00 04-00 00-00 80-C0 03-C8 82-7F 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 ... 10-38 04-C8 82-7F 00-00 E0-C4 03-C8 82-7F 00-00 10-B3 00-C8 82-7F 00-00 60-A7 03-C8 82-7F 00-00 10-C3 00-C8 82-7F 00-00 00-00 00-00 00-00 00-00 10-DA 00-C8 82-7F 00-00 00-00 00-00 00-00 00-00> }) 3: NOTE: You can safely ignore the above warning unless this call should not happen.  Do not suppress it by blindly adding an EXPECT_CALL() if you don't mean to enforce the call.  
See https://github.com/google/googletest/blob/master/googlemock/docs/CookBook.md#knowing-when-to-expect for details. 3: I0123 19:22:05.994109 16001 sched.cpp:921] Scheduler::resourceOffers took 77564ns 3: I0123 19:22:05.994665 15989 master.cpp:8425] Performing explicit task state reconciliation for 2 tasks of framework eef8ea11-9247-44f3-84cf-340b24df3a52-0000 (default) at scheduler-be5d74bf-c4bf-4504-9e67-6a4f8df93425@172.17.0.2:45634 3: I0123 19:22:05.994779 15989 master.cpp:8573] Sending explicit reconciliation state TASK_GONE_BY_OPERATOR for task 7a4ff1bf-488b-4152-a7e4-cf0876008c4d of framework eef8ea11-9247-44f3-84cf-340b24df3a52-0000 (default) at scheduler-be5d74bf-c4bf-4504-9e67-6a4f8df93425@172.17.0.2:45634 3: I0123 19:22:05.994936 15989 master.cpp:8573] Sending explicit reconciliation state TASK_GONE_BY_OPERATOR for task 9d4e66ef-86c7-428a-b110-ba949c0c19b8 of framework eef8ea11-9247-44f3-84cf-340b24df3a52-0000 (default) at scheduler-be5d74bf-c4bf-4504-9e67-6a4f8df93425@172.17.0.2:45634 3: I0123 19:22:05.995215 15989 sched.cpp:1029] Scheduler::statusUpdate took 36257ns 3: I0123 19:22:05.995381 15989 sched.cpp:1029] Scheduler::statusUpdate took 36862ns 3: /mesos/src/tests/master_tests.cpp:8606: Failure 3:       Expected: TASK_UNKNOWN 3: To be equal to: reconcileUpdate1->state() 3:       Which is: TASK_GONE_BY_OPERATOR 3: I0123 19:22:05.995779 16000 master.cpp:1420] Framework eef8ea11-9247-44f3-84cf-340b24df3a52-0000 (default) at scheduler-be5d74bf-c4bf-4504-9e67-6a4f8df93425@172.17.0.2:45634 disconnected 3: I0123 19:22:05.995802 16000 master.cpp:3328] Deactivating framework eef8ea11-9247-44f3-84cf-340b24df3a52-0000 (default) at scheduler-be5d74bf-c4bf-4504-9e67-6a4f8df93425@172.17.0.2:45634 3: I0123 19:22:05.996014 15991 hierarchical.cpp:405] Deactivated framework eef8ea11-9247-44f3-84cf-340b24df3a52-0000 3: I0123 19:22:05.996299 15994 slave.cpp:931] Agent terminating 3: I0123 19:22:05.996402 16000 master.cpp:10703] Removing offer eef8ea11-9247-44f3-84cf-340b24df3a52-O0 3: I0123 19:22:05.996474 16000 master.cpp:3305] Disconnecting framework eef8ea11-9247-44f3-84cf-340b24df3a52-0000 (default) at scheduler-be5d74bf-c4bf-4504-9e67-6a4f8df93425@172.17.0.2:45634 3: I0123 19:22:05.996522 16000 master.cpp:1435] Giving framework eef8ea11-9247-44f3-84cf-340b24df3a52-0000 (default) at scheduler-be5d74bf-c4bf-4504-9e67-6a4f8df93425@172.17.0.2:45634 0ns to failover 3: I0123 19:22:05.996711 16000 master.cpp:1306] Agent eef8ea11-9247-44f3-84cf-340b24df3a52-S1 at slave(443)@172.17.0.2:45634 (455912973e2c) disconnected 3: I0123 19:22:05.996732 16000 master.cpp:3365] Disconnecting agent eef8ea11-9247-44f3-84cf-340b24df3a52-S1 at slave(443)@172.17.0.2:45634 (455912973e2c) 3: I0123 19:22:05.996789 16000 master.cpp:3384] Deactivating agent eef8ea11-9247-44f3-84cf-340b24df3a52-S1 at slave(443)@172.17.0.2:45634 (455912973e2c) 3: I0123 19:22:05.997368 15991 hierarchical.cpp:1192] Recovered cpus(allocated: *):2; mem(allocated: *):1024; disk(allocated: *):1024; ports(allocated: *):[31000-32000] (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000], allocated: {}) on agent eef8ea11-9247-44f3-84cf-340b24df3a52-S1 from framework eef8ea11-9247-44f3-84cf-340b24df3a52-0000 3: I0123 19:22:05.997454 15991 hierarchical.cpp:766] Agent eef8ea11-9247-44f3-84cf-340b24df3a52-S1 deactivated 3: I0123 19:22:05.998085 15992 master.cpp:8603] Framework failover timeout, removing framework eef8ea11-9247-44f3-84cf-340b24df3a52-0000 (default) at scheduler-be5d74bf-c4bf-4504-9e67-6a4f8df93425@172.17.0.2:45634 3: I0123 
19:22:05.998239 15992 master.cpp:9480] Removing framework eef8ea11-9247-44f3-84cf-340b24df3a52-0000 (default) at scheduler-be5d74bf-c4bf-4504-9e67-6a4f8df93425@172.17.0.2:45634 3: I0123 19:22:05.998915 15995 hierarchical.cpp:344] Removed framework eef8ea11-9247-44f3-84cf-340b24df3a52-0000 3: I0123 19:22:06.013475 15986 master.cpp:1148] Master terminating 3: I0123 19:22:06.014463 15994 hierarchical.cpp:609] Removed agent eef8ea11-9247-44f3-84cf-340b24df3a52-S1 3: [  FAILED  ] MasterTest.RegistryGcByCount (172 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8486","01/25/2018 00:25:57",1,"Webui should display role limits. ""With the addition of quota limits (see MESOS-8068), the UI should be updated to display the per role limit information. Specifically, the 'Roles' tab needs to be updated.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8487","01/25/2018 00:35:00",13,"Design API changes for supporting quota limits. ""Per MESOS-8068, the introduction of a quota limit requires introducing this in the API. We should send out the proposed changes more broadly in the interest of being more rigorous about API changes. [Design doc|https://docs.google.com/document/d/13vG5uH4YVwM79ErBPYAZfnqYFOBbUy2Lym0_9iAQ5Uk/edit#]""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8488","01/25/2018 01:16:44",2,"Docker bug can cause unkillable tasks. ""Due to an [issue on the Moby project|https://github.com/moby/moby/issues/33820], it's possible for Docker versions 1.13 and later to fail to catch a container exit, so that the {{docker run}} command which was used to launch the container will never return. This can lead to the Docker executor becoming stuck in a state where it believes the container is still running and cannot be killed. We should update the Docker executor to ensure that containers stuck in such a state cannot cause unkillable Docker executors/tasks. One way to do this would be a timeout, after which the Docker executor will commit suicide if a kill task attempt has not succeeded. However, if we do this we should also ensure that in the case that the container was actually still running, either the Docker daemon or the DockerContainerizer would clean up the container when it does exit. Another option might be for the Docker executor to directly {{wait()}} on the container's Linux PID, in order to notice when the container exits.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8489","01/25/2018 12:19:09",8,"LinuxCapabilitiesIsolatorFlagsTest.ROOT_IsolatorFlags is flaky ""Observed this on internal Mesosphere CI. h2. Steps to reproduce # Add {{::sleep(1);}} before [removing|https://github.com/apache/mesos/blob/e91ce42ed56c5ab65220fbba740a8a50c7f835ae/src/linux/cgroups.cpp#L483] """"test"""" cgroup # recompile # run `GLOG_v=2 sudo GLOG_v=2 ./src/mesos-tests --gtest_filter=LinuxCapabilitiesIsolatorFlagsTest.ROOT_IsolatorFlags --gtest_break_on_failure --gtest_repeat=10 --verbose` h2. 
Race description While recovery is in progress for [the first slave|https://github.com/apache/mesos/blob/ce0905fcb31a10ade0962a89235fa90b01edf01a/src/tests/containerizer/linux_capabilities_isolator_tests.cpp#L733], calling [`StartSlave()`|https://github.com/apache/mesos/blob/ce0905fcb31a10ade0962a89235fa90b01edf01a/src/tests/containerizer/linux_capabilities_isolator_tests.cpp#L738] leads to calling [`slave::Containerizer::create()`|https://github.com/apache/mesos/blob/ce0905fcb31a10ade0962a89235fa90b01edf01a/src/tests/cluster.cpp#L431] to create a containerizer. An attempt to create a Mesos c'zer leads to calling [`cgroups::prepare`|https://github.com/apache/mesos/blob/ce0905fcb31a10ade0962a89235fa90b01edf01a/src/slave/containerizer/mesos/linux_launcher.cpp#L124]. Finally, we get to the point where we try to create a [""""test"""" container|https://github.com/apache/mesos/blob/ce0905fcb31a10ade0962a89235fa90b01edf01a/src/linux/cgroups.cpp#L476]. So, the recovery process for the second slave [might detect|https://github.com/apache/mesos/blob/ce0905fcb31a10ade0962a89235fa90b01edf01a/src/slave/containerizer/mesos/linux_launcher.cpp#L268-L301] this """"test"""" container as an orphaned container. Thus, there is a race between the recovery process for the first slave and an attempt to create a c'zer for the second agent."""," ../../src/tests/cluster.cpp:662: Failure Value of: containers->empty() Actual: false Expected: true Failed to destroy containers: { test } ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8492","01/25/2018 22:09:42",3,"Checkpoint profiles in storage local resource provider. ""SLRP should be able to handle missing profiles from an arbitrary disk profile module, and probably needs to checkpoint them for recovery.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8497","01/26/2018 19:27:39",2,"Docker parameter `name` does not work with Docker Containerizer. ""When deploying a Marathon app with the Docker Containerizer (need to check Mesos Containerizer) and the parameter name set, Mesos is not able to recognize/control/kill the started container. 
Steps to reproduce  # Deploy the below marathon app definition #  Watch task being stuck in staging and mesos not being able to kill it/communicate with it ## {quote}e.g., Agent Logs: W0126 18:38:50.000000  4988 slave.cpp:6750] Failed to get resource statistics for executor ‘instana-agent.1a1f8d22-02c8-11e8-b607-923c3c523109’ of framework 41f1b534-5f9d-4b5e-bb74-a0e387d5739f-0001: Failed to run ‘docker -H unix:///var/run/docker.sock inspect mesos-1c6f894d-9a3e-408c-8146-47ebab2f28be’: exited with status 1; stderr=’Error: No such image, container or task: mesos-1c6f894d-9a3e-408c-8146-47ebab2f28be{quote} # Check on node and see container running, but not being recognized by mesos """," { """"id"""": """"/docker-test"""", """"instances"""": 1, """"portDefinitions"""": [], """"container"""": { """"type"""": """"DOCKER"""", """"volumes"""": [], """"docker"""": { """"image"""": """"ubuntu:16.04"""", """"parameters"""": [ { """"key"""": """"name"""", """"value"""": """"myname"""" } ] } }, """"cpus"""": 0.1, """"mem"""": 128, """"requirePorts"""": false, """"networks"""": [], """"healthChecks"""": [], """"fetch"""": [], """"constraints"""": [], """"cmd"""": """"sleep 1000"""" } ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8530","02/01/2018 21:43:53",5,"Default executor tasks can get stuck in KILLING state ""The default executor will transition a task to {{TASK_KILLING}} and mark its container as being killed before issuing the {{KILL_NESTED_CONTAINER}} call. If the kill call fails, the task will get stuck in {{TASK_KILLING}}, and the executor won't allow retrying the kill. ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-8536","02/02/2018 14:05:44",2,"Pending offer operations on resource provider resources not properly accounted for in allocator ""The master currently does not accumulate the resources used by offer operations on master failover. While we create a datastructure to hold this information, we missed updating it. Here {{usedByOperations}} is not updated. This leads to problems when the operation becomes terminal and we try to recover the used resources which might not be known to the framework sorter inside the hierarchical allocator."""," hashmap usedByOperations; if (provider.newOperations.isSome()) { foreachpair (const id::UUID& uuid, const Operation& operation, provider.newOperations.get()) { // Update to bookkeeping of operations. 
CHECK(!slave->operations.contains(uuid)) << """"New operation """" << uuid.toString() << """" is already known""""; Framework* framework = nullptr; if (operation.has_framework_id()) { framework = getFramework(operation.framework_id()); } addOperation(framework, slave, new Operation(operation)); } } allocator->addResourceProvider( slaveId, provider.newTotal.get(), usedByOperations); ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8537","02/02/2018 20:38:28",3,"Default executor doesn't wait for status updates to be ack'd before shutting down ""The default executor doesn't wait for pending status updates to be acknowledged before shutting down, instead it sleeps for one second and then terminates: The event handler should exit if upon receiving a {{Event::ACKNOWLEDGED}} the executor is shutting down, no tasks are running anymore, and all pending status updates have been acknowledged."""," void _shutdown() { const Duration duration = Seconds(1); LOG(INFO) << """"Terminating after """" << duration; // TODO(qianzhang): Remove this hack since the executor now receives // acknowledgements for status updates. The executor can terminate // after it receives an ACK for a terminal status update. os::sleep(duration); terminate(self()); } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-8545","02/05/2018 19:07:49",8,"AgentAPIStreamingTest.AttachInputToNestedContainerSession is flaky. """""," I0205 17:11:01.091872 4898 http_proxy.cpp:132] Returning '500 Internal Server Error' for '/slave(974)/api/v1' (Disconnected) /home/centos/workspace/mesos/Mesos_CI-build/FLAG/CMake/label/mesos-ec2-centos-7/mesos/src/tests/api_tests.cpp:6596: Failure Value of: (response).get().status Actual: """"500 Internal Server Error"""" Expected: http::OK().status Which is: """"200 OK"""" Body: """"Disconnected"""" ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8546","02/05/2018 19:39:28",2,"PythonFramework test fails with cache write failure. ""After some recent changes, the  {{ExamplesTest.PythonFramework}} fails on centos and ubuntu rather frequently (but not always). The symptom always is like this (taken from an ASF CI run):   """," [...] 
I0203 03:21:06.871362 11001 leveldb.cpp:347] Persisting action (16 bytes) to leveldb took 73.84466ms I0203 03:21:06.871433 11001 replica.cpp:712] Persisted action TRUNCATE at position 8 I0203 03:21:06.871841 10984 replica.cpp:695] Replica received learned notice for position 8 from log-network(1)@172.17.0.4:43102 I0203 03:21:06.908581 11004 hierarchical.cpp:2429] Filtered offer with ports:[31000-32000]; mem:9984; disk:367463 on agent 0bd8b628-491d-46a1-a358-6cc902ee2578-S1 for role * of framework 0bd8b628-491d-46a1-a358-6cc902ee2578-0000 I0203 03:21:06.908924 11004 hierarchical.cpp:2429] Filtered offer with cpus:1; mem:10112; disk:367463; ports:[31000-32000] on agent 0bd8b628-491d-46a1-a358-6cc902ee2578-S2 for role * of framework 0bd8b628-491d-46a1-a358-6cc902ee2578-0000 I0203 03:21:06.909207 11004 hierarchical.cpp:2429] Filtered offer with ports:[31000-32000]; mem:9984; disk:367463 on agent 0bd8b628-491d-46a1-a358-6cc902ee2578-S0 for role * of framework 0bd8b628-491d-46a1-a358-6cc902ee2578-0000 I0203 03:21:06.909306 11004 hierarchical.cpp:1517] Performed allocation for 3 agents in 1.276217ms I0203 03:21:06.945303 10984 leveldb.cpp:347] Persisting action (18 bytes) to leveldb took 73.445285ms I0203 03:21:06.945451 10984 leveldb.cpp:423] Deleting ~2 keys from leveldb took 81868ns I0203 03:21:06.945477 10984 replica.cpp:712] Persisted action TRUNCATE at position 8 Traceback (most recent call last): File """"/mesos/mesos-1.6.0/_build/../src/examples/python/test_executor.py"""", line 25, in from mesos.executor import MesosExecutorDriver File """"build/bdist.linux-x86_64/egg/mesos/executor/__init__.py"""", line 17, in File """"build/bdist.linux-x86_64/egg/mesos/executor/_executor.py"""", line 7, in File """"build/bdist.linux-x86_64/egg/mesos/executor/_executor.py"""", line 4, in __bootstrap__ File """"/mesos/mesos-1.6.0/_build/3rdparty/setuptools-20.9.0/pkg_resources/__init__.py"""", line 1172, in resource_filename self, resource_name File """"/mesos/mesos-1.6.0/_build/3rdparty/setuptools-20.9.0/pkg_resources/__init__.py"""", line 1716, in get_resource_filename self._extract_resource(manager, self._eager_to_zip(name)) File """"/mesos/mesos-1.6.0/_build/3rdparty/setuptools-20.9.0/pkg_resources/__init__.py"""", line 1746, in _extract_resource self.egg_name, self._parts(zip_path) File """"/mesos/mesos-1.6.0/_build/3rdparty/setuptools-20.9.0/pkg_resources/__init__.py"""", line 1239, in get_cache_path self.extraction_error() File """"/mesos/mesos-1.6.0/_build/3rdparty/setuptools-20.9.0/pkg_resources/__init__.py"""", line 1219, in extraction_error raise err pkg_resources.ExtractionError: Can't extract file(s) to egg cache The following error occurred while trying to extract file(s) to the Python egg cache: [Errno 17] File exists: '/home/mesos/.python-eggs/mesos.executor-1.6.0-py2.7-linux-x86_64.egg-tmp' The Python egg cache directory is currently set to: /home/mesos/.python-eggs Perhaps your account does not have write access to this directory? You can change the cache directory by setting the PYTHON_EGG_CACHE environment variable to point to an accessible directory.",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8548","02/06/2018 05:24:19",2,"Test StorageLocalResourceProviderTest.ROOT_Metrics is flaky ""The SLRP Metrics test is flaky because the agent might got two {{SlaveRegisteredMessage}}s due to its retry logic for registration, and thus it would send two {{UpdateSlaveMessage}}s. 
As a result, the futures waiting for these messages will be ready before the plugin is actually launched. This will lead to a race between the SIGKILL and LAUNCH_CONTAINER in the test, and if the kill happens before SLRP gets connected to the plugin, SLRP will wait for 1 minutes before giving up, which is too long for the test to wait for a second launch.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8550","02/07/2018 16:24:50",2,"Bug in `Master::detected()` leads to coredump in `MasterZooKeeperTest.MasterInfoAddress`. "" This failure is most likely caused by calling [leader->has_domain()|https://github.com/apache/mesos/blob/994213739b1afc473bbd9d15ded7c3fd26eaa924/src/master/master.cpp#L2159] on empty `leader`, from logs: """," 15:55:17 Assertion failed: (isSome()), function get, file ../../3rdparty/stout/include/stout/option.hpp, line 119. 15:55:17 *** Aborted at 1518018924 (unix time) try """"date -d @1518018924"""" if you are using GNU date *** 15:55:17 PC: @ 0x7fff4f8f2e3e __pthread_kill 15:55:17 *** SIGABRT (@0x7fff4f8f2e3e) received by PID 39896 (TID 0x700000427000) stack trace: *** 15:55:17 @ 0x7fff4fa24f5a _sigtramp 15:55:17 I0207 07:55:24.945252 4890624 group.cpp:511] ZooKeeper session expired 15:55:17 @ 0x700000425500 (unknown) 15:55:17 2018-02-07 07:55:24,945:39896(0x700000633000):ZOO_INFO@log_env@794: Client environment:user.dir=/private/var/folders/6w/rw03zh013y38ys6cyn8qppf80000gn/T/1mHCvU 15:55:17 @ 0x7fff4f84f312 abort 15:55:17 2018-02-07 07:55:24,945:39896(0x700000633000):ZOO_INFO@zookeeper_init@827: Initiating client connection, host=127.0.0.1:52197 sessionTimeout=10000 watcher=0x10d916590 sessionId=0 sessionPasswd= context=0x7fe1bda706a0 flags=0 15:55:17 @ 0x7fff4f817368 __assert_rtn 15:55:17 @ 0x10b9cff97 _ZNR6OptionIN5mesos10MasterInfoEE3getEv 15:55:17 @ 0x10bbb04b5 Option<>::operator->() 15:55:17 @ 0x10bd4514a mesos::internal::master::Master::detected() 15:55:17 @ 0x10bf54558 _ZZN7process8dispatchIN5mesos8internal6master6MasterERKNS_6FutureI6OptionINS1_10MasterInfoEEEESB_EEvRKNS_3PIDIT_EEMSD_FvT0_EOT1_ENKUlOS9_PNS_11ProcessBaseEE_clESM_SO_ 15:55:17 @ 0x10bf54310 _ZN5cpp176invokeIZN7process8dispatchIN5mesos8internal6master6MasterERKNS1_6FutureI6OptionINS3_10MasterInfoEEEESD_EEvRKNS1_3PIDIT_EEMSF_FvT0_EOT1_EUlOSB_PNS1_11ProcessBaseEE_JSB_SQ_EEEDTclclsr3stdE7forwardISF_Efp_Espclsr3stdE7forwardIT0_Efp0_EEEOSF_DpOSS_ 15:55:17 @ 0x10bf542bb _ZN6lambda8internal7PartialIZN7process8dispatchIN5mesos8internal6master6MasterERKNS2_6FutureI6OptionINS4_10MasterInfoEEEESE_EEvRKNS2_3PIDIT_EEMSG_FvT0_EOT1_EUlOSC_PNS2_11ProcessBaseEE_JSC_NSt3__112placeholders4__phILi1EEEEE13invoke_expandISS_NST_5tupleIJSC_SW_EEENSZ_IJOSR_EEEJLm0ELm1EEEEDTclsr5cpp17E6invokeclsr3stdE7forwardISG_Efp_Espcl6expandclsr3stdE3getIXT2_EEclsr3stdE7forwardISK_Efp0_EEclsr3stdE7forwardISN_Efp2_EEEEOSG_OSK_N5cpp1416integer_sequenceImJXspT2_EEEESO_ 15:55:17 @ 0x10bf541f3 _ZNO6lambda8internal7PartialIZN7process8dispatchIN5mesos8internal6master6MasterERKNS2_6FutureI6OptionINS4_10MasterInfoEEEESE_EEvRKNS2_3PIDIT_EEMSG_FvT0_EOT1_EUlOSC_PNS2_11ProcessBaseEE_JSC_NSt3__112placeholders4__phILi1EEEEEclIJSR_EEEDTcl13invoke_expandclL_ZNST_4moveIRSS_EEONST_16remove_referenceISG_E4typeEOSG_EdtdefpT1fEclL_ZNSZ_IRNST_5tupleIJSC_SW_EEEEES14_S15_EdtdefpT10bound_argsEcvN5cpp1416integer_sequenceImJLm0ELm1EEEE_Eclsr3stdE16forward_as_tuplespclsr3stdE7forwardIT_Efp_EEEEDpOS1C_ 15:55:17 @ 0x10bf540bd 
_ZN5cpp176invokeIN6lambda8internal7PartialIZN7process8dispatchIN5mesos8internal6master6MasterERKNS4_6FutureI6OptionINS6_10MasterInfoEEEESG_EEvRKNS4_3PIDIT_EEMSI_FvT0_EOT1_EUlOSE_PNS4_11ProcessBaseEE_JSE_NSt3__112placeholders4__phILi1EEEEEEJST_EEEDTclclsr3stdE7forwardISI_Efp_Espclsr3stdE7forwardIT0_Efp0_EEEOSI_DpOS10_ 15:55:17 @ 0x10bf54081 _ZN6lambda8internal6InvokeIvEclINS0_7PartialIZN7process8dispatchIN5mesos8internal6master6MasterERKNS5_6FutureI6OptionINS7_10MasterInfoEEEESH_EEvRKNS5_3PIDIT_EEMSJ_FvT0_EOT1_EUlOSF_PNS5_11ProcessBaseEE_JSF_NSt3__112placeholders4__phILi1EEEEEEJSU_EEEvOSJ_DpOT0_ 15:55:17 @ 0x10bf53e06 _ZNO6lambda12CallableOnceIFvPN7process11ProcessBaseEEE10CallableFnINS_8internal7PartialIZNS1_8dispatchIN5mesos8internal6master6MasterERKNS1_6FutureI6OptionINSA_10MasterInfoEEEESK_EEvRKNS1_3PIDIT_EEMSM_FvT0_EOT1_EUlOSI_S3_E_JSI_NSt3__112placeholders4__phILi1EEEEEEEclEOS3_ 15:55:17 @ 0x10ebf464f _ZNO6lambda12CallableOnceIFvPN7process11ProcessBaseEEEclES3_ 15:55:17 @ 0x10ebf44c4 process::ProcessBase::consume() 15:55:17 @ 0x10ec6f4d9 _ZNO7process13DispatchEvent7consumeEPNS_13EventConsumerE 15:55:17 @ 0x10b0b2389 process::ProcessBase::serve() 15:55:17 @ 0x10ebecccc process::ProcessManager::resume() 15:55:17 @ 0x10ecbd335 process::ProcessManager::init_threads()::$_2::operator()() 15:55:17 @ 0x10ecbcee6 _ZNSt3__114__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN7process14ProcessManager12init_threadsEvE3$_2EEEEEPvSB_ 15:55:17 @ 0x7fff4fa2e6c1 _pthread_body 15:55:17 @ 0x7fff4fa2e56d _pthread_start 15:55:17 @ 0x7fff4fa2dc5d thread_start 15:55:17 I0207 07:55:24.944833 5427200 detector.cpp:152] Detected a new leader: None ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8554","02/08/2018 20:15:13",3,"Enhance V1 scheduler send API to receive sync response from Master ""Current scheduler HTTP API doesn't provide a way for the scheduler to get a synchronous response back from the Master. A synchronous API means the scheduler wouldn't have to wait on the event stream to check the status of operations that require master-only validation/approval/rejection.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8567","02/10/2018 02:31:25",3,"Test UriDiskProfileTest.FetchFromHTTP is flaky. ""The {{UriDiskProfileTest.FetchFromHTTP}} test is flaky on Debian 9: I also run it in repetition and got the following error log (although the test itself is passed): """," ../../src/tests/disk_profile_tests.cpp:683 Failed to wait 15secs for future E0209 18:26:37.030012 7282 uri_disk_profile.cpp:220] Failed to parse result: Failed to parse DiskProfileMapping message: INVALID_ARGUMENT:Unexpected end of string. Expected a value. ^ ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8569","02/12/2018 22:06:58",2,"Allow newline characters when decoding base64 strings in stout. ""Current implementation of `stout::base64::decode` errors out on encountering a newline character (""""\n"""" or """"\r\n"""") which is correct wrt [RFC4668#section-3.3|https://tools.ietf.org/html/rfc4648#section-3.3]. However, most implementations insert a newline to delimit encoded string and ignore (instead of erroring out) the newline character while decoding the string. 
Since stout facilities are used by third-party modules to encode/decode base64 data, it is desirable to allow decoding of newline-delimited data.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8574","02/13/2018 08:01:01",5,"Docker executor makes no progress when 'docker inspect' hangs ""In the Docker executor, many calls later in the executor's lifecycle are gated on an initial {{docker inspect}} call returning: https://github.com/apache/mesos/blob/bc6b61bca37752689cffa40a14c53ad89f24e8fc/src/docker/executor.cpp#L223 If that first call to {{docker inspect}} never returns, the executor becomes stuck in a state where it makes no progress and cannot be killed. It's tempting for the executor to simply commit suicide after a timeout, but we must be careful of the case in which the executor's Docker container is actually running successfully, but the Docker daemon is unresponsive. In such a case, we do not want to send TASK_FAILED or TASK_KILLED if the task's container is running successfully.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-8576","02/13/2018 08:15:36",3,"Improve discard handling of 'Docker::inspect()' ""In the call path of {{Docker::inspect()}}, each continuation currently checks if {{promise->future().hasDiscard()}}, where the {{promise}} is associated with the output of the {{docker inspect}} call. However, if the call to {{docker inspect}} becomes hung indefinitely, then continuations are never invoked, and a subsequent discard of the returned {{Future}} will have no effect. We should add proper {{onDiscard}} handling to that {{Future}} so that appropriate cleanup is performed in such cases.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8591","02/16/2018 21:47:28",3,"Add infra to test a hung Docker daemon ""We should add infrastructure to our tests which enables us to test the behavior of the Docker executor and containerizer in the presence of a hung Docker daemon. One possible first-order solution is to build a simple binary which never returns. We could initialize the agent/executor with this binary instead of the Docker CLI in order to simulate a Docker daemon which hangs on every call.""","",0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8592","02/17/2018 00:51:01",1,"Avoid failure for invalid profile in `UriDiskProfileAdaptor` ""We should be defensive and not fail the profile module when the user provides an invalid profile in the profile matrix.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8594","02/19/2018 13:52:09",1,"Mesos master stack overflow in libprocess socket send loop. ""Mesos master crashes under load. Attached are some infos from the `lldb`: To quote [~abudnik] {quote}it’s the stack overflow bug in libprocess due to the way `internal::send()` and `internal::_send()` are implemented in `process.cpp` {quote}"""," Process 41933 resuming Process 41933 stopped * thread #10, stop reason = EXC_BAD_ACCESS (code=2, address=0x7000089ecff8) frame #0: 0x000000010c30ddb6 libmesos-1.6.0.dylib`::_Some() at some.hpp:35 32 template 33 struct _Some 34 { -> 35 _Some(T _t) : t(std::move(_t)) {} 36 37 T t; 38 }; Target 0: (mesos-master) stopped. 
(lldb) ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8598","02/20/2018 18:40:29",1,"Allow empty resource provider selector in `UriDiskProfileAdaptor`. ""Currently in {{UriDiskProfileAdaptor}}, it is invalid for a profile to have a resource provider selector with 0 resource providers. However, one can put non-existent provider types and names into the selector to achieve the same effect, and this is semantically inconsistent. We should allow an empty list of resource providers directly.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8601","02/22/2018 19:07:43",3,"Master crashes during slave reregistration after failover. ""The following happened after a master failover. During slave reregistration, new tasks were added and the new leading master notified all of its subscribers, and triggered the following check failure: This was because the master tried to get the framework info when sending the notification: https://github.com/apache/mesos/blob/1.5.x/src/master/master.cpp#L11190 But it added the framework after that: https://github.com/apache/mesos/blob/1.5.x/src/master/master.cpp#L6963"""," F0222 15:53:44.440387 2805 master.cpp:11190] Check failed: 'framework' Must be non NULL *** Check failure stack trace: *** @ 0x7f1357be521d google::LogMessage::Fail() @ 0x7f1357be704d google::LogMessage::SendToLog() @ 0x7f1357be4e0c google::LogMessage::Flush() @ 0x7f1357be7949 google::LogMessageFatal::~LogMessageFatal() @ 0x7f1356c80e2d google::CheckNotNull<>() @ 0x7f1356ce2666 mesos::internal::master::Master::Subscribers::send() @ 0x7f1356cece83 mesos::internal::master::Slave::addTask() @ 0x7f1356cf3206 mesos::internal::master::Slave::Slave() @ 0x7f1356cf5b90 mesos::internal::master::Master::__reregisterSlave() @ 0x7f1356d02cf8 mesos::internal::master::Master::_reregisterSlave() @ 0x7f1357b43761 process::ProcessBase::consume() @ 0x7f1357b5248c process::ProcessManager::resume() @ 0x7f1357b579f6 _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv @ 0x7f1354e6c230 (unknown) @ 0x7f135468ae25 start_thread @ 0x7f13543b834d __clone ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8604","02/23/2018 21:55:23",2,"Quota headroom tracking may be incorrect in the presence of hierarchical reservation. ""When calculating the global quota headroom, we subtract all unallocated reservations by doing We only traverse roles with reservation. In the presence of hierarchal reservation, this is problematic. Consider a child role (e.g. """"a/b"""") with no reservations, it can still get reserved resources if its ancestor has reservations (e.g. """"a"""" has reservations). However, allocated reserved resources of role “a/b” will be ignored given the above code. The consequence is that availableHeadroom will be underestimated because allocated reservations are underestimated. 
This would lead to excessive resources set aside for quota headroom."""," for each role with reservation availableHeadroom -= role total reservation - role allocated reservation; ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8605","02/23/2018 23:21:20",3,"Terminal task status update will not send if 'docker inspect' is hung ""When the agent processes a terminal status update for a task, it calls {{containerizer->update()}} on the container before it forwards the update: https://github.com/apache/mesos/blob/9635d4a2d12fc77935c3d5d166469258634c6b7e/src/slave/slave.cpp#L5509-L5514 In the Docker containerizer, {{update()}} calls {{Docker::inspect()}}, which means that if the inspect call hangs, the terminal update will not be sent: https://github.com/apache/mesos/blob/9635d4a2d12fc77935c3d5d166469258634c6b7e/src/slave/containerizer/docker.cpp#L1714""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8626","03/02/2018 00:49:22",3,"The 'allocatable' check in the allocator is problematic with multi-role frameworks ""The [allocatable|https://github.com/apache/mesos/blob/1.5.x/src/master/allocator/mesos/hierarchical.cpp#L2471-L2479] check in the allocator (shown below) was originally introduced to help alleviate the situation where a framework receives only disk, but not cpu/memory, thus cannot launch a task. When we introduce multi-role capability to the frameworks, this check makes less sense now. For instance, consider the following case: 1) There is a single agent and a single framework in the cluster 2) The agent has cpu/memory reserved to role A, and disk reserved to B 3) The framework subscribes to both role A and role B 4) The framework expects that it'll receive an offer containing the resources on the agent 5) However, the framework receives no disk resources due to the following [code|https://github.com/apache/mesos/blob/1.5.x/src/master/allocator/mesos/hierarchical.cpp#L2078-L2100]. This is counter intuitive. Two comments: 1) If `allocatable` check is still necessary (see MESOS-7398)? 2) If we want to keep `allocatable` check for the original purpose, we should do that based on framework not role, given that a framework can subscribe to multiple roles now? Some related JIRAs: MESOS-1688 MESOS-7398 """," bool HierarchicalAllocatorProcess::allocatable( const Resources& resources) { Option cpus = resources.cpus(); Option mem = resources.mem(); return (cpus.isSome() && cpus.get() >= MIN_CPUS) || (mem.isSome() && mem.get() >= MIN_MEM); } void HierarchicalAllocatorProcess::__allocate() { ... Resources resources = available.allocatableTo(role); if (!allocatable(resources)) { break; } ... 
} bool Resources::isAllocatableTo( const Resource& resource, const std::string& role) { CHECK(!resource.has_role()) << resource; CHECK(!resource.has_reservation()) << resource; return isUnreserved(resource) || role == reservationRole(resource) || roles::isStrictSubroleOf(role, reservationRole(resource)); } ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8640","03/06/2018 02:04:30",2,"Validate `DockerInfo` exists when container's type is `DOCKER` ""Currently when framework launches a task whose ContainerInfo's type is DOCKER (i.e., Docker containerizer will be used to launch the container), we do not validate if the `DockerInfo` exists in the ContainerInfo, so such task will be sent from master to agent, and will eventually fail due to pulling image with empty name. Actually we have a validation in [this code|https://github.com/apache/mesos/blob/1.5.0/src/docker/docker.cpp#L605:L607], but it is too late (i.e., when Docker executor tries to create the Docker container), we should do the validation much earlier, e.g., in master."""," Failed to launch container: Failed to run 'docker -H unix:///var/run/docker.sock pull :latest': exited with status 1; stderr='invalid reference format' ",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8647","03/08/2018 16:26:28",2,"Enable resource provider agent capability by default ""In 1.5.0 we introduced a resource provider agent capability which e.g., enables a modified operation protocol. We should enable this capability by default.   If tests explicitly depend on the agent being fully operational, they should be adjusted for the modified protocol. It is e.g., not enough to wait for a {{dispatch}} to the agent's recovery method, but instead one should wait for a dedicated {{UpdateSlaveMessage}} from the agent.""","",0,0,1,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8650","03/09/2018 01:57:31",2,"Bump CSI bundle to v0.2. ""Upgrade CSI spec bundle in {{3rdparty/}}.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8653","03/09/2018 02:26:02",2,"Make the CSI client to support CSI v0.2. ""CSI v0.2 is incompatible with v0.1, thus we need to modify the CSI client to support the new CSI API. We may consider supporting both v0.1 and v0.2 in Mesos 1.6, or just deprecating v0.1.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8657","03/09/2018 19:17:14",3,"Build CSI proto in CMake. ""We should be able to build CSI proto with CMake.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8702","03/21/2018 02:29:27",2,"Replace the manual parsing in Mesos code with the native protobuf map support ""In MESOS-7656, we have updated the JSON <=> protobuf message conversion in stout for map support which means we can use the native protobuf map feature now in Mesos code. So we should replace the manual parsing for the following fields with the native protobuf map. 
[https://github.com/apache/mesos/blob/1.5.0/include/mesos/docker/v1.proto#L65:L68] [https://github.com/apache/mesos/blob/1.5.0/include/mesos/oci/spec.proto#L33:L36] [https://github.com/apache/mesos/blob/1.5.0/include/mesos/oci/spec.proto#L61:L64] [https://github.com/apache/mesos/blob/1.5.0/include/mesos/oci/spec.proto#L88:L91] [https://github.com/apache/mesos/blob/1.5.0/include/mesos/oci/spec.proto#L107:L110] [https://github.com/apache/mesos/blob/1.5.0/include/mesos/oci/spec.proto#L151:L154] Please note, for [Appc image manifest|https://github.com/apache/mesos/blob/1.5.0/include/mesos/appc/spec.proto#L43], we also have a field {{repeated Label labels = 4}}, but we should not replace it with the native protobuf map support, because according to the [Appc image spec|https://github.com/appc/spec/blob/master/spec/aci.md#image-manifest-schema], this field is not a map, instead it is a list of objects. And in {{mesos.proto}}, we also have a couple protobuf messages which have field like {{optional Labels labels = 10}}, .e.g. {{TaskInfo.labels}}, I would not suggest to replace them with native protobuf map since that would be an API changes which may break framework's code.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8711","03/21/2018 16:14:39",1,"SlaveTest.ChangeDomain is disabled. ""This test has been disabled in https://github.com/apache/mesos/commit/c0468b240842d4aaf04249cb0a58c59c43d1850d. We should either fix or remove it.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8715","03/21/2018 19:12:38",2,"Consider removing conditional inclusion in the public header `csi/spec.hpp`. ""Currently we conditionally include {{csi.grpc.pb.h}} in {{csi/spec.hpp}} based on the configuration config {{ENABLE_GRPC}}, which is not ideal since this makes the public header depends on an some-what internal configuration flag. We could consider one of the following approaches to remove such dependency: 1. Generate a blank {{csi.grpc.pb.h}} when gRPC is not enabled. 2. Split {{csi/spec.hpp}} into {{csi/messages.hpp}} and {{csi/services.hpp}}, and do the conditional inclusion of {{csi/services.hpp}} in the implementation files. 3. Only include {{csi.pb.h}} in {{csi/spec.hpp}} since Mesos is only publicly dependent on the proto messages. Have a {{src/csi/services.hpp}} to include {{csi.grpc.pb.h}}. 4. Remove this wrapper header file and directly include {{csi.pb.h}} and {{csi.grpc.pb.h}}.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8717","03/22/2018 01:04:02",5,"Support CSI v0.2 in SLRP. ""SLRP needs to be modified to talk to plugins using CSI v0.2.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8719","03/22/2018 09:04:25",2,"Mesos configured with `--enable-grpc` doesn't compile on non-Linux builds ""Commit {{59cca968e04dee069e0df2663733b6d6f55af0da}} added {{examples/test_csi_plugin.cpp}} to non-Linux builds that are configured using the {{--enable-grpc}} flag. 
As {{examples/test_csi_plugin.cpp}} includes {{fs/linux.hpp}}, it can only compile on Linux and needs to be disabled for non-Linux builds.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8728","03/23/2018 16:28:29",2,"Don't print full usage for invocation errors ""The current usage string for mesos-master comes in at 399 lines, and for mesos-agent at 685 lines.   Printing such a wall of text will overflow most terminal windows, making it necessary to scroll up to see the actual error when invoking mesos with an incorrect command line.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8732","03/27/2018 13:58:57",8,"Use composing containerizer in some agent tests. ""If we assign """"docker,mesos"""" to the `containerizers` flag for an agent, then `ComposingContainerizer` will be used for many tests that do not specify `containerizers` flag. That's the goal of this task. I tried to do that by adding [`flags.containerizers = """"docker,mesos"""";`|https://github.com/apache/mesos/blob/master/src/tests/mesos.cpp#L273], but it turned out that some tests are started to hang due to a paused clocks, while docker c'zer and docker library use libprocess clocks.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8733","03/27/2018 16:03:42",1,"OversubscriptionTest.ForwardUpdateSlaveMessage is flaky ""Observed this failure in CI, """," [ RUN ] OversubscriptionTest.ForwardUpdateSlaveMessage 3: I0327 10:12:04.032042 18320 cluster.cpp:172] Creating default 'local' authorizer 3: I0327 10:12:04.035696 18321 master.cpp:463] Master b5c97327-11cc-4183-82ed-75e62b71cc58 (1931c74e0c4c) started on 172.17.0.2:35020 3: I0327 10:12:04.035732 18321 master.cpp:466] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""HierarchicalDRF"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/4j65Va/credentials"""" --filter_gpu_resources=""""true"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_unreachable_tasks_per_framework=""""1000"""" --port=""""5050"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""in_memory"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --require_agent_domain=""""false"""" --root_submissions=""""true"""" --user_sorter=""""drf"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/4j65Va/master"""" --zk_session_timeout=""""10secs"""" 3: I0327 10:12:04.036129 18321 master.cpp:515] Master only allowing authenticated frameworks to register 3: I0327 10:12:04.036140 
18321 master.cpp:521] Master only allowing authenticated agents to register 3: I0327 10:12:04.036147 18321 master.cpp:527] Master only allowing authenticated HTTP frameworks to register 3: I0327 10:12:04.036156 18321 credentials.hpp:37] Loading credentials for authentication from '/tmp/4j65Va/credentials' 3: I0327 10:12:04.036468 18321 master.cpp:571] Using default 'crammd5' authenticator 3: I0327 10:12:04.036643 18321 http.cpp:959] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readonly' 3: I0327 10:12:04.036834 18321 http.cpp:959] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' 3: I0327 10:12:04.037005 18321 http.cpp:959] Creating default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' 3: I0327 10:12:04.037170 18321 master.cpp:652] Authorization enabled 3: I0327 10:12:04.037370 18338 whitelist_watcher.cpp:77] No whitelist given 3: I0327 10:12:04.037374 18322 hierarchical.cpp:175] Initialized hierarchical allocator process 3: I0327 10:12:04.040787 18321 master.cpp:2126] Elected as the leading master! 3: I0327 10:12:04.040812 18321 master.cpp:1682] Recovering from registrar 3: I0327 10:12:04.040966 18342 registrar.cpp:347] Recovering registrar 3: I0327 10:12:04.041606 18330 registrar.cpp:391] Successfully fetched the registry (0B) in 590848ns 3: I0327 10:12:04.041764 18330 registrar.cpp:495] Applied 1 operations in 57052ns; attempting to update the registry 3: I0327 10:12:04.042466 18330 registrar.cpp:552] Successfully updated the registry in 638976ns 3: I0327 10:12:04.042615 18330 registrar.cpp:424] Successfully recovered registrar 3: I0327 10:12:04.043128 18339 master.cpp:1796] Recovered 0 agents from the registry (135B); allowing 10mins for agents to reregister 3: I0327 10:12:04.043151 18326 hierarchical.cpp:213] Skipping recovery of hierarchical allocator: nothing to recover 3: W0327 10:12:04.048898 18320 process.cpp:2805] Attempted to spawn already running process files@172.17.0.2:35020 3: I0327 10:12:04.050076 18320 containerizer.cpp:304] Using isolation { environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni } 3: W0327 10:12:04.050720 18320 backend.cpp:76] Failed to create 'aufs' backend: AufsBackend requires root privileges 3: W0327 10:12:04.050746 18320 backend.cpp:76] Failed to create 'bind' backend: BindBackend requires root privileges 3: I0327 10:12:04.050791 18320 provisioner.cpp:299] Using default backend 'copy' 3: I0327 10:12:04.053491 18320 cluster.cpp:460] Creating default 'local' authorizer 3: I0327 10:12:04.056531 18326 slave.cpp:261] Mesos agent started on (546)@172.17.0.2:35020 3: I0327 10:12:04.056571 18326 slave.cpp:262] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/OversubscriptionTest_ForwardUpdateSlaveMessage_YeoNx5/store/appc"""" --authenticate_http_executors=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/OversubscriptionTest_ForwardUpdateSlaveMessage_YeoNx5/credential"""" --default_role=""""*"""" --disallow_sharing_agent_pid_namespace=""""false"""" 
--disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/OversubscriptionTest_ForwardUpdateSlaveMessage_YeoNx5/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_reregistration_timeout=""""2secs"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/OversubscriptionTest_ForwardUpdateSlaveMessage_YeoNx5/fetch"""" --fetcher_cache_size=""""2GB"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --hadoop_home="""""""" --help=""""false"""" --hostname_lookup=""""true"""" --http_command_executor=""""false"""" --http_credentials=""""/tmp/OversubscriptionTest_ForwardUpdateSlaveMessage_YeoNx5/http_credentials"""" --http_heartbeat_interval=""""30secs"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --jwt_secret_key=""""/tmp/OversubscriptionTest_ForwardUpdateSlaveMessage_YeoNx5/jwt_secret_key"""" --launcher=""""posix"""" --launcher_dir=""""/tmp/SRC/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --port=""""5051"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --reconfiguration_policy=""""equal"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/tmp/OversubscriptionTest_ForwardUpdateSlaveMessage_YeoNx5"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/OversubscriptionTest_ForwardUpdateSlaveMessage_8qkWeD"""" --zk_session_timeout=""""10secs"""" 3: I0327 10:12:04.057035 18326 credentials.hpp:86] Loading credential for authentication from '/tmp/OversubscriptionTest_ForwardUpdateSlaveMessage_YeoNx5/credential' 3: I0327 10:12:04.057212 18326 slave.cpp:294] Agent using credential for: test-principal 3: I0327 10:12:04.057235 18326 credentials.hpp:37] Loading credentials for authentication from '/tmp/OversubscriptionTest_ForwardUpdateSlaveMessage_YeoNx5/http_credentials' 3: I0327 10:12:04.057521 18326 http.cpp:959] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-executor' 3: I0327 10:12:04.057674 18326 http.cpp:980] Creating default 'jwt' HTTP authenticator for realm 'mesos-agent-executor' 3: I0327 10:12:04.057922 18326 http.cpp:959] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' 3: I0327 10:12:04.058051 18326 http.cpp:980] Creating default 'jwt' HTTP authenticator for realm 'mesos-agent-readonly' 3: I0327 10:12:04.058272 18326 http.cpp:959] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' 3: I0327 10:12:04.058408 18326 http.cpp:980] Creating default 'jwt' HTTP authenticator for realm 'mesos-agent-readwrite' 3: I0327 10:12:04.058784 18326 disk_profile_adaptor.cpp:80] Creating default disk profile 
adaptor module 3: I0327 10:12:04.060353 18326 slave.cpp:609] Agent resources: [{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":2.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] 3: I0327 10:12:04.060569 18326 slave.cpp:617] Agent attributes: [ ] 3: I0327 10:12:04.060583 18326 slave.cpp:626] Agent hostname: 1931c74e0c4c 3: I0327 10:12:04.060739 18330 task_status_update_manager.cpp:181] Pausing sending task status updates 3: I0327 10:12:04.062536 18331 state.cpp:66] Recovering state from '/tmp/OversubscriptionTest_ForwardUpdateSlaveMessage_8qkWeD/meta' 3: I0327 10:12:04.062916 18322 task_status_update_manager.cpp:207] Recovering task status update manager 3: I0327 10:12:04.063143 18323 containerizer.cpp:674] Recovering containerizer 3: I0327 10:12:04.064961 18330 provisioner.cpp:495] Provisioner recovery complete 3: I0327 10:12:04.065325 18336 slave.cpp:7212] Finished recovery 3: I0327 10:12:04.066190 18331 task_status_update_manager.cpp:181] Pausing sending task status updates 3: I0327 10:12:04.066213 18336 slave.cpp:1260] New master detected at master@172.17.0.2:35020 3: I0327 10:12:04.066336 18336 slave.cpp:1315] Detecting new master 3: I0327 10:12:04.067641 18338 slave.cpp:1342] Authenticating with master master@172.17.0.2:35020 3: I0327 10:12:04.067776 18338 slave.cpp:1351] Using default CRAM-MD5 authenticatee 3: I0327 10:12:04.068178 18322 authenticatee.cpp:121] Creating new client SASL connection 3: I0327 10:12:04.068650 18324 master.cpp:9206] Authenticating slave(546)@172.17.0.2:35020 3: I0327 10:12:04.068862 18321 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(1085)@172.17.0.2:35020 3: I0327 10:12:04.069332 18327 authenticator.cpp:98] Creating new server SASL connection 3: I0327 10:12:04.069733 18335 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 3: I0327 10:12:04.069778 18335 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' 3: I0327 10:12:04.070008 18332 authenticator.cpp:204] Received SASL authentication start 3: I0327 10:12:04.070113 18332 authenticator.cpp:326] Authentication requires more steps 3: I0327 10:12:04.070336 18323 authenticatee.cpp:259] Received SASL authentication step 3: I0327 10:12:04.070583 18342 authenticator.cpp:232] Received SASL authentication step 3: I0327 10:12:04.070636 18342 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '1931c74e0c4c' server FQDN: '1931c74e0c4c' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 3: I0327 10:12:04.070659 18342 auxprop.cpp:181] Looking up auxiliary property '*userPassword' 3: I0327 10:12:04.070724 18342 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' 3: I0327 10:12:04.070760 18342 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '1931c74e0c4c' server FQDN: '1931c74e0c4c' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 3: I0327 10:12:04.070824 18342 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true 3: I0327 10:12:04.070832 18342 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since 
SASL_AUXPROP_AUTHZID == true 3: I0327 10:12:04.070847 18342 authenticator.cpp:318] Authentication success 3: I0327 10:12:04.070940 18334 authenticatee.cpp:299] Authentication success 3: I0327 10:12:04.071063 18333 master.cpp:9236] Successfully authenticated principal 'test-principal' at slave(546)@172.17.0.2:35020 3: I0327 10:12:04.071118 18337 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(1085)@172.17.0.2:35020 3: I0327 10:12:04.071286 18328 slave.cpp:1434] Successfully authenticated with master master@172.17.0.2:35020 3: I0327 10:12:04.071718 18328 slave.cpp:1877] Will retry registration in 383294ns if necessary 3: I0327 10:12:04.071923 18330 master.cpp:6326] Received register agent message from slave(546)@172.17.0.2:35020 (1931c74e0c4c) 3: I0327 10:12:04.072154 18330 master.cpp:3802] Authorizing agent providing resources 'cpus:2; mem:1024; disk:1024; ports:[31000-32000]' with principal 'test-principal' 3: I0327 10:12:04.072834 18331 master.cpp:6397] Authorized registration of agent at slave(546)@172.17.0.2:35020 (1931c74e0c4c) 3: I0327 10:12:04.072928 18331 master.cpp:6509] Registering agent at slave(546)@172.17.0.2:35020 (1931c74e0c4c) with id b5c97327-11cc-4183-82ed-75e62b71cc58-S0 3: I0327 10:12:04.073508 18329 registrar.cpp:495] Applied 1 operations in 237308ns; attempting to update the registry 3: I0327 10:12:04.074270 18321 registrar.cpp:552] Successfully updated the registry in 675072ns 3: I0327 10:12:04.074518 18335 master.cpp:6557] Admitted agent b5c97327-11cc-4183-82ed-75e62b71cc58-S0 at slave(546)@172.17.0.2:35020 (1931c74e0c4c) 3: I0327 10:12:04.075176 18335 master.cpp:6602] Registered agent b5c97327-11cc-4183-82ed-75e62b71cc58-S0 at slave(546)@172.17.0.2:35020 (1931c74e0c4c) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] 3: I0327 10:12:04.075368 18323 slave.cpp:1877] Will retry registration in 26.831215ms if necessary 3: I0327 10:12:04.075518 18342 master.cpp:6326] Received register agent message from slave(546)@172.17.0.2:35020 (1931c74e0c4c) 3: I0327 10:12:04.075597 18323 slave.cpp:1481] Registered with master master@172.17.0.2:35020; given agent ID b5c97327-11cc-4183-82ed-75e62b71cc58-S0 3: I0327 10:12:04.075626 18334 hierarchical.cpp:574] Added agent b5c97327-11cc-4183-82ed-75e62b71cc58-S0 (1931c74e0c4c) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] (allocated: {}) 3: I0327 10:12:04.075739 18341 task_status_update_manager.cpp:188] Resuming sending task status updates 3: I0327 10:12:04.075709 18342 master.cpp:3802] Authorizing agent providing resources 'cpus:2; mem:1024; disk:1024; ports:[31000-32000]' with principal 'test-principal' 3: I0327 10:12:04.075896 18323 slave.cpp:1501] Checkpointing SlaveInfo to '/tmp/OversubscriptionTest_ForwardUpdateSlaveMessage_8qkWeD/meta/slaves/b5c97327-11cc-4183-82ed-75e62b71cc58-S0/slave.info' 3: I0327 10:12:04.075943 18334 hierarchical.cpp:1517] Performed allocation for 1 agents in 169342ns 3: I0327 10:12:04.076222 18339 master.cpp:6397] Authorized registration of agent at slave(546)@172.17.0.2:35020 (1931c74e0c4c) 3: I0327 10:12:04.076292 18339 master.cpp:6488] Agent b5c97327-11cc-4183-82ed-75e62b71cc58-S0 at slave(546)@172.17.0.2:35020 (1931c74e0c4c) already registered, resending acknowledgement 3: I0327 10:12:04.076493 18323 slave.cpp:1548] Forwarding agent update 
{""""operations"""":{},""""resource_version_uuid"""":{""""value"""":""""rd+fCEbpQsWYa07c\/1tXpw==""""},""""slave_id"""":{""""value"""":""""b5c97327-11cc-4183-82ed-75e62b71cc58-S0""""},""""update_oversubscribed_resources"""":false} 3: W0327 10:12:04.076702 18323 slave.cpp:1530] Already registered with master master@172.17.0.2:35020 3: I0327 10:12:04.076735 18323 slave.cpp:1548] Forwarding agent update {""""operations"""":{},""""resource_version_uuid"""":{""""value"""":""""rd+fCEbpQsWYa07c\/1tXpw==""""},""""slave_id"""":{""""value"""":""""b5c97327-11cc-4183-82ed-75e62b71cc58-S0""""},""""update_oversubscribed_resources"""":false} 3: I0327 10:12:04.077424 18343 master.cpp:7639] Ignoring update on agent b5c97327-11cc-4183-82ed-75e62b71cc58-S0 at slave(546)@172.17.0.2:35020 (1931c74e0c4c) as it reports no changes 3: I0327 10:12:04.078074 18343 master.cpp:7639] Ignoring update on agent b5c97327-11cc-4183-82ed-75e62b71cc58-S0 at slave(546)@172.17.0.2:35020 (1931c74e0c4c) as it reports no changes 3: I0327 10:12:04.080782 18341 hierarchical.cpp:1517] Performed allocation for 1 agents in 140840ns 3: /tmp/SRC/src/tests/oversubscription_tests.cpp:319: Failure 3: Value of: update.isReady() 3: Actual: true 3: Expected: false 3: I0327 10:12:04.082888 18321 slave.cpp:919] Agent terminating 3: I0327 10:12:04.083225 18335 master.cpp:1295] Agent b5c97327-11cc-4183-82ed-75e62b71cc58-S0 at slave(546)@172.17.0.2:35020 (1931c74e0c4c) disconnected 3: I0327 10:12:04.083271 18335 master.cpp:3283] Disconnecting agent b5c97327-11cc-4183-82ed-75e62b71cc58-S0 at slave(546)@172.17.0.2:35020 (1931c74e0c4c) 3: I0327 10:12:04.083369 18335 master.cpp:3302] Deactivating agent b5c97327-11cc-4183-82ed-75e62b71cc58-S0 at slave(546)@172.17.0.2:35020 (1931c74e0c4c) 3: I0327 10:12:04.083616 18341 hierarchical.cpp:766] Agent b5c97327-11cc-4183-82ed-75e62b71cc58-S0 deactivated 3: I0327 10:12:04.092846 18320 master.cpp:1137] Master terminating 3: I0327 10:12:04.093572 18323 hierarchical.cpp:609] Removed agent b5c97327-11cc-4183-82ed-75e62b71cc58-S0 3: [ FAILED ] OversubscriptionTest.ForwardUpdateSlaveMessage (68 ms)",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8735","03/27/2018 16:32:47",5,"Implement recovery for resource provider manager registrar ""In order to properly persist and recover resource provider information in the resource provider manager we should # Include a registrar in the manager, and # Implement missing recovery functionality in the registrar so it can return a recovered registry.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8741","03/27/2018 21:30:39",1,"`Add` to sequence will not run if it races with sequence destruction ""Adding item to sequence is realized by dispatching `add()` to the sequence actor. However, this could race with the sequence actor destruction.: After the dispatch but before the dispatched `add()` message gets processed by the sequence actor, if the sequence gets destroyed, a terminate message will be injected to the *head* of the message queue. This would result in the destruction of the sequence without the `add()` call ever gets processed. User would end up with a pending future and the future's `onDiscarded' would not be triggered during the sequence destruction. The solution is to set the `inject` flag to `false` so that the terminating message is enqueued to the end of the sequence actor message queue. 
All `add()` messages that happen before the destruction will be processed before the terminating message.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8748","03/28/2018 19:46:38",3,"Create ACL for grow and shrink volume ""As follow up work of MESOS-4965, we should make sure new operations are properly protected in ACL and authorizer.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8769","04/10/2018 08:07:36",1,"Agent crashes when CNI config not defined ""I was deploying an application through marathon in an integration test that looked like this: * Mesos container (UCR) * container network * some network name specified Given network name did not exist, I did not even passed CNI config to the agent. After Mesos tried to deploy my task, the agent crashed because of missing CNI config. """," [31mWARN [0;39m[10:51:53 AppDeployIntegrationTest-MesosAgent-32780] *** SIGABRT (@0x1980) received by PID 6528 (TID 0x7f3124b58700) from PID 6528; stack trace: *** [31mWARN [0;39m[10:51:53 AppDeployIntegrationTest-MesosAgent-32780] @ 0x7f312e5c2890 (unknown) [31mWARN [0;39m[10:51:53 AppDeployIntegrationTest-MesosAgent-32780] @ 0x7f312e23d067 (unknown) [31mWARN [0;39m[10:51:53 AppDeployIntegrationTest-MesosAgent-32780] @ 0x7f312e23e448 (unknown) [31mWARN [0;39m[10:51:53 AppDeployIntegrationTest-MesosAgent-32780] @ 0x7f312e236266 (unknown) [31mWARN [0;39m[10:51:53 AppDeployIntegrationTest-MesosAgent-32780] @ 0x7f312e236312 (unknown) [31mWARN [0;39m[10:51:53 AppDeployIntegrationTest-MesosAgent-32780] @ 0x7f31304fd233 _ZNKR6OptionISsE3getEv.part.103 [31mWARN [0;39m[10:51:53 AppDeployIntegrationTest-MesosAgent-32780] @ 0x7f313050b60c mesos::internal::slave::NetworkCniIsolatorProcess::getNetworkConfigJSON() [31mWARN [0;39m[10:51:53 AppDeployIntegrationTest-MesosAgent-32780] @ 0x7f313050bd54 mesos::internal::slave::NetworkCniIsolatorProcess::prepare() [31mWARN [0;39m[10:51:53 AppDeployIntegrationTest-MesosAgent-32780] @ 0x7f313027b903 _ZNSt17_Function_handlerIFvPN7process11ProcessBaseEESt5_BindIFZNS0_8dispatchI6OptionIN5mesos5slave19ContainerLaunchInfoEENS7_8internal5slave20MesosIsolatorProcessERKNS7_11ContainerIDERKNS8_15ContainerConfigESG_SJ_EENS0_6FutureIT_EERKNS0_3PIDIT0_EEMSO_FSM_T1_T2_EOT3_OT4_EUlRSE_RSH_S2_E_SE_SH_St12_PlaceholderILi1EEEEE9_M_invokeERKSt9_Any_dataS2_ [31mWARN [0;39m[10:51:53 AppDeployIntegrationTest-MesosAgent-32780] @ 0x7f3130a7ee29 process::ProcessManager::resume() ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8774","04/11/2018 12:35:13",8,"Authenticate and authorize calls to the resource provider manager's API ""The resource provider manager is exposed via an agent endpoint against which resource providers subscribe or perform other actions. We should authenticate and authorize any interactions there. Since currently local resource providers run on agents who manages their lifetime it seems natural to extend the framework used for executor authentication to resource providers as well. The agent would then generate a secret token whenever a new resource provider is started and inject it into the resource providers it launches. 
Resource providers in turn would use this token when interacting with the manager API.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8777","04/12/2018 02:45:40",5,"Support `STAGE_UNSTAGE_VOLUME` CSI capability in SLRP ""CSI v0.2 introduces a new `STAGE_UNSTAGE_VOLUME` node capability. If a plugin has this capability, SLRP needs to call `NodeStageVolume` before publishing a volume, and call `NodeUnstageVolume` after unpublishing a volume.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8781","04/13/2018 00:34:33",3,"Mesos master shouldn't silently drop operations ""We should make sure that all call places of {{void Master::drop(Framework*, const Offer::Operation&, const string&)}} send a status update if an operation ID was specified. OR we should make sure that they do NOT send one, and make that method send one.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8782","04/13/2018 00:37:09",3,"Transition operations to OPERATION_GONE_BY_OPERATOR when marking an agent gone. ""The master should transition operations to the state {{OPERATION_GONE_BY_OPERATOR}} when an agent is marked gone, sending an operation status update to the frameworks that created them. We should also remove them from {{Master::frameworks}}.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8783","04/13/2018 00:39:35",3,"Transition pending operations to OPERATION_UNREACHABLE when an agent is removed. ""Pending operations on an agent should be transitioned to `OPERATION_UNREACHABLE` when an agent is marked unreachable. We should also make sure that we pro-actively send operation status updates for these operations when the agent becomes unreachable. We should also make sure that we send new operation updates if/when the agent reconnects - perhaps this is already accomplished with the existing operation update logic in the agent?""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8784","04/13/2018 00:41:29",3,"OPERATION_DROPPED operation status updates should include the operation/framework IDs ""The agent should include the operation/framework IDs in operation status updates sent in response to a reconciliation request from the master. These status updates have the operation status: {{OPERATION_DROPPED}}.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8786","04/13/2018 14:02:13",5,"CgroupIsolatorProcess accesses subsystem processes directly. ""The {{CgroupsIsolatorProcess}} interacts with the different cgroups subsystems via {{Processes}} dealing with a dedicated subsystem each. Each {{Process}} is held by {{CgroupsIsolatorProcess}} directly and e.g., no intermediate wrapper class is involved performing {{dispatch}} to an underlying process. Since no wrapper around these {{Subsystem}} processes is used, a user needs to make sure to only {{dispatch}} to the process himself, he should e.g., never directly invoke functions on the {{Process}} or else inconsistencies or races can arise inside the {{Subsystem}} process; if e.g., a {{Subsystem}} dispatches to itself, {{CgroupsIsolatorProcess}} might concurrently invoke {{Subsystem}} functions.   {{CgroupsIsolatorProcess}} does not always {{dispatch}} to these process, but invokes them directly. 
We should fix this by either introducing wrappers around the {{Subsystem}} wrappers, or by explicitly fixing {{CgroupsIsolatorProcess}} to always use {{dispatch}} to interact with its subsystems. While the first approach seems cleaner and more future-proof, the latter might be less effort _now_.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8789","04/15/2018 00:17:52",3,"/roles and webui roles table should display distinct offered and allocated resources. ""The role endpoints currently show accumulated values for resources (allocated), containing offered resources. For gaining an overview showing our allocated resources separately from the offered resources could improve the signal quality, depending on the use case. This also affects the UI display, for example the """"Roles"""" tab. ""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8794","04/17/2018 06:28:00",13,"Support docker image tarball hdfs based fetching. ""Support docker image tarball hdfs based fetching.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8799","04/17/2018 22:27:07",2,"Master should show dynamic resources in state endpoint ""The master currently only shows static agent resources, i.e., resources defined in the agent's {{SlaveInfo}} in its state endpoint. We should fix this code to show the dynamic resources so that at least resource provider resources are shown. We might need to filter out oversubscribed resources for backward-compatibility.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8809","04/20/2018 08:43:10",3,"Add functions for manipulating POSIX ACLs into stout ""We need to add functions for setting/getting POSIX ACLs into stout so that we can leverage these functions to grant volume permissions to the specific task user. This will introduce a new dependency {{libacl-devel}} when building Mesos.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8810","04/20/2018 08:54:47",13,"Grant non-root task user the permissions to access the SANDBOX_PATH volume of PARENT type ""See [design doc|https://docs.google.com/document/d/1QyeDDX4Zr9E-0jKMoPTzsGE-v4KWwjmnCR0l8V4Tq2U/edit#heading=h.s6f8rmu65g2p] for why we need to do this.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8813","04/20/2018 14:04:15",8,"Support multiple tasks with different users can access a persistent volume. 
""See [design doc|https://docs.google.com/document/d/1QyeDDX4Zr9E-0jKMoPTzsGE-v4KWwjmnCR0l8V4Tq2U/edit#heading=h.f4x59l41lxwx] for why we need to do this.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8818","04/23/2018 10:07:47",2,"VolumeSandboxPathIsolatorTest.SharedParentTypeVolume fails on macOS ""This test fails on macOS with: Likely a regression introduced in commit {{189efed864ca2455674b0790d6be4a73c820afd6}} which removed {{volume/sandbox_path}} for POSIX."""," [ RUN ] VolumeSandboxPathIsolatorTest.SharedParentTypeVolume I0423 10:55:19.624977 2767623040 containerizer.cpp:296] Using isolation { environment_secret, filesystem/posix, volume/sandbox_path } I0423 10:55:19.625176 2767623040 provisioner.cpp:299] Using default backend 'copy' ../../src/tests/containerizer/volume_sandbox_path_isolator_tests.cpp:130: Failure create: Unknown or unsupported isolator 'volume/sandbox_path' [ FAILED ] VolumeSandboxPathIsolatorTest.SharedParentTypeVolume (3 ms) ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8819","04/23/2018 12:39:12",1,"mesos.pom file hardcodes developers ""Currently {{src/java/mesos.pom.in}} hardcodes developers. The information there duplicates {{docs/comitters.md}} and is currently likely outdated and will get out of sync again in the future. It seems we should either automatically populate this field during the release process or drop this field without replacement. We already point to the dev mailing list which can be used to reach Mesos developers.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8835","04/25/2018 10:30:28",1,"mesos-tests takes a long time to execute no tests ""Executing {{mesos-tests}} takes substantially more time than running {{stout-tests}} or {{libprocess-tests}} when no tests are executed, e.g., for a release build with debug symbols Looking at where time is spent with {{perf}} points to two {{parameters}} functions in {{src/tests/resources_tests.cpp}}. 
These functions construct large collections of {{Resource}} when registering {{Resources_Contains_BENCHMARK_Test}} and {{Resources_Scalar_Arithmetic_BENCHMARK_Test}}, i.e., these functions will be executed even if the corresponding test is filtered out."""," _|master⚡ ⇒ GTEST_FILTER= hyperfine --warmup=3 ./3rdparty/stout/tests/stout-tests ./3rdparty/libprocess/src/tests/libprocess-tests ./src/mesos-tests Benchmark #1: ./3rdparty/stout/tests/stout-tests Time (mean ± σ): 17.1 ms ± 4.3 ms [User: 12.4 ms, System: 4.5 ms] Range (min … max): 11.3 ms … 25.1 ms Benchmark #2: ./3rdparty/libprocess/src/tests/libprocess-tests Time (mean ± σ): 17.2 ms ± 0.2 ms [User: 13.7 ms, System: 4.7 ms] Range (min … max): 16.9 ms … 18.0 ms Benchmark #3: ./src/mesos-tests Time (mean ± σ): 795.4 ms ± 10.5 ms [User: 397.6 ms, System: 79.5 ms] Range (min … max): 784.7 ms … 814.3 ms Summary './3rdparty/stout/tests/stout-tests' ran 1.01x faster than './3rdparty/libprocess/src/tests/libprocess-tests' 46.56x faster than './src/mesos-tests' ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8837","04/25/2018 11:54:37",1,"Add test of resource provider manager recovery ""See https://reviews.apache.org/r/66546/.""","",0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8838","04/25/2018 12:15:34",2,"Consider validating that resubscribing resource providers do not change their name or type ""The agent currently uses a resource provider's name and type to construct e.g., paths for persisting resource provider state and their recovery. With that we should likely prevent resource providers from changing that information since we might otherwise be unable to recover them successfully.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8839","04/25/2018 15:00:46",3,"Resource provider manager registrar recovery can race with agent on agent state leading to hard failures ""When running in the agent the resource provider manager persists its state into the agent's state. The agent uses a LevelDB state which protects against concurrent access. The way we modelled LevelDB an {{fetch}} when a lock is present leads to a failed {{Future}} result. 
When the resource provider manager encounters a failed recovery it emits a fatal error, e.g., We should not fail hard for such recoverable failure scenarios."""," 11:48:26 F0425 11:48:26.650568 26819 manager.cpp:254] Failed to recover resource provider manager registry: Failed: IO error: lock /tmp/ParentChildContainerTypeAndContentType_AgentContainerAPITest_RecoverNestedContainer_10_HXbQCK/meta/slaves/6645885c-050a-4518-b896-a20b3e72a070-S0/resource_provider_registry/LOCK: already held by process 11:48:26 *** Check failure stack trace: ***",0,0,1,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8841","04/26/2018 14:31:54",2,"Flaky `MasterAllocatorTest/0.SingleFramework` ""   """," [ RUN ] MasterAllocatorTest/0.SingleFramework F0426 08:31:29.775804 9701 hierarchical.cpp:586] Check failed: slaves.contains(slaveId) *** Check failure stack trace: *** @ 0x7f365e108fb8 google::LogMessage::Fail() @ 0x7f365e108f15 google::LogMessage::SendToLog() @ 0x7f365e10890f google::LogMessage::Flush() @ 0x7f365e10b6d2 google::LogMessageFatal::~LogMessageFatal() @ 0x7f365c63b8d7 mesos::internal::master::allocator::internal::HierarchicalAllocatorProcess::removeSlave() @ 0x55728a500ac7 _ZZN7process8dispatchIN5mesos8internal6master9allocator21MesosAllocatorProcessERKNS1_7SlaveIDES8_EEvRKNS_3PIDIT_EEMSA_FvT0_EOT1_ENKUlOS6_PNS_11ProcessBaseEE_clESJ_SL_ @ 0x55728a589908 _ZN5cpp176invokeIZN7process8dispatchIN5mesos8internal6master9allocator21MesosAllocatorProcessERKNS3_7SlaveIDESA_EEvRKNS1_3PIDIT_EEMSC_FvT0_EOT1_EUlOS8_PNS1_11ProcessBaseEE_JS8_SN_EEEDTclcl7forwardISC_Efp_Espcl7forwardIT0_Efp0_EEEOSC_DpOSP_ @ 0x55728a586a0f _ZN6lambda8internal7PartialIZN7process8dispatchIN5mesos8internal6master9allocator21MesosAllocatorProcessERKNS4_7SlaveIDESB_EEvRKNS2_3PIDIT_EEMSD_FvT0_EOT1_EUlOS9_PNS2_11ProcessBaseEE_JS9_St12_PlaceholderILi1EEEE13invoke_expandISP_St5tupleIJS9_SR_EESU_IJOSO_EEJLm0ELm1EEEEDTcl6invokecl7forwardISD_Efp_Espcl6expandcl3getIXT2_EEcl7forwardISH_Efp0_EEcl7forwardISK_Efp2_EEEEOSD_OSH_N5cpp1416integer_sequenceImJXspT2_EEEESL_ @ 0x55728a5852b0 _ZNO6lambda8internal7PartialIZN7process8dispatchIN5mesos8internal6master9allocator21MesosAllocatorProcessERKNS4_7SlaveIDESB_EEvRKNS2_3PIDIT_EEMSD_FvT0_EOT1_EUlOS9_PNS2_11ProcessBaseEE_JS9_St12_PlaceholderILi1EEEEclIJSO_EEEDTcl13invoke_expandcl4movedtdefpT1fEcl4movedtdefpT10bound_argsEcvN5cpp1416integer_sequenceImJLm0ELm1EEEE_Ecl16forward_as_tuplespcl7forwardIT_Efp_EEEEDpOSX_ @ 0x55728a584209 _ZN5cpp176invokeIN6lambda8internal7PartialIZN7process8dispatchIN5mesos8internal6master9allocator21MesosAllocatorProcessERKNS6_7SlaveIDESD_EEvRKNS4_3PIDIT_EEMSF_FvT0_EOT1_EUlOSB_PNS4_11ProcessBaseEE_JSB_St12_PlaceholderILi1EEEEEJSQ_EEEDTclcl7forwardISF_Efp_Espcl7forwardIT0_Efp0_EEEOSF_DpOSV_ @ 0x55728a583995 _ZN6lambda8internal6InvokeIvEclINS0_7PartialIZN7process8dispatchIN5mesos8internal6master9allocator21MesosAllocatorProcessERKNS7_7SlaveIDESE_EEvRKNS5_3PIDIT_EEMSG_FvT0_EOT1_EUlOSC_PNS5_11ProcessBaseEE_JSC_St12_PlaceholderILi1EEEEEJSR_EEEvOSG_DpOT0_ @ 0x55728a581522 _ZNO6lambda12CallableOnceIFvPN7process11ProcessBaseEEE10CallableFnINS_8internal7PartialIZNS1_8dispatchIN5mesos8internal6master9allocator21MesosAllocatorProcessERKNSA_7SlaveIDESH_EEvRKNS1_3PIDIT_EEMSJ_FvT0_EOT1_EUlOSF_S3_E_JSF_St12_PlaceholderILi1EEEEEEclEOS3_ @ 0x7f365e0484c0 _ZNO6lambda12CallableOnceIFvPN7process11ProcessBaseEEEclES3_ @ 0x7f365e025760 process::ProcessBase::consume() @ 0x7f365e033abc _ZNO7process13DispatchEvent7consumeEPNS_13EventConsumerE @ 0x55728a1cb6ea 
process::ProcessBase::serve() @ 0x7f365e0225ed process::ProcessManager::resume() @ 0x7f365e01e94c _ZZN7process14ProcessManager12init_threadsEvENKUlvE_clEv @ 0x7f365e031080 _ZNSt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEE9_M_invokeIJEEEvSt12_Index_tupleIJXspT_EEE @ 0x7f365e030a34 _ZNSt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEclEv @ 0x7f365e030338 _ZNSt6thread11_State_implISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv @ 0x7f365478976f (unknown) @ 0x7f3654e6973a start_thread @ 0x7f3653eefe7f __GI___clone",0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8843","04/26/2018 18:43:15",2,"Per Framework CALL metrics ""Metrics about number of different kinds of calls sent by a framework to master.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8844","04/26/2018 18:43:56",2,"Per Framework EVENT metrics ""Metrics for number of events sent by the master to the framework.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8845","04/26/2018 18:44:50",2,"Per Framework Operation metrics ""Metris for number of operations sent via ACCEPT calls by framework.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8848","04/26/2018 18:49:05",2,"Per Framework Offer metrics ""Metrics regarding number of offers (sent, accepted, declined, rescinded) on a per framework basis.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8851","04/26/2018 22:47:56",3,"Introduce a push-based gauge. ""Currently, we only have pull-based gauges which have significant performance downsides. A push-based gauge differs from a pull-based gauge in that the client is responsible for pushing the latest value into the gauge whenever it changes. This can be challenging in some cases as it requires the client to have a good handle on when the gauge value changes (rather than just computing the current value when asked). It is highly recommended to use push-based gauges if possible as they provide significant performance benefits over pull-based gauges. Pull-based gauge suffer from delays getting processed on the event queue of a Process, as well as incur computation cost on the Process each time the metrics are collected. Push-based gauges, on the other hand, incur no cost to the owning Process when metrics are collected, and instead incur a trivial cost when the Process pushes new values in.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8865","05/02/2018 09:58:28",1,"Suspicious enum value comparisons in scheduler Java bindings ""Clang reports suspicious comparisons of enum values in the scheduler Java bindings, While the current implementation might just work since the different enum values might by accident map onto the same integer values (needs to be confirmed), this seems brittle and against the type safety the languages offers. 
We should fix this code."""," /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:563:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::SUBSCRIBE: { ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:576:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::TEARDOWN: { ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:581:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::ACCEPT: { ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:601:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::ACCEPT_INVERSE_OFFERS: ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:602:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::DECLINE_INVERSE_OFFERS: ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:603:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::SHUTDOWN: { ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:609:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::DECLINE: { ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:621:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::REVIVE: { ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:626:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::KILL: { ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:631:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const 
mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::ACKNOWLEDGE: { ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:642:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::ACKNOWLEDGE_OPERATION_STATUS: ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:645:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::RECONCILE: { ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:660:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::RECONCILE_OPERATIONS: ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:663:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::MESSAGE: { ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:671:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::REQUEST: { ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:682:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::SUPPRESS: { ^ /home/bbannier/src/mesos/src/java/jni/org_apache_mesos_v1_scheduler_V0Mesos.cpp:687:10: warning: comparison of two values with different enumeration types in switch statement ('::mesos::scheduler::Call_Type' and 'const mesos::v1::scheduler::Call::Type' (aka 'const mesos::v1::scheduler::Call_Type')) [clang-diagnostic-enum-compare-switch] case Call::UNKNOWN: { ^ ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8866","05/02/2018 11:31:11",1,"CMake builds are missing byproduct declaration for jemalloc. ""The {{jemalloc}} dependency is missing a byproduct declaration in the CMake configuration. 
As a result, building Mesos with enabled {{jemalloc}} using CMake and Ninja will fail.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8870","05/02/2018 15:34:09",2,"Master does not correctly reconcile dropped operations after agent failover ""When an operation does not reach the agent before an agent failover, the master currently does not detect the dropped operation when the agent reregisters and sends the list of its operation in an {{UpdateSlaveMessage}}.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8871","05/02/2018 17:58:27",3,"Agent may fail to recover if the agent dies before image store cache checkpointed. "" This may happen if the agent dies after the file is created but before the contents are persisted on disk."""," E0502 13:51:45.398555 10100 slave.cpp:7305] EXIT with status 1: Failed to perform recovery: Collect failed: Collect failed: Collect failed: Unexpected empty images file '/var/lib/mesos/slave/store/docker/storedImages' ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8872","05/03/2018 01:04:53",1,"OperationReconciliationTest.AgentPendingOperationAfterMasterFailover is flaky. ""This test is flaky on CI: """," ../../src/tests/operation_reconciliation_tests.cpp:782: Failure Failed to wait 15secs for slaveReregistered ... ../../3rdparty/libprocess/include/process/gmock.hpp:466: Failure Actual function call count doesn't match EXPECT_CALL(filter->mock, filter(testing::A()))... Expected args: message matcher (32-byte object <20-71 CD-3E D3-55 00-00 27-00 00-00 00-00 00-00 27-00 00-00 00-00 00-00 B0-C8 61-3E D3-55 00-00>, 1, 1) Expected: to be called once Actual: never called - unsatisfied and active [ FAILED ] ContentType/OperationReconciliationTest.AgentPendingOperationAfterMasterFailover/0, where GetParam() = application/x-protobuf (15117 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8873","05/03/2018 01:08:48",2,"StorageLocalResourceProviderTest.ROOT_ZeroSizedDisk is flaky. ""This test is flaky on CI: """," ../../src/tests/storage_local_resource_provider_tests.cpp:406: Failure Value of: updateSlave2->has_resource_providers() Actual: false Expected: true ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8874","05/03/2018 01:14:10",1,"ResourceProviderManagerHttpApiTest.ResubscribeResourceProvider is flaky. ""This test is flaky on CI: This is different from https://issues.apache.org/jira/browse/MESOS-8315."""," ../../src/tests/resource_provider_manager_tests.cpp:1114: Failure Mock function called more times than expected - taking default action specified at: ../../src/tests/mesos.hpp:2972: Function call: subscribed(@0x7f881c00aff0 32-byte object <58-04 98-43 88-7F 00-00 00-00 00-00 00-00 00-00 01-00 00-00 00-00 00-00 E0-01 01-1C 88-7F 00-00>) Expected: to be called once Actual: called twice - over-saturated and active ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8876","05/03/2018 05:20:25",2,"Normal exit of Docker container using rexray volume results in TASK_FAILED. ""In the fix to  MESOS-8488, we reap the Docker container process directly in Docker executor, and it will wait for `docker run` to return for at most 3 seconds. 
However, in some cases, the `docker run` command will indeed need more than 3 seconds to return, e.g., the Docker container uses an external rexray volume (see the attached task json as an example), for such container, there will be about 5 seconds between container process exits and the `docker run` returns (I suspect Docker daemon was doing some stuff related to rexray volume during this time), so we will reap this container, and send a {{TASK_FAILED}}.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8877","05/03/2018 10:25:53",3,"Docker container's resources will be wrongly enlarged in cgroups after agent recovery ""Reproduce steps: 1. Run `mesos-execute --master=10.0.49.2:5050 --task=[file:///home/qzhang/workspace/config/task_docker.json] --checkpoint=true` to launch a Docker container. 2. When the Docker container is running, we can see its resources in cgroups are correctly set, so far so good. 3. Restart Mesos agent, and then we will see the resources of the Docker container will be wrongly enlarged. """," # cat task_docker.json { """"name"""": """"test"""", """"task_id"""": {""""value"""" : """"test""""}, """"agent_id"""": {""""value"""" : """"""""}, """"resources"""": [ {""""name"""": """"cpus"""", """"type"""": """"SCALAR"""", """"scalar"""": {""""value"""": 0.1}}, {""""name"""": """"mem"""", """"type"""": """"SCALAR"""", """"scalar"""": {""""value"""": 32}} ], """"command"""": { """"value"""": """"sleep 55555"""" }, """"container"""": { """"type"""": """"DOCKER"""", """"docker"""": { """"image"""": """"alpine"""" } } } # cat /sys/fs/cgroup/cpu,cpuacct/docker/a711b3c7b0d91cd6d1c7d8daf45a90ff78d2fd66973e615faca55a717ec6b106/cpu.cfs_quota_us 10000 # cat /sys/fs/cgroup/memory/docker/a711b3c7b0d91cd6d1c7d8daf45a90ff78d2fd66973e615faca55a717ec6b106/memory.limit_in_bytes 33554432 I0503 02:06:17.268340 29512 docker.cpp:1855] Updated 'cpu.shares' to 204 at /sys/fs/cgroup/cpu,cpuacct/docker/a711b3c7b0d91cd6d1c7d8daf45a90ff78d2fd66973e615faca55a717ec6b106 for container 1b21295b-2f49-4d08-84c7-43b9ae15ad88 I0503 02:06:17.271390 29512 docker.cpp:1882] Updated 'cpu.cfs_period_us' to 100ms and 'cpu.cfs_quota_us' to 20ms (cpus 0.2) for container 1b21295b-2f49-4d08-84c7-43b9ae15ad88 I0503 02:06:17.273082 29512 docker.cpp:1924] Updated 'memory.soft_limit_in_bytes' to 64MB for container 1b21295b-2f49-4d08-84c7-43b9ae15ad88 I0503 02:06:17.275908 29512 docker.cpp:1950] Updated 'memory.limit_in_bytes' to 64MB at /sys/fs/cgroup/memory/docker/a711b3c7b0d91cd6d1c7d8daf45a90ff78d2fd66973e615faca55a717ec6b106 for container 1b21295b-2f49-4d08-84c7-43b9ae15ad88 # cat /sys/fs/cgroup/cpu,cpuacct/docker/a711b3c7b0d91cd6d1c7d8daf45a90ff78d2fd66973e615faca55a717ec6b106/cpu.cfs_quota_us 20000 # cat /sys/fs/cgroup/memory/docker/a711b3c7b0d91cd6d1c7d8daf45a90ff78d2fd66973e615faca55a717ec6b106/memory.limit_in_bytes 67108864 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8892","05/08/2018 03:16:44",1,"MasterSlaveReconciliationTest.ReconcileDroppedOperation is flaky ""This was observed on a Debian 9 SSL/GRPC-enabled build. 
It appears that a poorly-timed {{UpdateSlaveMessage}} leads to the operation reconciliation occurring before the expectation for the {{ReconcileOperationsMessage}} is registered: Full log is attached as {{MasterSlaveReconciliationTest.ReconcileDroppedOperation.txt}}."""," I0508 00:11:09.700815 22498 master.cpp:4362] Processing ACCEPT call for offers: [ f850080d-9c7a-4ff7-8d4b-9e54aa0418cb-O0 ] on agent f850080d-9c7a-4ff7-8d4b-9e54aa0418cb-S0 at slave(212)@127.0.0.1:36309 (localhost) for framework f850080d-9c7a-4ff7-8d4b-9e54aa0418cb-0000 (default) at scheduler-b0f55e01-2f6f-42c8-8614-901036acfc31@127.0.0.1:36309 I0508 00:11:09.700870 22498 master.cpp:3602] Authorizing principal 'test-principal' to reserve resources 'cpus(allocated: default-role)(reservations: [(DYNAMIC,default-role,test-principal)]):2; mem(allocated: default-role)(reservations: [(DYNAMIC,default-role,test-principal)]):1024; disk(allocated: default-role)(reservations: [(DYNAMIC,default-role,test-principal)]):1024; ports(allocated: default-role)(reservations: [(DYNAMIC,default-role,test-principal)]):[31000-32000]' I0508 00:11:09.701228 22493 master.cpp:4725] Applying RESERVE operation for resources [{""""allocation_info"""":{""""role"""":""""default-role""""},""""name"""":""""cpus"""",""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""default-role"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":2.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""default-role""""},""""name"""":""""mem"""",""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""default-role"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""default-role""""},""""name"""":""""disk"""",""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""default-role"""",""""type"""":""""DYNAMIC""""}],""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""default-role""""},""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""reservations"""":[{""""principal"""":""""test-principal"""",""""role"""":""""default-role"""",""""type"""":""""DYNAMIC""""}],""""type"""":""""RANGES""""}] from framework f850080d-9c7a-4ff7-8d4b-9e54aa0418cb-0000 (default) at scheduler-b0f55e01-2f6f-42c8-8614-901036acfc31@127.0.0.1:36309 to agent f850080d-9c7a-4ff7-8d4b-9e54aa0418cb-S0 at slave(212)@127.0.0.1:36309 (localhost) I0508 00:11:09.701498 22493 master.cpp:11265] Sending operation '' (uuid: 81dffb62-6e75-4c6c-a97b-41c92c58d6a7) to agent f850080d-9c7a-4ff7-8d4b-9e54aa0418cb-S0 at slave(212)@127.0.0.1:36309 (localhost) I0508 00:11:09.701627 22494 slave.cpp:1564] Forwarding agent update {""""operations"""":{},""""resource_version_uuid"""":{""""value"""":""""0HeA06ftS6m76SNoNZNPag==""""},""""slave_id"""":{""""value"""":""""f850080d-9c7a-4ff7-8d4b-9e54aa0418cb-S0""""},""""update_oversubscribed_resources"""":true} I0508 00:11:09.701848 22494 master.cpp:7800] Received update of agent f850080d-9c7a-4ff7-8d4b-9e54aa0418cb-S0 at slave(212)@127.0.0.1:36309 (localhost) with total oversubscribed resources {} W0508 00:11:09.701905 22494 master.cpp:7974] Performing explicit reconciliation with agent for known operation 81dffb62-6e75-4c6c-a97b-41c92c58d6a7 since it was not present in original reconciliation message from agent I0508 00:11:09.702085 22494 master.cpp:11015] Updating the state of 
operation '' (uuid: 81dffb62-6e75-4c6c-a97b-41c92c58d6a7) for framework f850080d-9c7a-4ff7-8d4b-9e54aa0418cb-0000 (latest state: OPERATION_PENDING, status update state: OPERATION_DROPPED) I0508 00:11:09.702239 22491 hierarchical.cpp:925] Updated allocation of framework f850080d-9c7a-4ff7-8d4b-9e54aa0418cb-0000 on agent f850080d-9c7a-4ff7-8d4b-9e54aa0418cb-S0 from cpus(allocated: default-role):2; mem(allocated: default-role):1024; disk(allocated: default-role):1024; ports(allocated: default-role):[31000-32000] to disk(allocated: default-role)(reservations: [(DYNAMIC,default-role,test-principal)]):1024; cpus(allocated: default-role)(reservations: [(DYNAMIC,default-role,test-principal)]):2; mem(allocated: default-role)(reservations: [(DYNAMIC,default-role,test-principal)]):1024; ports(allocated: default-role)(reservations: [(DYNAMIC,default-role,test-principal)]):[31000-32000] I0508 00:11:09.702267 22493 slave.cpp:1274] New master detected at master@127.0.0.1:36309 I0508 00:11:09.702306 22495 task_status_update_manager.cpp:181] Pausing sending task status updates I0508 00:11:09.702337 22493 slave.cpp:1329] Detecting new master ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8906","05/11/2018 04:23:09",2,"`UriDiskProfileAdaptor` fails to update profile selectors. ""The {{UriDiskProfileAdaptor}} ignores the polled profile matrix if the polled one has the same size as the current one: https://github.com/apache/mesos/blob/1.5.x/src/resource_provider/storage/uri_disk_profile.cpp#L282-L286 {code:cxx} // Profiles can only be added, so if the parsed data is the same size, // nothing has changed and no notifications need to be sent. if (parsed.profile_matrix().size() <= profileMatrix.size()) { return; } {code} However, this prevents the profile selector from being updated, which is not the desired behavior."""," // Profiles can only be added, so if the parsed data is the same size, // nothing has changed and no notifications need to be sent. if (parsed.profile_matrix().size() <= profileMatrix.size()) { return; } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8907","05/11/2018 05:08:35",3,"Docker image fetcher fails with HTTP/2. "" Note that curl is saying the HTTP version is """"HTTP/2"""". This happens on modern curl that automatically negotiates HTTP/2, but the docker fetcher isn't prepared to parse that. """," [ RUN ] ImageAlpine/ProvisionerDockerTest.ROOT_INTERNET_CURL_SimpleCommand/2 ... I0510 20:52:00.209815 25010 registry_puller.cpp:287] Pulling image 'quay.io/coreos/alpine-sh' from 'docker-manifest://quay.iocoreos/alpine-sh?latest#https' to '/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_wF7EfM/store/docker/staging/qit1Jn' E0510 20:52:00.756072 25003 slave.cpp:6176] Container '5eb869c5-555c-4dc9-a6ce-ddc2e7dbd01a' for executor 'ad9aa898-026e-47d8-bac6-0ff993ec5904' of framework 7dbe7cd6-8ffe-4bcf-986a-17ba677b5a69-0000 failed to start: Failed to decode HTTP responses: Decoding failed HTTP/2 200 server: nginx/1.13.12 date: Fri, 11 May 2018 03:52:00 GMT content-type: application/vnd.docker.distribution.manifest.v1+prettyjws content-length: 4486 docker-content-digest: sha256:61bd5317a92c3213cfe70e2b629098c51c50728ef48ff984ce929983889ed663 x-frame-options: DENY strict-transport-security: max-age=63072000; preload ... 
$ curl -i --raw -L -s -S -o - 'http://quay.io/coreos/alpine-sh?latest#https' HTTP/1.1 301 Moved Permanently Content-Type: text/html Date: Fri, 11 May 2018 04:07:44 GMT Location: https://quay.io/coreos/alpine-sh?latest Server: nginx/1.13.12 Content-Length: 186 Connection: keep-alive HTTP/2 301 server: nginx/1.13.12 date: Fri, 11 May 2018 04:07:45 GMT content-type: text/html; charset=utf-8 content-length: 287 location: https://quay.io/coreos/alpine-sh/?latest x-frame-options: DENY strict-transport-security: max-age=63072000; preload ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8916","05/15/2018 01:21:03",5,"Allocation logic cleanup. ""The allocation logic has grown organically and is now very hard to read and maintain. This epic will track cleanups to improve the readability of the core allocation logic: * Add a function for returning the subset of frameworks that are capable of receiving offers from the agent. This moves the capability checking out of the core allocation logic and means the loops can just iterate over a smaller set of framework candidates rather than having to write 'continue' cases. This covers the GPU_RESOURCES and REGION_AWARE capabilities. * Similarly, add a function that allows framework capability based filtering of resources. This pulls out the filtering logic from the core allocation logic and instead the core allocation logic can just all out to the capability filtering function. This covers the SHARED_RESOURCES, REVOCABLE_RESOURCES and RESERVATION_REFINEMENT capabilities. Note that in order to implement this one, we must refactor the shared resources logic in order to have the resource generation occur regardless of the framework capability (followed by getting filtered out via this new function if the framework is not capable). * Update the scalar quantity related functions to also strip static reservation metadata. Currently there is extra code in the allocator across many places (including the allocation logic) to perform this in the call-sites. * Track across allocation cycles or pull out the following into functions: quantity of quota that is currently """"charged"""" to a role, amount of """"headroom"""" that is needed/available for unsatisfied quota guarantees. * Pull out the resource shrinking function.""","",1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8917","05/15/2018 12:51:51",3,"Agent leaking file descriptors into forked processes ""If not all file descriptors are carefully {{open}}'ed with {{O_CLOEXEC}} the Mesos agent might leak them into forked processes e.g., executors. This presents a potential security issue as such processes can interfere with the agent. The current approach is to fix all invocations of {{open}} to always set {{O_CLOEXEC}}, but this approach breaks down when using 3rdparty libraries as there is no reliable way to patch unbundled dependencies. It seems a more reliable approach would be to {{close}} all but a whitelisted set of file descriptors when after {{fork}}, but before the {{exec*}}. It should be possible to assemble such a whitelist for the typical use cases (e.g., in for the Mesos containerizer's  {{launch}}) and pass it to a modified functions to start subprocess. We might need to audit uses of raw {{fork}} in the code.""","",0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8919","05/16/2018 01:32:44",2,"Per Framework SUBSCRIBE metrics. 
""Per Framework SUBSCRIBE metrics.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8921","05/16/2018 06:23:24",3,"Autotools don't work with newer OpenJDK versions ""There are three distinct issues with modern Java and Linux versions: 1. Mesos configure script expects `libjvm.so` at `$JAVA_HOME/jre/lib//server/libjvm.so`, but in the newer openjdk versions, `libjvm.so` is found at `$JAVA_HOME/lib/server/libjvm.so`. 2. On some distros (e.g., Ubuntu 18.04), JAVA_HOME env var might be missing. In such cases, the configure is able to compute it by looking at `java` and `javac` paths and succeeds. However, some maven plugins require JAVA_HOME to be set and could fail if it's not found. Because configure scripts generate an automake variable `JAVA_HOME`, we can simply invoke maven in the following way to fix this issue:  These two behaviors were observed with OpenJDK 1.11 on Ubuntu 18.04 but I suspect that the behavior is present on other distros/OpenJDK versions. 3. `javah` has been removed as of OpenJDK 1.10. Instead `javac -h` is to be used as a replacement. See [http://openjdk.java.net/jeps/313] for more details."""," [ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (build-and-attach-javadocs) on project mesos: MavenReportException: Error while creating archive: Unable to find javadoc command: The environment variable JAVA_HOME is not correctly set. -> [Help 1] JAVA_HOME=$JAVA_HOME mvn ...",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8924","05/16/2018 19:43:31",5,"Refactor the libprocess gRPC warpper. ""Refactor {{process::grpc::client::Runtime}} for better naming and interface, and fewer synchronizations.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8932","05/17/2018 23:04:20",2,"Quota guarantee metric does not handle removal correctly. ""The quota guarantee metric is not removed when the quota gets removed: https://github.com/apache/mesos/blob/1.6.0/src/master/allocator/mesos/metrics.cpp#L165-L174 The consequence of this is that the metric will hold the initial value that gets set and all subsequent removes / sets will not be exposed via the metric.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8935","05/19/2018 00:00:43",3,"Quota limit ""chopping"" can lead to cpu-only and memory-only offers. ""When we allocate resources to a role, we'll """"chop"""" the available resources of the agent up to the quota limit for the role (per MESOS-7099). This prevents the role from exceeding its quota limit. This has the unintended consequence of creating cpu-only and memory-only offers. Consider agents with 10 cpus and 100 GB mem and roles with quota guarantee/limit of 5 cpus, 10 GB mem. The following allocations will occur: agent 1: r1 -> 5 cpus 10GB mem r2 -> 5 cpus 10GB mem r3 -> 0 cpus 10GB mem (quota allocates even if it can make progress towards a single resource and MESOS-1688 allows this) r4 -> 0 cpus 10GB mem ... r10 -> 0 cpus 10GB mem agent 2: r3 -> 5 cpus 0GB mem (r3 is already at its 10GB mem limit) r4 -> 5 cpus 0GB mem r11 -> 0 cpus 10GB mem ... r20 -> 0 cpus 10GB mem Here, roles 3-20 receive memory only and cpu only offers. This gets further exacerbated if DRF chooses the same ordering between roles across cycles. 
""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8936","05/19/2018 01:46:39",5,"Implement a Random Sorter for offer allocations. ""The only sorter that Mesos supports today is the DRF sorter. But, there are cases when DRF sorting causes offer fragmentation when dealing with non-revocable resources and multiple frameworks. One of the options to improve this situation is to introduce a new random sorter instead of DRF sorter. See [https://docs.google.com/document/d/1uvTmBo_21Ul9U_mijgWyh7hE0E_yZXrFr43JIB9OCl8/edit#heading=h.nfye94rqpotp] for additional details.    ""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8942","05/22/2018 21:57:02",2,"Master streaming API does not send (health) check updates for tasks. ""Currently, Master API subscribers get task status updates when task state changes (the actual logic is [slightly more complex|https://github.com/apache/mesos/blob/d7d7cfbc3e5609fc9a4e8de8203a6ecb11afeac7/src/master/master.cpp#L10794-L10841]). We use task status updates to deliver health and check information to schedulers, in which case task state does not change. Hence these updates are filtered out and the subscribers do not get any task health updates. Here is a test that confirms the described behaviour: https://gist.github.com/rukletsov/c079d95479fb134d137ea3ae8b7ae874""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8943","05/23/2018 00:12:18",5,"Add metrics about CSI calls. ""We should add metrics for CSI calls so operators can be alerted on flapping CSI plugins.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8961","05/28/2018 11:48:57",1,"Output of tasks gets corrupted if task defines the same environment variables as the executor container ""The issue is easily reproducible if one launches a task group and the taks nested container defines the same set of environment variables as the executor. In those circumstances, the following [snippet is activated|https://github.com/apache/mesos/blob/285d82080748cd69044c226950274c7046048c4b/src/slave/containerizer/mesos/launch.cpp#L1057]: But this is not the only time that this file writes into {{cout}}. This may be a bad idea because applications which consume the standard output of a task may end up being corrupted by the container manager output. In these cases, writing to {{cerr}} should be the right approach. """," if (environment.contains(name) && environment[name] != value) { cout << """"Overwriting environment variable '"""" << name << """"'"""" << endl; } ",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8975","06/04/2018 15:10:28",5,"Problem and solution overview for the slow API issue. ""Collect data from the clusters regarding {{state.json}} responsiveness, figure out, where the bottlenecks are, and prepare an overview of solutions.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8985","06/08/2018 00:02:58",2,"Posting to the operator api with 'accept recordio' header can crash the agent ""It's possible to crash the mesos agent by posting a reasonable request to the operator API. h3. Background: Sending a request to the v1 api endpoint with an unsupported 'accept' header: Results in the following friendly error message: h3. 
Reproducible crash: However, sending the same request with 'application/recordio' 'accept' header: causes the agent to crash (no response is received). Crash log is shown below, full log from the agent is attached here: """," curl -X POST http://10.0.3.27:5051/api/v1 \ -H 'accept: application/atom+xml' \ -H 'content-type: application/json' \ -d '{""""type"""":""""GET_CONTAINERS"""",""""get_containers"""":{""""show_nested"""": true,""""show_standalone"""": true}}' Expecting 'Accept' to allow application/json or application/x-protobuf or application/recordio curl -X POST \ http://10.0.3.27:5051/api/v1 \ -H 'accept: application/recordio' \ -H 'content-type: application/json' \ -d '{""""type"""":""""GET_CONTAINERS"""",""""get_containers"""":{""""show_nested"""": true,""""show_standalone"""": true}}' Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: I0607 22:30:32.397320 3743 logfmt.cpp:178] type=audit timestamp=2018-06-07 22:30:32.397243904+00:00 reason=""""Error in token 'Missing 'Authorization' header from HTTP request'. Allowing anonymous connection"""" object=""""/slave(1)/api/v1"""" agent=""""Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36"""" authorizer=""""mesos-agent"""" action=""""POST"""" result=allow srcip=10.0.6.99 dstport=5051 srcport=42084 dstip=10.0.3.27 Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: W0607 22:30:32.397434 3743 authenticator.cpp:289] Error in token on request from '10.0.6.99:42084': Missing 'Authorization' header from HTTP request Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: W0607 22:30:32.397466 3743 authenticator.cpp:291] Falling back to anonymous connection using user 'dcos_anonymous' Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: I0607 22:30:32.397629 3748 http.cpp:1099] HTTP POST for /slave(1)/api/v1 from 10.0.6.99:42084 with User-Agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36' Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: I0607 22:30:32.397784 3748 http.cpp:2030] Processing GET_CONTAINERS call Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: F0607 22:30:32.398736 3747 http.cpp:121] Serializing a RecordIO stream is not supported Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: *** Check failure stack trace: *** Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f619478636d google::LogMessage::Fail() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f619478819d google::LogMessage::SendToLog() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6194785f5c google::LogMessage::Flush() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6194788a99 google::LogMessageFatal::~LogMessageFatal() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f61935e2b9d mesos::internal::serialize() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a4c0ef _ZNO6lambda12CallableOnceIFN7process6FutureINS1_4http8ResponseEEERKN4JSON5ArrayEEE10CallableFnIZNK5mesos8internal5slave4Http13getContainersERKNSD_5agent4CallENSD_11ContentTypeERK6OptionINS3_14authentication9PrincipalEEEUlRKNS2_IS7_EEE0_EclES9_ Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal 
mesos-agent[3718]: @ 0x7f6193a81d61 process::internal::thenf<>() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a59b15 _ZNO6lambda12CallableOnceIFvRKN7process6FutureIN4JSON5ArrayEEEEE10CallableFnINS_8internal7PartialIPFvONS0_IFNS2_INS1_4http8ResponseEEERKS4_EEESt10unique_ptrINS1_7PromiseISE_EESt14default_deleteISN_EES7_EJSJ_SQ_St12_PlaceholderILi1EEEEEEclES7_ Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a6e4e9 process::internal::run<>() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a7fa28 process::Future<>::_set<>() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a7f9fe process::Future<>::_set<>() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a7f9fe process::Future<>::_set<>() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a7f9fe process::Future<>::_set<>() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a7f9fe process::Future<>::_set<>() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a84e00 process::Future<>::onReady() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a8509e process::Promise<>::associate() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a856ac process::internal::thenf<>() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a59935 _ZNO6lambda12CallableOnceIFvRKN7process6FutureISt5tupleIINS2_ISt4listINS2_IN5mesos15ContainerStatusEEESaIS7_EEEENS2_IS4_INS2_INS5_18ResourceStatisticsEEESaISC_EEEEEEEEEE10CallableFnINS_8internal7PartialIPFvONS0_IFNS2_IN4JSON5ArrayEEERKSG_EEESt10unique_ptrINS1_7PromiseISQ_EESt14default_deleteISZ_EESJ_EISV_S12_St12_PlaceholderILi1EEEEEEclESJ_ Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a81359 process::internal::run<>() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a83f12 _ZN7process6FutureISt5tupleIJNS0_ISt4listINS0_IN5mesos15ContainerStatusEEESaIS5_EEEENS0_IS2_INS0_INS3_18ResourceStatisticsEEESaISA_EEEEEEE4_setIRKSE_EEbOT_ Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a85f10 _ZNK7process6FutureISt5tupleIJNS0_ISt4listINS0_IN5mesos15ContainerStatusEEESaIS5_EEEENS0_IS2_INS0_INS3_18ResourceStatisticsEEESaISA_EEEEEEE7onReadyEON6lambda12CallableOnceIFvRKSE_EEE Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a861ae process::Promise<>::associate() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a866ac process::internal::thenf<>() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f6193a59875 _ZNO6lambda12CallableOnceIFvRKN7process6FutureISt4listINS2_I7NothingEESaIS5_EEEEEE10CallableFnINS_8internal7PartialIPFvONS0_IFNS2_ISt5tupleIINS2_IS3_INS2_IN5mesos15ContainerStatusEEESaISJ_EEEENS2_IS3_INS2_INSH_18ResourceStatisticsEEESaISO_EEEEEEEERKS7_EEESt10unique_ptrINS1_7PromiseISS_EESt14default_deleteIS11_EESA_EISX_S14_St12_PlaceholderILi1EEEEEEclESA_ Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f61935c1a19 process::internal::run<>() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f61935cf25f process::Future<>::_set<>() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal 
mesos-agent[3718]: @ 0x7f61935cf44b process::internal::AwaitProcess<>::waited() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f61946d79d1 process::ProcessBase::consume() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f61946e8dcc process::ProcessManager::resume() Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f61946ee7a6 _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f61918d8d73 (unknown) Jun 07 22:30:32 ip-10-0-3-27.us-west-2.compute.internal mesos-agent[3718]: @ 0x7f61913d952c (unknown) Jun 07 22:30:34 ip-10-0-3-27.us-west-2.compute.internal systemd[1]: dcos-mesos-slave.service: Main process exited, code=killed, status=6/ABRT Jun 07 22:30:34 ip-10-0-3-27.us-west-2.compute.internal systemd[1]: dcos-mesos-slave.service: Unit entered failed state. Jun 07 22:30:34 ip-10-0-3-27.us-west-2.compute.internal systemd[1]: dcos-mesos-slave.service: Failed with result 'signal'. Jun 07 22:30:39 ip-10-0-3-27.us-west-2.compute.internal systemd[1]: dcos-mesos-slave.service: Service hold-off time over, scheduling restart. Jun 07 22:30:39 ip-10-0-3-27.us-west-2.compute.internal systemd[1]: Stopped Mesos Agent: distributed systems kernel agent.",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8987","06/12/2018 23:46:00",5,"Master asks agent to shutdown upon auth errors. ""The Mesos master sends a {{ShutdownMessage}} to an agent if there is an [authentication|https://github.com/apache/mesos/blob/d733b1031350e03bce443aa287044eb4eee1053a/src/master/master.cpp#L6532-L6543] or an [authorization|https://github.com/apache/mesos/blob/d733b1031350e03bce443aa287044eb4eee1053a/src/master/master.cpp#L6622-L6633] error during agent registration.   Upon receipt of this message, the agent kills alls its tasks and commits suicide. This means that transient auth errors can lead to whole agents being killed along with it's tasks. I think the master should stop sending the {{ShutdownMessage}} in these cases, or at least let the agent retry the registration a few times before asking it to shutdown.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8995","06/14/2018 21:03:35",5,"Add SLRP unit tests for missing profiles. ""We need to add unit tests to verify SLRP correctness when the set of known profiles are shrunk by the disk profile adaptor. Here lists a couple scenarios worth testing: 1. `CREATE_VOLUME` should succeed if it is submitted before the profile becomes stale. 2. `CREATE_VOLUME` should be dropped if it is submitted after the profile becomes stale. 3. `DESTROY_VOLUME` should free spaces for a stale profile.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-8998","06/15/2018 03:05:30",2,"Allow for unbundled libevent in CMake builds to work around 2.1.x SSL issues. ""On macOS and Ubuntu 17, libevent-openssl >= 2.1.x is broken in conjunction with libprocess. We tried to pinpoint the issue but so far with no success. For enabling CMake SSL builds on those systems , we have to support prior libevent versions. 
Allowing building against a preinstalled libevent version paves that way.""","",0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-8999","06/15/2018 12:14:39",3,"Add default bodies for libprocess HTTP error responses. ""By default on error libprocess would only return a response with the correct status code and no response body. However, most browsers do not visually indicate the response status code, so if any error occurs anyone using a browser will only see a blank page, making it hard to figure out what happened.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9000","06/15/2018 12:20:44",3,"Operator API event stream can miss task status updates. ""As of now, the master only sends TaskUpdated messages to subscribers when the latest known task state on the agent changed: The latest state is set like this: However, the `TaskStatus` message included in an `TaskUpdated` event is the event at the bottom of the queue when the update was sent. So we can easily get in a situation where e.g. the first TaskUpdated has .status.state == TASK_STARTING and .state == TASK_RUNNING, and the second update with .status.state == TASK_RUNNNING and .state == TASK_RUNNING would not get delivered because the latest known state did not change. This implies that schedulers can not reliably wait for the status information corresponding to specific task state, since there is no guarantee that subscribers get notified during the time when this status update will be included in the status field."""," // src/master/master.cpp if (!protobuf::isTerminalState(task->state())) { if (status.state() != task->state()) { sendSubscribersUpdate = true; } task->set_state(latestState.getOrElse(status.state())); } // src/messages/messages.proto message StatusUpdate { [...] // This corresponds to the latest state of the task according to the // agent. Note that this state might be different than the state in // 'status' because task status update manager queues updates. In // other words, 'status' corresponds to the update at top of the // queue and 'latest_state' corresponds to the update at bottom of // the queue. optional TaskState latest_state = 7; } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9008","06/19/2018 17:49:05",1,"Fetcher fails to extract some archives containing hardlinks ""We recently switched the infrastructure to e.g., extract archives to libarchive which seems to have narrower support for e.g., hardlinks in archives, see e.g., [https://github.com/libarchive/libarchive/wiki/Hardlinks] upstream (likely outdated). 
In a particular case, we tried to extract https://artifacts.elastic.co/downloads/kibana/kibana-5.6.5-linux-x86_64.tar.gz which the fetcher successfully extracted in e.g., 1.6.0, but which now leads to failures like """," W0619 08:53:38.000000 4610 fetcher.cpp:913] Begin fetcher log (stderr in sandbox) for container d39bc16c-e6c6-440f-a82c-ad26332a1b36 from running command: /opt/mesosphere/packages/mesos--95f27ab971fb1d03bc1277f6a16e63a9815e1a61/libexec/mesos/mesos-fetcher I0619 08:53:34.620458 7571 fetcher.cpp:560] Fetcher Info: {""""cache_directory"""":""""\/tmp\/mesos\/fetch\/nobody"""",""""items"""":[{""""action"""":""""DOWNLOAD_AND_CACHE"""",""""cache_filename"""":""""c29-kibana-5.6__64.tar.gz"""",""""uri"""":{""""cache"""":true,""""executable"""":false,""""extract"""":true,""""value"""":""""https:\/\/artifacts.elastic.co\/downloads\/kibana\/kibana-5.6.5-linux-x86_64.tar.gz""""}},{""""action"""":""""DOWNLOAD_AND_CACHE"""",""""cache_filename"""":""""c30-x-pack-5.6.5.zip"""",""""uri"""":{""""cache"""":true,""""executable"""":false,""""extract"""":false,""""value"""":""""https:\/\/artifacts.elastic.co\/downloads\/packs\/x-pack\/x-pack-5.6.5.zip""""}}],""""sandbox_directory"""":""""\/var\/lib\/mesos\/slave\/slaves\/7f2e186c-c748-493d-82e4-644b5b23c9bb-S6\/frameworks\/7f2e186c-c748-493d-82e4-644b5b23c9bb-0001\/executors\/kibana.3f6c7799-739e-11e8-a678-168288e5aa33\/runs\/d39bc16c-e6c6-440f-a82c-ad26332a1b36"""",""""stall_timeout"""":{""""nanoseconds"""":60000000000},""""user"""":""""nobody""""} I0619 08:53:34.623996 7571 fetcher.cpp:457] Fetching URI 'https://artifacts.elastic.co/downloads/kibana/kibana-5.6.5-linux-x86_64.tar.gz' I0619 08:53:34.624017 7571 fetcher.cpp:431] Downloading into cache I0619 08:53:34.624027 7571 fetcher.cpp:225] Fetching URI 'https://artifacts.elastic.co/downloads/kibana/kibana-5.6.5-linux-x86_64.tar.gz' I0619 08:53:34.624043 7571 fetcher.cpp:175] Downloading resource from 'https://artifacts.elastic.co/downloads/kibana/kibana-5.6.5-linux-x86_64.tar.gz' to '/tmp/mesos/fetch/nobody/c29-kibana-5.6__64.tar.gz' I0619 08:53:38.634416 7571 fetcher.cpp:350] Fetching from cache E0619 08:53:38.686941 7571 fetcher.cpp:613] EXIT with status 1: Failed to fetch 'https://artifacts.elastic.co/downloads/kibana/kibana-5.6.5-linux-x86_64.tar.gz': Failed to extract archive '/tmp/mesos/fetch/nobody/c29-kibana-5.6__64.tar.gz' to '/var/lib/mesos/slave/slaves/7f2e186c-c748-493d-82e4-644b5b23c9bb-S6/frameworks/7f2e186c-c748-493d-82e4-644b5b23c9bb-0001/executors/kibana.3f6c7799-739e-11e8-a678-168288e5aa33/runs/d39bc16c-e6c6-440f-a82c-ad26332a1b36': Failed to write archive header: Hard-link target 'kibana-5.6.5-linux-x86_64/node_modules/svgo/node_modules/js-yaml/bin/js-yaml.js' does not exist. 
End fetcher log for container d39bc16c-e6c6-440f-a82c-ad26332a1b36 E0619 08:53:38.000000 4610 fetcher.cpp:571] Failed to run mesos-fetcher: Failed to fetch all URIs for container 'd39bc16c-e6c6-440f-a82c-ad26332a1b36': exited with status 1 E0619 08:53:38.000000 4608 slave.cpp:6180] Container 'd39bc16c-e6c6-440f-a82c-ad26332a1b36' for executor 'kibana.3f6c7799-739e-11e8-a678-168288e5aa33' of framework 7f2e186c-c748-493d-82e4-644b5b23c9bb-0001 failed to start: Failed to fetch all URIs for container 'd39bc16c-e6c6-440f-a82c-ad26332a1b36': exited with status 1 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9009","06/19/2018 19:51:44",5,"Support for creation non-existing host paths in a whitelist as source paths ""Docker creates a directory specified in {{docker run}}'s {{--volume}}/{{-v}} option as the source path that will get bind-mounted into the container, if the source location didn't originally exist on the host. Unlike Docker, UCR bails on launching containers if any of their host mount paths doesn't originally exist. While this is more secure and eliminates unnecessary side effects, it breaks transparent compatibility when trying to migrate from Docker. As a trade-off, we should allow host path creation in a restricted manner, by introducing a new Mesos agent flag ({{--host_path_volume_force_creation}}) as a colon-separated whitelist (similar to the format of POSIX's {{$PATH}} environment variable), under whose items' subdirectories the host paths are allowed to be created. ""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9010","06/20/2018 04:09:46",2,"`UPDATE_STATE` can race with `UPDATE_OPERATION_STATUS` for a resource provider. ""Since a resource provider and its operation status update manager run in different actors, for a completed operation, its `UPDATE_OPERATION_STATUS` call may race with an `UPDATE_STATE`. When the `UPDATE_STATE` arrives to the agent earlier, the total resources will be updated, but the terminal status of the completed operation will be ignored since it is known by both the agent and the resource provider. As a result, when the `UPDATE_OPERATION_STATUS` arrives later, the agent will try to apply the operation, but this is incorrect since the total resources has already been updated.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9014","06/20/2018 23:07:22",1,"MasterAPITest.SubscribersReceiveHealthUpdates is flaky ""This test fails flaky on CI. Log attached. ""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9015","06/21/2018 02:03:47",5,"Allow resources to be removed when updating the sorter. ""Currently we do not allow resource conversions to change the resource quantities when updating the sorter; we only allow the metadata for their consumed resources to be modified. However, this restricts Mesos from supporting operations that remove resources. For example, when a CSI volume with a stale profile is destroyed, it would be better to convert it into an empty resource since the disk space is no longer available. See https://issues.apache.org/jira/browse/MESOS-8825. 
To make the allocator more flexible, we should allow resource conversions to remove resources when updating the sorter.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9025","06/24/2018 04:13:09",3,"The container which joins CNI network and has checkpoint enabled will be mistakenly destroyed by agent ""Reproduce steps: 1) Run {{mesos-execute}} to launch a task which joins a CNI network {{net1}} and has checkpoint enabled: 2) After task is in the {{TASK_RUNNING}} state, restart the agent process, and then in the agent log, we will see the container is destroyed. And {{mesos-execute}} will receive a {{TASK_GONE}} for the task: """," $ cat task_cni.json { """"name"""": """"test1"""", """"task_id"""": {""""value"""" : """"test1""""}, """"agent_id"""": {""""value"""" : """"""""}, """"resources"""": [ {""""name"""": """"cpus"""", """"type"""": """"SCALAR"""", """"scalar"""": {""""value"""": 0.1}}, {""""name"""": """"mem"""", """"type"""": """"SCALAR"""", """"scalar"""": {""""value"""": 32}} ], """"command"""": { """"value"""": """"sleep 1000"""" }, """"container"""": { """"type"""": """"MESOS"""", """"network_infos"""": [ { """"name"""": """"net1"""" } ] } } $ mesos-execute --master=192.168.56.5:5050 --task=file:///home/stack/workspace/config/task_cni.json --checkpoint ... I0622 17:30:00.792310 7426 containerizer.cpp:1024] Recovering isolators I0622 17:30:00.798740 7430 cni.cpp:437] Removing unknown orphaned container faf69105-e76f-49c7-8e56-964c2f882cff ... I0622 17:30:01.025600 7433 cni.cpp:1546] Unmounted the network namespace handle '/run/mesos/isolators/network/cni/faf69105-e76f-49c7-8e56-964c2f882cff/ns' for container faf69105-e76f-49c7-8e56-964c2f882cff I0622 17:30:01.026211 7433 cni.cpp:1557] Removed the container directory '/run/mesos/isolators/network/cni/faf69105-e76f-49c7-8e56-964c2f882cff' I0622 17:30:02.935093 7429 slave.cpp:5215] Cleaning up un-reregistered executors I0622 17:30:02.935221 7429 slave.cpp:5233] Killing un-reregistered executor 'test1' of framework dc2b3db0-953c-47a4-8fd4-f6d040e9d10e-0002 at executor(1)@192.168.11.7:33719 I0622 17:30:02.935900 7429 slave.cpp:7311] Finished recovery I0622 17:30:02.937409 7427 containerizer.cpp:2405] Destroying container faf69105-e76f-49c7-8e56-964c2f882cff in RUNNING state $ mesos-execute --master=192.168.56.5:5050 --task=file:///home/stack/workspace/config/task_cni.json --checkpoint I0622 17:29:50.538630 7246 scheduler.cpp:189] Version: 1.7.0 I0622 17:29:50.548589 7261 scheduler.cpp:355] Using default 'basic' HTTP authenticatee I0622 17:29:50.550348 7263 scheduler.cpp:538] New master detected at master@192.168.56.5:5050 Subscribed with ID dc2b3db0-953c-47a4-8fd4-f6d040e9d10e-0002 Submitted task 'test1' to agent 'dc2b3db0-953c-47a4-8fd4-f6d040e9d10e-S0' Received status update TASK_STARTING for task 'test1' source: SOURCE_EXECUTOR Received status update TASK_RUNNING for task 'test1' source: SOURCE_EXECUTOR Received status update TASK_GONE for task 'test1' message: 'Executor did not reregister within 2secs' source: SOURCE_AGENT reason: REASON_EXECUTOR_REREGISTRATION_TIMEOUT ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9027","06/25/2018 23:23:22",2,"GPU Isolator still depends on cgroups/devices agent flag given cgrous/all is supported. 
""GPU Isolator still depends on cgroups/devices agent flag given cgrous/all is supported.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0 +"MESOS-9032","06/28/2018 15:48:22",5,"Update build scripts to support `seccomp-isolator` flag and `libseccomp` library ""This ticket consists of the following subtasks: 1) Bundle `libseccomp` tarball as a third-party library 2) Add build rules to compile `libseccomp` library and to link Mesos agent against this library 3) Add `enable-seccomp-isolator` flag to the build scripts""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9033","06/28/2018 16:02:24",3,"Add Seccomp-related protobufs ""Define `SeccompProfile` and `SeccompInfo` messages and then update appropriate messages, including `LinuxInfo` and `ContainerLaunchInfo`.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9034","06/28/2018 16:37:56",5,"Implement a wrapper class for `libseccomp` API ""The main purpose of this class is to provide translation of `SeccompProfile` protobuf into invocations of `libseccomp` API. The main user of this class is a containerizer launcher.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9035","06/28/2018 17:30:34",5,"Implement `linux/seccomp` isolator ""The main purpose of this isolator is to prepare `ContainerSeccompProfile` for a containerizer launcher. `ContainerSeccompProfile` message is generated by the isolator from a JSON-file that contains declaration of Seccomp filter rules. In addition, seccomp isolator should check for a Seccomp support by the Linux kernel.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9039","06/29/2018 02:40:49",2,"CNI isolator recovery should wait until unknown orphan cleanup is done ""Currently, CNI isolator will cleanup unknown orphaned containers in an asynchronous way (see [here|https://github.com/apache/mesos/blob/1.6.0/src/slave/containerizer/mesos/isolators/network/cni/cni.cpp#L439] for details) during recovery, that means agent recovery can finish while the cleanup of unknown orphaned containers is still ongoing which is not ideal. So we need to make CNI isolator recovery waits until unknown orphan cleanup is done.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0 +"MESOS-9049","07/03/2018 21:08:31",2,"Agent GC could unmount a dangling persistent volume multiple times. ""When the agent GC an executor dir and the sandbox of one of its run that contains a dangling persistent volume, the agent might try to unmount the persistent volume twice, which leads to an {{EINVAL}} when trying to unmount the target for the second time. 
Here is the log from a failure run of {{GarbageCollectorIntegrationTest.ROOT_DanglingMount}}: """," W0702 23:35:31.669946 25401 gc.cpp:241] Unmounting dangling mount point '/tmp/GarbageCollectorIntegrationTest_ROOT_DanglingMount_zkItvU/slaves/f4dc0941-e3b0-4f2c-b7f9-025a1af264c8-S0/frameworks/f4dc0941-e3b0-4f2c-b7f9-025a1af264c8-0000/executors/test-task123/runs/3fcde2c8-b461-4f22-afec-daa269291c95/dangling' of persistent volume '/tmp/GarbageCollectorIntegrationTest_ROOT_DanglingMount_zkItvU/volumes/roles/default-role/persistence-id' inside garbage collected path '/tmp/GarbageCollectorIntegrationTest_ROOT_DanglingMount_zkItvU/slaves/f4dc0941-e3b0-4f2c-b7f9-025a1af264c8-S0/frameworks/f4dc0941-e3b0-4f2c-b7f9-025a1af264c8-0000/executors/test-task123' W0702 23:35:31.683878 25401 gc.cpp:241] Unmounting dangling mount point '/tmp/GarbageCollectorIntegrationTest_ROOT_DanglingMount_zkItvU/slaves/f4dc0941-e3b0-4f2c-b7f9-025a1af264c8-S0/frameworks/f4dc0941-e3b0-4f2c-b7f9-025a1af264c8-0000/executors/test-task123/runs/3fcde2c8-b461-4f22-afec-daa269291c95/dangling' of persistent volume '/tmp/GarbageCollectorIntegrationTest_ROOT_DanglingMount_zkItvU/volumes/roles/default-role/persistence-id' inside garbage collected path '/tmp/GarbageCollectorIntegrationTest_ROOT_DanglingMount_zkItvU/slaves/f4dc0941-e3b0-4f2c-b7f9-025a1af264c8-S0/frameworks/f4dc0941-e3b0-4f2c-b7f9-025a1af264c8-0000' W0702 23:35:31.683912 25401 gc.cpp:248] Skipping deletion of '/tmp/GarbageCollectorIntegrationTest_ROOT_DanglingMount_zkItvU/slaves/f4dc0941-e3b0-4f2c-b7f9-025a1af264c8-S0/frameworks/f4dc0941-e3b0-4f2c-b7f9-025a1af264c8-0000' because unmount failed on '/tmp/GarbageCollectorIntegrationTest_ROOT_DanglingMount_zkItvU/slaves/f4dc0941-e3b0-4f2c-b7f9-025a1af264c8-S0/frameworks/f4dc0941-e3b0-4f2c-b7f9-025a1af264c8-0000/executors/test-task123/runs/3fcde2c8-b461-4f22-afec-daa269291c95/dangling': Failed to unmount '/tmp/GarbageCollectorIntegrationTest_ROOT_DanglingMount_zkItvU/slaves/f4dc0941-e3b0-4f2c-b7f9-025a1af264c8-S0/frameworks/f4dc0941-e3b0-4f2c-b7f9-025a1af264c8-0000/executors/test-task123/runs/3fcde2c8-b461-4f22-afec-daa269291c95/dangling': Invalid argument ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9055","07/07/2018 00:08:16",3,"Make gRPC call deadline configurable. ""Currently, the deadline for a gRPC call to become terminal is hard-coded to 5 seconds. This would cause problems on slow machines. Ideally, we should make this deadline configurable.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9066","07/10/2018 04:18:35",3,"Changing `CREATE_VOLUME` and `CREATE_BLOCK` to `CREATE_DISK`. ""Mesos 1.5 introduced four new operations for better storage support through CSI. These operations are: * CREATE_VOLUME converts RAW disks to MOUNT or PATH disks. * DESTROY_VOLUME converts MOUNT or PATH disks back to RAW disks. * CREATE_BLOCK converts RAW disks to BLOCK disks. * DESTROY_BLOCK converts BLOCK disks back to RAW disks. However, the following two issues are raised for these operations: 1. """"Volume"""" is overloaded and leads to conflicting/inconsistent naming. 2. The concept of """"PATH"""" disks does not exist in CSI, which could be problematic. 
To address this, we could change CREATE_VOLUME/CREATE_BLOCK to CREATE_DISK, and DESTROY_VOLUME/DESTROY_BLOCK to DESTROY_DISK, and make CREATE_DISK support only MOUNT and BLOCK disks.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9070","07/11/2018 22:35:54",3,"Support systemd and freezer cgroup subsystems bind mount for container with rootfs. ""From MESOS-8327, cgroup subsystems are bind mounted to the container's rootfs, but systemd and freezer cgroup are not bind mounted yet since they are not subsystems under the cgroup isolator but from the linux launcher. Some applications (e.g., dockerd) may check the /proc/self/cgorup for enabled subsystems and check them at /proc/self/mountinfo to make sure there are those mounts. Here is an example: The first one is a task without image, the second one is a task using debian image. So any app relies on systemd and freezer cgroup would may fail: So, we should consider add systemd and freezer cgroup bind mount at the cgroup isolator and make a *NOTE* for this behavior."""," ➜ aws dcos task exec --interactive test.bf2fad80-846b-11e8-b5a0-eaa1bec34306 /bin/bash cat /proc/self/cgroup 11:blkio:/mesos/87899f08-53e5-47bf-aba3-712c31c33543 10:perf_event:/mesos/87899f08-53e5-47bf-aba3-712c31c33543 9:cpuset:/mesos/87899f08-53e5-47bf-aba3-712c31c33543 8:memory:/mesos/87899f08-53e5-47bf-aba3-712c31c33543 7:pids:/mesos/87899f08-53e5-47bf-aba3-712c31c33543 6:devices:/mesos/87899f08-53e5-47bf-aba3-712c31c33543 5:cpu,cpuacct:/mesos/87899f08-53e5-47bf-aba3-712c31c33543 4:freezer:/mesos/87899f08-53e5-47bf-aba3-712c31c33543/mesos/12fde554-5262-473c-a20c-7dd201148b11 3:net_cls,net_prio:/mesos/87899f08-53e5-47bf-aba3-712c31c33543 2:hugetlb:/mesos/87899f08-53e5-47bf-aba3-712c31c33543 1:name=systemd:/mesos/87899f08-53e5-47bf-aba3-712c31c33543/mesos/12fde554-5262-473c-a20c-7dd201148b11 cat /proc/self/mountinfo 388 387 202:9 / / rw,relatime master:1 - ext4 /dev/xvda9 rw,seclabel,data=ordered 389 388 254:0 / /usr ro,relatime master:2 - ext4 /dev/mapper/usr ro,seclabel,block_validity,delalloc,barrier,user_xattr,acl 390 389 202:6 / /usr/share/oem rw,nodev,relatime master:32 - ext4 /dev/xvda6 rw,seclabel,commit=600,data=ordered 391 388 0:6 / /dev rw,nosuid master:3 - devtmpfs devtmpfs rw,seclabel,size=8201844k,nr_inodes=2050461,mode=755 392 391 0:19 / /dev/shm rw,nosuid,nodev master:4 - tmpfs tmpfs rw,seclabel 393 391 0:20 / /dev/pts rw,nosuid,noexec,relatime master:5 - devpts devpts rw,seclabel,gid=5,mode=620,ptmxmode=000 394 391 0:15 / /dev/mqueue rw,relatime master:26 - mqueue mqueue rw,seclabel 395 391 0:37 / /dev/hugepages rw,relatime master:27 - hugetlbfs hugetlbfs rw,seclabel 396 388 0:4 / /proc rw,nosuid,nodev,noexec,relatime master:6 - proc proc rw 397 396 0:35 / /proc/sys/fs/binfmt_misc rw,relatime master:24 - autofs systemd-1 rw,fd=23,pgrp=0,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=1017 398 396 0:40 / /proc/xen rw,relatime master:31 - xenfs xenfs rw 399 388 0:18 / /sys rw,nosuid,nodev,noexec,relatime master:7 - sysfs sysfs rw,seclabel 400 399 0:17 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime master:8 - securityfs securityfs rw 401 399 0:22 / /sys/fs/cgroup ro,nosuid,nodev,noexec master:9 - tmpfs tmpfs ro,seclabel,mode=755 402 401 0:23 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime master:10 - cgroup cgroup rw,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 403 401 0:25 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime master:11 - cgroup cgroup 
rw,hugetlb 404 401 0:26 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime master:12 - cgroup cgroup rw,net_cls,net_prio 405 401 0:27 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime master:13 - cgroup cgroup rw,freezer 406 401 0:28 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime master:14 - cgroup cgroup rw,cpu,cpuacct 407 401 0:29 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime master:15 - cgroup cgroup rw,devices 408 401 0:30 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime master:16 - cgroup cgroup rw,pids 409 401 0:31 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime master:17 - cgroup cgroup rw,memory 410 401 0:32 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime master:18 - cgroup cgroup rw,cpuset 411 401 0:33 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime master:19 - cgroup cgroup rw,perf_event 412 401 0:34 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime master:20 - cgroup cgroup rw,blkio 413 399 0:24 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime master:21 - pstore pstore rw,seclabel 414 399 0:16 / /sys/fs/selinux rw,relatime master:22 - selinuxfs selinuxfs rw 415 399 0:7 / /sys/kernel/debug rw,relatime master:29 - debugfs debugfs rw,seclabel 416 388 0:21 / /run rw,nosuid,nodev master:23 - tmpfs tmpfs rw,seclabel,mode=755 417 388 0:36 / /boot rw,relatime master:25 - autofs systemd-1 rw,fd=33,pgrp=0,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=10774 418 417 202:1 / /boot rw,relatime master:33 - vfat /dev/xvda1 rw,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro 419 388 0:38 / /media rw,nosuid,nodev,noexec,relatime master:28 - tmpfs tmpfs rw,seclabel 420 388 0:39 / /tmp rw,nosuid,nodev master:30 - tmpfs tmpfs rw,seclabel 421 388 202:16 / /var/lib rw,relatime master:218 - ext4 /dev/xvdb rw,seclabel,data=ordered 422 421 202:16 /docker/overlay /var/lib/docker/overlay rw,relatime - ext4 /dev/xvdb rw,seclabel,data=ordered 423 421 202:16 /mesos/slave/volumes/roles/kubernetes-role/b12a0508-c837-4d89-b1e3-d1400355833c /var/lib/mesos/slave/slaves/cbb0007d-bcc7-4fe8-b47d-3d67604a2eb2-S0/frameworks/cbb0007d-bcc7-4fe8-b47d-3d67604a2eb2-0002/executors/kubernetes__etcd__465602c0-ad54-4f46-960e-3a5e8e18f3e8/runs/300d07e7-319d-4642-b9c9-63b9293765fd/data-dir rw,relatime master:218 - ext4 /dev/xvdb rw,seclabel,data=ordered 424 421 202:16 /mesos/slave/volumes/roles/kubernetes-role/a60b4165-e5ee-4847-8437-2a7f78f38c5d /var/lib/mesos/slave/slaves/cbb0007d-bcc7-4fe8-b47d-3d67604a2eb2-S0/frameworks/cbb0007d-bcc7-4fe8-b47d-3d67604a2eb2-0002/executors/kubernetes__etcd__465602c0-ad54-4f46-960e-3a5e8e18f3e8/runs/300d07e7-319d-4642-b9c9-63b9293765fd/wal-pv rw,relatime master:218 - ext4 /dev/xvdb rw,seclabel,data=ordered 426 396 0:51 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw 427 421 0:52 / /var/lib/mesos/slave/slaves/cbb0007d-bcc7-4fe8-b47d-3d67604a2eb2-S0/frameworks/cbb0007d-bcc7-4fe8-b47d-3d67604a2eb2-0001/executors/test.bf2fad80-846b-11e8-b5a0-eaa1bec34306/runs/87899f08-53e5-47bf-aba3-712c31c33543/.secret-113d83da-d9ce-4a5f-9565-9179ed8bd94a rw,relatime - ramfs ramfs rw ➜ aws dcos task exec --interactive debian.6c333651-846c-11e8-b5a0-eaa1bec34306 /bin/bash cat /proc/self/cgroup 11:freezer:/mesos/66896178-3726-439f-ac45-6eb025b944fc/mesos/e69b6a82-4c4a-4758-99c8-6afac41ae1a5 10:devices:/mesos/66896178-3726-439f-ac45-6eb025b944fc 9:hugetlb:/mesos/66896178-3726-439f-ac45-6eb025b944fc 8:blkio:/mesos/66896178-3726-439f-ac45-6eb025b944fc 
7:cpuset:/mesos/66896178-3726-439f-ac45-6eb025b944fc 6:pids:/mesos/66896178-3726-439f-ac45-6eb025b944fc 5:perf_event:/mesos/66896178-3726-439f-ac45-6eb025b944fc 4:cpu,cpuacct:/mesos/66896178-3726-439f-ac45-6eb025b944fc 3:memory:/mesos/66896178-3726-439f-ac45-6eb025b944fc 2:net_cls,net_prio:/mesos/66896178-3726-439f-ac45-6eb025b944fc 1:name=systemd:/mesos/66896178-3726-439f-ac45-6eb025b944fc/mesos/e69b6a82-4c4a-4758-99c8-6afac41ae1a5 cat /proc/self/mountinfo 466 423 0:51 / / rw,relatime master:148 - overlay overlay rw,lowerdir=/tmp/xRzx5s/1:/tmp/xRzx5s/0,upperdir=/var/lib/mesos/slave/provisioner/containers/66896178-3726-439f-ac45-6eb025b944fc/backends/overlay/scratch/704eebdc-1862-4054-9245-2025563a1919/upperdir,workdir=/var/lib/mesos/slave/provisioner/containers/66896178-3726-439f-ac45-6eb025b944fc/backends/overlay/scratch/704eebdc-1862-4054-9245-2025563a1919/workdir 467 466 202:9 /etc/resolv.conf//deleted /etc/resolv.conf ro,nosuid,nodev,noexec,relatime master:1 - ext4 /dev/xvda9 rw,seclabel,data=ordered 468 466 202:9 /etc/hostname /etc/hostname ro,nosuid,nodev,noexec,relatime master:1 - ext4 /dev/xvda9 rw,seclabel,data=ordered 469 466 202:9 /etc/hosts /etc/hosts ro,nosuid,nodev,noexec,relatime master:1 - ext4 /dev/xvda9 rw,seclabel,data=ordered 470 466 202:16 /mesos/slave/slaves/cbb0007d-bcc7-4fe8-b47d-3d67604a2eb2-S1/frameworks/cbb0007d-bcc7-4fe8-b47d-3d67604a2eb2-0001/executors/debian.6c333651-846c-11e8-b5a0-eaa1bec34306/runs/66896178-3726-439f-ac45-6eb025b944fc /mnt/mesos/sandbox rw,relatime master:218 - ext4 /dev/xvdb rw,seclabel,data=ordered 471 466 0:52 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw 472 471 0:52 /bus /proc/bus ro,nosuid,nodev,noexec,relatime - proc proc rw 473 471 0:52 /fs /proc/fs ro,nosuid,nodev,noexec,relatime - proc proc rw 474 471 0:52 /irq /proc/irq ro,nosuid,nodev,noexec,relatime - proc proc rw 475 471 0:52 /sys /proc/sys ro,nosuid,nodev,noexec,relatime - proc proc rw 476 471 0:52 /sysrq-trigger /proc/sysrq-trigger ro,nosuid,nodev,noexec,relatime - proc proc rw 477 466 0:18 / /sys ro,nosuid,nodev,noexec,relatime - sysfs sysfs rw,seclabel 478 477 0:54 / /sys/fs/cgroup rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,seclabel,mode=755 479 466 0:55 / /dev rw,nosuid,noexec - tmpfs tmpfs rw,seclabel,mode=755 480 479 0:56 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,seclabel,mode=600,ptmxmode=666 481 479 0:57 / /dev/shm rw,nosuid,nodev - tmpfs tmpfs rw,seclabel 482 478 0:31 /mesos/66896178-3726-439f-ac45-6eb025b944fc /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime master:17 - cgroup cgroup rw,blkio 483 478 0:27 /mesos/66896178-3726-439f-ac45-6eb025b944fc /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime master:13 - cgroup cgroup rw,cpu,cpuacct 484 478 0:30 /mesos/66896178-3726-439f-ac45-6eb025b944fc /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime master:16 - cgroup cgroup rw,cpuset 485 478 0:33 /mesos/66896178-3726-439f-ac45-6eb025b944fc /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime master:19 - cgroup cgroup rw,devices 486 478 0:32 /mesos/66896178-3726-439f-ac45-6eb025b944fc /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime master:18 - cgroup cgroup rw,hugetlb 487 478 0:26 /mesos/66896178-3726-439f-ac45-6eb025b944fc /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime master:12 - cgroup cgroup rw,memory 488 478 0:25 /mesos/66896178-3726-439f-ac45-6eb025b944fc /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime master:11 - cgroup cgroup rw,net_cls,net_prio 489 478 0:28 
/mesos/66896178-3726-439f-ac45-6eb025b944fc /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime master:14 - cgroup cgroup rw,perf_event 490 478 0:29 /mesos/66896178-3726-439f-ac45-6eb025b944fc /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime master:15 - cgroup cgroup rw,pids returned error: cgroups: cannot find cgroup mount destination: unknown ./docker/docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown. ",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9088","07/18/2018 20:46:37",3,"`createStrippedScalarQuantity()` should clear all metadata fields. ""Currently `createStrippedScalarQuantity()` strips resource meta-data and transforms dynamic reservations into a static reservation. However, no current code depends on the reservations in resources returned by this helper function. This leads to boilerplate code around call sites and performance overhead.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9099","07/20/2018 20:04:22",3,"Add allocator quota tests regarding reserve/unreserve already allocated resources. ""Add allocator quota tests regarding reserve/unreserve already allocated resources: - Reserve already allocated resources should not affect quota headroom; - The same applies to unreserve allocated resources.""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9104","07/21/2018 00:21:50",3,"Refactor capability related logic in the allocator. ""- Add a function for returning the subset of frameworks that are capable of receiving offers from the agent. This moves the capability checking out of the core allocation logic and means the loops can just iterate over a smaller set of framework candidates rather than having to write 'continue' cases. This covers the GPU_RESOURCES and REGION_AWARE capabilities. - Similarly, add a function that allows framework capability based filtering of resources. This pulls out the filtering logic from the core allocation logic and instead the core allocation logic can just all out to the capability filtering function. This covers the SHARED_RESOURCES, REVOCABLE_RESOURCES and RESERVATION_REFINEMENT capabilities. Note that in order to implement this one, we must refactor the shared resources logic in order to have the resource generation occur regardless of the framework capability (followed by getting filtered out via this new function if the framework is not capable).""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9105","07/23/2018 14:07:03",5,"Implement Docker Seccomp profile parser. ""The parser should translate Docker seccomp profile into the `ContainerSeccompProfile` protobuf message.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9106","07/23/2018 14:42:41",3,"Add seccomp filter into containerizer launcher. 
""Containerizer launcher should create an instance of the `SeccompFilter` class, which will be used to setup/load a Seccomp filter rules using the given `ContainerSeccompProfile` message.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9112","07/25/2018 11:31:18",2,"mesos-style reports violations on a clean checkout ""When running {{support/mesos-style.py}} on a clean checkout of e.g., {{e879e920c35}} Python style violations are reported, I would expect a clean checkout to not report any violations."""," Checking 46 Python files Using config file /home/bbannier/src/mesos/support/pylint.config ************* Module cli.plugins.base E:119, 0: Bad option value 'R0204' (bad-option-value) ************* Module settings E: 32, 4: No name 'VERSION' in module 'version' (no-name-in-module) Using config file /home/bbannier/src/mesos/support/pylint.config ************* Module mesos.http E: 25, 0: Unable to import 'urlparse' (import-error) E: 87,30: Undefined variable 'xrange' (undefined-variable) Using config file /home/bbannier/src/mesos/support/pylint.config ************* Module apply-reviews R: 99, 0: Either all return statements in a function should return an expression, or none of them should. (inconsistent-return-statements) C:124, 9: Do not use `len(SEQUENCE)` to determine if a sequence is empty (len-as-condition) R:302, 4: Unnecessary """"else"""" after """"return"""" (no-else-return) ************* Module generate-endpoint-help R:215, 4: Unnecessary """"else"""" after """"return"""" (no-else-return) ************* Module verify-reviews C:261, 7: Do not use `len(SEQUENCE)` to determine if a sequence is empty (len-as-condition) Total errors found: 9 ",0,0,1,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9116","07/27/2018 19:17:42",8,"Launch nested container session fails due to incorrect detection of `mnt` namespace of command executor's task. ""Launch nested container call might fail with the following error: This happens when the containerizer launcher [tries to enter|https://github.com/apache/mesos/blob/077f122d52671412a2ab5d992d535712cc154002/src/slave/containerizer/mesos/launch.cpp#L879-L892] `mnt` namespace using the pid of a terminated process. The pid [was detected|https://github.com/apache/mesos/blob/077f122d52671412a2ab5d992d535712cc154002/src/slave/containerizer/mesos/containerizer.cpp#L1930-L1958] by the agent before spawning the containerizer launcher process, because the process was running back then. The issue can be reproduced using the following test (pseudocode): When we start the very first nested container, `getMountNamespaceTarget()` returns a PID of the task (`sleep 1000`), because it's the only process whose `mnt` namespace differs from the parent container. This nested container becomes a child of PID 1 process, which is also a parent of the command executor. It's not an executor's child! It can be seen in attached `pstree.png`. When we start a second nested container, `getMountNamespaceTarget()` might return PID of the previous nested container (`echo echo`) instead of the task's PID (`sleep 1000`). It happens because the first nested container entered `mnt` namespace of the task. 
Then, the containerizer launcher (""""nanny"""" process) attempts to enter `mnt` namespace using the PID of a terminated process, so we get this error."""," Failed to enter mount namespace: Failed to open '/proc/29473/ns/mnt': No such file or directory launchTask(""""sleep 1000"""") parentContainerId = containerizer.containers().begin() outputs = [] for i in range(10): ContainerId containerId containerId.parent = parentContainerId containerId.id = UUID.random() LAUNCH_NESTED_CONTAINER_SESSION(containerId, """"echo echo"""") response = ATTACH_CONTAINER_OUTPUT(containerId) outputs.append(response.reader) for output in outputs: stdout, stderr = getProcessIOData(output) assert(""""echo"""" == stdout + stderr)",0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9122","07/31/2018 14:25:14",5,"Parallel serving of '/state' requests in the Master. ""To reduce the impact of '/state'-related workloads on the Master actor and to increase the average response time when multiple '/state' requests are in the Master's mailbox, accumulate '/state' requests and process them in parallel while blocking the master actor only once.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9123","07/31/2018 17:55:44",5,"Expose quota consumption metrics. ""Currently, quota related metrics exposes quota guarantee and allocated quota. We should expose """"consumed"""" which is allocated quota plus unallocated reservations. We already have this info in the allocator as `consumedQuotaScalarQuantities`, just needs to expose it.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9124","07/31/2018 19:04:24",3,"Agent reconfiguration can cause master to unsuppress on scheduler's behalf. ""When agent reconfiguration was enabled in Mesos, the allocator was also updated to remove all offer filters associated with an agent when that agent's attributes change. In addition, whenever filters for an agent are removed, the framework is unsuppressed for any roles that had filters on the agent. While this ensures that schedulers will have an opportunity to use resources on an agent after reconfiguration, modifying the scheduler's suppression may put the scheduler in an inconsistent state, where it believes it is suppressed in a particular role when it is not.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9125","08/01/2018 20:04:29",1,"Port mapper CNI plugin might fail with ""Resource temporarily unavailable"" ""https://github.com/apache/mesos/blob/master/src/slave/containerizer/mesos/isolators/network/cni/plugins/port_mapper/port_mapper.cpp#L345 Looks like we're missing a `-w` for the iptable command. This will lead to issues like This becomes more likely if there are many concurrent launches of Mesos contianers that uses port mapper on the box."""," The CNI plugin '/opt/mesosphere/active/mesos/libexec/mesos/mesos-cni-port-mapper' failed to attach container a710dc89-7b22-493b-b8bb-fb80a99d5321 to CNI network 'mesos-bridge': stdout='{""""cniVersion"""":""""0.3.0"""",""""code"""":103,""""msg"""":""""Failed to add DNAT rule with tag: Resource temporarily unavailable""""} ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0 +"MESOS-9130","08/02/2018 23:55:53",1,"Test `StorageLocalResourceProviderTest.ROOT_ContainerTerminationMetric` is flaky. 
""This test is flaky and can fail with the following error: The actual error is the following: The root cause is that the SLRP calls {{ListVolumes}} and {{GetCapacity}} when starting up, and if the plugin container is killed when these calls are ongoing, gRPC will return an {{OS Error}} which will lead the SLRP to fail. This flakiness will be fixed once we finish https://issues.apache.org/jira/browse/MESOS-8400."""," ../../src/tests/storage_local_resource_provider_tests.cpp:3167 Failed to wait 15secs for pluginRestarted E0802 22:13:37.265038  8216 provider.cpp:1496] Failed to reconcile resource provider b9379982-d990-4f63-8a5b-10edd4f5a1bb: Collect failed: OS Error",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0 +"MESOS-9137","08/06/2018 18:12:20",2,"GRPC build fails to pass compiler flags ""The GRPC build integration fails to pass compiler flags down from the main build into the GRPC component build. This can make the build fail in surprising ways. For example, if you use {{CXXFLAGS=""""-fsanitize=thread"""" CFLAGS=""""-fsanitize=thread""""}}, the build fails because of the inconsistent application of these flags across bundled components. In this build log, libprotobuf was built using the correct flags, which then causes GRPC to fail because it is missing the flags: """," make[3]: Entering directory '/home/jpeach/src/asf-mesos/build/3rdparty' 20 cd grpc-1.10.0 && \ 19 CPPFLAGS=""""-I/home/jpeach/src/asf-mesos/build/3rdparty/protobuf-3.5.0/src \ 18 \ 17 \ 16 -Wno-array-bounds"""" \ 15 make \ 14 /home/jpeach/src/asf-mesos/build/3rdparty/grpc-1.10.0/libs/opt/libgrpc++.a /home/jpeach/src/asf-mesos/build/3rdparty/grpc-1 .10.0/libs/opt/libgrpc.a /home/jpeach/src/asf-mesos/build/3rdparty/grpc-1.10.0/libs/opt/libgpr.a \ 13 CC=""""/home/jpeach/src/asf-mesos/build/cc"""" \ 12 CXX=""""/home/jpeach/src/asf-mesos/build/c++"""" \ 11 LD=""""/home/jpeach/src/asf-mesos/build/cc"""" \ 10 LDXX=""""/home/jpeach/src/asf-mesos/build/c++"""" \ 9 LDFLAGS=""""-L/home/jpeach/src/asf-mesos/build/3rdparty/protobuf-3.5.0/src/.libs \ 8 \ 7 """" \ 6 HAS_PKG_CONFIG=false \ 5 NO_PROTOC=false \ 4 PROTOC=""""/home/jpeach/src/asf-mesos/build/3rdparty/protobuf-3.5.0/src/protoc"""" 3 make[4]: Entering directory '/home/jpeach/src/asf-mesos/build/3rdparty/grpc-1.10.0' 2 mkdir -p `dirname /home/jpeach/src/asf-mesos/build/3rdparty/grpc-1.10.0/bins/opt/grpc_cpp_plugin` 1 /home/jpeach/src/asf-mesos/build/c++ -L/home/jpeach/src/asf-mesos/build/3rdparty/protobuf-3.5.0/src/.libs /home/jpeach/src/asf-mesos/build/3rdparty/grpc-1.10.0/objs/opt/src/compiler/cpp_plugin.o /home/j peach/src/asf-mesos/build/3rdparty/grpc-1.10.0/libs/opt/libgrpc_plugin_support.a -lprotoc -lprotobuf -ldl -lrt -lm -lpthread - lz -lprotoc -lprotobuf -o /home/jpeach/src/asf-mesos/build/3rdparty/grpc-1.10.0/bins/opt/grpc_cpp_plugin 31 /home/jpeach/src/asf-mesos/build/3rdparty/protobuf-3.5.0/src/.libs/libprotoc.a(code_generator.o): In function `__cxx_global_var _init': 1 code_generator.cc:(.text.startup+0xd): undefined reference to `__tsan_func_entry' 2 code_generator.cc:(.text.startup+0x43): undefined reference to `__tsan_func_exit' 3 code_generator.cc:(.text.startup+0x57): undefined reference to `__tsan_func_exit' 4 /home/jpeach/src/asf-mesos/build/3rdparty/protobuf-3.5.0/src/.libs/libprotoc.a(code_generator.o): In function `_GLOBAL__sub_I_c ode_generator.cc': 5 code_generator.cc:(.text.startup+0x7d): undefined reference to `__tsan_func_entry' 6 code_generator.cc:(.text.startup+0x8c): undefined reference to 
`__tsan_func_exit' 7 code_generator.cc:(.text.startup+0xa0): undefined reference to `__tsan_func_exit' 8 /home/jpeach/src/asf-mesos/build/3rdparty/protobuf-3.5.0/src/.libs/libprotoc.a(code_generator.o): In function `google::protobuf ::compiler::CodeGenerator::~CodeGenerator()': 9 code_generator.cc:(.text._ZN6google8protobuf8compiler13CodeGeneratorD0Ev+0x14): undefined reference to `__tsan_func_entry' 10 /home/jpeach/src/asf-mesos/build/3rdparty/protobuf-3.5.0/src/.libs/libprotoc.a(code_generator.o): In function `google::protobuf ::compiler::CodeGenerator::GenerateAll(std::vector > const&, std::__cxx11::basic_string, std::allocator > const&, google:: protobuf::compiler::GeneratorContext*, std::__cxx11::basic_string, std::allocator >*) const' : ",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9143","08/09/2018 15:26:53",2,"MasterQuotaTest.RemoveSingleQuota is flaky. """""," ../../src/tests/master_quota_tests.cpp:493 Value of: metrics.at(metricKey).isNone() Actual: false Expected: true ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9149","08/10/2018 22:34:47",2,"Failed to build gRPC on Linux without OpenSSL. ""Building Mesos on Ubuntu 16.04 without SSL headers installed yields the following message: """," DEPENDENCY ERROR The target you are trying to run requires an OpenSSL implementation. Your system doesn't have one, and either the third_party directory doesn't have it, or your compiler can't build BoringSSL. Please consult INSTALL to get more information. If you need information about why these tests failed, run: make run_dep_checks",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9151","08/10/2018 23:41:18",8,"Container stuck at ISOLATING due to FD leak ""When containers are launching on a single agent at scale, one container stuck at ISOLATING could occasionally happen. And this container becomes un-destroyable due to containerizer destroy always wait for isolate() finish to continue. We add more logging to debug this issue: which shows that the await() in CNI::attach() stuck at the second future (io::read() for stdout). By looking at the df of this stdout: We found pipe 275230 is held by the agent process and the sleep process at the same time! The reason why the leak is possible is because we don't use `pipe2` to create a pipe with `O_CLOEXEC` in subprocess: https://github.com/apache/mesos/blob/1.5.x/3rdparty/libprocess/src/subprocess_posix.cpp#L61 Although we do set cloexec on those fds later: https://github.com/apache/mesos/blob/1.5.x/3rdparty/libprocess/src/subprocess.cpp#L366-L373 There is a race where a fork happens after `pipe()` call, but before cloexec is called later. 
This is more likely on a busy system (this explains why it's not hard to repo the issue when launching a lot of containers on a single box)."""," Aug 10 17:23:28 ip-10-0-1-129.us-west-2.compute.internal mesos-agent[2974]: I0810 17:23:28.050068 2995 collect.hpp:271] $$$$: AwaitProcess waited invoked for ProcessBase ID: __await__(26651); futures size: 3; future: Ready; future index: 2; ready count: 1 Aug 10 17:23:28 ip-10-0-1-129.us-west-2.compute.internal mesos-agent[2974]: I0810 17:23:28.414436 2998 collect.hpp:271] $$$$: AwaitProcess waited invoked for ProcessBase ID: __await__(26651); futures size: 3; future: Ready; future index: 0; ready count: 2 Aug 10 17:23:27 ip-10-0-1-129.us-west-2.compute.internal mesos-agent[2974]: I0810 17:23:27.657501 2995 cni.cpp:1287] !!!!: Start to await for plugin '/opt/mesosphere/active/mesos/libexec/mesos/mesos-cni-port-mapper' to finish for container 1c8abf4c-f71a-4704-9a73-1ab0dd709c62 with pid '16644'; stdout fd: 1781; stderr fd: 1800 core@ip-10-0-1-129 ~ $ ps aux | grep mesos-agent core 1674 0.0 0.0 6704 864 pts/0 S+ 20:00 0:00 grep --colour=auto mesos-agent root 2974 16.4 2.5 1211096 414048 ? Ssl 17:02 29:11 /opt/mesosphere/packages/mesos--61265af3be37861f26b657c1f9800293b86a0374/bin/mesos-agent core@ip-10-0-1-129 ~ $ sudo ls -al /proc/2974/fd/ | grep 1781 l-wx------. 1 root root 64 Aug 10 19:38 1781 -> /var/lib/mesos/slave/meta/slaves/d3089315-8e34-40b4-b1f7-0ac6a624d7db-S0/frameworks/d3089315-8e34-40b4-b1f7-0ac6a624d7db-0000/executors/test2.d820326d-9cc1-11e8-9809-ee15da5c8980/runs/38e9270d-ebda-4758-ad96-40c5b84bffdc/tasks/test2.d820326d-9cc1-11e8-9809-ee15da5c8980/task.updates core@ip-10-0-1-129 ~ $ ps aux | grep 27981 core 2201 0.0 0.0 6704 884 pts/0 S+ 20:06 0:00 grep --colour=auto 27981 root 27981 0.0 0.0 1516 4 ? Ss 17:25 0:00 sleep 10000 core@ip-10-0-1-129 ~ $ cat /proc/s^C core@ip-10-0-1-129 ~ $ sudo -s ip-10-0-1-129 core # ls -al /proc/27981/fd | grep 275230 lr-x------. 1 root root 64 Aug 10 20:05 1781 -> pipe:[275230] l-wx------. 1 root root 64 Aug 10 20:05 1787 -> pipe:[275230] core@ip-10-0-1-129 ~ $ sudo ls -al /proc/2974/fd/ | grep pipe lr-x------. 1 root root 64 Aug 10 17:02 11 -> pipe:[49380] l-wx------. 1 root root 64 Aug 10 17:02 14 -> pipe:[49380] lr-x------. 1 root root 64 Aug 10 17:02 17 -> pipe:[48909] lr-x------. 1 root root 64 Aug 10 19:38 1708 -> pipe:[275089] l-wx------. 1 root root 64 Aug 10 19:38 1755 -> pipe:[275089] lr-x------. 1 root root 64 Aug 10 19:38 1787 -> pipe:[275230] l-wx------. 1 root root 64 Aug 10 17:02 19 -> pipe:[48909] ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9156","08/15/2018 13:35:11",1,"StorageLocalResourceProviderProcess can deadlock ""On fatal conditions the {{StorageLocalResourceProviderProcess}} triggers its {{fatal}} function which causes its {{Driver}} process to be torn down. Invocations of {{fatal}} need to be properly {{defer}}'d and must never execute on the {{Driver}} process. We saw an invocation of {{fatal}} deadlock in our internal CI since its invocation in {{StorageLocalResourceProviderProcess::sendResourceProviderStateUpdate}} wasn't explicitly {{defer}}'d, and by accident was executing on the {{Driver}}'s process.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9158","08/16/2018 17:20:16",5,"Parallel serving of state-related read-only requests in the Master. 
""Similar to MESOS-9122, make all read-only master state endpoints batched.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9160","08/16/2018 23:05:41",1,"Failed to compile gRPC when the build path contains symlinks. "" Apparently, in GRPC makefile, it uses realpath (no symlinks) when computing the build directory, whereas, Mesos builds use `abspath` (doesn't resolve symlinks). So there is a target mismatch if any directory in your Mesos path is a symlink."""," make[4]: *** No rule to make target '/home/kapil/mesos/master/build/3rdparty/grpc-1.10.0/libs/opt/libgrpc++_unsecure.a'.  Stop.",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9162","08/17/2018 14:11:45",5,"Unkillable pod container stuck in ISOLATING ""We have a simple test that launches a pod with two containers (one writes in a file and the other reads it). This test is flaky because the container sometimes fails to start. Marathon app definition: During the test, Marathon tries to launch the pod but doesn't receive a {{TASK_RUNNING}} for the first container and so after 2min decides to kill the pod which also fails. Agent sandbox (attached to this ticket minus docker layers, since they're too big to attach) shows that one of the containers wasn't started properly - the last line in the agent log says: Until then the log looks pretty unspektakular. Afterwards, Marathon tries to kill the container repeatedly, but doesn't succeed - the executor receives the reuests but doesn't send anything back: Relevant Ids for grepping the logs: """," { """"id"""": """"/simple-pod"""", """"scaling"""": { """"kind"""": """"fixed"""", """"instances"""": 1 }, """"environment"""": { """"PING"""": """"PONG"""" }, """"containers"""": [ { """"name"""": """"ct1"""", """"resources"""": { """"cpus"""": 0.1, """"mem"""": 32 }, """"image"""": { """"kind"""": """"DOCKER"""", """"id"""": """"busybox"""" }, """"exec"""": { """"command"""": { """"shell"""": """"while true; do echo the current time is $(date) > ./test-v1/clock; sleep 1; done"""" } }, """"volumeMounts"""": [ { """"name"""": """"v1"""", """"mountPath"""": """"test-v1"""" } ] }, { """"name"""": """"ct2"""", """"resources"""": { """"cpus"""": 0.1, """"mem"""": 32 }, """"exec"""": { """"command"""": { """"shell"""": """"while true; do echo -n $PING ' '; cat ./etc/clock; sleep 1; done"""" } }, """"volumeMounts"""": [ { """"name"""": """"v1"""", """"mountPath"""": """"etc"""" }, { """"name"""": """"v2"""", """"mountPath"""": """"docker"""" } ] } ], """"networks"""": [ { """"mode"""": """"host"""" } ], """"volumes"""": [ { """"name"""": """"v1"""" }, { """"name"""": """"v2"""", """"host"""": """"/var/lib/docker"""" } ] } Transitioning the state of container ff4f4fdc-9327-42fb-be40-29e919e15aee.e9b05652-e779-46f8-9b76-b2e1ce7e5940 from PREPARING to ISOLATING I0816 22:52:53.111995 4 default_executor.cpp:204] Received SUBSCRIBED event I0816 22:52:53.112520 4 default_executor.cpp:208] Subscribed executor on 10.10.0.222 I0816 22:52:53.112783 4 default_executor.cpp:204] Received LAUNCH_GROUP event I0816 22:52:53.116516 11 default_executor.cpp:428] Setting 'MESOS_CONTAINER_IP' to: 10.10.0.222 I0816 22:52:53.169596 4 default_executor.cpp:204] Received ACKNOWLEDGED event I0816 22:52:53.194416 10 default_executor.cpp:204] Received ACKNOWLEDGED event I0816 22:54:50.559470 8 default_executor.cpp:204] Received KILL event I0816 22:54:50.559496 8 default_executor.cpp:1251] Received kill for task 
'simple-pod-bcc8f180b611494aa972520b8b650ca9.instance-1ad9ecbb-a1a7-11e8-b35a-6e17842c13e2.ct1' I0816 22:54:50.559737 4 default_executor.cpp:204] Received KILL event I0816 22:54:50.559751 4 default_executor.cpp:1251] Received kill for task 'simple-pod-bcc8f180b611494aa972520b8b650ca9.instance-1ad9ecbb-a1a7-11e8-b35a-6e17842c13e2.ct2' ... Marathon app id: /simple-pod-bcc8f180b611494aa972520b8b650ca9 Mesos tasks id: simple-pod-bcc8f180b611494aa972520b8b650ca9.instance-1ad9ecbb-a1a7-11e8-b35a-6e17842c13e2.ct1 Mesos container id: e9b05652-e779-46f8-9b76-b2e1ce7e5940 ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9164","08/17/2018 19:01:10",3,"Subprocess should unset CLOEXEC on whitelisted file descriptors. ""The libprocess subprocess API accepts a set of whitelisted file descriptors that are supposed to be inherited to the child process. On windows, these are used, but otherwise the subprocess API just ignores them. We probably should make sure that the API clears the {{CLOEXEC}} flag on this descriptors so that they are inherited to the child.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9166","08/17/2018 22:04:09",1,"Ignore pre-existing CSI volumes known to SLRP. ""A pre-existing CSI volume can be known to an SLRP if there is a CSI volume state checkpoint for the volume, but the corresponding resource checkpoint is missing. This typically happen when the agent ID changes which would make the SLRP lose all its resource provider state. In such a case, we should not treat these volumes as unmanaged pre-existing volumes (i.e., disk resources without profiles). For now we should not report them, and consider designing a way to recover these volumes.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9170","08/21/2018 13:51:51",2,"Zookeeper doesn't compile with newer gcc due to format error ""RR: https://reviews.apache.org/r/68370/""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9177","08/22/2018 15:34:22",3,"Mesos master segfaults when responding to /state requests. 
"""""," *** SIGSEGV (@0x8) received by PID 66991 (TID 0x7f36792b7700) from PID 8; stack trace: *** @ 0x7f367e7226d0 (unknown) @ 0x7f3681266913 _ZZNK5mesos8internal6master19FullFrameworkWriterclEPN4JSON12ObjectWriterEENKUlPNS3_11ArrayWriterEE1_clES7_ @ 0x7f3681266af0 _ZNSt17_Function_handlerIFvPN9rapidjson6WriterINS0_19GenericStringBufferINS0_4UTF8IcEENS0_12CrtAllocatorEEES4_S4_S5_Lj0EEEEZN4JSON8internal7jsonifyIZNK5mesos8internal6master19FullFrameworkWriterclEPNSA_12ObjectWriterEEUlPNSA_11ArrayWriterEE1_vEESt8functionIS9_ERKT_NSB_6PreferEEUlS8_E_E9_M_invokeERKSt9_Any_dataS8_ @ 0x7f36812882d0 mesos::internal::master::FullFrameworkWriter::operator()() @ 0x7f36812889d0 _ZNSt17_Function_handlerIFvPN9rapidjson6WriterINS0_19GenericStringBufferINS0_4UTF8IcEENS0_12CrtAllocatorEEES4_S4_S5_Lj0EEEEZN4JSON8internal7jsonifyIN5mesos8internal6master19FullFrameworkWriterEvEESt8functionIS9_ERKT_NSB_6PreferEEUlS8_E_E9_M_invokeERKSt9_Any_dataS8_ @ 0x7f368121aef0 _ZNSt17_Function_handlerIFvPN9rapidjson6WriterINS0_19GenericStringBufferINS0_4UTF8IcEENS0_12CrtAllocatorEEES4_S4_S5_Lj0EEEEZN4JSON8internal7jsonifyIZZZN5mesos8internal6master6Master4Http25processStateRequestsBatchEvENKUlRKN7process4http7RequestERKNSI_5OwnedINSD_15ObjectApproversEEEE_clESM_SR_ENKUlPNSA_12ObjectWriterEE_clESU_EUlPNSA_11ArrayWriterEE3_vEESt8functionIS9_ERKT_NSB_6PreferEEUlS8_E_E9_M_invokeERKSt9_Any_dataS8_ @ 0x7f3681241be3 _ZZZN5mesos8internal6master6Master4Http25processStateRequestsBatchEvENKUlRKN7process4http7RequestERKNS4_5OwnedINS_15ObjectApproversEEEE_clES8_SD_ENKUlPN4JSON12ObjectWriterEE_clESH_ @ 0x7f3681242760 _ZNSt17_Function_handlerIFvPN9rapidjson6WriterINS0_19GenericStringBufferINS0_4UTF8IcEENS0_12CrtAllocatorEEES4_S4_S5_Lj0EEEEZN4JSON8internal7jsonifyIZZN5mesos8internal6master6Master4Http25processStateRequestsBatchEvENKUlRKN7process4http7RequestERKNSI_5OwnedINSD_15ObjectApproversEEEE_clESM_SR_EUlPNSA_12ObjectWriterEE_vEESt8functionIS9_ERKT_NSB_6PreferEEUlS8_E_E9_M_invokeERKSt9_Any_dataS8_ @ 0x7f36810a41bb _ZNO4JSON5ProxycvSsEv @ 0x7f368215f60e process::http::OK::OK() @ 0x7f3681219061 _ZN7process20AsyncExecutorProcess7executeIZN5mesos8internal6master6Master4Http25processStateRequestsBatchEvEUlRKNS_4http7RequestERKNS_5OwnedINS2_15ObjectApproversEEEE_S8_SD_Li0EEENSt9result_ofIFT_T0_T1_EE4typeERKSI_SJ_SK_ @ 0x7f36812212c0 _ZZN7process8dispatchINS_4http8ResponseENS_20AsyncExecutorProcessERKZN5mesos8internal6master6Master4Http25processStateRequestsBatchEvEUlRKNS1_7RequestERKNS_5OwnedINS4_15ObjectApproversEEEE_S9_SE_SJ_RS9_RSE_EENS_6FutureIT_EERKNS_3PIDIT0_EEMSQ_FSN_T1_T2_T3_EOT4_OT5_OT6_ENKUlSt10unique_ptrINS_7PromiseIS2_EESt14default_deleteIS17_EEOSH_OS9_OSE_PNS_11ProcessBaseEE_clES1A_S1B_S1C_S1D_S1F_ @ 0x7f36812215ac _ZNO6lambda12CallableOnceIFvPN7process11ProcessBaseEEE10CallableFnINS_8internal7PartialIZNS1_8dispatchINS1_4http8ResponseENS1_20AsyncExecutorProcessERKZN5mesos8internal6master6Master4Http25processStateRequestsBatchEvEUlRKNSA_7RequestERKNS1_5OwnedINSD_15ObjectApproversEEEE_SI_SN_SS_RSI_RSN_EENS1_6FutureIT_EERKNS1_3PIDIT0_EEMSZ_FSW_T1_T2_T3_EOT4_OT5_OT6_EUlSt10unique_ptrINS1_7PromiseISB_EESt14default_deleteIS1G_EEOSQ_OSI_OSN_S3_E_IS1J_SQ_SI_SN_St12_PlaceholderILi1EEEEEEclEOS3_ @ 0x7f36821f3541 process::ProcessBase::consume() @ 0x7f3682209fbc process::ProcessManager::resume() @ 0x7f368220fa76 _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv @ 0x7f367eefc2b0 (unknown) @ 0x7f367e71ae25 start_thread @ 0x7f367e444bad __clone 
",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9185","08/27/2018 16:11:52",3,"An attempt to remove or destroy container in composing containerizer leads to segfault. ""`LAUNCH_NESTED_CONTAINER` and `LAUNCH_NESTED_CONTAINER_SESSION` leads to segfault in the agent when the parent container is unknown to the composing containerizer. If the parent container cannot be found during an attempt to launch a nested container via `ComposingContainerizerProcess::launch()`, the composing container returns an error without cleaning up the container. On `launch()` failures, the agent calls `destroy()` which accesses uninitialized `containerizer` field.""","",0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9186","08/27/2018 17:17:21",2,"Failed to build Mesos with Python 3.7 and new CLI enabled ""I've tried to build Mesos with the flag 'enable-new-cli' and Python 3.7 and it failed with this error message: """," Traceback (most recent call last): File """"/Users/mesosphere/code/mesos/src/python/cli_new/bin/../tests/main.py"""", line 26, in from cli.tests import CLITestCase File """"/Users/mesosphere/code/mesos/src/python/cli_new/lib/cli/__init__.py"""", line 24, in from . import util File """"/Users/mesosphere/code/mesos/src/python/cli_new/lib/cli/util.py"""", line 29, in from kazoo.client import KazooClient File """"/Users/mesosphere/code/mesos/build/src/.virtualenv/lib/python3.7/site-packages/kazoo/client.py"""", line 67, in from kazoo.recipe.partitioner import SetPartitioner File """"/Users/mesosphere/code/mesos/build/src/.virtualenv/lib/python3.7/site-packages/kazoo/recipe/partitioner.py"""", line 194 self._child_watching(self._allocate_transition, async=True) ^ SyntaxError: invalid syntax make[3]: *** [check-local] Error 1 make[2]: *** [check-am] Error 2 make[1]: *** [check] Error 2 make: *** [check-recursive] Error 1 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9190","08/29/2018 02:11:26",2,"Test `StorageLocalResourceProviderTest.ROOT_CreateDestroyDiskRecovery` is flaky. ""The test is flaky in 1.7.x: This is because of `DESTRY_DISK` races with a profile poll. If the poll finishes first, SLRP will start reconciling storage pools, and drop certain incoming operations during reconciliation."""," I0824 22:20:01.018494 4208 provider.cpp:1520] Received DESTROY_DISK operation '' (uuid: 7aaadd15-1f6d-4d4e-9000-4c250495f7ba) W0824 22:20:01.018517 4208 provider.cpp:3008] Dropping operation (uuid: 7aaadd15-1f6d-4d4e-9000-4c250495f7ba): Cannot apply operation when reconciling storage pools ... I0824 22:20:01.086668 4209 master.cpp:9445] Sending offers [ 0ab2c552-4d85-40fd-8717-8e4d19c7a65e-O4 ] to framework 0ab2c552-4d85-40fd-8717-8e4d19c7a65e-0000 (default) at scheduler-0af22a76-f591-43ba-8470-f4b863292d61@172.16.10.36:35916 ../../src/tests/storage_local_resource_provider_tests.cpp:995: Failure Mock function called more times than expected - returning directly. Function call: resourceOffers(0x7ffe7ba8c240, @0x7f04a09808c0 { 160-byte object <98-7C 05-AD 04-7F 00-00 00-00 00-00 00-00 00-00 5F-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 05-00 00-00 05-00 00-00 10-A7 03-84 04-7F 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 ... 
90-F2 08-84 04-7F 00-00 10-71 00-84 04-7F 00-00 40-71 00-84 04-7F 00-00 00-51 02-84 04-7F 00-00 C0-6A 03-84 04-7F 00-00 00-00 00-00 00-00 00-00 10-F0 00-84 04-7F 00-00 00-00 00-00 00-00 00-00> }) Expected: to be called once Actual: called twice - over-saturated and active ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9196","08/30/2018 23:36:50",5,"Removing rootfs mounts may fail with EBUSY. ""We observed in production environment that this Consider fixing the issue by using detach unmount when unmounting container rootfs. See MESOS-3349 for details. The root cause on why """"Device or resource busy"""" is received when doing rootfs unmount is still unknown. _UPDATE_: The production environment has a cronjob that scan filesystems to build index (updatedb for mlocate). This can explain the EBUSY we receive when doing `unmount`. _UPDATE_: Splunk that's scanning `/var/lib/mesos` could also be a source of triggers."""," Failed to destroy the provisioned rootfs when destroying container: Collect failed: Failed to destroy overlay-mounted rootfs '/var/lib/mesos/slave/provisioner/containers/6332cf3d-9897-475b-88b3-40e983a2a531/containers/e8f36ad7-c9ae-40da-9d14-431e98174735/backends/overlay/rootfses/d601ef1b-11b9-445a-b607-7c6366cd21ec': Failed to unmount '/var/lib/mesos/slave/provisioner/containers/6332cf3d-9897-475b-88b3-40e983a2a531/containers/e8f36ad7-c9ae-40da-9d14-431e98174735/backends/overlay/rootfses/d601ef1b-11b9-445a-b607-7c6366cd21ec': Device or resource busy ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9201","08/31/2018 22:23:33",1,"Add docs for UPDATE_OPERATION_STATUS event ""We need to add the {{UPDATE_OPERATION_STATUS}} event to the docs for the v1 scheduler API event stream.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9213","09/06/2018 21:34:14",1,"Avoid double copying of master->framework messages when incrementing metrics. ""When incrementing metrics, we currently do stuff like which is not efficient. We should update such callsites to avoid gratuitous conversions which could degrade performance when many events are being sent."""," metrics.incrementEvent(devolve(evolve(message))); ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1 +"MESOS-9217","09/07/2018 11:02:47",2,"LongLivedDefaultExecutorRestart is flaky. 
"""""," 03:52:07 [ RUN ] GarbageCollectorIntegrationTest.LongLivedDefaultExecutorRestart 03:52:07 I0907 03:52:07.699676 2350 cluster.cpp:173] Creating default 'local' authorizer 03:52:07 I0907 03:52:07.700664 2374 master.cpp:413] Master 8e9d97f6-4dc4-490b-81f6-d2033e2109d3 (ip-172-16-10-27.ec2.internal) started on 172.16.10.27:45074 03:52:07 I0907 03:52:07.700690 2374 master.cpp:416] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""1secs"""" --allocator=""""hierarchical"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authentication_v0_timeout=""""15secs"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/cuUPYo/credentials"""" --filter_gpu_resources=""""true"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_unreachable_tasks_per_framework=""""1000"""" --memory_profiling=""""false"""" --min_allocatable_resources=""""cpus:0.01|mem:32"""" --port=""""5050"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""in_memory"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --require_agent_domain=""""false"""" --role_sorter=""""drf"""" --root_submissions=""""true"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/cuUPYo/master"""" --zk_session_timeout=""""10secs"""" 03:52:07 I0907 03:52:07.700857 2374 master.cpp:465] Master only allowing authenticated frameworks to register 03:52:07 I0907 03:52:07.700870 2374 master.cpp:471] Master only allowing authenticated agents to register 03:52:07 I0907 03:52:07.700947 2374 master.cpp:477] Master only allowing authenticated HTTP frameworks to register 03:52:07 I0907 03:52:07.700958 2374 credentials.hpp:37] Loading credentials for authentication from '/tmp/cuUPYo/credentials' 03:52:07 I0907 03:52:07.701068 2374 master.cpp:521] Using default 'crammd5' authenticator 03:52:07 I0907 03:52:07.701151 2374 http.cpp:1037] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readonly' 03:52:07 I0907 03:52:07.701254 2374 http.cpp:1037] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' 03:52:07 I0907 03:52:07.701352 2374 http.cpp:1037] Creating default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' 03:52:07 I0907 03:52:07.701445 2374 master.cpp:602] Authorization enabled 03:52:07 I0907 03:52:07.701566 2370 whitelist_watcher.cpp:77] No whitelist given 03:52:07 I0907 03:52:07.701695 2376 hierarchical.cpp:182] Initialized hierarchical allocator process 03:52:07 I0907 03:52:07.702237 2374 master.cpp:2083] Elected as the leading master! 
03:52:07 I0907 03:52:07.702255 2374 master.cpp:1638] Recovering from registrar 03:52:07 I0907 03:52:07.702293 2375 registrar.cpp:339] Recovering registrar 03:52:07 I0907 03:52:07.706190 2375 registrar.cpp:383] Successfully fetched the registry (0B) in 3.884032ms 03:52:07 I0907 03:52:07.706233 2375 registrar.cpp:487] Applied 1 operations in 7967ns; attempting to update the registry 03:52:07 I0907 03:52:07.706378 2375 registrar.cpp:544] Successfully updated the registry in 126976ns 03:52:07 I0907 03:52:07.706413 2375 registrar.cpp:416] Successfully recovered registrar 03:52:07 I0907 03:52:07.706507 2375 master.cpp:1752] Recovered 0 agents from the registry (172B); allowing 10mins for agents to reregister 03:52:07 I0907 03:52:07.706548 2375 hierarchical.cpp:220] Skipping recovery of hierarchical allocator: nothing to recover 03:52:07 W0907 03:52:07.708107 2350 process.cpp:2810] Attempted to spawn already running process files@172.16.10.27:45074 03:52:07 I0907 03:52:07.708500 2350 containerizer.cpp:305] Using isolation { environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni } 03:52:07 I0907 03:52:07.710343 2350 linux_launcher.cpp:144] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher 03:52:07 I0907 03:52:07.710748 2350 provisioner.cpp:298] Using default backend 'overlay' 03:52:07 I0907 03:52:07.711236 2350 cluster.cpp:485] Creating default 'local' authorizer 03:52:07 I0907 03:52:07.711629 2376 slave.cpp:267] Mesos agent started on (90)@172.16.10.27:45074 03:52:07 I0907 03:52:07.711647 2376 slave.cpp:268] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/store/appc"""" --authenticate_http_executors=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authentication_timeout_max=""""1mins"""" --authentication_timeout_min=""""5secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_destroy_timeout=""""1mins"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/credential"""" --default_role=""""*"""" --disallow_sharing_agent_pid_namespace=""""false"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_reregistration_timeout=""""2secs"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/fetch"""" --fetcher_cache_size=""""2GB"""" --fetcher_stall_timeout=""""1mins"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --gc_non_executor_container_sandboxes=""""true"""" 
--help=""""false"""" --hostname_lookup=""""true"""" --http_command_executor=""""false"""" --http_credentials=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/http_credentials"""" --http_heartbeat_interval=""""30secs"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --jwt_secret_key=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/jwt_secret_key"""" --launcher=""""linux"""" --launcher_dir=""""/home/ubuntu/workspace/mesos/Mesos_CI-build/FLAG/SSL/label/mesos-ec2-ubuntu-16.04/mesos/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --memory_profiling=""""false"""" --network_cni_metrics=""""true"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --port=""""5051"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --reconfiguration_policy=""""equal"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv"""" --zk_session_timeout=""""10secs"""" 03:52:07 I0907 03:52:07.711815 2376 credentials.hpp:86] Loading credential for authentication from '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/credential' 03:52:07 I0907 03:52:07.711861 2376 slave.cpp:300] Agent using credential for: test-principal 03:52:07 I0907 03:52:07.711872 2376 credentials.hpp:37] Loading credentials for authentication from '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/http_credentials' 03:52:07 I0907 03:52:07.711936 2376 http.cpp:1037] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-executor' 03:52:07 I0907 03:52:07.711989 2376 http.cpp:1058] Creating default 'jwt' HTTP authenticator for realm 'mesos-agent-executor' 03:52:07 I0907 03:52:07.712060 2376 http.cpp:1037] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' 03:52:07 I0907 03:52:07.712100 2376 http.cpp:1058] Creating default 'jwt' HTTP authenticator for realm 'mesos-agent-readonly' 03:52:07 I0907 03:52:07.712137 2376 http.cpp:1037] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' 03:52:07 I0907 03:52:07.712158 2376 http.cpp:1058] Creating default 'jwt' HTTP authenticator for realm 'mesos-agent-readwrite' 03:52:07 I0907 03:52:07.712229 2376 disk_profile_adaptor.cpp:80] Creating default disk profile adaptor module 03:52:07 I0907 03:52:07.712790 2376 slave.cpp:615] Agent resources: [{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":2.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] 03:52:07 I0907 03:52:07.712839 2376 slave.cpp:623] Agent 
attributes: [ ] 03:52:07 I0907 03:52:07.712848 2376 slave.cpp:632] Agent hostname: ip-172-16-10-27.ec2.internal 03:52:07 I0907 03:52:07.712952 2370 task_status_update_manager.cpp:181] Pausing sending task status updates 03:52:07 I0907 03:52:07.713099 2350 scheduler.cpp:189] Version: 1.8.0 03:52:07 I0907 03:52:07.713196 2376 state.cpp:66] Recovering state from '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta' 03:52:07 I0907 03:52:07.713269 2373 slave.cpp:6909] Finished recovering checkpointed state from '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta', beginning agent recovery 03:52:07 I0907 03:52:07.713306 2373 task_status_update_manager.cpp:207] Recovering task status update manager 03:52:07 I0907 03:52:07.713399 2373 containerizer.cpp:727] Recovering Mesos containers 03:52:07 I0907 03:52:07.713490 2373 linux_launcher.cpp:286] Recovering Linux launcher 03:52:07 I0907 03:52:07.713629 2373 containerizer.cpp:1053] Recovering isolators 03:52:07 I0907 03:52:07.713811 2374 scheduler.cpp:355] Using default 'basic' HTTP authenticatee 03:52:07 I0907 03:52:07.713838 2372 containerizer.cpp:1092] Recovering provisioner 03:52:07 I0907 03:52:07.713896 2374 scheduler.cpp:538] New master detected at master@172.16.10.27:45074 03:52:07 I0907 03:52:07.713909 2374 scheduler.cpp:547] Waiting for 0ns before initiating a re-(connection) attempt with the master 03:52:07 I0907 03:52:07.713941 2372 provisioner.cpp:494] Provisioner recovery complete 03:52:07 I0907 03:52:07.714123 2371 composing.cpp:339] Finished recovering all containerizers 03:52:07 I0907 03:52:07.714171 2371 slave.cpp:7138] Recovering executors 03:52:07 I0907 03:52:07.714198 2371 slave.cpp:7291] Finished recovery 03:52:07 I0907 03:52:07.714467 2371 slave.cpp:1254] New master detected at master@172.16.10.27:45074 03:52:07 I0907 03:52:07.714491 2371 slave.cpp:1319] Detecting new master 03:52:07 I0907 03:52:07.714519 2371 task_status_update_manager.cpp:181] Pausing sending task status updates 03:52:07 I0907 03:52:07.714651 2373 scheduler.cpp:429] Connected with the master at http://172.16.10.27:45074/master/api/v1/scheduler 03:52:07 I0907 03:52:07.714872 2373 scheduler.cpp:248] Sending SUBSCRIBE call to http://172.16.10.27:45074/master/api/v1/scheduler 03:52:07 I0907 03:52:07.715217 2376 process.cpp:3569] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler' 03:52:07 I0907 03:52:07.715551 2374 http.cpp:1177] HTTP POST for /master/api/v1/scheduler from 172.16.10.27:41612 03:52:07 I0907 03:52:07.715626 2374 master.cpp:2502] Received subscription request for HTTP framework 'default' 03:52:07 I0907 03:52:07.715656 2374 master.cpp:2155] Authorizing framework principal 'test-principal' to receive offers for roles '{ * }' 03:52:07 I0907 03:52:07.715811 2372 master.cpp:2637] Subscribing framework 'default' with checkpointing enabled and capabilities [ MULTI_ROLE, RESERVATION_REFINEMENT ] 03:52:07 I0907 03:52:07.716225 2372 master.cpp:9883] Adding framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) with roles { } suppressed 03:52:07 I0907 03:52:07.716414 2374 hierarchical.cpp:306] Added framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.716454 2374 hierarchical.cpp:1564] Performed allocation for 0 agents in 6701ns 03:52:07 I0907 03:52:07.716715 2375 scheduler.cpp:845] Enqueuing event SUBSCRIBED received from http://172.16.10.27:45074/master/api/v1/scheduler 03:52:07 I0907 03:52:07.716856 2373 scheduler.cpp:845] Enqueuing event 
HEARTBEAT received from http://172.16.10.27:45074/master/api/v1/scheduler 03:52:07 I0907 03:52:07.719863 2373 slave.cpp:1346] Authenticating with master master@172.16.10.27:45074 03:52:07 I0907 03:52:07.719892 2373 slave.cpp:1355] Using default CRAM-MD5 authenticatee 03:52:07 I0907 03:52:07.719959 2375 authenticatee.cpp:121] Creating new client SASL connection 03:52:07 I0907 03:52:07.720403 2375 master.cpp:9653] Authenticating slave(90)@172.16.10.27:45074 03:52:07 I0907 03:52:07.720453 2375 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(200)@172.16.10.27:45074 03:52:07 I0907 03:52:07.720512 2375 authenticator.cpp:98] Creating new server SASL connection 03:52:07 I0907 03:52:07.720927 2375 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 03:52:07 I0907 03:52:07.720947 2375 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' 03:52:07 I0907 03:52:07.720979 2375 authenticator.cpp:204] Received SASL authentication start 03:52:07 I0907 03:52:07.721015 2375 authenticator.cpp:326] Authentication requires more steps 03:52:07 I0907 03:52:07.721052 2375 authenticatee.cpp:259] Received SASL authentication step 03:52:07 I0907 03:52:07.721097 2375 authenticator.cpp:232] Received SASL authentication step 03:52:07 I0907 03:52:07.721117 2375 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-27.ec2.internal' server FQDN: 'ip-172-16-10-27.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 03:52:07 I0907 03:52:07.721127 2375 auxprop.cpp:181] Looking up auxiliary property '*userPassword' 03:52:07 I0907 03:52:07.721137 2375 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' 03:52:07 I0907 03:52:07.721144 2375 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-27.ec2.internal' server FQDN: 'ip-172-16-10-27.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 03:52:07 I0907 03:52:07.721151 2375 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true 03:52:07 I0907 03:52:07.721158 2375 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true 03:52:07 I0907 03:52:07.721169 2375 authenticator.cpp:318] Authentication success 03:52:07 I0907 03:52:07.721215 2370 authenticatee.cpp:299] Authentication success 03:52:07 I0907 03:52:07.721282 2370 slave.cpp:1446] Successfully authenticated with master master@172.16.10.27:45074 03:52:07 I0907 03:52:07.721366 2370 slave.cpp:1877] Will retry registration in 14.537761ms if necessary 03:52:07 I0907 03:52:07.721418 2375 master.cpp:9685] Successfully authenticated principal 'test-principal' at slave(90)@172.16.10.27:45074 03:52:07 I0907 03:52:07.721427 2374 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(200)@172.16.10.27:45074 03:52:07 I0907 03:52:07.721469 2375 master.cpp:6605] Received register agent message from slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:07 I0907 03:52:07.721518 2375 master.cpp:3964] Authorizing agent providing resources 'cpus:2; mem:1024; disk:1024; ports:[31000-32000]' with principal 'test-principal' 03:52:07 I0907 03:52:07.721632 2373 master.cpp:6672] Authorized registration of agent at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:07 I0907 03:52:07.721666 2373 master.cpp:6787] Registering agent 
at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) with id 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 03:52:07 I0907 03:52:07.721783 2373 registrar.cpp:487] Applied 1 operations in 28796ns; attempting to update the registry 03:52:07 I0907 03:52:07.721932 2370 registrar.cpp:544] Successfully updated the registry in 124928ns 03:52:07 I0907 03:52:07.721992 2370 master.cpp:6835] Admitted agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:07 I0907 03:52:07.722132 2370 master.cpp:6880] Registered agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] 03:52:07 I0907 03:52:07.722203 2374 hierarchical.cpp:601] Added agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 (ip-172-16-10-27.ec2.internal) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] (allocated: {}) 03:52:07 I0907 03:52:07.722205 2370 slave.cpp:1479] Registered with master master@172.16.10.27:45074; given agent ID 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 03:52:07 I0907 03:52:07.722396 2370 slave.cpp:1499] Checkpointing SlaveInfo to '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/slave.info' 03:52:07 I0907 03:52:07.722447 2374 hierarchical.cpp:1564] Performed allocation for 1 agents in 173395ns 03:52:07 I0907 03:52:07.722482 2374 task_status_update_manager.cpp:188] Resuming sending task status updates 03:52:07 I0907 03:52:07.722573 2374 master.cpp:9468] Sending offers [ 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-O0 ] to framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) 03:52:07 I0907 03:52:07.722651 2370 slave.cpp:1548] Forwarding agent update {""""operations"""":{},""""resource_version_uuid"""":{""""value"""":""""V1oIFlhHRv2xxYxsF9hCkQ==""""},""""slave_id"""":{""""value"""":""""8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0""""},""""update_oversubscribed_resources"""":false} 03:52:07 I0907 03:52:07.722790 2374 master.cpp:7939] Ignoring update on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) as it reports no changes 03:52:07 I0907 03:52:07.723096 2375 scheduler.cpp:845] Enqueuing event OFFERS received from http://172.16.10.27:45074/master/api/v1/scheduler 03:52:07 I0907 03:52:07.723731 2369 scheduler.cpp:248] Sending ACCEPT call to http://172.16.10.27:45074/master/api/v1/scheduler 03:52:07 I0907 03:52:07.724097 2371 process.cpp:3569] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler' 03:52:07 I0907 03:52:07.724397 2372 http.cpp:1177] HTTP POST for /master/api/v1/scheduler from 172.16.10.27:41610 03:52:07 I0907 03:52:07.724557 2372 master.cpp:11462] Removing offer 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-O0 03:52:07 I0907 03:52:07.724704 2372 master.cpp:4467] Processing ACCEPT call for offers: [ 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-O0 ] on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) for framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) 03:52:07 I0907 03:52:07.724750 2372 master.cpp:3541] Authorizing framework principal 'test-principal' to launch task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f 03:52:07 I0907 03:52:07.724932 2372 master.cpp:3541] Authorizing framework principal 'test-principal' to launch task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd 03:52:07 I0907 03:52:07.725594 2373 master.cpp:12209] Adding task 
2e8c13b6-fa45-4e3c-89cd-398a5abc192f with resources cpus(allocated: *):0.1; mem(allocated: *):32; disk(allocated: *):32 on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:07 I0907 03:52:07.725653 2373 master.cpp:5663] Launching task group { 2e8c13b6-fa45-4e3c-89cd-398a5abc192f } of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) with resources cpus(allocated: *):0.1; mem(allocated: *):32; disk(allocated: *):32 on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) on new executor 03:52:07 I0907 03:52:07.725885 2374 slave.cpp:2014] Got assigned task group containing tasks [ 2e8c13b6-fa45-4e3c-89cd-398a5abc192f ] for framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.725953 2374 slave.cpp:8908] Checkpointing FrameworkInfo to '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/framework.info' 03:52:07 I0907 03:52:07.726119 2373 master.cpp:12209] Adding task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd with resources cpus(allocated: *):0.1; mem(allocated: *):32; disk(allocated: *):32 on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:07 I0907 03:52:07.726174 2373 master.cpp:5663] Launching task group { c6c81339-65c6-4f86-b0ab-c5be60ea5fbd } of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) with resources cpus(allocated: *):0.1; mem(allocated: *):32; disk(allocated: *):32 on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) on existing executor 03:52:07 I0907 03:52:07.726406 2372 hierarchical.cpp:1236] Recovered cpus(allocated: *):1.7; mem(allocated: *):928; disk(allocated: *):928; ports(allocated: *):[31000-32000] (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000], allocated: cpus(allocated: *):0.3; mem(allocated: *):96; disk(allocated: *):96) on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 from framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.726444 2372 hierarchical.cpp:1282] Framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 filtered agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 for 5secs 03:52:07 I0907 03:52:07.726583 2374 slave.cpp:8919] Checkpointing framework pid '@0.0.0.0:0' to '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/framework.pid' 03:52:07 I0907 03:52:07.727038 2374 slave.cpp:2014] Got assigned task group containing tasks [ c6c81339-65c6-4f86-b0ab-c5be60ea5fbd ] for framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.727283 2374 slave.cpp:2388] Authorizing task group containing tasks [ 2e8c13b6-fa45-4e3c-89cd-398a5abc192f ] for framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.727318 2374 slave.cpp:8466] Authorizing framework principal 'test-principal' to launch task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f 03:52:07 I0907 03:52:07.727609 2374 slave.cpp:2388] Authorizing task group containing tasks [ c6c81339-65c6-4f86-b0ab-c5be60ea5fbd ] for framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.727641 2374 slave.cpp:8466] Authorizing framework principal 'test-principal' to launch task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd 
03:52:07 I0907 03:52:07.728116 2374 slave.cpp:2831] Launching task group containing tasks [ 2e8c13b6-fa45-4e3c-89cd-398a5abc192f ] for framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.728165 2374 paths.cpp:752] Creating sandbox '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f' for user 'root' 03:52:07 I0907 03:52:07.728693 2374 slave.cpp:9694] Checkpointing ExecutorInfo to '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/executor.info' 03:52:07 I0907 03:52:07.728947 2374 paths.cpp:755] Creating sandbox '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f' 03:52:07 I0907 03:52:07.729185 2374 slave.cpp:8994] Launching executor 'default' of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 with resources [{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""cpus"""",""""scalar"""":{""""value"""":0.1},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""mem"""",""""scalar"""":{""""value"""":32.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""disk"""",""""scalar"""":{""""value"""":32.0},""""type"""":""""SCALAR""""}] in work directory '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f' 03:52:07 I0907 03:52:07.729526 2374 slave.cpp:9725] Checkpointing TaskInfo to '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/tasks/2e8c13b6-fa45-4e3c-89cd-398a5abc192f/task.info' 03:52:07 I0907 03:52:07.731062 2374 slave.cpp:3028] Queued task group containing tasks [ 2e8c13b6-fa45-4e3c-89cd-398a5abc192f ] for executor 'default' of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.731325 2374 slave.cpp:988] Successfully attached '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f' to virtual path '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/latest' 03:52:07 I0907 03:52:07.731355 2374 slave.cpp:988] Successfully attached '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f' to virtual path '/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/latest' 03:52:07 I0907 03:52:07.731370 2374 slave.cpp:988] Successfully attached 
'/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f' to virtual path '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f' 03:52:07 I0907 03:52:07.731529 2374 slave.cpp:3509] Launching container dbe02af2-3122-4f2e-9747-0c4343627c2f for executor 'default' of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.731864 2374 slave.cpp:2831] Launching task group containing tasks [ c6c81339-65c6-4f86-b0ab-c5be60ea5fbd ] for framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.731918 2374 slave.cpp:9725] Checkpointing TaskInfo to '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/tasks/c6c81339-65c6-4f86-b0ab-c5be60ea5fbd/task.info' 03:52:07 I0907 03:52:07.733171 2374 slave.cpp:3028] Queued task group containing tasks [ c6c81339-65c6-4f86-b0ab-c5be60ea5fbd ] for executor 'default' of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.733458 2373 containerizer.cpp:1280] Starting container dbe02af2-3122-4f2e-9747-0c4343627c2f 03:52:07 I0907 03:52:07.733788 2373 containerizer.cpp:1446] Checkpointed ContainerConfig at '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/containers/dbe02af2-3122-4f2e-9747-0c4343627c2f/config' 03:52:07 I0907 03:52:07.733808 2373 containerizer.cpp:3118] Transitioning the state of container dbe02af2-3122-4f2e-9747-0c4343627c2f from PROVISIONING to PREPARING 03:52:07 I0907 03:52:07.734777 2369 containerizer.cpp:1939] Launching 'mesos-containerizer' with flags '--help=""""false"""" 
--launch_info=""""{""""command"""":{""""arguments"""":[""""mesos-default-executor"""",""""--launcher_dir=/home/ubuntu/workspace/mesos/Mesos_CI-build/FLAG/SSL/label/mesos-ec2-ubuntu-16.04/mesos/build/src""""],""""shell"""":false,""""value"""":""""/home/ubuntu/workspace/mesos/Mesos_CI-build/FLAG/SSL/label/mesos-ec2-ubuntu-16.04/mesos/build/src/mesos-default-executor""""},""""environment"""":{""""variables"""":[{""""name"""":""""LIBPROCESS_PORT"""",""""type"""":""""VALUE"""",""""value"""":""""0""""},{""""name"""":""""MESOS_AGENT_ENDPOINT"""",""""type"""":""""VALUE"""",""""value"""":""""172.16.10.27:45074""""},{""""name"""":""""MESOS_CHECKPOINT"""",""""type"""":""""VALUE"""",""""value"""":""""1""""},{""""name"""":""""MESOS_DIRECTORY"""",""""type"""":""""VALUE"""",""""value"""":""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f""""},{""""name"""":""""MESOS_EXECUTOR_AUTHENTICATION_TOKEN"""",""""type"""":""""VALUE"""",""""value"""":""""eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJjaWQiOiJkYmUwMmFmMi0zMTIyLTRmMmUtOTc0Ny0wYzQzNDM2MjdjMmYiLCJlaWQiOiJkZWZhdWx0IiwiZmlkIjoiOGU5ZDk3ZjYtNGRjNC00OTBiLTgxZjYtZDIwMzNlMjEwOWQzLTAwMDAifQ.Ww__Iwo_c3fJl_ruqYdi_EePl81IKoIQJv74nq6pHl8""""},{""""name"""":""""MESOS_EXECUTOR_ID"""",""""type"""":""""VALUE"""",""""value"""":""""default""""},{""""name"""":""""MESOS_EXECUTOR_SHUTDOWN_GRACE_PERIOD"""",""""type"""":""""VALUE"""",""""value"""":""""5secs""""},{""""name"""":""""MESOS_FRAMEWORK_ID"""",""""type"""":""""VALUE"""",""""value"""":""""8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000""""},{""""name"""":""""MESOS_HTTP_COMMAND_EXECUTOR"""",""""type"""":""""VALUE"""",""""value"""":""""0""""},{""""name"""":""""MESOS_RECOVERY_TIMEOUT"""",""""type"""":""""VALUE"""",""""value"""":""""15mins""""},{""""name"""":""""MESOS_SLAVE_ID"""",""""type"""":""""VALUE"""",""""value"""":""""8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0""""},{""""name"""":""""MESOS_SLAVE_PID"""",""""type"""":""""VALUE"""",""""value"""":""""slave(90)@172.16.10.27:45074""""},{""""name"""":""""MESOS_SUBSCRIPTION_BACKOFF_MAX"""",""""type"""":""""VALUE"""",""""value"""":""""2secs""""},{""""name"""":""""MESOS_SANDBOX"""",""""type"""":""""VALUE"""",""""value"""":""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f""""}]},""""task_environment"""":{},""""user"""":""""root"""",""""working_directory"""":""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f""""}"""" --pipe_read=""""15"""" --pipe_write=""""18"""" --runtime_directory=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/containers/dbe02af2-3122-4f2e-9747-0c4343627c2f"""" --unshare_namespace_mnt=""""false""""' 03:52:07 I0907 03:52:07.734957 2376 linux_launcher.cpp:492] Launching container dbe02af2-3122-4f2e-9747-0c4343627c2f and cloning with namespaces 03:52:07 I0907 03:52:07.754221 2369 containerizer.cpp:2044] Checkpointing container's forked pid 10086 to 
'/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/pids/forked.pid' 03:52:07 I0907 03:52:07.754623 2369 containerizer.cpp:3118] Transitioning the state of container dbe02af2-3122-4f2e-9747-0c4343627c2f from PREPARING to ISOLATING 03:52:07 I0907 03:52:07.755056 2369 containerizer.cpp:3118] Transitioning the state of container dbe02af2-3122-4f2e-9747-0c4343627c2f from ISOLATING to FETCHING 03:52:07 I0907 03:52:07.755112 2369 fetcher.cpp:369] Starting to fetch URIs for container: dbe02af2-3122-4f2e-9747-0c4343627c2f, directory: /tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f 03:52:07 I0907 03:52:07.755334 2369 containerizer.cpp:3118] Transitioning the state of container dbe02af2-3122-4f2e-9747-0c4343627c2f from FETCHING to RUNNING 03:52:07 I0907 03:52:07.856390 10100 executor.cpp:201] Version: 1.8.0 03:52:07 I0907 03:52:07.858984 2374 process.cpp:3569] Handling HTTP event for process 'slave(90)' with path: '/slave(90)/api/v1/executor' 03:52:07 I0907 03:52:07.859725 2373 http.cpp:1177] HTTP POST for /slave(90)/api/v1/executor from 172.16.10.27:41614 03:52:07 I0907 03:52:07.859807 2373 slave.cpp:4607] Received Subscribe request for HTTP executor 'default' of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.859856 2373 slave.cpp:4670] Creating a marker file for HTTP based executor 'default' of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (via HTTP) at path '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/http.marker' 03:52:07 I0907 03:52:07.860514 2373 slave.cpp:3282] Sending queued task group containing tasks [ 2e8c13b6-fa45-4e3c-89cd-398a5abc192f ] to executor 'default' of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (via HTTP) 03:52:07 I0907 03:52:07.860643 2373 slave.cpp:3282] Sending queued task group containing tasks [ c6c81339-65c6-4f86-b0ab-c5be60ea5fbd ] to executor 'default' of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (via HTTP) 03:52:07 I0907 03:52:07.861613 10117 default_executor.cpp:204] Received SUBSCRIBED event 03:52:07 I0907 03:52:07.861932 10117 default_executor.cpp:208] Subscribed executor on ip-172-16-10-27.ec2.internal 03:52:07 I0907 03:52:07.862021 10117 default_executor.cpp:204] Received LAUNCH_GROUP event 03:52:07 I0907 03:52:07.862232 10117 default_executor.cpp:204] Received LAUNCH_GROUP event 03:52:07 I0907 03:52:07.862555 10119 default_executor.cpp:428] Setting 'MESOS_CONTAINER_IP' to: 172.16.10.27 03:52:07 I0907 03:52:07.863678 2373 process.cpp:3569] Handling HTTP event for process 'slave(90)' with path: '/slave(90)/api/v1/executor' 03:52:07 I0907 03:52:07.863754 2373 process.cpp:3569] Handling HTTP event for process 'slave(90)' with path: '/slave(90)/api/v1' 03:52:07 I0907 03:52:07.864528 2373 http.cpp:1177] HTTP POST for /slave(90)/api/v1 from 172.16.10.27:41618 03:52:07 I0907 03:52:07.864616 2373 http.cpp:1177] HTTP POST for /slave(90)/api/v1/executor from 172.16.10.27:41616 03:52:07 I0907 03:52:07.864670 10121 default_executor.cpp:428] Setting 
'MESOS_CONTAINER_IP' to: 172.16.10.27 03:52:07 I0907 03:52:07.864679 2373 slave.cpp:5269] Handling status update TASK_STARTING (Status UUID: e10442b0-02ac-4479-9810-b270714ce3c9) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.864825 2373 http.cpp:2444] Processing LAUNCH_NESTED_CONTAINER call for container 'dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc' 03:52:07 I0907 03:52:07.865089 2376 task_status_update_manager.cpp:328] Received task status update TASK_STARTING (Status UUID: e10442b0-02ac-4479-9810-b270714ce3c9) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.865116 2376 task_status_update_manager.cpp:507] Creating StatusUpdate stream for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.865145 2373 containerizer.cpp:1242] Creating sandbox '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/dbc72eae-9465-4bf7-a082-0bf8a055fecc' for user 'root' 03:52:07 I0907 03:52:07.865350 2373 containerizer.cpp:1280] Starting container dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc 03:52:07 I0907 03:52:07.865348 2376 task_status_update_manager.cpp:842] Checkpointing UPDATE for task status update TASK_STARTING (Status UUID: e10442b0-02ac-4479-9810-b270714ce3c9) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.865442 2376 task_status_update_manager.cpp:383] Forwarding task status update TASK_STARTING (Status UUID: e10442b0-02ac-4479-9810-b270714ce3c9) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 to the agent 03:52:07 I0907 03:52:07.865520 2376 slave.cpp:5761] Forwarding the update TASK_STARTING (Status UUID: e10442b0-02ac-4479-9810-b270714ce3c9) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 to master@172.16.10.27:45074 03:52:07 I0907 03:52:07.865600 2376 slave.cpp:5654] Task status update manager successfully handled status update TASK_STARTING (Status UUID: e10442b0-02ac-4479-9810-b270714ce3c9) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.865655 2369 master.cpp:8375] Status update TASK_STARTING (Status UUID: e10442b0-02ac-4479-9810-b270714ce3c9) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 from agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:07 I0907 03:52:07.865679 2369 master.cpp:8432] Forwarding status update TASK_STARTING (Status UUID: e10442b0-02ac-4479-9810-b270714ce3c9) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.865785 2369 master.cpp:10932] Updating the state of task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (latest state: TASK_STARTING, status update state: TASK_STARTING) 03:52:07 I0907 03:52:07.866125 2372 scheduler.cpp:845] Enqueuing event UPDATE received from http://172.16.10.27:45074/master/api/v1/scheduler 
03:52:07 I0907 03:52:07.866276 2373 containerizer.cpp:1446] Checkpointed ContainerConfig at '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/containers/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/dbc72eae-9465-4bf7-a082-0bf8a055fecc/config' 03:52:07 I0907 03:52:07.866367 2375 scheduler.cpp:248] Sending ACKNOWLEDGE call to http://172.16.10.27:45074/master/api/v1/scheduler 03:52:07 I0907 03:52:07.866674 2372 process.cpp:3569] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler' 03:52:07 I0907 03:52:07.866292 2373 containerizer.cpp:3118] Transitioning the state of container dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc from PROVISIONING to PREPARING 03:52:07 I0907 03:52:07.867278 2375 containerizer.cpp:1939] Launching 'mesos-containerizer' with flags '--help=""""false"""" --launch_info=""""{""""command"""":{""""shell"""":true,""""value"""":""""sleep 1000""""},""""environment"""":{""""variables"""":[{""""name"""":""""MESOS_SANDBOX"""",""""type"""":""""VALUE"""",""""value"""":""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/dbc72eae-9465-4bf7-a082-0bf8a055fecc""""},{""""name"""":""""MESOS_CONTAINER_IP"""",""""type"""":""""VALUE"""",""""value"""":""""172.16.10.27""""}]},""""task_environment"""":{},""""user"""":""""root"""",""""working_directory"""":""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/dbc72eae-9465-4bf7-a082-0bf8a055fecc""""}"""" --pipe_read=""""22"""" --pipe_write=""""23"""" --runtime_directory=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/containers/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/dbc72eae-9465-4bf7-a082-0bf8a055fecc"""" --unshare_namespace_mnt=""""false""""' 03:52:07 I0907 03:52:07.867460 2374 linux_launcher.cpp:492] Launching nested container dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc and cloning with namespaces 03:52:07 I0907 03:52:07.869093 10119 default_executor.cpp:204] Received ACKNOWLEDGED event 03:52:07 I0907 03:52:07.869769 2373 process.cpp:3569] Handling HTTP event for process 'slave(90)' with path: '/slave(90)/api/v1' 03:52:07 I0907 03:52:07.869877 2373 process.cpp:3569] Handling HTTP event for process 'slave(90)' with path: '/slave(90)/api/v1/executor' 03:52:07 I0907 03:52:07.870610 2370 http.cpp:1177] HTTP POST for /slave(90)/api/v1 from 172.16.10.27:41620 03:52:07 I0907 03:52:07.870734 2370 http.cpp:2444] Processing LAUNCH_NESTED_CONTAINER call for container 'dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c' 03:52:07 I0907 03:52:07.886466 2375 containerizer.cpp:3118] Transitioning the state of container dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc from PREPARING to ISOLATING 03:52:07 I0907 03:52:07.886638 2375 containerizer.cpp:1242] Creating sandbox '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/05a1238c-5190-476a-945f-4d8f1225e45c' for user 'root' 
03:52:07 I0907 03:52:07.886797 2375 containerizer.cpp:1280] Starting container dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c 03:52:07 I0907 03:52:07.887120 2375 containerizer.cpp:1446] Checkpointed ContainerConfig at '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/containers/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/05a1238c-5190-476a-945f-4d8f1225e45c/config' 03:52:07 I0907 03:52:07.887145 2375 containerizer.cpp:3118] Transitioning the state of container dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c from PROVISIONING to PREPARING 03:52:07 I0907 03:52:07.887554 2375 containerizer.cpp:3118] Transitioning the state of container dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc from ISOLATING to FETCHING 03:52:07 I0907 03:52:07.887653 2375 fetcher.cpp:369] Starting to fetch URIs for container: dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc, directory: /tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/dbc72eae-9465-4bf7-a082-0bf8a055fecc 03:52:07 I0907 03:52:07.888100 2375 containerizer.cpp:1939] Launching 'mesos-containerizer' with flags '--help=""""false"""" --launch_info=""""{""""command"""":{""""shell"""":true,""""value"""":""""exit 0""""},""""environment"""":{""""variables"""":[{""""name"""":""""MESOS_SANDBOX"""",""""type"""":""""VALUE"""",""""value"""":""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/05a1238c-5190-476a-945f-4d8f1225e45c""""},{""""name"""":""""MESOS_CONTAINER_IP"""",""""type"""":""""VALUE"""",""""value"""":""""172.16.10.27""""}]},""""task_environment"""":{},""""user"""":""""root"""",""""working_directory"""":""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/05a1238c-5190-476a-945f-4d8f1225e45c""""}"""" --pipe_read=""""24"""" --pipe_write=""""25"""" --runtime_directory=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/containers/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/05a1238c-5190-476a-945f-4d8f1225e45c"""" --unshare_namespace_mnt=""""false""""' 03:52:07 I0907 03:52:07.888273 2371 linux_launcher.cpp:492] Launching nested container dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c and cloning with namespaces 03:52:07 I0907 03:52:07.893172 2375 containerizer.cpp:3118] Transitioning the state of container dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c from PREPARING to ISOLATING 03:52:07 I0907 03:52:07.893383 2375 containerizer.cpp:3118] Transitioning the state of container dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc from FETCHING to RUNNING 03:52:07 I0907 03:52:07.893857 2375 containerizer.cpp:3118] Transitioning the state of container dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c from ISOLATING to FETCHING 03:52:07 I0907 03:52:07.894094 2375 fetcher.cpp:369] Starting to fetch URIs for container: 
dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c, directory: /tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/05a1238c-5190-476a-945f-4d8f1225e45c 03:52:07 I0907 03:52:07.894352 2375 containerizer.cpp:3118] Transitioning the state of container dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c from FETCHING to RUNNING 03:52:07 I0907 03:52:07.896064 10116 default_executor.cpp:663] Finished launching tasks [ 2e8c13b6-fa45-4e3c-89cd-398a5abc192f ] in child containers [ dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc ] 03:52:07 I0907 03:52:07.896095 10116 default_executor.cpp:687] Waiting on child containers of tasks [ 2e8c13b6-fa45-4e3c-89cd-398a5abc192f ] 03:52:07 I0907 03:52:07.896347 10116 default_executor.cpp:663] Finished launching tasks [ c6c81339-65c6-4f86-b0ab-c5be60ea5fbd ] in child containers [ dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c ] 03:52:07 I0907 03:52:07.896368 10116 default_executor.cpp:687] Waiting on child containers of tasks [ c6c81339-65c6-4f86-b0ab-c5be60ea5fbd ] 03:52:07 I0907 03:52:07.896908 10118 default_executor.cpp:748] Waiting for child container dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc of task '2e8c13b6-fa45-4e3c-89cd-398a5abc192f' 03:52:07 I0907 03:52:07.896988 10118 default_executor.cpp:748] Waiting for child container dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c of task 'c6c81339-65c6-4f86-b0ab-c5be60ea5fbd' 03:52:07 I0907 03:52:07.897475 2374 process.cpp:3569] Handling HTTP event for process 'slave(90)' with path: '/slave(90)/api/v1' 03:52:07 I0907 03:52:07.897568 2374 process.cpp:3569] Handling HTTP event for process 'slave(90)' with path: '/slave(90)/api/v1' 03:52:07 I0907 03:52:07.898296 2374 http.cpp:1177] HTTP POST for /slave(90)/api/v1 from 172.16.10.27:41622 03:52:07 I0907 03:52:07.898389 2374 http.cpp:1177] HTTP POST for /slave(90)/api/v1 from 172.16.10.27:41624 03:52:07 I0907 03:52:07.898488 2374 http.cpp:2679] Processing WAIT_NESTED_CONTAINER call for container 'dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc' 03:52:07 I0907 03:52:07.898587 2374 http.cpp:2679] Processing WAIT_NESTED_CONTAINER call for container 'dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c' 03:52:07 I0907 03:52:07.906261 2374 process.cpp:3569] Handling HTTP event for process 'slave(90)' with path: '/slave(90)/api/v1/executor' 03:52:07 I0907 03:52:07.906337 2374 process.cpp:3569] Handling HTTP event for process 'slave(90)' with path: '/slave(90)/api/v1/executor' 03:52:07 I0907 03:52:07.906690 2374 http.cpp:1177] HTTP POST for /master/api/v1/scheduler from 172.16.10.27:41610 03:52:07 I0907 03:52:07.906752 2374 master.cpp:6241] Processing ACKNOWLEDGE call for status e10442b0-02ac-4479-9810-b270714ce3c9 for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 03:52:07 I0907 03:52:07.906955 2374 task_status_update_manager.cpp:401] Received task status update acknowledgement (UUID: e10442b0-02ac-4479-9810-b270714ce3c9) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.906989 2374 
task_status_update_manager.cpp:842] Checkpointing ACK for task status update TASK_STARTING (Status UUID: e10442b0-02ac-4479-9810-b270714ce3c9) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.907456 2374 slave.cpp:4505] Task status update manager successfully handled status update acknowledgement (UUID: e10442b0-02ac-4479-9810-b270714ce3c9) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.908149 2376 http.cpp:1177] HTTP POST for /slave(90)/api/v1/executor from 172.16.10.27:41616 03:52:07 I0907 03:52:07.908223 2376 slave.cpp:5269] Handling status update TASK_STARTING (Status UUID: 8dc5aae0-605b-42a0-a597-014404ae1bde) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.908481 2376 http.cpp:1177] HTTP POST for /slave(90)/api/v1/executor from 172.16.10.27:41616 03:52:07 I0907 03:52:07.908550 2376 slave.cpp:5269] Handling status update TASK_RUNNING (Status UUID: 898e5c8a-fb6a-47e6-8732-9c088c8986ea) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.908641 2376 http.cpp:1177] HTTP POST for /slave(90)/api/v1/executor from 172.16.10.27:41616 03:52:07 I0907 03:52:07.908699 2376 slave.cpp:5269] Handling status update TASK_RUNNING (Status UUID: fa82b142-8506-416c-9f9a-5d5a0b0f5906) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.909559 2376 task_status_update_manager.cpp:328] Received task status update TASK_STARTING (Status UUID: 8dc5aae0-605b-42a0-a597-014404ae1bde) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.909593 2376 task_status_update_manager.cpp:507] Creating StatusUpdate stream for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.909816 2376 task_status_update_manager.cpp:842] Checkpointing UPDATE for task status update TASK_STARTING (Status UUID: 8dc5aae0-605b-42a0-a597-014404ae1bde) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.909917 2376 task_status_update_manager.cpp:383] Forwarding task status update TASK_STARTING (Status UUID: 8dc5aae0-605b-42a0-a597-014404ae1bde) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 to the agent 03:52:07 I0907 03:52:07.910491 2369 task_status_update_manager.cpp:328] Received task status update TASK_RUNNING (Status UUID: 898e5c8a-fb6a-47e6-8732-9c088c8986ea) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.910521 2369 task_status_update_manager.cpp:842] Checkpointing UPDATE for task status update TASK_RUNNING (Status UUID: 898e5c8a-fb6a-47e6-8732-9c088c8986ea) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.910598 2369 task_status_update_manager.cpp:383] Forwarding task status update TASK_RUNNING (Status UUID: 898e5c8a-fb6a-47e6-8732-9c088c8986ea) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 to the agent 03:52:07 I0907 03:52:07.910706 2373 task_status_update_manager.cpp:328] Received task status update 
TASK_RUNNING (Status UUID: fa82b142-8506-416c-9f9a-5d5a0b0f5906) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.910732 2373 task_status_update_manager.cpp:842] Checkpointing UPDATE for task status update TASK_RUNNING (Status UUID: fa82b142-8506-416c-9f9a-5d5a0b0f5906) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.910827 2376 slave.cpp:5761] Forwarding the update TASK_STARTING (Status UUID: 8dc5aae0-605b-42a0-a597-014404ae1bde) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 to master@172.16.10.27:45074 03:52:07 I0907 03:52:07.910946 2372 master.cpp:8375] Status update TASK_STARTING (Status UUID: 8dc5aae0-605b-42a0-a597-014404ae1bde) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 from agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:07 I0907 03:52:07.910979 2372 master.cpp:8432] Forwarding status update TASK_STARTING (Status UUID: 8dc5aae0-605b-42a0-a597-014404ae1bde) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.911111 2372 master.cpp:10932] Updating the state of task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (latest state: TASK_RUNNING, status update state: TASK_STARTING) 03:52:07 I0907 03:52:07.911465 2375 scheduler.cpp:845] Enqueuing event UPDATE received from http://172.16.10.27:45074/master/api/v1/scheduler 03:52:07 I0907 03:52:07.911682 2375 scheduler.cpp:248] Sending ACKNOWLEDGE call to http://172.16.10.27:45074/master/api/v1/scheduler 03:52:07 I0907 03:52:07.912065 2371 process.cpp:3569] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler' 03:52:07 I0907 03:52:07.912154 2376 slave.cpp:5654] Task status update manager successfully handled status update TASK_STARTING (Status UUID: 8dc5aae0-605b-42a0-a597-014404ae1bde) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.912531 10119 default_executor.cpp:204] Received ACKNOWLEDGED event 03:52:07 I0907 03:52:07.912585 2376 slave.cpp:5761] Forwarding the update TASK_RUNNING (Status UUID: 898e5c8a-fb6a-47e6-8732-9c088c8986ea) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 to master@172.16.10.27:45074 03:52:07 I0907 03:52:07.912716 2373 master.cpp:8375] Status update TASK_RUNNING (Status UUID: 898e5c8a-fb6a-47e6-8732-9c088c8986ea) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 from agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:07 I0907 03:52:07.912746 2373 master.cpp:8432] Forwarding status update TASK_RUNNING (Status UUID: 898e5c8a-fb6a-47e6-8732-9c088c8986ea) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.912868 2373 master.cpp:10932] Updating the state of task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) 03:52:07 I0907 03:52:07.913197 2370 scheduler.cpp:845] Enqueuing event UPDATE received from 
http://172.16.10.27:45074/master/api/v1/scheduler 03:52:07 I0907 03:52:07.913413 2370 scheduler.cpp:248] Sending ACKNOWLEDGE call to http://172.16.10.27:45074/master/api/v1/scheduler 03:52:07 I0907 03:52:07.913676 2376 slave.cpp:5654] Task status update manager successfully handled status update TASK_RUNNING (Status UUID: 898e5c8a-fb6a-47e6-8732-9c088c8986ea) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.914064 10119 default_executor.cpp:204] Received ACKNOWLEDGED event 03:52:07 I0907 03:52:07.914105 2376 slave.cpp:5654] Task status update manager successfully handled status update TASK_RUNNING (Status UUID: fa82b142-8506-416c-9f9a-5d5a0b0f5906) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.914460 10120 default_executor.cpp:204] Received ACKNOWLEDGED event 03:52:07 I0907 03:52:07.950346 2371 process.cpp:3569] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler' 03:52:07 I0907 03:52:07.950747 2371 http.cpp:1177] HTTP POST for /master/api/v1/scheduler from 172.16.10.27:41610 03:52:07 I0907 03:52:07.950814 2371 master.cpp:6241] Processing ACKNOWLEDGE call for status 8dc5aae0-605b-42a0-a597-014404ae1bde for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 03:52:07 I0907 03:52:07.950927 2371 http.cpp:1177] HTTP POST for /master/api/v1/scheduler from 172.16.10.27:41610 03:52:07 I0907 03:52:07.950971 2371 master.cpp:6241] Processing ACKNOWLEDGE call for status 898e5c8a-fb6a-47e6-8732-9c088c8986ea for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 03:52:07 I0907 03:52:07.951253 2371 task_status_update_manager.cpp:401] Received task status update acknowledgement (UUID: 8dc5aae0-605b-42a0-a597-014404ae1bde) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.951315 2371 task_status_update_manager.cpp:842] Checkpointing ACK for task status update TASK_STARTING (Status UUID: 8dc5aae0-605b-42a0-a597-014404ae1bde) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.951396 2371 task_status_update_manager.cpp:383] Forwarding task status update TASK_RUNNING (Status UUID: fa82b142-8506-416c-9f9a-5d5a0b0f5906) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 to the agent 03:52:07 I0907 03:52:07.951472 2371 task_status_update_manager.cpp:401] Received task status update acknowledgement (UUID: 898e5c8a-fb6a-47e6-8732-9c088c8986ea) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.951519 2371 task_status_update_manager.cpp:842] Checkpointing ACK for task status update TASK_RUNNING (Status UUID: 898e5c8a-fb6a-47e6-8732-9c088c8986ea) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.951596 2371 slave.cpp:5761] Forwarding the update TASK_RUNNING (Status UUID: fa82b142-8506-416c-9f9a-5d5a0b0f5906) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 to master@172.16.10.27:45074 03:52:07 I0907 03:52:07.951683 2371 slave.cpp:4505] Task 
status update manager successfully handled status update acknowledgement (UUID: 8dc5aae0-605b-42a0-a597-014404ae1bde) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.951723 2371 slave.cpp:4505] Task status update manager successfully handled status update acknowledgement (UUID: 898e5c8a-fb6a-47e6-8732-9c088c8986ea) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.951797 2371 master.cpp:8375] Status update TASK_RUNNING (Status UUID: fa82b142-8506-416c-9f9a-5d5a0b0f5906) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 from agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:07 I0907 03:52:07.951818 2371 master.cpp:8432] Forwarding status update TASK_RUNNING (Status UUID: fa82b142-8506-416c-9f9a-5d5a0b0f5906) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:07 I0907 03:52:07.951963 2371 master.cpp:10932] Updating the state of task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING) 03:52:07 I0907 03:52:07.952499 2372 scheduler.cpp:845] Enqueuing event UPDATE received from http://172.16.10.27:45074/master/api/v1/scheduler 03:52:07 I0907 03:52:07.952716 2372 scheduler.cpp:248] Sending ACKNOWLEDGE call to http://172.16.10.27:45074/master/api/v1/scheduler 03:52:07 I0907 03:52:07.953107 2369 process.cpp:3569] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler' 03:52:08 I0907 03:52:07.990520 2371 http.cpp:1177] HTTP POST for /master/api/v1/scheduler from 172.16.10.27:41610 03:52:08 I0907 03:52:07.990584 2371 master.cpp:6241] Processing ACKNOWLEDGE call for status fa82b142-8506-416c-9f9a-5d5a0b0f5906 for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 03:52:08 I0907 03:52:07.990727 2374 task_status_update_manager.cpp:401] Received task status update acknowledgement (UUID: fa82b142-8506-416c-9f9a-5d5a0b0f5906) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:07.990777 2374 task_status_update_manager.cpp:842] Checkpointing ACK for task status update TASK_RUNNING (Status UUID: fa82b142-8506-416c-9f9a-5d5a0b0f5906) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:07.990979 2373 slave.cpp:4505] Task status update manager successfully handled status update acknowledgement (UUID: fa82b142-8506-416c-9f9a-5d5a0b0f5906) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.056339 2371 containerizer.cpp:2957] Container dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c has exited 03:52:08 I0907 03:52:08.056361 2371 containerizer.cpp:2455] Destroying container dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c in RUNNING state 03:52:08 I0907 03:52:08.056371 2371 containerizer.cpp:3118] Transitioning the state of container dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c from RUNNING to DESTROYING 03:52:08 I0907 03:52:08.056445 2371 linux_launcher.cpp:580] Asked to destroy 
container dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c 03:52:08 I0907 03:52:08.056494 2371 linux_launcher.cpp:622] Destroying cgroup '/sys/fs/cgroup/freezer/mesos/dbe02af2-3122-4f2e-9747-0c4343627c2f/mesos/05a1238c-5190-476a-945f-4d8f1225e45c' 03:52:08 I0907 03:52:08.056720 2373 cgroups.cpp:2838] Freezing cgroup /sys/fs/cgroup/freezer/mesos/dbe02af2-3122-4f2e-9747-0c4343627c2f/mesos/05a1238c-5190-476a-945f-4d8f1225e45c 03:52:08 I0907 03:52:08.056849 2373 cgroups.cpp:1229] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/dbe02af2-3122-4f2e-9747-0c4343627c2f/mesos/05a1238c-5190-476a-945f-4d8f1225e45c after 104192ns 03:52:08 I0907 03:52:08.056985 2374 cgroups.cpp:2856] Thawing cgroup /sys/fs/cgroup/freezer/mesos/dbe02af2-3122-4f2e-9747-0c4343627c2f/mesos/05a1238c-5190-476a-945f-4d8f1225e45c 03:52:08 I0907 03:52:08.057072 2374 cgroups.cpp:1258] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/dbe02af2-3122-4f2e-9747-0c4343627c2f/mesos/05a1238c-5190-476a-945f-4d8f1225e45c after 59136ns 03:52:08 I0907 03:52:08.057595 2375 provisioner.cpp:597] Ignoring destroy request for unknown container dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c 03:52:08 I0907 03:52:08.057662 2375 containerizer.cpp:2747] Checkpointing termination state to nested container's runtime directory '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/containers/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/05a1238c-5190-476a-945f-4d8f1225e45c/termination' 03:52:08 I0907 03:52:08.057904 2376 gc.cpp:95] Scheduling '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/05a1238c-5190-476a-945f-4d8f1225e45c' for gc 6.99998775645926days in the future 03:52:08 I0907 03:52:08.058738 2369 process.cpp:3569] Handling HTTP event for process 'slave(90)' with path: '/slave(90)/api/v1/executor' 03:52:08 I0907 03:52:08.060922 10121 default_executor.cpp:955] Child container dbe02af2-3122-4f2e-9747-0c4343627c2f.05a1238c-5190-476a-945f-4d8f1225e45c of task 'c6c81339-65c6-4f86-b0ab-c5be60ea5fbd' completed in state TASK_FINISHED: Command exited with status 0 03:52:08 I0907 03:52:08.098583 2371 http.cpp:1177] HTTP POST for /slave(90)/api/v1/executor from 172.16.10.27:41616 03:52:08 I0907 03:52:08.098664 2371 slave.cpp:5269] Handling status update TASK_FINISHED (Status UUID: 7459155f-590d-4e56-ad85-9cd2c2f4ba40) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.099135 2373 task_status_update_manager.cpp:328] Received task status update TASK_FINISHED (Status UUID: 7459155f-590d-4e56-ad85-9cd2c2f4ba40) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.099171 2373 task_status_update_manager.cpp:842] Checkpointing UPDATE for task status update TASK_FINISHED (Status UUID: 7459155f-590d-4e56-ad85-9cd2c2f4ba40) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.099236 2373 task_status_update_manager.cpp:383] Forwarding task status update TASK_FINISHED (Status UUID: 7459155f-590d-4e56-ad85-9cd2c2f4ba40) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 to the agent 03:52:08 I0907 03:52:08.099304 2373 
slave.cpp:5761] Forwarding the update TASK_FINISHED (Status UUID: 7459155f-590d-4e56-ad85-9cd2c2f4ba40) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 to master@172.16.10.27:45074 03:52:08 I0907 03:52:08.099382 2373 slave.cpp:5654] Task status update manager successfully handled status update TASK_FINISHED (Status UUID: 7459155f-590d-4e56-ad85-9cd2c2f4ba40) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.099511 2373 master.cpp:8375] Status update TASK_FINISHED (Status UUID: 7459155f-590d-4e56-ad85-9cd2c2f4ba40) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 from agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:08 I0907 03:52:08.099537 2373 master.cpp:8432] Forwarding status update TASK_FINISHED (Status UUID: 7459155f-590d-4e56-ad85-9cd2c2f4ba40) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.099639 2373 master.cpp:10932] Updating the state of task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (latest state: TASK_FINISHED, status update state: TASK_FINISHED) 03:52:08 I0907 03:52:08.099834 2373 hierarchical.cpp:1236] Recovered cpus(allocated: *):0.1; mem(allocated: *):32; disk(allocated: *):32 (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000], allocated: cpus(allocated: *):0.2; mem(allocated: *):64; disk(allocated: *):64) on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 from framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.100028 10119 default_executor.cpp:204] Received ACKNOWLEDGED event 03:52:08 I0907 03:52:08.100237 2369 scheduler.cpp:845] Enqueuing event UPDATE received from http://172.16.10.27:45074/master/api/v1/scheduler 03:52:08 I0907 03:52:08.100456 2370 scheduler.cpp:248] Sending ACKNOWLEDGE call to http://172.16.10.27:45074/master/api/v1/scheduler 03:52:08 I0907 03:52:08.100777 2369 process.cpp:3569] Handling HTTP event for process 'master' with path: '/master/api/v1/scheduler' 03:52:08 I0907 03:52:08.138344 2370 http.cpp:1177] HTTP POST for /master/api/v1/scheduler from 172.16.10.27:41610 03:52:08 I0907 03:52:08.138397 2370 master.cpp:6241] Processing ACKNOWLEDGE call for status 7459155f-590d-4e56-ad85-9cd2c2f4ba40 for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 03:52:08 I0907 03:52:08.138432 2370 master.cpp:11030] Removing task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd with resources cpus(allocated: *):0.1; mem(allocated: *):32; disk(allocated: *):32 of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:08 I0907 03:52:08.138602 2369 task_status_update_manager.cpp:401] Received task status update acknowledgement (UUID: 7459155f-590d-4e56-ad85-9cd2c2f4ba40) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.138638 2369 task_status_update_manager.cpp:842] Checkpointing ACK for task status update TASK_FINISHED (Status UUID: 7459155f-590d-4e56-ad85-9cd2c2f4ba40) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 
03:52:08.138676 2369 task_status_update_manager.cpp:538] Cleaning up status update stream for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.138803 2369 slave.cpp:4505] Task status update manager successfully handled status update acknowledgement (UUID: 7459155f-590d-4e56-ad85-9cd2c2f4ba40) for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.138820 2369 slave.cpp:9651] Completing task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd 03:52:08 I0907 03:52:08.138900 2369 gc.cpp:95] Scheduling '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/tasks/c6c81339-65c6-4f86-b0ab-c5be60ea5fbd' for gc 6.99998681873778days in the future 03:52:08 I0907 03:52:08.139261 2350 slave.cpp:909] Agent terminating 03:52:08 I0907 03:52:08.139454 2371 master.cpp:1251] Agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) disconnected 03:52:08 I0907 03:52:08.139475 2371 master.cpp:3267] Disconnecting agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:08 I0907 03:52:08.139492 2371 master.cpp:3286] Deactivating agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(90)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:08 I0907 03:52:08.139536 2371 hierarchical.cpp:795] Agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 deactivated 03:52:08 W0907 03:52:08.140225 2350 process.cpp:2810] Attempted to spawn already running process files@172.16.10.27:45074 03:52:08 I0907 03:52:08.140589 2350 containerizer.cpp:305] Using isolation { environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni } 03:52:08 I0907 03:52:08.142592 2350 linux_launcher.cpp:144] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher 03:52:08 I0907 03:52:08.143028 2350 provisioner.cpp:298] Using default backend 'overlay' 03:52:08 I0907 03:52:08.143687 2350 cluster.cpp:485] Creating default 'local' authorizer 03:52:08 I0907 03:52:08.144181 2370 slave.cpp:267] Mesos agent started on (91)@172.16.10.27:45074 03:52:08 I0907 03:52:08.144197 2370 slave.cpp:268] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/store/appc"""" --authenticate_http_executors=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authentication_timeout_max=""""1mins"""" --authentication_timeout_min=""""5secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_destroy_timeout=""""1mins"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/credential"""" --default_role=""""*"""" --disallow_sharing_agent_pid_namespace=""""false"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_kill_orphans=""""true"""" 
--docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_reregistration_timeout=""""2secs"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/fetch"""" --fetcher_cache_size=""""2GB"""" --fetcher_stall_timeout=""""1mins"""" --frameworks_home="""""""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --gc_non_executor_container_sandboxes=""""true"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_command_executor=""""false"""" --http_credentials=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/http_credentials"""" --http_heartbeat_interval=""""30secs"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --jwt_secret_key=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/jwt_secret_key"""" --launcher=""""linux"""" --launcher_dir=""""/home/ubuntu/workspace/mesos/Mesos_CI-build/FLAG/SSL/label/mesos-ec2-ubuntu-16.04/mesos/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --memory_profiling=""""false"""" --network_cni_metrics=""""true"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --port=""""5051"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --reconfiguration_policy=""""equal"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv"""" --zk_session_timeout=""""10secs"""" 03:52:08 I0907 03:52:08.144366 2370 credentials.hpp:86] Loading credential for authentication from '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/credential' 03:52:08 I0907 03:52:08.144414 2370 slave.cpp:300] Agent using credential for: test-principal 03:52:08 I0907 03:52:08.144425 2370 credentials.hpp:37] Loading credentials for authentication from '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_thYmJB/http_credentials' 03:52:08 I0907 03:52:08.144500 2370 http.cpp:1037] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-executor' 03:52:08 I0907 03:52:08.144552 2370 http.cpp:1058] Creating default 'jwt' HTTP authenticator for realm 'mesos-agent-executor' 03:52:08 I0907 03:52:08.144613 2370 http.cpp:1037] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' 03:52:08 I0907 03:52:08.144652 2370 http.cpp:1058] Creating default 'jwt' HTTP authenticator for realm 'mesos-agent-readonly' 03:52:08 I0907 03:52:08.144690 2370 
http.cpp:1037] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite' 03:52:08 I0907 03:52:08.144726 2370 http.cpp:1058] Creating default 'jwt' HTTP authenticator for realm 'mesos-agent-readwrite' 03:52:08 I0907 03:52:08.144798 2370 disk_profile_adaptor.cpp:80] Creating default disk profile adaptor module 03:52:08 I0907 03:52:08.145326 2370 slave.cpp:615] Agent resources: [{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":2.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] 03:52:08 I0907 03:52:08.145390 2370 slave.cpp:623] Agent attributes: [ ] 03:52:08 I0907 03:52:08.145401 2370 slave.cpp:632] Agent hostname: ip-172-16-10-27.ec2.internal 03:52:08 I0907 03:52:08.145444 2373 task_status_update_manager.cpp:181] Pausing sending task status updates 03:52:08 I0907 03:52:08.145632 2370 state.cpp:66] Recovering state from '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta' 03:52:08 I0907 03:52:08.145664 2370 state.cpp:711] No committed checkpointed resources found at '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/resources/resources.info' 03:52:08 E0907 03:52:08.145715 10118 executor.cpp:714] End-Of-File received from agent. The agent closed the event stream 03:52:08 I0907 03:52:08.147001 10118 default_executor.cpp:176] Disconnected from agent 03:52:08 I0907 03:52:08.147406 2370 slave.cpp:6909] Finished recovering checkpointed state from '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta', beginning agent recovery 03:52:08 I0907 03:52:08.147666 2370 slave.cpp:7388] Recovering framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.147833 2370 slave.cpp:9112] Recovering executor 'default' of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.148141 2370 slave.cpp:9651] Completing task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd 03:52:08 I0907 03:52:08.148406 2376 gc.cpp:95] Scheduling '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/tasks/c6c81339-65c6-4f86-b0ab-c5be60ea5fbd' for gc 6.99998670912days in the future 03:52:08 I0907 03:52:08.148608 2373 task_status_update_manager.cpp:207] Recovering task status update manager 03:52:08 I0907 03:52:08.148630 2373 task_status_update_manager.cpp:215] Recovering executor 'default' of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.148672 2373 task_status_update_manager.cpp:507] Creating StatusUpdate stream for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.148811 2370 slave.cpp:988] Successfully attached '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f' to virtual path 
'/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/latest' 03:52:08 I0907 03:52:08.148908 2373 task_status_update_manager.cpp:818] Replaying task status update stream for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f 03:52:08 I0907 03:52:08.148990 2373 task_status_update_manager.cpp:507] Creating StatusUpdate stream for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.148998 2377 process.cpp:2735] Returning '404 Not Found' for '/slave(90)/api/v1/executor' 03:52:08 I0907 03:52:08.149178 2373 task_status_update_manager.cpp:818] Replaying task status update stream for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd 03:52:08 I0907 03:52:08.149216 2373 task_status_update_manager.cpp:538] Cleaning up status update stream for task c6c81339-65c6-4f86-b0ab-c5be60ea5fbd of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.149478 2370 slave.cpp:988] Successfully attached '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f' to virtual path '/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/latest' 03:52:08 I0907 03:52:08.149642 2370 slave.cpp:988] Successfully attached '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f' to virtual path '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f' 03:52:08 W0907 03:52:08.149492 10114 executor.cpp:666] Received '404 Not Found' () for SUBSCRIBE 03:52:08 I0907 03:52:08.149968 2370 containerizer.cpp:727] Recovering Mesos containers 03:52:08 I0907 03:52:08.150168 2370 containerizer.cpp:784] Recovering container dbe02af2-3122-4f2e-9747-0c4343627c2f for executor 'default' of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:08 I0907 03:52:08.154752 2370 gc.cpp:95] Scheduling '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/05a1238c-5190-476a-945f-4d8f1225e45c' for gc 6.99998663851852days in the future 03:52:08 I0907 03:52:08.154945 2370 linux_launcher.cpp:286] Recovering Linux launcher 03:52:08 I0907 03:52:08.155302 2370 linux_launcher.cpp:343] Recovered container dbe02af2-3122-4f2e-9747-0c4343627c2f 03:52:08 I0907 03:52:08.155437 2370 linux_launcher.cpp:331] Not recovering cgroup mesos/dbe02af2-3122-4f2e-9747-0c4343627c2f/mesos 03:52:08 I0907 03:52:08.155575 2370 linux_launcher.cpp:343] Recovered container dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc 03:52:08 I0907 03:52:08.155746 2370 containerizer.cpp:1053] Recovering isolators 03:52:08 I0907 03:52:08.157411 2370 containerizer.cpp:1092] Recovering provisioner 03:52:08 I0907 03:52:08.162133 2376 provisioner.cpp:494] Provisioner recovery complete 03:52:08 I0907 03:52:08.162586 2371 
composing.cpp:339] Finished recovering all containerizers 03:52:08 I0907 03:52:08.162637 2376 slave.cpp:7138] Recovering executors 03:52:08 I0907 03:52:08.162654 2376 slave.cpp:7225] Waiting for executor 'default' of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (via HTTP) to subscribe 03:52:08 I0907 03:52:08.702880 2370 hierarchical.cpp:1564] Performed allocation for 1 agents in 31079ns 03:52:08 I0907 03:52:08.859201 2377 process.cpp:2735] Returning '404 Not Found' for '/slave(90)/api/v1/executor' 03:52:08 W0907 03:52:08.859589 10114 executor.cpp:666] Received '404 Not Found' () for SUBSCRIBE 03:52:09 I0907 03:52:09.149554 2377 process.cpp:2735] Returning '404 Not Found' for '/slave(90)/api/v1/executor' 03:52:09 W0907 03:52:09.149917 10114 executor.cpp:666] Received '404 Not Found' () for SUBSCRIBE 03:52:09 I0907 03:52:09.703336 2376 hierarchical.cpp:1564] Performed allocation for 1 agents in 30173ns 03:52:09 I0907 03:52:09.859925 2377 process.cpp:2735] Returning '404 Not Found' for '/slave(90)/api/v1/executor' 03:52:09 W0907 03:52:09.860404 10117 executor.cpp:666] Received '404 Not Found' () for SUBSCRIBE 03:52:10 I0907 03:52:10.150609 2377 process.cpp:2735] Returning '404 Not Found' for '/slave(90)/api/v1/executor' 03:52:10 W0907 03:52:10.151021 10121 executor.cpp:666] Received '404 Not Found' () for SUBSCRIBE 03:52:10 I0907 03:52:10.162950 2373 slave.cpp:5197] Cleaning up un-reregistered executors 03:52:10 I0907 03:52:10.162973 2373 slave.cpp:5215] Killing un-reregistered executor 'default' of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (via HTTP) 03:52:10 I0907 03:52:10.163033 2373 slave.cpp:7291] Finished recovery 03:52:10 I0907 03:52:10.163075 2371 containerizer.cpp:2455] Destroying container dbe02af2-3122-4f2e-9747-0c4343627c2f in RUNNING state 03:52:10 I0907 03:52:10.163096 2371 containerizer.cpp:3118] Transitioning the state of container dbe02af2-3122-4f2e-9747-0c4343627c2f from RUNNING to DESTROYING 03:52:10 I0907 03:52:10.163106 2371 containerizer.cpp:2455] Destroying container dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc in RUNNING state 03:52:10 I0907 03:52:10.163115 2371 containerizer.cpp:3118] Transitioning the state of container dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc from RUNNING to DESTROYING 03:52:10 I0907 03:52:10.163275 2374 linux_launcher.cpp:580] Asked to destroy container dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc 03:52:10 I0907 03:52:10.163326 2374 linux_launcher.cpp:622] Destroying cgroup '/sys/fs/cgroup/freezer/mesos/dbe02af2-3122-4f2e-9747-0c4343627c2f/mesos/dbc72eae-9465-4bf7-a082-0bf8a055fecc' 03:52:10 I0907 03:52:10.163509 2374 cgroups.cpp:2838] Freezing cgroup /sys/fs/cgroup/freezer/mesos/dbe02af2-3122-4f2e-9747-0c4343627c2f/mesos/dbc72eae-9465-4bf7-a082-0bf8a055fecc 03:52:10 I0907 03:52:10.164005 2373 slave.cpp:1254] New master detected at master@172.16.10.27:45074 03:52:10 I0907 03:52:10.164031 2373 slave.cpp:1319] Detecting new master 03:52:10 I0907 03:52:10.164106 2371 task_status_update_manager.cpp:181] Pausing sending task status updates 03:52:10 I0907 03:52:10.165093 2372 slave.cpp:1346] Authenticating with master master@172.16.10.27:45074 03:52:10 I0907 03:52:10.165128 2372 slave.cpp:1355] Using default CRAM-MD5 authenticatee 03:52:10 I0907 03:52:10.165200 2376 authenticatee.cpp:121] Creating new client SASL connection 03:52:10 I0907 03:52:10.165675 2376 master.cpp:9653] Authenticating slave(91)@172.16.10.27:45074 03:52:10 I0907 
03:52:10.165731 2376 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(201)@172.16.10.27:45074 03:52:10 I0907 03:52:10.165791 2376 authenticator.cpp:98] Creating new server SASL connection 03:52:10 I0907 03:52:10.166316 2376 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 03:52:10 I0907 03:52:10.166338 2376 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' 03:52:10 I0907 03:52:10.166371 2376 authenticator.cpp:204] Received SASL authentication start 03:52:10 I0907 03:52:10.166407 2376 authenticator.cpp:326] Authentication requires more steps 03:52:10 I0907 03:52:10.166440 2376 authenticatee.cpp:259] Received SASL authentication step 03:52:10 I0907 03:52:10.166481 2376 authenticator.cpp:232] Received SASL authentication step 03:52:10 I0907 03:52:10.166496 2376 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-27.ec2.internal' server FQDN: 'ip-172-16-10-27.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 03:52:10 I0907 03:52:10.166505 2376 auxprop.cpp:181] Looking up auxiliary property '*userPassword' 03:52:10 I0907 03:52:10.166515 2376 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' 03:52:10 I0907 03:52:10.166524 2376 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: 'ip-172-16-10-27.ec2.internal' server FQDN: 'ip-172-16-10-27.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 03:52:10 I0907 03:52:10.166532 2376 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true 03:52:10 I0907 03:52:10.166538 2376 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true 03:52:10 I0907 03:52:10.166549 2376 authenticator.cpp:318] Authentication success 03:52:10 I0907 03:52:10.166592 2376 authenticatee.cpp:299] Authentication success 03:52:10 I0907 03:52:10.166638 2376 master.cpp:9685] Successfully authenticated principal 'test-principal' at slave(91)@172.16.10.27:45074 03:52:10 I0907 03:52:10.166651 2371 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(201)@172.16.10.27:45074 03:52:10 I0907 03:52:10.166708 2376 slave.cpp:1446] Successfully authenticated with master master@172.16.10.27:45074 03:52:10 I0907 03:52:10.166925 2376 slave.cpp:1877] Will retry registration in 4.802199ms if necessary 03:52:10 I0907 03:52:10.167089 2371 master.cpp:6959] Received reregister agent message from agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(91)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:10 I0907 03:52:10.167194 2371 master.cpp:3964] Authorizing agent providing resources 'cpus:2; mem:1024; disk:1024; ports:[31000-32000]' with principal 'test-principal' 03:52:10 I0907 03:52:10.167399 2372 master.cpp:7051] Authorized re-registration of agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(91)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:10 I0907 03:52:10.167444 2372 master.cpp:7135] Agent is already marked as registered: 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(91)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:10 I0907 03:52:10.167487 2372 master.cpp:7503] Registry updated for slave 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(91)@172.16.10.27:45074(ip-172-16-10-27.ec2.internal) 03:52:10 I0907 03:52:10.167649 2376 hierarchical.cpp:697] Agent 
8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 (ip-172-16-10-27.ec2.internal) updated with total resources cpus:2; mem:1024; disk:1024; ports:[31000-32000] 03:52:10 I0907 03:52:10.167691 2376 hierarchical.cpp:783] Agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 reactivated 03:52:10 I0907 03:52:10.167817 2372 slave.cpp:1585] Re-registered with master master@172.16.10.27:45074 03:52:10 I0907 03:52:10.168326 2371 hierarchical.cpp:1564] Performed allocation for 1 agents in 170435ns 03:52:10 I0907 03:52:10.170323 2376 task_status_update_manager.cpp:188] Resuming sending task status updates 03:52:10 I0907 03:52:10.170390 2372 slave.cpp:1630] Forwarding agent update {""""operations"""":{},""""resource_version_uuid"""":{""""value"""":""""j4ZjLDA/StuRQkYO2dDq+A==""""},""""slave_id"""":{""""value"""":""""8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0""""},""""update_oversubscribed_resources"""":false} 03:52:10 I0907 03:52:10.170478 2372 slave.cpp:4067] Updating info for framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:10 I0907 03:52:10.170260 2369 scheduler.cpp:845] Enqueuing event HEARTBEAT received from http://172.16.10.27:45074/master/api/v1/scheduler 03:52:10 I0907 03:52:10.168475 2370 cgroups.cpp:2856] Thawing cgroup /sys/fs/cgroup/freezer/mesos/dbe02af2-3122-4f2e-9747-0c4343627c2f/mesos/dbc72eae-9465-4bf7-a082-0bf8a055fecc 03:52:10 I0907 03:52:10.168576 2375 master.cpp:1827] Skipping periodic registry garbage collection: no agents qualify for removal 03:52:10 I0907 03:52:10.170827 2369 cgroups.cpp:1258] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/dbe02af2-3122-4f2e-9747-0c4343627c2f/mesos/dbc72eae-9465-4bf7-a082-0bf8a055fecc after 0ns 03:52:10 I0907 03:52:10.170676 2372 slave.cpp:8908] Checkpointing FrameworkInfo to '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/framework.info' 03:52:10 I0907 03:52:10.173640 2375 master.cpp:9468] Sending offers [ 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-O1 ] to framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) 03:52:10 I0907 03:52:10.169373 2373 gc.cpp:272] Deleting /tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/tasks/c6c81339-65c6-4f86-b0ab-c5be60ea5fbd 03:52:10 I0907 03:52:10.174196 2369 scheduler.cpp:845] Enqueuing event OFFERS received from http://172.16.10.27:45074/master/api/v1/scheduler 03:52:10 I0907 03:52:10.174222 2375 master.cpp:7939] Ignoring update on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(91)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) as it reports no changes 03:52:10 I0907 03:52:10.269464 2372 slave.cpp:8919] Checkpointing framework pid '@0.0.0.0:0' to '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/framework.pid' 03:52:10 E0907 03:52:10.269704 2372 slave.cpp:6267] Termination of executor 'default' of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 failed: Failed to destroy nested containers: Failed to kill all processes in the container: Timed out after 1mins 03:52:10 I0907 03:52:10.269757 2372 slave.cpp:5269] Handling status update TASK_LOST (Status UUID: f5a0fab7-ad86-4d4d-94a7-8894f15521fd) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 
8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 from @0.0.0.0:0 03:52:10 I0907 03:52:10.269884 2372 slave.cpp:6824] Current disk usage 44.82%. Max allowed age: 3.162545109095440days 03:52:10 I0907 03:52:10.269927 2372 task_status_update_manager.cpp:188] Resuming sending task status updates 03:52:10 I0907 03:52:10.270087 2369 gc.cpp:331] Pruning directories with remaining removal time 0ns 03:52:10 I0907 03:52:10.270102 2369 gc.cpp:331] Pruning directories with remaining removal time 0ns 03:52:10 I0907 03:52:10.270117 2369 gc.cpp:188] Skipping deletion of '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/tasks/c6c81339-65c6-4f86-b0ab-c5be60ea5fbd' as it is already in progress 03:52:10 E0907 03:52:10.270146 2369 slave.cpp:5600] Failed to update resources for container dbe02af2-3122-4f2e-9747-0c4343627c2f of executor 'default' running task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f on status update for terminal task, destroying container: Container not found 03:52:10 W0907 03:52:10.270192 2369 composing.cpp:609] Attempted to destroy unknown container dbe02af2-3122-4f2e-9747-0c4343627c2f 03:52:10 I0907 03:52:10.270216 2369 task_status_update_manager.cpp:328] Received task status update TASK_LOST (Status UUID: f5a0fab7-ad86-4d4d-94a7-8894f15521fd) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:10 I0907 03:52:10.270236 2369 task_status_update_manager.cpp:842] Checkpointing UPDATE for task status update TASK_LOST (Status UUID: f5a0fab7-ad86-4d4d-94a7-8894f15521fd) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:10 I0907 03:52:10.270299 2369 task_status_update_manager.cpp:383] Forwarding task status update TASK_LOST (Status UUID: f5a0fab7-ad86-4d4d-94a7-8894f15521fd) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 to the agent 03:52:10 I0907 03:52:10.270349 2369 slave.cpp:5761] Forwarding the update TASK_LOST (Status UUID: f5a0fab7-ad86-4d4d-94a7-8894f15521fd) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 to master@172.16.10.27:45074 03:52:10 I0907 03:52:10.270387 2369 slave.cpp:5654] Task status update manager successfully handled status update TASK_LOST (Status UUID: f5a0fab7-ad86-4d4d-94a7-8894f15521fd) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:10 I0907 03:52:10.270020 2372 master.cpp:8618] Executor 'default' of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(91)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal): wait status -1 03:52:10 I0907 03:52:10.270433 2372 master.cpp:11061] Removing executor 'default' with resources [{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""cpus"""",""""scalar"""":{""""value"""":0.1},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""mem"""",""""scalar"""":{""""value"""":32.0},""""type"""":""""SCALAR""""},{""""allocation_info"""":{""""role"""":""""*""""},""""name"""":""""disk"""",""""scalar"""":{""""value"""":32.0},""""type"""":""""SCALAR""""}] of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(91)@172.16.10.27:45074 
(ip-172-16-10-27.ec2.internal) 03:52:10 I0907 03:52:10.270642 2372 master.cpp:8375] Status update TASK_LOST (Status UUID: f5a0fab7-ad86-4d4d-94a7-8894f15521fd) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 from agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(91)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:10 I0907 03:52:10.270660 2372 master.cpp:8432] Forwarding status update TASK_LOST (Status UUID: f5a0fab7-ad86-4d4d-94a7-8894f15521fd) for task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:10 I0907 03:52:10.270738 2372 master.cpp:10932] Updating the state of task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (latest state: TASK_LOST, status update state: TASK_LOST) 03:52:10 I0907 03:52:10.270931 2372 hierarchical.cpp:1236] Recovered cpus(allocated: *):0.1; mem(allocated: *):32; disk(allocated: *):32 (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000], allocated: cpus(allocated: *):1.9; mem(allocated: *):992; disk(allocated: *):992; ports(allocated: *):[31000-32000]) on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 from framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:10 I0907 03:52:10.271028 2372 hierarchical.cpp:1236] Recovered cpus(allocated: *):0.1; mem(allocated: *):32; disk(allocated: *):32 (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000], allocated: cpus(allocated: *):1.8; mem(allocated: *):960; disk(allocated: *):960; ports(allocated: *):[31000-32000]) on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 from framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:10 I0907 03:52:10.271473 2371 scheduler.cpp:845] Enqueuing event FAILURE received from http://172.16.10.27:45074/master/api/v1/scheduler 03:52:10 03:52:10 GMOCK WARNING: 03:52:10 Uninteresting mock function call - returning directly. 03:52:10 Function call: failure(0x7ffcee44e920, @0x7ff40001de70 48-byte object <88-43 53-2C F4-7F 00-00 00-00 00-00 00-00 00-00 07-00 00-00 00-00 00-00 70-0A 00-00 F4-7F 00-00 30-FF 00-00 F4-7F 00-00 FF-FF FF-FF F4-7F 00-00>) 03:52:10 NOTE: You can safely ignore the above warning unless this call should not happen. Do not suppress it by blindly adding an EXPECT_CALL() if you don't mean to enforce the call. See https://github.com/google/googletest/blob/master/googlemock/docs/CookBook.md#knowing-when-to-expect for details. 03:52:10 I0907 03:52:10.271648 2371 scheduler.cpp:845] Enqueuing event UPDATE received from http://172.16.10.27:45074/master/api/v1/scheduler 03:52:10 unknown file: Failure 03:52:10 03:52:10 Unexpected mock function call - returning directly. 03:52:10 Function call: update(0x7ffcee44e920, @0x7ff4000015a0 TASK_LOST (Status UUID: f5a0fab7-ad86-4d4d-94a7-8894f15521fd) Source: SOURCE_AGENT Reason: REASON_EXECUTOR_REREGISTRATION_TIMEOUT Message: 'Executor did not reregister within 2secs; Abnormal executor termination: Failed to destroy nested containers: Failed to kill all processes in the container: Timed out after 1mins' for task '2e8c13b6-fa45-4e3c-89cd-398a5abc192f' on agent: 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0) 03:52:10 Google Mock tried the following 5 expectations, but none matched: 03:52:10 03:52:10 ../../src/tests/gc_tests.cpp:986: tried expectation #0: EXPECT_CALL(*scheduler, update(_, AllOf( TaskStatusUpdateTaskIdEq(longLivedTaskInfo), TaskStatusUpdateStateEq(v1::TASK_STARTING))))... 
03:52:10 Expected: the expectation is active 03:52:10 Actual: it is retired 03:52:10 Expected: to be called once 03:52:10 Actual: called once - saturated and retired 03:52:10 ../../src/tests/gc_tests.cpp:996: tried expectation #1: EXPECT_CALL(*scheduler, update(_, AllOf( TaskStatusUpdateTaskIdEq(longLivedTaskInfo), TaskStatusUpdateStateEq(v1::TASK_RUNNING))))... 03:52:10 Expected arg #1: (task status update task id eq name: """"test-task"""" 03:52:10 task_id { 03:52:10 value: """"2e8c13b6-fa45-4e3c-89cd-398a5abc192f"""" 03:52:10 } 03:52:10 agent_id { 03:52:10 value: """"8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0"""" 03:52:10 } 03:52:10 resources { 03:52:10 name: """"cpus"""" 03:52:10 type: SCALAR 03:52:10 scalar { 03:52:10 value: 0.1 03:52:10 } 03:52:10 } 03:52:10 resources { 03:52:10 name: """"mem"""" 03:52:10 type: SCALAR 03:52:10 scalar { 03:52:10 value: 32 03:52:10 } 03:52:10 } 03:52:10 resources { 03:52:10 name: """"disk"""" 03:52:10 type: SCALAR 03:52:10 scalar { 03:52:10 value: 32 03:52:10 } 03:52:10 } 03:52:10 command { 03:52:10 value: """"sleep 1000"""" 03:52:10 } 03:52:10 ) and (task status update state eq TASK_RUNNING) 03:52:10 Actual: TASK_LOST (Status UUID: f5a0fab7-ad86-4d4d-94a7-8894f15521fd) Source: SOURCE_AGENT Reason: REASON_EXECUTOR_REREGISTRATION_TIMEOUT Message: 'Executor did not reregister within 2secs; Abnormal executor termination: Failed to destroy nested containers: Failed to kill all processes in the container: Timed out after 1mins' for task '2e8c13b6-fa45-4e3c-89cd-398a5abc192f' on agent: 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 03:52:10 Expected: to be called once 03:52:10 Actual: called once - saturated and active 03:52:10 ../../src/tests/gc_tests.cpp:1011: tried expectation #2: EXPECT_CALL(*scheduler, update(_, AllOf( TaskStatusUpdateTaskIdEq(shortLivedTaskInfo), TaskStatusUpdateStateEq(v1::TASK_STARTING))))... 03:52:10 Expected: the expectation is active 03:52:10 Actual: it is retired 03:52:10 Expected: to be called once 03:52:10 Actual: called once - saturated and retired 03:52:10 ../../src/tests/gc_tests.cpp:1021: tried expectation #3: EXPECT_CALL(*scheduler, update(_, AllOf( TaskStatusUpdateTaskIdEq(shortLivedTaskInfo), TaskStatusUpdateStateEq(v1::TASK_RUNNING))))... 03:52:10 Expected: the expectation is active 03:52:10 Actual: it is retired 03:52:10 Expected: to be called once 03:52:10 Actual: called once - saturated and retired 03:52:10 ../../src/tests/gc_tests.cpp:1031: tried expectation #4: EXPECT_CALL(*scheduler, update(_, AllOf( TaskStatusUpdateTaskIdEq(shortLivedTaskInfo), TaskStatusUpdateStateEq(v1::TASK_FINISHED))))... 
03:52:10 Expected arg #1: (task status update task id eq name: """"test-task"""" 03:52:10 task_id { 03:52:10 value: """"c6c81339-65c6-4f86-b0ab-c5be60ea5fbd"""" 03:52:10 } 03:52:10 agent_id { 03:52:10 value: """"8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0"""" 03:52:10 } 03:52:10 resources { 03:52:10 name: """"cpus"""" 03:52:10 type: SCALAR 03:52:10 scalar { 03:52:10 value: 0.1 03:52:10 } 03:52:10 } 03:52:10 resources { 03:52:10 name: """"mem"""" 03:52:10 type: SCALAR 03:52:10 scalar { 03:52:10 value: 32 03:52:10 } 03:52:10 } 03:52:10 resources { 03:52:10 name: """"disk"""" 03:52:10 type: SCALAR 03:52:10 scalar { 03:52:10 value: 32 03:52:10 } 03:52:10 } 03:52:10 command { 03:52:10 value: """"exit 0"""" 03:52:10 } 03:52:10 ) and (task status update state eq TASK_FINISHED) 03:52:10 Actual: TASK_LOST (Status UUID: f5a0fab7-ad86-4d4d-94a7-8894f15521fd) Source: SOURCE_AGENT Reason: REASON_EXECUTOR_REREGISTRATION_TIMEOUT Message: 'Executor did not reregister within 2secs; Abnormal executor termination: Failed to destroy nested containers: Failed to kill all processes in the container: Timed out after 1mins' for task '2e8c13b6-fa45-4e3c-89cd-398a5abc192f' on agent: 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 03:52:10 Expected: to be called once 03:52:10 Actual: called once - saturated and active 03:52:10 I0907 03:52:10.327615 2373 gc.cpp:288] Deleted '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/meta/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/tasks/c6c81339-65c6-4f86-b0ab-c5be60ea5fbd' 03:52:10 I0907 03:52:10.327862 2370 gc.cpp:188] Skipping deletion of '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/05a1238c-5190-476a-945f-4d8f1225e45c' as it is already in progress 03:52:10 I0907 03:52:10.328534 2369 gc.cpp:188] Skipping deletion of '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/05a1238c-5190-476a-945f-4d8f1225e45c' as it is already in progress 03:52:10 I0907 03:52:10.329061 2373 gc.cpp:272] Deleting /tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/05a1238c-5190-476a-945f-4d8f1225e45c 03:52:10 I0907 03:52:10.329295 2373 gc.cpp:288] Deleted '/tmp/GarbageCollectorIntegrationTest_LongLivedDefaultExecutorRestart_X4h6lv/slaves/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0/frameworks/8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000/executors/default/runs/dbe02af2-3122-4f2e-9747-0c4343627c2f/containers/05a1238c-5190-476a-945f-4d8f1225e45c' 03:52:10 I0907 03:52:10.860333 2377 process.cpp:2735] Returning '404 Not Found' for '/slave(90)/api/v1/executor' 03:52:10 W0907 03:52:10.860883 10114 executor.cpp:666] Received '404 Not Found' () for SUBSCRIBE 03:52:11 I0907 03:52:11.151208 2377 process.cpp:2735] Returning '404 Not Found' for '/slave(90)/api/v1/executor' 03:52:11 W0907 03:52:11.151674 10114 executor.cpp:666] Received '404 Not Found' () for SUBSCRIBE 03:52:11 I0907 03:52:11.213229 2371 
hierarchical.cpp:1564] Performed allocation for 1 agents in 188978ns 03:52:11 I0907 03:52:11.213379 2371 containerizer.cpp:2957] Container dbe02af2-3122-4f2e-9747-0c4343627c2f.dbc72eae-9465-4bf7-a082-0bf8a055fecc has exited 03:52:11 I0907 03:52:11.213652 2371 master.cpp:9468] Sending offers [ 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-O2 ] to framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) 03:52:11 I0907 03:52:11.214164 2350 slave.cpp:909] Agent terminating 03:52:11 I0907 03:52:11.214273 2369 master.cpp:1366] Framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) disconnected 03:52:11 I0907 03:52:11.214293 2369 master.cpp:3230] Deactivating framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) 03:52:11 W0907 03:52:11.214360 2369 master.hpp:2605] Unable to send event to framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default): connection closed 03:52:11 I0907 03:52:11.214375 2369 master.cpp:11462] Removing offer 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-O2 03:52:11 W0907 03:52:11.214435 2369 master.hpp:2605] Unable to send event to framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default): connection closed 03:52:11 I0907 03:52:11.214445 2369 master.cpp:11462] Removing offer 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-O1 03:52:11 I0907 03:52:11.214455 2369 master.cpp:3207] Disconnecting framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) 03:52:11 I0907 03:52:11.214462 2369 master.cpp:1381] Giving framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) 0ns to failover 03:52:11 I0907 03:52:11.214606 2370 hierarchical.cpp:420] Deactivated framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:11 I0907 03:52:11.214704 2370 hierarchical.cpp:1236] Recovered cpus(allocated: *):0.2; mem(allocated: *):64; disk(allocated: *):64 (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000], allocated: cpus(allocated: *):1.8; mem(allocated: *):960; disk(allocated: *):960; ports(allocated: *):[31000-32000]) on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 from framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:11 I0907 03:52:11.214819 2370 hierarchical.cpp:1236] Recovered cpus(allocated: *):1.8; mem(allocated: *):960; disk(allocated: *):960; ports(allocated: *):[31000-32000] (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000], allocated: {}) on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 from framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:11 I0907 03:52:11.214895 2369 master.cpp:9261] Framework failover timeout, removing framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) 03:52:11 I0907 03:52:11.214910 2369 master.cpp:10197] Removing framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (default) 03:52:11 I0907 03:52:11.214947 2369 master.cpp:10932] Updating the state of task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 (latest state: TASK_LOST, status update state: TASK_KILLED) 03:52:11 I0907 03:52:11.214964 2369 master.cpp:11030] Removing task 2e8c13b6-fa45-4e3c-89cd-398a5abc192f with resources cpus(allocated: *):0.1; mem(allocated: *):32; disk(allocated: *):32 of framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 on agent 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 at slave(91)@172.16.10.27:45074 (ip-172-16-10-27.ec2.internal) 03:52:11 I0907 03:52:11.215090 2369 hierarchical.cpp:359] Removed framework 8e9d97f6-4dc4-490b-81f6-d2033e2109d3-0000 03:52:11 I0907 03:52:11.217375 2350 master.cpp:1093] Master terminating 03:52:11 I0907 03:52:11.217579 2375 hierarchical.cpp:637] Removed agent 
8e9d97f6-4dc4-490b-81f6-d2033e2109d3-S0 03:52:11 [ FAILED ] GarbageCollectorIntegrationTest.LongLivedDefaultExecutorRestart (3519 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9221","09/08/2018 02:43:29",5,"If some image layers are large, the image pulling may stuck due to the authorized token expired. ""The image layer blobs pulling happen asynchronously but in the same libprocess process. There is a chance that one layer get the token then the thread switch to another layer curling which may take long. When the original layer curling resumes, the token already expired (e.g., after 60 seconds). The impact is the task launch stuck and all subsequent task using this image would also stuck because it waits for the same image pulling future to become ready. Please note that this issue is not likely to be reproduced, unless on a busy system using images containing large layers."""," $ sudo cat /var/lib/mesos/slave/store/docker/staging/0gx64f/sha256\:c75480ad9aafadef6c7faf829ede40cf2fa990c9308d6cd354d53041b01a7cda {""""errors"""":[{""""code"""":""""UNAUTHORIZED"""",""""message"""":""""authentication required"""",""""detail"""":[{""""Type"""":""""repository"""",""""Class"""":"""""""",""""Name"""":""""mesosphere/dapis"""",""""Action"""":""""pull""""}]}]} ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9223","09/10/2018 12:10:20",3,"Storage local provider does not sufficiently handle container launch failures or errors ""The storage local resource provider as currently implemented does not handle launch failures or task errors of its standalone containers well enough, If e.g., a RP container fails to come up during node start a warning would be logged, but an operator still needs to detect degraded functionality, manually check the state of containers with {{GET_CONTAINERS}}, and decide whether the agent needs restarting; I suspect they do not have always have enough context for this decision. It would be better if the provider would either enforce a restart by failing over the whole agent, or by retrying the operation (optionally: up to some maximum amount of retries).""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9224","09/10/2018 14:46:18",5,"De-duplicate read-only requests to master based on principal. """"""Identical"""" read-only requests can be batched and answered together. With batching available (MESOS-9158), we can now deduplicate requests based on principal.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9225","09/10/2018 15:51:21",2,"Github's mesos/modules does not build. ""The examples modules repo at GitHub.com named mesos/modules does currently not build against the latest Apache Mesos. We should update that system. ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9229","09/13/2018 13:05:18",2,"Install Python3 on ubuntu-16.04-arm docker image ""With the upgrade to Python 3 in the Mesos codebase builds which rely on docker images started to fail since they were missing a `python3` installation. We fixed those issues for most of the Docker images in https://issues.apache.org/jira/browse/MESOS-8957. 
We still miss Python 3 on the Ubuntu-16.04-arm image which can be found in `support/mesos-build/ubuntu-16.04-arm`.""","",0,0,0,1,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9258","09/26/2018 01:13:40",5,"Prevent subscribers to the master's event stream from leaking connections ""Some reverse proxies (e.g., ELB using an HTTP listener) won't close the upstream connection to Mesos when they detect that their client is disconnected. This can make Mesos leak subscribers, which generates unnecessary authorization requests and affects performance. We should evaluate methods (e.g., heartbeats) to enable Mesos to detect that a subscriber is gone, even if the TCP connection is still open. ""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9265","09/26/2018 14:34:58",8,"Analyse and pinpoint libprocess SSL failures when using libevent 2.1.8. ""Mesos SSL based on libevent >2.1.5beta fails to function properly. Depending on the underlying open SSL version, failures happen on accept or on receive. The issue has been properly described https://issues.apache.org/jira/browse/MESOS-7076. We landed a workaround by bundling libevent 2.0.22. This ticket is meant to track further analysis of the true reason for the issue - so we fix it, instead of relying on a hard to maintain bandaid. ""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9266","09/26/2018 15:01:01",2,"Whenever our packaging tasks trigger errors we run into permission problems. ""As shown in MESOS-9238, failures within our packaging cause permission failures on cleanup. We should clean that up."""," cleanup rm: cannot remove '/home/jenkins/jenkins-slave/workspace/Mesos-Docker-CentOS/centos7/.cache': Permission denied rm: cannot remove '/home/jenkins/jenkins-slave/workspace/Mesos-Docker-CentOS/centos7/rpmbuild/SRPMS': Permission denied rm: cannot remove '/home/jenkins/jenkins-slave/workspace/Mesos-Docker-CentOS/centos7/rpmbuild/BUILDROOT/mesos-1.8.0-0.1.pre.20180915git4805a47.el7.x86_64/var/lib/mesos': Permission denied rm: cannot remove '/home/jenkins/jenkins-slave/workspace/Mesos-Docker-CentOS/centos7/rpmbuild/BUILDROOT/mesos-1.8.0-0.1.pre.20180915git4805a47.el7.x86_64/var/log/mesos': Permission denied ",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9270","09/27/2018 01:00:20",3,"Get rid of dependency on `net-tools` in network/cni isolator. ""The `network/cni` isolator has a dependency on `net-tools`. The last release of `net-tools` was released in 2001. The tools were deprecated many years ago (see [Debian|https://lists.debian.org/debian-devel/2009/03/msg00780.html], [RH|https://bugzilla.redhat.com/show_bug.cgi?id=687920], and [LWN|https://lwn.net/Articles/710533/]) and no longer installed by default. [https://github.com/apache/mesos/blob/983607e/src/slave/containerizer/mesos/isolators/network/cni/cni.cpp#L2248]""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0 +"MESOS-9274","09/27/2018 15:22:39",3,"v1 JAVA scheduler library can drop TEARDOWN upon destruction. ""Currently the v1 JAVA scheduler library neither ensures {{Call}} s are sent to the master nor waits for responses. This can be problematic if the library is destroyed (or garbage collected) right after sending a {{TEARDOWN}} call: destruction of the underlying {{Mesos}} actor races with sending the call. 
""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9275","09/27/2018 20:17:53",8,"Allow optional `profile` to be specified in `CREATE_DISK` offer operation. ""This will allow the framework to """"import"""" pre-existing volumes reported by the corresponding CSI plugin. For instance, the LVM CSI plugin might detect some pre-existing volumes that Dan has created out of band. Currently, those volumes will be represented as RAW """"disk"""" resource with a volume ID, but no volume profile by the SLRP. When a framework tries to use the RAW volume as either MOUNT or BLOCK volume, it'll issue a CREATE_DISK operation. The corresponding SLRP will handles the operation, and validate against a default profile for MOUNT volumes. However, this prevents the volume to have a different profile that the framework might want. Ideally, we should allow the framework to optionally specify a profile that it wants the volume to have during CREATE_DISK because it might have some expectations on the volume. The SLRP will validate with the corresponding CSI plugin using the ValidateVolumeCapabilities RPC call to see if the profile is applicable to the volume.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0 +"MESOS-9278","09/28/2018 23:27:48",3,"Add an operation status update manager to the agent ""Review here: https://reviews.apache.org/r/69505/""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9281","10/01/2018 23:05:41",5,"SLRP gets a stale checkpoint after system crash. ""SLRP checkpoints a pending operations before issuing the corresponding CSI call through {{slave::state::checkpoint}}, which writes a new checkpoint to a temporary file then do a {{rename}}. However, because we don't do any {{fsync}}, {{rename}} is not atomic w.r.t. system crash. As a result, if the operation is processed during a system crash, it is possible that the CSI call has been executed, but the SLRP gets back a stale checkpoint after reboot and totally doesn't know about the operation. To address this problem, we need to ensure the followings before issuing the CSI call: 1. The temp file is synced to the disk. 2. The rename is committed to the disk. A possible solution is to do an {{fsync}} after writing the temp file, and do another {{fsync}} on the checkpoint dir after the {{rename}}.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9283","10/02/2018 06:51:58",3,"Docker containerizer actor can get backlogged with large number of containers. ""We observed during some scale testing that we do internally. When launching 300+ Docker containers on a single agent box, it's possible that the Docker containerizer actor gets backlogged. As a result, API processing like `GET_CONTAINERS` will become unresponsive. It'll also block Mesos containerizer from launching containers if one specified `--containers=docker,mesos` because Docker containerizer launch will be invoked first by the composing containerizer (and queued). Profiling results show that the bottleneck is `os::killtree`, which will be invoked when the Docker commands are discarded (e.g., client disconnect, etc.). For this particular case, killtree is not really necessary because the docker command does not fork additional subprocesses. If we use the argv version of `subprocess` to launch docker commands, we can simply use os::kill instead. 
We confirmed that, by switching to os::kill, the performance issues goes away, and the agent can easily scale up to 300+ containers.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9292","10/04/2018 16:23:01",1,"Rejected quotas request error messages should specify which resources were overcommitted. ""If we reject a quota request due to not having enough available resources, we fail with the following error: but we don't print *which* resource was not available. This can be confusing to operators when the quota was attempted to be set for multiple resources at once."""," Not enough available cluster capacity to reasonably satisfy quota request; the force flag can be used to override this check ",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9295","10/05/2018 01:35:49",5,"Nested container launch could fail if the agent upgrade with new cgroup subsystems. ""Nested container launch could fail if the agent upgrade with new cgroup subsystems, because the new cgroup subsystems do not exist on parent container's cgroup hierarchy.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9302","10/09/2018 13:57:50",2,"Mesos fails to build on Fedora 28 ""Trying to compile a fresh Mesos checkout on a Fedora 28 system with the following configuration flags: and the following compiler fails the build due to two warnings (even though --disable-werror was passed): """," ../configure --enable-debug --enable-optimize --disable-java --disable-python --disable-libtool-wrappers --enable-ssl --enable-libevent --disable-werror [bevers@core1.hw.ca1 build]$ gcc --version gcc (GCC) 8.1.1 20180712 (Red Hat 8.1.1-5) Copyright (C) 2018 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. make[4]: Entering directory '/home/bevers/mesos/build/3rdparty/grpc-1.10.0' [C] Compiling third_party/cares/cares/ares_init.c third_party/cares/cares/ares_init.c: In function ‘ares_dup’: third_party/cares/cares/ares_init.c:301:17: error: argument to ‘sizeof’ in ‘strncpy’ call is the same expression as the source; did you mean to use the size of the destination? [-Werror=sizeof-pointer-memaccess] sizeof(src->local_dev_name)); ^ third_party/cares/cares/ares_init.c: At top level: cc1: error: unrecognized command line option ‘-Wno-invalid-source-encoding’ [-Werror] cc1: all warnings being treated as errors make[4]: *** [Makefile:2635: /home/bevers/mesos/build/3rdparty/grpc-1.10.0/objs/opt/third_party/cares/cares/ares_init.o] Error 1 ",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9305","10/10/2018 03:55:12",3,"Create cgoup recursively to workaround systemd deleting cgroups_root. ""This is my case: My cgroups_root of mesos-slave is some_user/mesos under /sys/fs/cgroup。 It happens that this some_user dir may be gone for some unknown reason, in which case I can no longer create any cgroup and any task will fail. So I would like to change    to in CgroupsIsolatorProcess::prepare in src/slave/containerizer/mesos/isolators/cgroups/cgroups.cpp. However, I'm not sure if there's any potential problem doing so. Any advice?  
"""," Try create = cgroups::create( hierarchy, infos[containerId]->cgroup); Try create = cgroups::create( hierarchy, infos[containerId]->cgroup, true); ",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9306","10/10/2018 21:19:17",5,"Mesos containerizer can get stuck during cgroup cleanup ""I observed a task group's executor container which failed to be completely destroyed after its associated tasks were killed. The following is an excerpt from the agent log which is filtered to include only lines with the container ID, {{d463b9fe-970d-4077-bab9-558464889a9e}}: The last log line from the containerizer's destroy path is: (that is the second such log line, from {{LinuxLauncherProcess::_destroy}}) Then we just see repeatedly, which occurs because the agent's {{GET_CONTAINERS}} call is being polled once per minute. This seems to indicate that the container in question is still in the agent's {{containers_}} map. So, it seems that the containerizer is stuck either in the Linux launcher's {{destroy()}} code path, or the containerizer's {{destroy()}} code path."""," 2018-10-10 14:20:50: I1010 14:20:50.204756 6799 containerizer.cpp:2963] Container d463b9fe-970d-4077-bab9-558464889a9e has exited 2018-10-10 14:20:50: I1010 14:20:50.204839 6799 containerizer.cpp:2457] Destroying container d463b9fe-970d-4077-bab9-558464889a9e in RUNNING state 2018-10-10 14:20:50: I1010 14:20:50.204859 6799 containerizer.cpp:3124] Transitioning the state of container d463b9fe-970d-4077-bab9-558464889a9e from RUNNING to DESTROYING 2018-10-10 14:20:50: I1010 14:20:50.204960 6799 linux_launcher.cpp:580] Asked to destroy container d463b9fe-970d-4077-bab9-558464889a9e 2018-10-10 14:20:50: I1010 14:20:50.204993 6799 linux_launcher.cpp:622] Destroying cgroup '/sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e' 2018-10-10 14:20:50: I1010 14:20:50.205417 6806 cgroups.cpp:2838] Freezing cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e/mesos 2018-10-10 14:20:50: I1010 14:20:50.205477 6810 cgroups.cpp:2838] Freezing cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e 2018-10-10 14:20:50: I1010 14:20:50.205708 6808 cgroups.cpp:1229] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e/mesos after 203008ns 2018-10-10 14:20:50: I1010 14:20:50.205878 6800 cgroups.cpp:1229] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e after 339200ns 2018-10-10 14:20:50: I1010 14:20:50.206185 6799 cgroups.cpp:2856] Thawing cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e/mesos 2018-10-10 14:20:50: I1010 14:20:50.206226 6808 cgroups.cpp:2856] Thawing cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e 2018-10-10 14:20:50: I1010 14:20:50.206455 6808 cgroups.cpp:1258] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e after 83968ns 2018-10-10 14:20:50: I1010 14:20:50.306803 6810 cgroups.cpp:1258] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/d463b9fe-970d-4077-bab9-558464889a9e/mesos after 100.50816ms 2018-10-10 14:20:50: I1010 14:20:50.307531 6805 linux_launcher.cpp:654] Destroying cgroup '/sys/fs/cgroup/systemd/mesos/d463b9fe-970d-4077-bab9-558464889a9e' 2018-10-10 14:21:40: W1010 14:21:40.032855 6809 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist 2018-10-10 14:22:40: W1010 
14:22:40.031224 6800 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist 2018-10-10 14:23:40: W1010 14:23:40.031946 6799 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist 2018-10-10 14:24:40: W1010 14:24:40.032979 6804 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist 2018-10-10 14:25:40: W1010 14:25:40.030784 6808 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist 2018-10-10 14:26:40: W1010 14:26:40.032526 6810 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist 2018-10-10 14:27:40: W1010 14:27:40.029932 6801 containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist 14:20:50.307531 6805 linux_launcher.cpp:654] Destroying cgroup '/sys/fs/cgroup/systemd/mesos/d463b9fe-970d-4077-bab9-558464889a9e' containerizer.cpp:2401] Skipping status for container d463b9fe-970d-4077-bab9-558464889a9e because: Container does not exist ",0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9308","10/11/2018 06:15:41",1,"URI disk profile adaptor could deadlock. ""The loop here can be infinit: https://github.com/apache/mesos/blob/1.7.0/src/resource_provider/storage/uri_disk_profile_adaptor.cpp#L61-L80 ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9314","10/13/2018 00:39:21",5,"Consider introducing a ScalarResourceQuantity protobuf message. ""As part of introducing quota limits, we're adding a new master::Call for updating quota. This call can take a simplified message that expresses scalar resource quantities: This greatly simplified the validation code, as well as the UX of the API when it comes to knowing what kind of data to provide. Ideally, the new quota paths can use this message in lieu of Resource objects, but we'll have to explore backwards compatibility (e.g. registry data)."""," message ScalarResourceQuantity { required string name; required Value::Scalar quantity; } ",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9317","10/15/2018 14:03:47",5,"Some master endpoints do not handle failed authorization properly. ""When we authorize _some_ actions (right now I see this happening to create / destroy volumes, reserve / unreserve resources) *and* {{authorizer}} fails (i.e. returns the future in non-ready state), an assertion is triggered: This is due to incorrect assumption in our code, see for example [https://github.com/apache/mesos/blob/a063afce9868dcee38a0ab7efaa028244f3999cf/src/master/master.cpp#L3752-L3763]: Futures returned from {{await}} are guaranteed to be in terminal state, but not necessarily ready! In the snippet above, {{!authorization.get()}} is invoked without being checked ⇒ assertion fails. 
Full stack trace: """," mesos-master[49173]: F1015 11:40:29.795748 49396 future.hpp:1306] Check failed: !isFailed() Future::get() but state == FAILED: Failed to retrieve permissions from IAM at url https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions the request failed: Failed to contact bouncer at https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions due to time out after 3 attempts return await(authorizations) .then([](const vector>& authorizations) -> Future { // Compute a disjunction. foreach (const Future& authorization, authorizations) { if (!authorization.get()) { return false; } } return true; }); Oct 15 11:40:39 int-master2-mwst9.scaletesting.mesosphe.re mesos-master[49173]: F1015 11:40:29.795748 49396 future.hpp:1306] Check failed: !isFailed() Future::get() but state == FAILED: Failed to retrieve permissions from IAM at url https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions the request failed: Failed to contact bouncer at https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions due to time out after 3 attemptsF1015 11:40:29.796037 49395 future.hpp:1306] Check failed: !isFailed() Future::get() but state == FAILED: Failed to retrieve permissions from IAM at url https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions the request failed: Failed to contact bouncer at https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions due to time out after 3 attemptsF1015 11:40:29.796097 49384 future.hpp:1306] Check failed: !isFailed() Future::get() but state == FAILED: Failed to retrieve permissions from IAM at url https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions the request failed: Failed to contact bouncer at https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions due to time out after 3 attemptsF1015 11:40:29.796249 49393 future.hpp:1306] Check failed: !isFailed() Future::get() but state == FAILED: Failed to retrieve permissions from IAM at url https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions the request failed: Failed to contact bouncer at https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions due to time out after 3 attemptsF1015 11:40:29.796375 49390 future.hpp:1306] Check failed: !isFailed() Future::get() but state == FAILED: Failed to retrieve permissions from IAM at url https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions the request failed: Failed to contact bouncer at https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions due to time out after 3 attemptsF1015 11:40:29.796483 49388 future.hpp:1306] Check failed: !isFailed() Future::get() but state == FAILED: Failed to retrieve permissions from IAM at url https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions the request failed: Failed to contact bouncer at https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions due to time out after 3 attemptsF1015 11:40:29.796629 49381 future.hpp:1306] Check failed: !isFailed() Future::get() but state == FAILED: Failed to retrieve permissions from IAM at url https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions the request failed: Failed to contact bouncer at https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions due to time out after 3 attemptsF1015 11:40:29.796700 49385 future.hpp:1306] Check failed: !isFailed() Future::get() but state == FAILED: Failed to retrieve permissions from IAM at url https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions the request failed: 
Failed to contact bouncer at https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions due to time out after 3 attempts
F1015 11:40:29.796780 49386 future.hpp:1306] Check failed: !isFailed() Future::get() but state == FAILED: Failed to retrieve permissions from IAM at url https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions the request failed: Failed to contact bouncer at https://localhost:443/acs/api/v1/users/marathon_user_ee/permissions due to time out after 3 attempts
(the same check failure was raised in parallel by threads 49387, 49389, 49391, 49392 and 49394)
Oct 15 11:40:39 int-master2-mwst9.scaletesting.mesosphe.re mesos-master[49173]: *** Check failure stack trace: *** (traces from the concurrently failing threads are interleaved in the log; the distinct frames are)
@ 0x7fea1e7199fd google::LogMessage::Fail()
@ 0x7fea1e71b82d google::LogMessage::SendToLog()
@ 0x7fea1e7195ec google::LogMessage::Flush()
@ 0x7fea1e71c129 google::LogMessageFatal::~LogMessageFatal()
@ 0x7fea1d4f97e3 process::Future<>::get()
@ 0x7fea1d7071b1 _ZNO6lambda12CallableOnceIFN7process6FutureIbEERKSt6vectorIS3_SaIS3_EEEE10CallableFnIZN5mesos8internal6master6Master21authorizeCreateVolumeERKNSC_22Offer_Operation_CreateERK6OptionINS1_4http14authentication9PrincipalEEEUlS8_E_EclES8_
@ 0x7fea1d7070b1 _ZNO6lambda12CallableOnceIFN7process6FutureIbEERKSt6vectorIS3_SaIS3_EEEE10CallableFnIZN5mesos8internal6master6Master25authorizeReserveResourcesERKNSC_9ResourcesERK6OptionINS1_4http14authentication9PrincipalEEEUlS8_E_EclES8_
@ 0x7fea1d790c91 process::internal::thenf<>()
@ 0x7fea1d765405 _ZNO6lambda12CallableOnceIFvRKN7process6FutureISt6vectorINS2_IbEESaIS4_EEEEEE10CallableFnINS_8internal7PartialIPFvONS0_IFS4_RKS6_EEESt10unique_ptrINS1_7PromiseIbEESt14default_deleteISM_EES9_EJSI_SP_St12_PlaceholderILi1EEEEEEclES9_
@ 0x7fea1d788169 _ZN7process8internal3runIN6lambda12CallableOnceIFvRKNS_6FutureISt6vectorINS4_IbEESaIS6_EEEEEEEJRS9_EEEvOS5_IT_SaISF_EEDpOT0_
@ 0x7fea1d798154 process::Future<>::_set<>()
@ 0x7fea1d798355 process::internal::AwaitProcess<>::waited()
@ 0x7fea1e669c51 process::ProcessBase::consume()
@ 0x7fea1e6806cc process::ProcessManager::resume()
@ 0x7fea1e686186 _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv
@ 0x7fea1b353070 (unknown)
@ 0x7fea1ab71e25 start_thread
@ 0x7fea1a89bbad __clone
",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9320","10/16/2018 02:10:29",5,"UCR container launch stuck at PROVISIONING during image fetching. ""We observed mesos containerizer stuck at PROVISIONING when launching a mesos container using docker image: `kvish/jenkins-dev:595c74f713f609fd1d3b05a40d35113fc03227c9`: The image pulling never finishes. Insufficient image contents are still in image store staging directory /var/lib/mesos/slave/store/docker/staging/egLYqO, forever. It is not clear yet why the SHA pulling does not finish, so we use the same image on another empty machine with UCR. The other machine has the container RUNNING correctly, and has the following staging directory before moving to the layers dir: By comparing two cases, we can see one layer `8aabf8f13bdf0feed398c7c8b0ac24db59d60d0d06f9dc6cb1400de4df898324` is missing on the problematic agent node, and it is the last layer to fetch. Here is the manifest as a reference: This should not be related: when we try to find the extracted layers on the layers dir, we could only find two: These two are base layers that were downloaded earlier from other images. We still need to figure out why there is one layer fetch not finished. (no curl process and tar process running stuck at background)"""," OK-22:50:06-root@int-agent89-mwst9:/var/lib/mesos/slave/store/docker/staging/egLYqO # ls -alh total 1.1G drwx------. 2 root root 4.0K Oct 15 13:02 . drwxr-xr-x. 3 root root 20 Oct 15 22:40 .. -rw-r--r--. 1 root root 59K Oct 15 13:02 manifest -rw-r--r--. 1 root root 2.6K Oct 15 13:02 sha256:08239cb71d7a3e0d8ed680397590b338a2133117250e1a3e2ee5c5c45292db63 -rw-r--r--. 1 root root 440 Oct 15 13:02 sha256:0984904c0e1558248eb25e93d9fc14c47c0052d58569e64c185afca93a060b66 -rw-r--r--. 1 root root 248 Oct 15 13:02 sha256:0bbc7b377a9155696eb0b684bd1999bc43937918552d73fd9697ea50ef46528a -rw-r--r--. 1 root root 240 Oct 15 13:02 sha256:0c5c0c095e351b976943453c80271f3b75b1208dbad3ca7845332e873361f3bb -rw-r--r--. 1 root root 562 Oct 15 13:02 sha256:1558b7c35c9e25577ee719529d6fcdddebea68f5bdf8cbdf13d8d75a02f8a5b1 -rw-r--r--. 1 root root 11M Oct 15 13:02 sha256:1ab373b3deaed929a15574ac1912afc6e173f80d400aba0e96c89f6a58961f2d -rw-r--r--. 1 root root 130 Oct 15 13:02 sha256:1b6c70b3786f72e5255ccd51e27840d1c853a17561b5e94a4359b17d27494d50 -rw-r--r--. 1 root root 176 Oct 15 13:02 sha256:1bf4aab5c3b363b4fdfc46026df9ae854db8858a5cbcccdd4409434817d59312 -rw-r--r--. 1 root root 380 Oct 15 13:02 sha256:213b0c5bb5300df1d2d06df6213ae94448419cf18ecf61358e978a5d25651d5a -rw-r--r--. 1 root root 71M Oct 15 13:02 sha256:31aaab384e3fa66b73eced4870fc96be590a2376e93fd4f8db5d00f94fb11604 -rw-r--r--. 1 root root 1.4K Oct 15 13:02 sha256:32442b7d159ed2b7f00b00a989ca1d3ee1a3f566df5d5acbd25f0c3dfdad69d1 -rw-r--r--. 1 root root 653K Oct 15 13:02 sha256:340cd692075b636b5e1803fcde9b1a56a2f6e2728e4fb10f7295d39c7d0e0d01 -rw-r--r--. 1 root root 184 Oct 15 13:02 sha256:398819b00c6cbf9cce6c1ed25005c9e1242cace7a6436730e17da052000c7f90 -rw-r--r--. 1 root root 366K Oct 15 13:02 sha256:41d78c0cb1b2a47189068e55f61d6266be14c4fa75935cb021f17668dd8e7f94 -rw-r--r--. 1 root root 23K Oct 15 13:02 sha256:4f5852c22c7ce0155494b6e86a0a4c536c3c95cb87cad84806aa2d56184b95d2 -rw-r--r--. 1 root root 384M Oct 15 13:02 sha256:4fe621515c4d23e33d9850a6cdfc3aa686d790704b9c5569f1726b4469aa30c0 -rw-r--r--. 1 root root 1.5K Oct 15 13:02 sha256:50dcd1d0618b1d42bf6633dc8176e164571081494fa6483ec4489a59637518bc -rw-r--r--. 
1 root root 48M Oct 15 13:02 sha256:57c8de432dbe337bb6cb1ad328e6c564303a3d3fd05b5e872fd9c47c16fdd02c -rw-r--r--. 1 root root 30M Oct 15 13:02 sha256:63a0f0b6b5d7014b647ac4a164808208229d2e3219f45a39914f0561a4f831bf -rw-r--r--. 1 root root 306M Oct 15 13:02 sha256:67f41ed73c082c6ffee553a90b0abd56bc74b260d90b9d594d652b66cbcd5e7f -rw-r--r--. 1 root root 435 Oct 15 13:02 sha256:6cb303e084ed78386ae87cdaf95e8817d48e94b3ce7c0442a28335600f0efa3d -rw-r--r--. 1 root root 5.5K Oct 15 13:02 sha256:7d4d905c2060a5ec994ec201e6877714ee73030ef4261f9562abdb0f844174d5 -rw-r--r--. 1 root root 39M Oct 15 13:02 sha256:80d923f4b955c2db89e2e8a9f2dcb0c36a29c1520a5b359578ce2f3d0b849d10 -rw-r--r--. 1 root root 615 Oct 15 13:02 sha256:842cc8bd099d94f6f9c082785bbaa35439af965d1cf6a13300830561427c266b -rw-r--r--. 1 root root 712 Oct 15 13:02 sha256:977c8e6687e0ca5f0682915102c025dc12d7ff71bf70de17aab3502adda25af2 -rw-r--r--. 1 root root 12K Oct 15 13:02 sha256:989ac24c53a1f7951438aa92ac39bc9053c178336bea4ebe6ab733d4975c9728 -rw-r--r--. 1 root root 861 Oct 15 13:02 sha256:a18e3c45bf91ac3bd11a46b489fb647a721417f60eae66c5f605360ccd8d6352 -rw-r--r--. 1 root root 32 Oct 15 13:02 sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 -rw-r--r--. 1 root root 266K Oct 15 13:02 sha256:b1d3e8de8ec6d87b8485a8a3b66d63125a033cfb0711f8af24b4f600f524e276 -rw-r--r--. 1 root root 1.6K Oct 15 13:02 sha256:b3a122ff7868d2ed9c063df73b0bf67fd77348d3baa2a92368b3479b41f8aa74 -rw-r--r--. 1 root root 4.2M Oct 15 13:02 sha256:b542772b417703c0311c0b90136091369bcd9c2176c0e3ceed5a0114d743ee3c -rw-r--r--. 1 root root 1.1K Oct 15 13:02 sha256:b6e3599b777bb2dd681fd84f174a7e0ce3cb01f5a84dcd3c771d0e999a39bc58 -rw-r--r--. 1 root root 2.8K Oct 15 13:02 sha256:b970c9afc934d5e6bb524a6057342a1d1cc835972f047a805f436c540ee20747 -rw-r--r--. 1 root root 6.3M Oct 15 13:02 sha256:b984f623b82721cc642c25cd4797f6c3d2c01b6b063c49905a97bb0a7f0725a5 -rw-r--r--. 1 root root 1.8K Oct 15 13:02 sha256:c0cc702ea6bfc6490ccb2edd9d9c8070964ae2023129d14f90729d0b365f6215 -rw-r--r--. 1 root root 4.1K Oct 15 13:02 sha256:cbedea0328015d1baf1efdd06b2417283f6314c0ef671bc0246ada3221ca21ac -rw-r--r--. 1 root root 355 Oct 15 13:02 sha256:cc7516477cdbfa791d6fd66c9c19b46036cc294f884d8ebbbd0a7fc878433c87 -rw-r--r--. 1 root root 165M Oct 15 13:02 sha256:d9bbcf733166f991331a80e1cd55a91111c4ba96fc7ce1ecabd05b450b7da7a3 -rw-r--r--. 1 root root 872K Oct 15 13:02 sha256:da44f64ae9991a9e8cb7c2af4dfd63608bd4026552b2b6a7f523dcfac960e1ac -rw-r--r--. 1 root root 431 Oct 15 13:02 sha256:edb369c8c5d7b67e773eee549901a38b80dfa1246597815ae6eb21d1beceec1a -rw-r--r--. 1 root root 19M Oct 15 13:02 sha256:f4457f4b3bfe0282e803dd9172421048b80168d9c1433da969555fa571a4a1d6 -rw-r--r--. 1 root root 198 Oct 15 13:02 sha256:f67b87ed7ea47d30c673e289d4c2fd28f5e8e3059840152932e8e813183462ec -rw-r--r--. 1 root root 550K Oct 15 13:02 sha256:f96e7fcceb6e12a816cb49d01574e29767d9cc2b6f92436f314a59570abae320 -rw-r--r--. 1 root root 676 Oct 15 13:02 sha256:fec44d138823b8076f9a49148f93a7c3d6b0e79ca34b489d60b194d7b1c2c2fa -rw-r--r--. 1 root root 2.6K Oct 15 18:03 sha256:08239cb71d7a3e0d8ed680397590b338a2133117250e1a3e2ee5c5c45292db63 -rw-r--r--. 1 root root 440 Oct 15 18:03 sha256:0984904c0e1558248eb25e93d9fc14c47c0052d58569e64c185afca93a060b66 -rw-r--r--. 1 root root 248 Oct 15 18:03 sha256:0bbc7b377a9155696eb0b684bd1999bc43937918552d73fd9697ea50ef46528a -rw-r--r--. 1 root root 240 Oct 15 18:03 sha256:0c5c0c095e351b976943453c80271f3b75b1208dbad3ca7845332e873361f3bb -rw-r--r--. 
1 root root 562 Oct 15 18:03 sha256:1558b7c35c9e25577ee719529d6fcdddebea68f5bdf8cbdf13d8d75a02f8a5b1 -rw-r--r--. 1 root root 11M Oct 15 18:03 sha256:1ab373b3deaed929a15574ac1912afc6e173f80d400aba0e96c89f6a58961f2d -rw-r--r--. 1 root root 130 Oct 15 18:03 sha256:1b6c70b3786f72e5255ccd51e27840d1c853a17561b5e94a4359b17d27494d50 -rw-r--r--. 1 root root 176 Oct 15 18:03 sha256:1bf4aab5c3b363b4fdfc46026df9ae854db8858a5cbcccdd4409434817d59312 -rw-r--r--. 1 root root 380 Oct 15 18:03 sha256:213b0c5bb5300df1d2d06df6213ae94448419cf18ecf61358e978a5d25651d5a -rw-r--r--. 1 root root 71M Oct 15 18:03 sha256:31aaab384e3fa66b73eced4870fc96be590a2376e93fd4f8db5d00f94fb11604 -rw-r--r--. 1 root root 1.4K Oct 15 18:03 sha256:32442b7d159ed2b7f00b00a989ca1d3ee1a3f566df5d5acbd25f0c3dfdad69d1 -rw-r--r--. 1 root root 653K Oct 15 18:03 sha256:340cd692075b636b5e1803fcde9b1a56a2f6e2728e4fb10f7295d39c7d0e0d01 -rw-r--r--. 1 root root 184 Oct 15 18:03 sha256:398819b00c6cbf9cce6c1ed25005c9e1242cace7a6436730e17da052000c7f90 -rw-r--r--. 1 root root 366K Oct 15 18:03 sha256:41d78c0cb1b2a47189068e55f61d6266be14c4fa75935cb021f17668dd8e7f94 -rw-r--r--. 1 root root 23K Oct 15 18:03 sha256:4f5852c22c7ce0155494b6e86a0a4c536c3c95cb87cad84806aa2d56184b95d2 -rw-r--r--. 1 root root 122M Oct 15 18:03 sha256:4fe621515c4d23e33d9850a6cdfc3aa686d790704b9c5569f1726b4469aa30c0 -rw-r--r--. 1 root root 1.5K Oct 15 18:03 sha256:50dcd1d0618b1d42bf6633dc8176e164571081494fa6483ec4489a59637518bc -rw-r--r--. 1 root root 48M Oct 15 18:03 sha256:57c8de432dbe337bb6cb1ad328e6c564303a3d3fd05b5e872fd9c47c16fdd02c -rw-r--r--. 1 root root 30M Oct 15 18:03 sha256:63a0f0b6b5d7014b647ac4a164808208229d2e3219f45a39914f0561a4f831bf -rw-r--r--. 1 root root 92M Oct 15 18:03 sha256:67f41ed73c082c6ffee553a90b0abd56bc74b260d90b9d594d652b66cbcd5e7f -rw-r--r--. 1 root root 435 Oct 15 18:03 sha256:6cb303e084ed78386ae87cdaf95e8817d48e94b3ce7c0442a28335600f0efa3d -rw-r--r--. 1 root root 5.5K Oct 15 18:03 sha256:7d4d905c2060a5ec994ec201e6877714ee73030ef4261f9562abdb0f844174d5 -rw-r--r--. 1 root root 39M Oct 15 18:03 sha256:80d923f4b955c2db89e2e8a9f2dcb0c36a29c1520a5b359578ce2f3d0b849d10 -rw-r--r--. 1 root root 615 Oct 15 18:03 sha256:842cc8bd099d94f6f9c082785bbaa35439af965d1cf6a13300830561427c266b -rw-r--r--. 1 root root 712 Oct 15 18:03 sha256:977c8e6687e0ca5f0682915102c025dc12d7ff71bf70de17aab3502adda25af2 -rw-r--r--. 1 root root 12K Oct 15 18:03 sha256:989ac24c53a1f7951438aa92ac39bc9053c178336bea4ebe6ab733d4975c9728 -rw-r--r--. 1 root root 861 Oct 15 18:03 sha256:a18e3c45bf91ac3bd11a46b489fb647a721417f60eae66c5f605360ccd8d6352 -rw-r--r--. 1 root root 32 Oct 15 18:03 sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 -rw-r--r--. 1 root root 266K Oct 15 18:03 sha256:b1d3e8de8ec6d87b8485a8a3b66d63125a033cfb0711f8af24b4f600f524e276 -rw-r--r--. 1 root root 1.6K Oct 15 18:03 sha256:b3a122ff7868d2ed9c063df73b0bf67fd77348d3baa2a92368b3479b41f8aa74 -rw-r--r--. 1 root root 4.2M Oct 15 18:03 sha256:b542772b417703c0311c0b90136091369bcd9c2176c0e3ceed5a0114d743ee3c -rw-r--r--. 1 root root 1.1K Oct 15 18:03 sha256:b6e3599b777bb2dd681fd84f174a7e0ce3cb01f5a84dcd3c771d0e999a39bc58 -rw-r--r--. 1 root root 2.8K Oct 15 18:03 sha256:b970c9afc934d5e6bb524a6057342a1d1cc835972f047a805f436c540ee20747 -rw-r--r--. 1 root root 6.3M Oct 15 18:03 sha256:b984f623b82721cc642c25cd4797f6c3d2c01b6b063c49905a97bb0a7f0725a5 -rw-r--r--. 1 root root 1.8K Oct 15 18:03 sha256:c0cc702ea6bfc6490ccb2edd9d9c8070964ae2023129d14f90729d0b365f6215 -rw-r--r--. 
1 root root 44M Oct 15 18:03 sha256:c73ab1c6897bf5c11da3c95cab103e7ca8cf10a6d041eda2ff836f45a40e3d3b -rw-r--r--. 1 root root 4.1K Oct 15 18:03 sha256:cbedea0328015d1baf1efdd06b2417283f6314c0ef671bc0246ada3221ca21ac -rw-r--r--. 1 root root 355 Oct 15 18:03 sha256:cc7516477cdbfa791d6fd66c9c19b46036cc294f884d8ebbbd0a7fc878433c87 -rw-r--r--. 1 root root 82M Oct 15 18:03 sha256:d9bbcf733166f991331a80e1cd55a91111c4ba96fc7ce1ecabd05b450b7da7a3 -rw-r--r--. 1 root root 872K Oct 15 18:03 sha256:da44f64ae9991a9e8cb7c2af4dfd63608bd4026552b2b6a7f523dcfac960e1ac -rw-r--r--. 1 root root 431 Oct 15 18:03 sha256:edb369c8c5d7b67e773eee549901a38b80dfa1246597815ae6eb21d1beceec1a -rw-r--r--. 1 root root 19M Oct 15 18:03 sha256:f4457f4b3bfe0282e803dd9172421048b80168d9c1433da969555fa571a4a1d6 -rw-r--r--. 1 root root 198 Oct 15 18:03 sha256:f67b87ed7ea47d30c673e289d4c2fd28f5e8e3059840152932e8e813183462ec -rw-r--r--. 1 root root 550K Oct 15 18:03 sha256:f96e7fcceb6e12a816cb49d01574e29767d9cc2b6f92436f314a59570abae320 -rw-r--r--. 1 root root 676 Oct 15 18:03 sha256:fec44d138823b8076f9a49148f93a7c3d6b0e79ca34b489d60b194d7b1c2c2fa OK-17:42:20-root@int-agent89-mwst9:/var/lib/mesos/slave/store/docker/staging/egLYqO # cat manifest { """"schemaVersion"""": 1, """"name"""": """"kvish/jenkins-dev"""", """"tag"""": """"595c74f713f609fd1d3b05a40d35113fc03227c9"""", """"architecture"""": """"amd64"""", """"fsLayers"""": [ { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:0c5c0c095e351b976943453c80271f3b75b1208dbad3ca7845332e873361f3bb"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:4fe621515c4d23e33d9850a6cdfc3aa686d790704b9c5569f1726b4469aa30c0"""" }, { """"blobSum"""": """"sha256:c0cc702ea6bfc6490ccb2edd9d9c8070964ae2023129d14f90729d0b365f6215"""" }, { """"blobSum"""": """"sha256:4f5852c22c7ce0155494b6e86a0a4c536c3c95cb87cad84806aa2d56184b95d2"""" }, { """"blobSum"""": """"sha256:f96e7fcceb6e12a816cb49d01574e29767d9cc2b6f92436f314a59570abae320"""" }, { """"blobSum"""": """"sha256:b984f623b82721cc642c25cd4797f6c3d2c01b6b063c49905a97bb0a7f0725a5"""" }, { """"blobSum"""": """"sha256:67f41ed73c082c6ffee553a90b0abd56bc74b260d90b9d594d652b66cbcd5e7f"""" }, { """"blobSum"""": """"sha256:b6e3599b777bb2dd681fd84f174a7e0ce3cb01f5a84dcd3c771d0e999a39bc58"""" }, { """"blobSum"""": """"sha256:6cb303e084ed78386ae87cdaf95e8817d48e94b3ce7c0442a28335600f0efa3d"""" }, { """"blobSum"""": """"sha256:cc7516477cdbfa791d6fd66c9c19b46036cc294f884d8ebbbd0a7fc878433c87"""" }, { """"blobSum"""": """"sha256:32442b7d159ed2b7f00b00a989ca1d3ee1a3f566df5d5acbd25f0c3dfdad69d1"""" }, { """"blobSum"""": """"sha256:fec44d138823b8076f9a49148f93a7c3d6b0e79ca34b489d60b194d7b1c2c2fa"""" }, { """"blobSum"""": """"sha256:1bf4aab5c3b363b4fdfc46026df9ae854db8858a5cbcccdd4409434817d59312"""" }, { """"blobSum"""": """"sha256:977c8e6687e0ca5f0682915102c025dc12d7ff71bf70de17aab3502adda25af2"""" }, { """"blobSum"""": """"sha256:1558b7c35c9e25577ee719529d6fcdddebea68f5bdf8cbdf13d8d75a02f8a5b1"""" }, { """"blobSum"""": """"sha256:842cc8bd099d94f6f9c082785bbaa35439af965d1cf6a13300830561427c266b"""" }, { """"blobSum"""": """"sha256:08239cb71d7a3e0d8ed680397590b338a2133117250e1a3e2ee5c5c45292db63"""" }, { """"blobSum"""": """"sha256:989ac24c53a1f7951438aa92ac39bc9053c178336bea4ebe6ab733d4975c9728"""" }, { """"blobSum"""": """"sha256:f67b87ed7ea47d30c673e289d4c2fd28f5e8e3059840152932e8e813183462ec"""" }, { 
""""blobSum"""": """"sha256:63a0f0b6b5d7014b647ac4a164808208229d2e3219f45a39914f0561a4f831bf"""" }, { """"blobSum"""": """"sha256:80d923f4b955c2db89e2e8a9f2dcb0c36a29c1520a5b359578ce2f3d0b849d10"""" }, { """"blobSum"""": """"sha256:f4457f4b3bfe0282e803dd9172421048b80168d9c1433da969555fa571a4a1d6"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:b970c9afc934d5e6bb524a6057342a1d1cc835972f047a805f436c540ee20747"""" }, { """"blobSum"""": """"sha256:b3a122ff7868d2ed9c063df73b0bf67fd77348d3baa2a92368b3479b41f8aa74"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:213b0c5bb5300df1d2d06df6213ae94448419cf18ecf61358e978a5d25651d5a"""" }, { """"blobSum"""": """"sha256:a18e3c45bf91ac3bd11a46b489fb647a721417f60eae66c5f605360ccd8d6352"""" }, { """"blobSum"""": """"sha256:50dcd1d0618b1d42bf6633dc8176e164571081494fa6483ec4489a59637518bc"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:0984904c0e1558248eb25e93d9fc14c47c0052d58569e64c185afca93a060b66"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:31aaab384e3fa66b73eced4870fc96be590a2376e93fd4f8db5d00f94fb11604"""" }, { """"blobSum"""": 
""""sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:edb369c8c5d7b67e773eee549901a38b80dfa1246597815ae6eb21d1beceec1a"""" }, { """"blobSum"""": """"sha256:41d78c0cb1b2a47189068e55f61d6266be14c4fa75935cb021f17668dd8e7f94"""" }, { """"blobSum"""": """"sha256:7d4d905c2060a5ec994ec201e6877714ee73030ef4261f9562abdb0f844174d5"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:398819b00c6cbf9cce6c1ed25005c9e1242cace7a6436730e17da052000c7f90"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:cbedea0328015d1baf1efdd06b2417283f6314c0ef671bc0246ada3221ca21ac"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:340cd692075b636b5e1803fcde9b1a56a2f6e2728e4fb10f7295d39c7d0e0d01"""" }, { """"blobSum"""": """"sha256:b1d3e8de8ec6d87b8485a8a3b66d63125a033cfb0711f8af24b4f600f524e276"""" }, { """"blobSum"""": """"sha256:d9bbcf733166f991331a80e1cd55a91111c4ba96fc7ce1ecabd05b450b7da7a3"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:1b6c70b3786f72e5255ccd51e27840d1c853a17561b5e94a4359b17d27494d50"""" }, { """"blobSum"""": """"sha256:0bbc7b377a9155696eb0b684bd1999bc43937918552d73fd9697ea50ef46528a"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:da44f64ae9991a9e8cb7c2af4dfd63608bd4026552b2b6a7f523dcfac960e1ac"""" }, { """"blobSum"""": """"sha256:57c8de432dbe337bb6cb1ad328e6c564303a3d3fd05b5e872fd9c47c16fdd02c"""" }, { """"blobSum"""": """"sha256:b542772b417703c0311c0b90136091369bcd9c2176c0e3ceed5a0114d743ee3c"""" }, { """"blobSum"""": """"sha256:1ab373b3deaed929a15574ac1912afc6e173f80d400aba0e96c89f6a58961f2d"""" }, { """"blobSum"""": """"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"""" }, { """"blobSum"""": """"sha256:c73ab1c6897bf5c11da3c95cab103e7ca8cf10a6d041eda2ff836f45a40e3d3b"""" } ], 
""""history"""": [ { """"v1Compatibility"""": """"{\""""architecture\"""":\""""amd64\"""",\""""config\"""":{\""""Hostname\"""":\""""\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""nobody\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""ExposedPorts\"""":{\""""50000/tcp\"""":{},\""""8080/tcp\"""":{}},\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"""",\""""LANG=C.UTF-8\"""",\""""JAVA_HOME=/docker-java-home\"""",\""""JAVA_VERSION=8u162\"""",\""""JAVA_DEBIAN_VERSION=8u162-b12-1~deb9u1\"""",\""""CA_CERTIFICATES_JAVA_VERSION=20170531+nmu1\"""",\""""JENKINS_HOME=/var/jenkinsdcos_home\"""",\""""JENKINS_SLAVE_AGENT_PORT=50000\"""",\""""JENKINS_VERSION=2.107.2\"""",\""""JENKINS_UC=https://updates.jenkins.io\"""",\""""JENKINS_UC_EXPERIMENTAL=https://updates.jenkins.io/experimental\"""",\""""COPY_REFERENCE_FILE_LOG=/var/jenkinsdcos_home/copy_reference_file.log\"""",\""""JENKINS_FOLDER=/usr/share/jenkins\"""",\""""JENKINS_CSP_OPTS=sandbox; default-src 'none'; img-src 'self'; style-src 'self';\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""/usr/local/jenkins/bin/run.sh\""""],\""""ArgsEscaped\"""":true,\""""Image\"""":\""""sha256:c5e3baf9fe6fc4f564fdad4c9c4705587fad40ae28d1999a608b3547625ffefe\"""",\""""Volumes\"""":{\""""/var/jenkins_home\"""":{}},\""""WorkingDir\"""":\""""/tmp\"""",\""""Entrypoint\"""":[\""""/sbin/tini\"""",\""""--\"""",\""""/usr/local/bin/jenkins.sh\""""],\""""OnBuild\"""":[],\""""Labels\"""":null},\""""container\"""":\""""e4111508e68c304ec5b36009773b41384b96fd887b61177cd42935b9757567fd\"""",\""""container_config\"""":{\""""Hostname\"""":\""""e4111508e68c\"""",\""""Domainname\"""":\""""\"""",\""""User\"""":\""""nobody\"""",\""""AttachStdin\"""":false,\""""AttachStdout\"""":false,\""""AttachStderr\"""":false,\""""ExposedPorts\"""":{\""""50000/tcp\"""":{},\""""8080/tcp\"""":{}},\""""Tty\"""":false,\""""OpenStdin\"""":false,\""""StdinOnce\"""":false,\""""Env\"""":[\""""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"""",\""""LANG=C.UTF-8\"""",\""""JAVA_HOME=/docker-java-home\"""",\""""JAVA_VERSION=8u162\"""",\""""JAVA_DEBIAN_VERSION=8u162-b12-1~deb9u1\"""",\""""CA_CERTIFICATES_JAVA_VERSION=20170531+nmu1\"""",\""""JENKINS_HOME=/var/jenkinsdcos_home\"""",\""""JENKINS_SLAVE_AGENT_PORT=50000\"""",\""""JENKINS_VERSION=2.107.2\"""",\""""JENKINS_UC=https://updates.jenkins.io\"""",\""""JENKINS_UC_EXPERIMENTAL=https://updates.jenkins.io/experimental\"""",\""""COPY_REFERENCE_FILE_LOG=/var/jenkinsdcos_home/copy_reference_file.log\"""",\""""JENKINS_FOLDER=/usr/share/jenkins\"""",\""""JENKINS_CSP_OPTS=sandbox; default-src 'none'; img-src 'self'; style-src 'self';\""""],\""""Cmd\"""":[\""""/bin/sh\"""",\""""-c\"""",\""""#(nop) \"""",\""""CMD [\\\""""/bin/sh\\\"""" \\\""""-c\\\"""" 
\\\""""/usr/local/jenkins/bin/run.sh\\\""""]\""""],\""""ArgsEscaped\"""":true,\""""Image\"""":\""""sha256:c5e3baf9fe6fc4f564fdad4c9c4705587fad40ae28d1999a608b3547625ffefe\"""",\""""Volumes\"""":{\""""/var/jenkins_home\"""":{}},\""""WorkingDir\"""":\""""/tmp\"""",\""""Entrypoint\"""":[\""""/sbin/tini\"""",\""""--\"""",\""""/usr/local/bin/jenkins.sh\""""],\""""OnBuild\"""":[],\""""Labels\"""":{}},\""""created\"""":\""""2018-09-26T17:33:57.6822239Z\"""",\""""docker_version\"""":\""""18.03.0-ce\"""",\""""id\"""":\""""fb401ed0b4f9de5534c224811d0dca94b876225c31ddc3cbb0993ad2faf32cff\"""",\""""os\"""":\""""linux\"""",\""""parent\"""":\""""bb54e3dc4a692004ece424733d0ff7bbfe930bdc65491776e2f409a461e838f1\"""",\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""bb54e3dc4a692004ece424733d0ff7bbfe930bdc65491776e2f409a461e838f1\"""",\""""parent\"""":\""""2ef4a3efec8e89b04691b207a4fe3fddd2e3d3a2a539f8e7dc8a645773758e1f\"""",\""""created\"""":\""""2018-09-26T17:33:57.3350528Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""|11 BLUEOCEAN_VERSION=1.5.0 JENKINS_DCOS_HOME=/var/jenkinsdcos_home JENKINS_STAGING=/usr/share/jenkins/ref/ LIBMESOS_DOWNLOAD_SHA256=bd4a785393f0477da7f012bf9624aa7dd65aa243c94d38ffe94adaa10de30274 LIBMESOS_DOWNLOAD_URL=https://downloads.mesosphere.io/libmesos-bundle/libmesos-bundle-1.11.0.tar.gz MESOS_PLUG_HASH=347c1ac133dc0cb6282a0dde820acd5b4eb21133 PROMETHEUS_PLUG_HASH=a347bf2c63efe59134c15b8ef83a4a1f627e3b5d STATSD_PLUG_HASH=929d4a6cb3d3ce5f1e03af73075b13687d4879c8 gid=99 uid=99 user=nobody /bin/sh -c echo 2.0 \\u003e /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""2ef4a3efec8e89b04691b207a4fe3fddd2e3d3a2a539f8e7dc8a645773758e1f\"""",\""""parent\"""":\""""36bfc452a3f13511e6f9c0345ffac82054286eea40b02263c83ea791d00a22ea\"""",\""""created\"""":\""""2018-09-26T17:33:56.0461597Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) USER nobody\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""36bfc452a3f13511e6f9c0345ffac82054286eea40b02263c83ea791d00a22ea\"""",\""""parent\"""":\""""ea550cbe252ca3ca06771b0e837d1f7cc61c50404f7f1920ed5bc6cc816d8a0a\"""",\""""created\"""":\""""2018-09-26T17:33:55.6692099Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""|11 BLUEOCEAN_VERSION=1.5.0 JENKINS_DCOS_HOME=/var/jenkinsdcos_home JENKINS_STAGING=/usr/share/jenkins/ref/ LIBMESOS_DOWNLOAD_SHA256=bd4a785393f0477da7f012bf9624aa7dd65aa243c94d38ffe94adaa10de30274 LIBMESOS_DOWNLOAD_URL=https://downloads.mesosphere.io/libmesos-bundle/libmesos-bundle-1.11.0.tar.gz MESOS_PLUG_HASH=347c1ac133dc0cb6282a0dde820acd5b4eb21133 PROMETHEUS_PLUG_HASH=a347bf2c63efe59134c15b8ef83a4a1f627e3b5d STATSD_PLUG_HASH=929d4a6cb3d3ce5f1e03af73075b13687d4879c8 gid=99 uid=99 user=nobody /bin/sh -c chmod -R ugo+rw \\\""""$JENKINS_HOME\\\"""" \\\""""${JENKINS_FOLDER}\\\"""" \\u0026\\u0026 chmod -R ugo+r \\\""""${JENKINS_STAGING}\\\"""" \\u0026\\u0026 chmod -R ugo+rx /usr/local/jenkins/bin/ \\u0026\\u0026 chmod -R ugo+rw /var/jenkins_home/ \\u0026\\u0026 chmod -R ugo+rw /var/lib/nginx/ /var/nginx/ /var/log/nginx \\u0026\\u0026 chmod ugo+rx /usr/local/jenkins/bin/*\""""]}}"""" }, { """"v1Compatibility"""": 
""""{\""""id\"""":\""""ea550cbe252ca3ca06771b0e837d1f7cc61c50404f7f1920ed5bc6cc816d8a0a\"""",\""""parent\"""":\""""c4c140687ce95f2d23202b7efaa543ef7d064b226864fb4a0ae68bef283e074f\"""",\""""created\"""":\""""2018-09-26T17:33:49.7534514Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""|11 BLUEOCEAN_VERSION=1.5.0 JENKINS_DCOS_HOME=/var/jenkinsdcos_home JENKINS_STAGING=/usr/share/jenkins/ref/ LIBMESOS_DOWNLOAD_SHA256=bd4a785393f0477da7f012bf9624aa7dd65aa243c94d38ffe94adaa10de30274 LIBMESOS_DOWNLOAD_URL=https://downloads.mesosphere.io/libmesos-bundle/libmesos-bundle-1.11.0.tar.gz MESOS_PLUG_HASH=347c1ac133dc0cb6282a0dde820acd5b4eb21133 PROMETHEUS_PLUG_HASH=a347bf2c63efe59134c15b8ef83a4a1f627e3b5d STATSD_PLUG_HASH=929d4a6cb3d3ce5f1e03af73075b13687d4879c8 gid=99 uid=99 user=nobody /bin/sh -c groupadd -g ${gid} nobody \\u0026\\u0026 usermod -u ${uid} -g ${gid} ${user} \\u0026\\u0026 usermod -a -G users nobody \\u0026\\u0026 echo \\\""""nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin\\\"""" \\u003e\\u003e /etc/passwd\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""c4c140687ce95f2d23202b7efaa543ef7d064b226864fb4a0ae68bef283e074f\"""",\""""parent\"""":\""""2f04614030c91b184503348484780b62bda952a2905cc1fb035c5a6f371ca239\"""",\""""created\"""":\""""2018-09-26T17:33:48.3150654Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ADD c84b80e3ceaef7f211a221093369729eeb89e5cfc5f3d0a5cd4917e7b6c7027f in /usr/share/jenkins/ref//plugins/metrics-graphite.hpi \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""2f04614030c91b184503348484780b62bda952a2905cc1fb035c5a6f371ca239\"""",\""""parent\"""":\""""a0ad27b653be7a9400d9e46784f897097cf24f157bfc3fb647e49c360b7c12c1\"""",\""""created\"""":\""""2018-09-26T17:33:47.8920446Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ADD f4d41c9bf39651b20107d62d85c101014320946e6a33763e5519ec18aee77858 in /usr/share/jenkins/ref//plugins/prometheus.hpi \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""a0ad27b653be7a9400d9e46784f897097cf24f157bfc3fb647e49c360b7c12c1\"""",\""""parent\"""":\""""f9f80b9791fb4dd2e525de37e33e0b730809bed2b2edb7898b802d3fde3d9c08\"""",\""""created\"""":\""""2018-09-26T17:33:46.775839Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ADD 652f0ad5e9ad70b4db10957b64265f808b45c63d8ef07b107d3082450084164c in /usr/share/jenkins/ref//plugins/mesos.hpi \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""f9f80b9791fb4dd2e525de37e33e0b730809bed2b2edb7898b802d3fde3d9c08\"""",\""""parent\"""":\""""0b648b2545d81712d56536a42a0a99d3d78008bf6f1f04f22d140c427b645b76\"""",\""""created\"""":\""""2018-09-26T17:33:45.5611867Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""|11 BLUEOCEAN_VERSION=1.5.0 JENKINS_DCOS_HOME=/var/jenkinsdcos_home JENKINS_STAGING=/usr/share/jenkins/ref/ LIBMESOS_DOWNLOAD_SHA256=bd4a785393f0477da7f012bf9624aa7dd65aa243c94d38ffe94adaa10de30274 LIBMESOS_DOWNLOAD_URL=https://downloads.mesosphere.io/libmesos-bundle/libmesos-bundle-1.11.0.tar.gz MESOS_PLUG_HASH=347c1ac133dc0cb6282a0dde820acd5b4eb21133 PROMETHEUS_PLUG_HASH=a347bf2c63efe59134c15b8ef83a4a1f627e3b5d STATSD_PLUG_HASH=929d4a6cb3d3ce5f1e03af73075b13687d4879c8 gid=99 uid=99 user=nobody /bin/sh -c /usr/local/bin/install-plugins.sh blueocean-bitbucket-pipeline:${BLUEOCEAN_VERSION} blueocean-commons:${BLUEOCEAN_VERSION} blueocean-config:${BLUEOCEAN_VERSION} blueocean-dashboard:${BLUEOCEAN_VERSION} blueocean-events:${BLUEOCEAN_VERSION} 
blueocean-git-pipeline:${BLUEOCEAN_VERSION} blueocean-github-pipeline:${BLUEOCEAN_VERSION} blueocean-i18n:${BLUEOCEAN_VERSION} blueocean-jwt:${BLUEOCEAN_VERSION} blueocean-jira:${BLUEOCEAN_VERSION} blueocean-personalization:${BLUEOCEAN_VERSION} blueocean-pipeline-api-impl:${BLUEOCEAN_VERSION} blueocean-pipeline-editor:${BLUEOCEAN_VERSION} blueocean-pipeline-scm-api:${BLUEOCEAN_VERSION} blueocean-rest-impl:${BLUEOCEAN_VERSION} blueocean-rest:${BLUEOCEAN_VERSION} blueocean-web:${BLUEOCEAN_VERSION} blueocean:${BLUEOCEAN_VERSION} ant:1.8 ansicolor:0.5.2 antisamy-markup-formatter:1.5 artifactory:2.15.1 authentication-tokens:1.3 azure-credentials:1.6.0 azure-vm-agents:0.7.0 branch-api:2.0.19 build-name-setter:1.6.9 build-timeout:1.19 cloudbees-folder:6.4 conditional-buildstep:1.3.6 config-file-provider:2.18 copyartifact:1.39.1 cvs:2.14 docker-build-publish:1.3.2 docker-workflow:1.15.1 durable-task:1.22 ec2:1.39 embeddable-build-status:1.9 external-monitor-job:1.7 ghprb:1.40.0 git:3.8.0 git-client:2.7.1 git-server:1.7 github:1.29.0 github-api:1.90 github-branch-source:2.3.3 github-organization-folder:1.6 gitlab-plugin:1.5.5 gradle:1.28 greenballs:1.15 handlebars:1.1.1 ivy:1.28 jackson2-api:2.8.11.3 job-dsl:1.68 jobConfigHistory:2.18 jquery:1.12.4-0 ldap:1.20 mapdb-api:1.0.9.0 marathon:1.6.0 matrix-auth:2.2 matrix-project:1.13 maven-plugin:3.1.2 metrics:3.1.2.11 monitoring:1.72.0 nant:1.4.3 node-iterator-api:1.5.0 pam-auth:1.3 parameterized-trigger:2.35.2 pipeline-build-step:2.7 pipeline-github-lib:1.0 pipeline-input-step:2.8 pipeline-milestone-step:1.3.1 pipeline-model-api:1.2.8 pipeline-model-definition:1.2.8 pipeline-model-extensions:1.2.8 pipeline-rest-api:2.10 pipeline-stage-step:2.3 pipeline-stage-view:2.10 plain-credentials:1.4 prometheus:1.2.0 rebuild:1.28 role-strategy:2.7.0 run-condition:1.0 s3:0.11.0 saferestart:0.3 saml:1.0.5 scm-api:2.2.6 ssh-agent:1.15 ssh-slaves:1.26 subversion:2.10.5 timestamper:1.8.9 translation:1.16 variant:1.1 windows-slaves:1.3.1 workflow-aggregator:2.5 workflow-api:2.27 workflow-basic-steps:2.6 workflow-cps:2.48 workflow-cps-global-lib:2.9 workflow-durable-task-step:2.19 workflow-job:2.18 workflow-multibranch:2.17 workflow-scm-step:2.6 workflow-step-api:2.14 workflow-support:2.18\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""0b648b2545d81712d56536a42a0a99d3d78008bf6f1f04f22d140c427b645b76\"""",\""""parent\"""":\""""1982429c4258750d3f70dc0f1c563e870725c6d807e9444d9785456d626ef556\"""",\""""created\"""":\""""2018-09-26T17:31:24.2544617Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) COPY file:59ced817d4cd74453e0658c69f937959d2b4d86cfe15d699cd1fdcf2f6867067 in /usr/share/jenkins/ref//init.groovy.d/mesos-auth.groovy \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""1982429c4258750d3f70dc0f1c563e870725c6d807e9444d9785456d626ef556\"""",\""""parent\"""":\""""35f1271210258c711e78bd483beee0a7fc9c2d4ee12cf78787d7992277c5a957\"""",\""""created\"""":\""""2018-09-26T17:31:23.9384301Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) COPY file:8ca0529d27d0fa91b7848e39a5d04e55df01746ab31ca6bae1816f062667f8cc in /usr/share/jenkins/ref//nodeMonitors.xml \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""35f1271210258c711e78bd483beee0a7fc9c2d4ee12cf78787d7992277c5a957\"""",\""""parent\"""":\""""12d065bb54a4529a4afddab4e580cb4cb78e509a2c7d6faa7e3967510f783887\"""",\""""created\"""":\""""2018-09-26T17:31:23.609004Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) 
COPY file:beed7a659bf7217db04b70fa4220df32e07015c6f20edf4d73b5cab69354542e in /usr/share/jenkins/ref//jenkins.model.JenkinsLocationConfiguration.xml \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""12d065bb54a4529a4afddab4e580cb4cb78e509a2c7d6faa7e3967510f783887\"""",\""""parent\"""":\""""87ea27ae8f877a36014c0eeb3dba7c0a7b29cfc2d779a10bf438dbce8078cc62\"""",\""""created\"""":\""""2018-09-26T17:31:23.3055734Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) COPY file:46468ed2b6fa66eeea868396b18d952f8cbdd0df6529ec2a4d5782a1acc7ee7a in /usr/share/jenkins/ref//config.xml \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""87ea27ae8f877a36014c0eeb3dba7c0a7b29cfc2d779a10bf438dbce8078cc62\"""",\""""parent\"""":\""""a13b24c2681da6bacf19dd99041ba2921a11d780fd19edc1b6c0d0b982deb730\"""",\""""created\"""":\""""2018-09-26T17:31:23.003904Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) COPY file:6b54409cf8c3ce4dae538b70b64f8755636613e71806e479c5d8f081224c63e9 in /var/nginx/nginx.conf \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""a13b24c2681da6bacf19dd99041ba2921a11d780fd19edc1b6c0d0b982deb730\"""",\""""parent\"""":\""""39807d50654d8e842f4bf754ef8c4e5b72780dc6403db3e78611e7356c3c2173\"""",\""""created\"""":\""""2018-09-26T17:31:22.6859214Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""|11 BLUEOCEAN_VERSION=1.5.0 JENKINS_DCOS_HOME=/var/jenkinsdcos_home JENKINS_STAGING=/usr/share/jenkins/ref/ LIBMESOS_DOWNLOAD_SHA256=bd4a785393f0477da7f012bf9624aa7dd65aa243c94d38ffe94adaa10de30274 LIBMESOS_DOWNLOAD_URL=https://downloads.mesosphere.io/libmesos-bundle/libmesos-bundle-1.11.0.tar.gz MESOS_PLUG_HASH=347c1ac133dc0cb6282a0dde820acd5b4eb21133 PROMETHEUS_PLUG_HASH=a347bf2c63efe59134c15b8ef83a4a1f627e3b5d STATSD_PLUG_HASH=929d4a6cb3d3ce5f1e03af73075b13687d4879c8 gid=99 uid=99 user=nobody /bin/sh -c mkdir -p /var/log/nginx/jenkins /var/nginx/\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""39807d50654d8e842f4bf754ef8c4e5b72780dc6403db3e78611e7356c3c2173\"""",\""""parent\"""":\""""0bd3eec411704b039cfeb3aec6c6e14882dbc8db68561fe0904548ba708394c5\"""",\""""created\"""":\""""2018-09-26T17:31:21.2086534Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) COPY file:a4cf73ccc8a0e4b1a7acef249766ce76b31bf76d03f97ac157d6eccfab30d4f5 in /usr/local/jenkins/bin/run.sh \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""0bd3eec411704b039cfeb3aec6c6e14882dbc8db68561fe0904548ba708394c5\"""",\""""parent\"""":\""""f7ede5d3afbd0518be79d105f5ccb6a1f96117291300ba3acc32d9007c71d6a9\"""",\""""created\"""":\""""2018-09-26T17:31:20.9064351Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) COPY file:3377f08a63084052efa9902be76b1eb669229849b476b52f448697333457e769 in /usr/local/jenkins/bin/dcos-account.sh \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""f7ede5d3afbd0518be79d105f5ccb6a1f96117291300ba3acc32d9007c71d6a9\"""",\""""parent\"""":\""""2f6f3100494a6050ecd28e7e1a647a3fc1f5e9dbde64a6646dc5e8f418bc7397\"""",\""""created\"""":\""""2018-09-26T17:31:20.5594535Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) COPY file:5814edade36c8c883f19e868796f1ae1d46d6990af813451101abec8196856d4 in /usr/local/jenkins/bin/export-libssl.sh \""""]}}"""" }, { """"v1Compatibility"""": 
""""{\""""id\"""":\""""2f6f3100494a6050ecd28e7e1a647a3fc1f5e9dbde64a6646dc5e8f418bc7397\"""",\""""parent\"""":\""""ee31e5ffacfc38c46b9fdad7ecb47ab748d5eb9baf82ea8cf2766df3f9e18cdc\"""",\""""created\"""":\""""2018-09-26T17:31:20.2349213Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) COPY file:8206c6af7dc8888193958fd9428ba085ae19c8282c26eb05fb9f4c4f46973a4e in /usr/local/jenkins/bin/bootstrap.py \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""ee31e5ffacfc38c46b9fdad7ecb47ab748d5eb9baf82ea8cf2766df3f9e18cdc\"""",\""""parent\"""":\""""9fc63748986386223542f05b4c9685481a303ebb3e30603e174e65121906ea55\"""",\""""created\"""":\""""2018-07-09T20:54:30.984299193Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""|11 BLUEOCEAN_VERSION=1.5.0 JENKINS_DCOS_HOME=/var/jenkinsdcos_home JENKINS_STAGING=/usr/share/jenkins/ref/ LIBMESOS_DOWNLOAD_SHA256=bd4a785393f0477da7f012bf9624aa7dd65aa243c94d38ffe94adaa10de30274 LIBMESOS_DOWNLOAD_URL=https://downloads.mesosphere.io/libmesos-bundle/libmesos-bundle-1.11.0.tar.gz MESOS_PLUG_HASH=347c1ac133dc0cb6282a0dde820acd5b4eb21133 PROMETHEUS_PLUG_HASH=a347bf2c63efe59134c15b8ef83a4a1f627e3b5d STATSD_PLUG_HASH=929d4a6cb3d3ce5f1e03af73075b13687d4879c8 gid=99 uid=99 user=nobody /bin/sh -c echo 'networkaddress.cache.ttl=60' \\u003e\\u003e ${JAVA_HOME}/jre/lib/security/java.security\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""9fc63748986386223542f05b4c9685481a303ebb3e30603e174e65121906ea55\"""",\""""parent\"""":\""""e37610cd26b6212ad9670f101e7d8d70cf108097ead058de50d5d7401cad8b22\"""",\""""created\"""":\""""2018-07-09T20:54:29.524404063Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""|11 BLUEOCEAN_VERSION=1.5.0 JENKINS_DCOS_HOME=/var/jenkinsdcos_home JENKINS_STAGING=/usr/share/jenkins/ref/ LIBMESOS_DOWNLOAD_SHA256=bd4a785393f0477da7f012bf9624aa7dd65aa243c94d38ffe94adaa10de30274 LIBMESOS_DOWNLOAD_URL=https://downloads.mesosphere.io/libmesos-bundle/libmesos-bundle-1.11.0.tar.gz MESOS_PLUG_HASH=347c1ac133dc0cb6282a0dde820acd5b4eb21133 PROMETHEUS_PLUG_HASH=a347bf2c63efe59134c15b8ef83a4a1f627e3b5d STATSD_PLUG_HASH=929d4a6cb3d3ce5f1e03af73075b13687d4879c8 gid=99 uid=99 user=nobody /bin/sh -c mkdir -p \\\""""${JENKINS_HOME}\\\"""" \\\""""${JENKINS_FOLDER}/war\\\""""\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""e37610cd26b6212ad9670f101e7d8d70cf108097ead058de50d5d7401cad8b22\"""",\""""parent\"""":\""""84f29eb0d8745c0510f8d48c347676ff1283ec4921bb2e3e7f462681d8e62ba3\"""",\""""created\"""":\""""2018-07-09T20:54:28.236876676Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""|11 BLUEOCEAN_VERSION=1.5.0 JENKINS_DCOS_HOME=/var/jenkinsdcos_home JENKINS_STAGING=/usr/share/jenkins/ref/ LIBMESOS_DOWNLOAD_SHA256=bd4a785393f0477da7f012bf9624aa7dd65aa243c94d38ffe94adaa10de30274 LIBMESOS_DOWNLOAD_URL=https://downloads.mesosphere.io/libmesos-bundle/libmesos-bundle-1.11.0.tar.gz MESOS_PLUG_HASH=347c1ac133dc0cb6282a0dde820acd5b4eb21133 PROMETHEUS_PLUG_HASH=a347bf2c63efe59134c15b8ef83a4a1f627e3b5d STATSD_PLUG_HASH=929d4a6cb3d3ce5f1e03af73075b13687d4879c8 gid=99 uid=99 user=nobody /bin/sh -c echo \\\""""deb http://ftp.debian.org/debian testing main\\\"""" \\u003e\\u003e /etc/apt/sources.list \\u0026\\u0026 apt-get update \\u0026\\u0026 apt-get -t testing install -y git\""""]}}"""" }, { """"v1Compatibility"""": 
""""{\""""id\"""":\""""84f29eb0d8745c0510f8d48c347676ff1283ec4921bb2e3e7f462681d8e62ba3\"""",\""""parent\"""":\""""a0d4487b3138fe03804efb8c6aec561dcd75c6dc45d57458b9e46ca1184b49c2\"""",\""""created\"""":\""""2018-07-09T20:54:14.100019856Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""|11 BLUEOCEAN_VERSION=1.5.0 JENKINS_DCOS_HOME=/var/jenkinsdcos_home JENKINS_STAGING=/usr/share/jenkins/ref/ LIBMESOS_DOWNLOAD_SHA256=bd4a785393f0477da7f012bf9624aa7dd65aa243c94d38ffe94adaa10de30274 LIBMESOS_DOWNLOAD_URL=https://downloads.mesosphere.io/libmesos-bundle/libmesos-bundle-1.11.0.tar.gz MESOS_PLUG_HASH=347c1ac133dc0cb6282a0dde820acd5b4eb21133 PROMETHEUS_PLUG_HASH=a347bf2c63efe59134c15b8ef83a4a1f627e3b5d STATSD_PLUG_HASH=929d4a6cb3d3ce5f1e03af73075b13687d4879c8 gid=99 uid=99 user=nobody /bin/sh -c curl -fsSL \\\""""$LIBMESOS_DOWNLOAD_URL\\\"""" -o libmesos-bundle.tar.gz \\u0026\\u0026 echo \\\""""$LIBMESOS_DOWNLOAD_SHA256 libmesos-bundle.tar.gz\\\"""" | sha256sum -c - \\u0026\\u0026 tar -C / -xzf libmesos-bundle.tar.gz \\u0026\\u0026 rm libmesos-bundle.tar.gz\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""a0d4487b3138fe03804efb8c6aec561dcd75c6dc45d57458b9e46ca1184b49c2\"""",\""""parent\"""":\""""bfc2e6fd97b4b81249e6ad41a0709d1d62de3c35251e1e07b39a855111c6f465\"""",\""""created\"""":\""""2018-07-09T20:54:00.580952612Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""|11 BLUEOCEAN_VERSION=1.5.0 JENKINS_DCOS_HOME=/var/jenkinsdcos_home JENKINS_STAGING=/usr/share/jenkins/ref/ LIBMESOS_DOWNLOAD_SHA256=bd4a785393f0477da7f012bf9624aa7dd65aa243c94d38ffe94adaa10de30274 LIBMESOS_DOWNLOAD_URL=https://downloads.mesosphere.io/libmesos-bundle/libmesos-bundle-1.11.0.tar.gz MESOS_PLUG_HASH=347c1ac133dc0cb6282a0dde820acd5b4eb21133 PROMETHEUS_PLUG_HASH=a347bf2c63efe59134c15b8ef83a4a1f627e3b5d STATSD_PLUG_HASH=929d4a6cb3d3ce5f1e03af73075b13687d4879c8 gid=99 uid=99 user=nobody /bin/sh -c apt-get update \\u0026\\u0026 apt-get install -y nginx python zip jq\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""bfc2e6fd97b4b81249e6ad41a0709d1d62de3c35251e1e07b39a855111c6f465\"""",\""""parent\"""":\""""c7344d4d0e1cb95c3e531da5c342fc2fb289e4e51334ca2d1b430de241b28722\"""",\""""created\"""":\""""2018-07-09T20:53:46.425927046Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) USER root\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""c7344d4d0e1cb95c3e531da5c342fc2fb289e4e51334ca2d1b430de241b28722\"""",\""""parent\"""":\""""662ff9075f47894530c796a8e9b2fafe7f1bc5be7ec38b35d757d442b6833a84\"""",\""""created\"""":\""""2018-07-09T20:53:46.096470837Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENV JENKINS_CSP_OPTS=sandbox; default-src 'none'; img-src 'self'; style-src 'self';\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""662ff9075f47894530c796a8e9b2fafe7f1bc5be7ec38b35d757d442b6833a84\"""",\""""parent\"""":\""""5f2b4cf9791c5f60b455dd4b847028183d61d7d519fc5c6bd1b6fe50ef119e74\"""",\""""created\"""":\""""2018-07-09T20:53:45.797188526Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENV COPY_REFERENCE_FILE_LOG=/var/jenkinsdcos_home/copy_reference_file.log\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": 
""""{\""""id\"""":\""""5f2b4cf9791c5f60b455dd4b847028183d61d7d519fc5c6bd1b6fe50ef119e74\"""",\""""parent\"""":\""""25792f867e22dae243cd783474ff4eed5d54a323665c33205058d5a6adf545d9\"""",\""""created\"""":\""""2018-07-09T20:53:45.462915577Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENV JENKINS_HOME=/var/jenkinsdcos_home\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""25792f867e22dae243cd783474ff4eed5d54a323665c33205058d5a6adf545d9\"""",\""""parent\"""":\""""1ff15242962d34fd623e14d7a02db78036b900bb2e526a74d44fc903477cc9e7\"""",\""""created\"""":\""""2018-07-09T20:53:45.124088811Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG gid=99\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""1ff15242962d34fd623e14d7a02db78036b900bb2e526a74d44fc903477cc9e7\"""",\""""parent\"""":\""""fc1935bf0e7b1a47c7b5947f1ba2c9347472b19f6a5cf4e0553c7c8dd4dac4c2\"""",\""""created\"""":\""""2018-07-09T20:53:44.827537014Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG uid=99\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""fc1935bf0e7b1a47c7b5947f1ba2c9347472b19f6a5cf4e0553c7c8dd4dac4c2\"""",\""""parent\"""":\""""3d5eaaf246c3dd14cb486d1249fc41cdba012693b3d20dafd9e9a395d104f740\"""",\""""created\"""":\""""2018-07-09T20:53:44.458211965Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG user=nobody\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""3d5eaaf246c3dd14cb486d1249fc41cdba012693b3d20dafd9e9a395d104f740\"""",\""""parent\"""":\""""ede8ff5b5cfbe778c731602a3da663b93f3c3304b3a34255412d1631d4b77c18\"""",\""""created\"""":\""""2018-07-09T20:53:44.10755361Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG JENKINS_DCOS_HOME=/var/jenkinsdcos_home\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""ede8ff5b5cfbe778c731602a3da663b93f3c3304b3a34255412d1631d4b77c18\"""",\""""parent\"""":\""""b2579c2af8b8b590584e666fc0a0427e6baa3b46bf9a1ec9b05aa37bbf6fe2fc\"""",\""""created\"""":\""""2018-07-09T20:53:43.757033301Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG STATSD_PLUG_HASH=929d4a6cb3d3ce5f1e03af73075b13687d4879c8\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""b2579c2af8b8b590584e666fc0a0427e6baa3b46bf9a1ec9b05aa37bbf6fe2fc\"""",\""""parent\"""":\""""d12f6a61c0f00cd2eeb401653c4ec8f52f9fc996514741a6af9e5c57a082f3a8\"""",\""""created\"""":\""""2018-07-09T20:53:43.442946812Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG PROMETHEUS_PLUG_HASH=a347bf2c63efe59134c15b8ef83a4a1f627e3b5d\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""d12f6a61c0f00cd2eeb401653c4ec8f52f9fc996514741a6af9e5c57a082f3a8\"""",\""""parent\"""":\""""fef12017cd9254dbbbb938bfadd8a9401241149c9a52c347373395a52a1c4ebe\"""",\""""created\"""":\""""2018-07-09T20:53:43.116440726Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG MESOS_PLUG_HASH=347c1ac133dc0cb6282a0dde820acd5b4eb21133\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": 
""""{\""""id\"""":\""""fef12017cd9254dbbbb938bfadd8a9401241149c9a52c347373395a52a1c4ebe\"""",\""""parent\"""":\""""f81de07a754d737714fcaec8c6d6db1656e64d6cca56e347ccebf23776b00617\"""",\""""created\"""":\""""2018-04-24T20:52:04.5174488Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG JENKINS_STAGING=/usr/share/jenkins/ref/\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""f81de07a754d737714fcaec8c6d6db1656e64d6cca56e347ccebf23776b00617\"""",\""""parent\"""":\""""9780d937abc8609fd998fecda4c079d6b097b1cb6a714de993329a2f56548133\"""",\""""created\"""":\""""2018-04-24T20:52:04.1863586Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG BLUEOCEAN_VERSION=1.5.0\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""9780d937abc8609fd998fecda4c079d6b097b1cb6a714de993329a2f56548133\"""",\""""parent\"""":\""""72437e315d169d61ae285cc7cb1fecb5ada9249936e77c39ce28d85e7e6d727d\"""",\""""created\"""":\""""2018-04-24T20:52:03.8152478Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG LIBMESOS_DOWNLOAD_SHA256=bd4a785393f0477da7f012bf9624aa7dd65aa243c94d38ffe94adaa10de30274\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""72437e315d169d61ae285cc7cb1fecb5ada9249936e77c39ce28d85e7e6d727d\"""",\""""parent\"""":\""""9fa0879e36daa2a5a08776124775e11a7fdb51ca2fb19407baa7dedad3ef97a8\"""",\""""created\"""":\""""2018-04-24T20:52:03.4353208Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG LIBMESOS_DOWNLOAD_URL=https://downloads.mesosphere.io/libmesos-bundle/libmesos-bundle-1.11.0.tar.gz\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""9fa0879e36daa2a5a08776124775e11a7fdb51ca2fb19407baa7dedad3ef97a8\"""",\""""parent\"""":\""""764cacaf206bfbeb6b11777e9b65d16ad6350530f559b9f78ee15413548d7749\"""",\""""created\"""":\""""2018-04-24T20:52:03.0719423Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENV JENKINS_FOLDER=/usr/share/jenkins\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""764cacaf206bfbeb6b11777e9b65d16ad6350530f559b9f78ee15413548d7749\"""",\""""parent\"""":\""""7f59fe46c09c4aa9f59ab9268e34571a0bbc0cfa3c0ac4e4d8e55fdf1392bbd5\"""",\""""created\"""":\""""2018-04-24T20:52:02.73463Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) WORKDIR /tmp\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""7f59fe46c09c4aa9f59ab9268e34571a0bbc0cfa3c0ac4e4d8e55fdf1392bbd5\"""",\""""parent\"""":\""""35094fea2af3c400c07fc444f7477b90caf1013b87e40696cfa9e57fc0f9c80b\"""",\""""created\"""":\""""2018-04-11T10:05:00.283278344Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) COPY file:2874a36404a19c4075e62bf579a79bf730d317e628e80b03c676af4509481acc in /usr/local/bin/install-plugins.sh \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""35094fea2af3c400c07fc444f7477b90caf1013b87e40696cfa9e57fc0f9c80b\"""",\""""parent\"""":\""""9d76182ab3259983e8e14d62c9461f4b08404a709f216b805649ac1a448a1fcc\"""",\""""created\"""":\""""2018-04-11T10:04:58.564052111Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) COPY file:39d6085e6ad132734efabf90a5444f3bc74a21e8bf5a79f4d0176ac18bb98217 in /usr/local/bin/plugins.sh \""""]}}"""" }, { """"v1Compatibility"""": 
""""{\""""id\"""":\""""9d76182ab3259983e8e14d62c9461f4b08404a709f216b805649ac1a448a1fcc\"""",\""""parent\"""":\""""fa49692d5ea14814e6efec3af9517b31313a7461065aeb184acdb68d5df23196\"""",\""""created\"""":\""""2018-04-11T10:04:56.647913351Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENTRYPOINT [\\\""""/sbin/tini\\\"""" \\\""""--\\\"""" \\\""""/usr/local/bin/jenkins.sh\\\""""]\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""fa49692d5ea14814e6efec3af9517b31313a7461065aeb184acdb68d5df23196\"""",\""""parent\"""":\""""afabf13b1f4f211c64b5535617d2ceb53038f76ceb76b74d8c2c0c66f9a5c9a3\"""",\""""created\"""":\""""2018-04-11T10:04:54.736575307Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) COPY file:dc942ca949bb159f81bbc954773b3491e433d2d3e3ef90bac80ecf48a313c9c9 in /bin/tini \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""afabf13b1f4f211c64b5535617d2ceb53038f76ceb76b74d8c2c0c66f9a5c9a3\"""",\""""parent\"""":\""""64c769f00fe141263cbb93af8dded7de9c06f1c96010adbb22f4dc6acd0decc9\"""",\""""created\"""":\""""2018-04-11T10:04:51.974150657Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) COPY file:1a73810a97d134925c37b2276c894e0a9c92125cdd8c750aaf8ef15c3c20aa72 in /usr/local/bin/jenkins.sh \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""64c769f00fe141263cbb93af8dded7de9c06f1c96010adbb22f4dc6acd0decc9\"""",\""""parent\"""":\""""0900f664d5ac780bca9e91b18de9f4680eb0eb4842e81cbe26b3a22f3eb8fdec\"""",\""""created\"""":\""""2018-04-11T10:04:50.171056466Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) COPY file:88dd96a27353c9d476981c3cfc6b39c95983c45083324afa7c8bddb682d91bff in /usr/local/bin/jenkins-support \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""0900f664d5ac780bca9e91b18de9f4680eb0eb4842e81cbe26b3a22f3eb8fdec\"""",\""""parent\"""":\""""d70c19ee5d688e37e2fce67f01fd691c2509d45dd7903f68ede5ca78d1b7bdc4\"""",\""""created\"""":\""""2018-04-11T10:04:48.292041295Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) USER jenkins\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""d70c19ee5d688e37e2fce67f01fd691c2509d45dd7903f68ede5ca78d1b7bdc4\"""",\""""parent\"""":\""""c94d46c5aef23a2d56d2ac621b24a6778dcfbd81e1af03f2940ae91dcd991a20\"""",\""""created\"""":\""""2018-04-11T10:04:46.288406797Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENV COPY_REFERENCE_FILE_LOG=/var/jenkins_home/copy_reference_file.log\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""c94d46c5aef23a2d56d2ac621b24a6778dcfbd81e1af03f2940ae91dcd991a20\"""",\""""parent\"""":\""""09e94fdaea35187ca8029b96c2180f04615c8e66a8255dc939f16c9429ba003f\"""",\""""created\"""":\""""2018-04-11T10:04:44.37013921Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) EXPOSE 50000\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""09e94fdaea35187ca8029b96c2180f04615c8e66a8255dc939f16c9429ba003f\"""",\""""parent\"""":\""""3d98aa3161d75b086c38aee654e4612b83b1bbfdea154785537df95ae157ca5a\"""",\""""created\"""":\""""2018-04-11T10:04:42.447771731Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) EXPOSE 8080\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": 
""""{\""""id\"""":\""""3d98aa3161d75b086c38aee654e4612b83b1bbfdea154785537df95ae157ca5a\"""",\""""parent\"""":\""""6de02152b7dce054eee77ce934fa5652c2142a8a210060e19872db23d9afdacd\"""",\""""created\"""":\""""2018-04-11T10:04:40.453492565Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""|9 JENKINS_SHA=079ab885be74ea3dd4d2a57dd804a296752fae861f2d7c379bce06b674ae67ed JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/2.107.2/jenkins-war-2.107.2.war TINI_VERSION=v0.16.1 agent_port=50000 gid=1000 group=jenkins http_port=8080 uid=1000 user=jenkins /bin/sh -c chown -R ${user} \\\""""$JENKINS_HOME\\\"""" /usr/share/jenkins/ref\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""6de02152b7dce054eee77ce934fa5652c2142a8a210060e19872db23d9afdacd\"""",\""""parent\"""":\""""c178ed9e80695242e5ae0f6611ddce99a857d3e5a295207bc45d1e680d3c1379\"""",\""""created\"""":\""""2018-04-11T10:04:37.42404848Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENV JENKINS_UC_EXPERIMENTAL=https://updates.jenkins.io/experimental\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""c178ed9e80695242e5ae0f6611ddce99a857d3e5a295207bc45d1e680d3c1379\"""",\""""parent\"""":\""""9aed7a6b86d981cfa853eae0e5a716d16be68b4475884e6b275762e8770946c8\"""",\""""created\"""":\""""2018-04-11T10:04:35.309385797Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENV JENKINS_UC=https://updates.jenkins.io\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""9aed7a6b86d981cfa853eae0e5a716d16be68b4475884e6b275762e8770946c8\"""",\""""parent\"""":\""""6345f944b3e6d2f6e763b56af436b46d26c5424282ca0d2c57994beb2d5d1707\"""",\""""created\"""":\""""2018-04-11T10:04:33.341878374Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""|9 JENKINS_SHA=079ab885be74ea3dd4d2a57dd804a296752fae861f2d7c379bce06b674ae67ed JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/2.107.2/jenkins-war-2.107.2.war TINI_VERSION=v0.16.1 agent_port=50000 gid=1000 group=jenkins http_port=8080 uid=1000 user=jenkins /bin/sh -c curl -fsSL ${JENKINS_URL} -o /usr/share/jenkins/jenkins.war \\u0026\\u0026 echo \\\""""${JENKINS_SHA} /usr/share/jenkins/jenkins.war\\\"""" | sha256sum -c -\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""6345f944b3e6d2f6e763b56af436b46d26c5424282ca0d2c57994beb2d5d1707\"""",\""""parent\"""":\""""0b6d17bc569a59dadd8ce806ecf325bc39e0445b5097b004c4c5771d030a754c\"""",\""""created\"""":\""""2018-04-11T10:04:28.72473862Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/2.107.2/jenkins-war-2.107.2.war\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""0b6d17bc569a59dadd8ce806ecf325bc39e0445b5097b004c4c5771d030a754c\"""",\""""parent\"""":\""""14f6f33e1ddfb9196b4840cc1fb6de989a15de625581d7774c3629a4cc57e48d\"""",\""""created\"""":\""""2018-04-11T10:04:26.621369421Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG JENKINS_SHA=2d71b8f87c8417f9303a73d52901a59678ee6c0eefcf7325efed6035ff39372a\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": 
""""{\""""id\"""":\""""14f6f33e1ddfb9196b4840cc1fb6de989a15de625581d7774c3629a4cc57e48d\"""",\""""parent\"""":\""""3f780bea9189f56ebd2e363fe471a428bd75a2972e7cfc7dd997633b4a8eb951\"""",\""""created\"""":\""""2018-04-11T10:04:24.515479866Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENV JENKINS_VERSION=2.107.2\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""3f780bea9189f56ebd2e363fe471a428bd75a2972e7cfc7dd997633b4a8eb951\"""",\""""parent\"""":\""""8384b8fae6e40ab737b32a849d462b1fedf10b34be86a67ead51536b5e278a90\"""",\""""created\"""":\""""2018-04-11T10:04:22.485876008Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG JENKINS_VERSION\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""8384b8fae6e40ab737b32a849d462b1fedf10b34be86a67ead51536b5e278a90\"""",\""""parent\"""":\""""14e86ed54896e03f91257436134c46b72b2230612de7487e208d09b2409c9366\"""",\""""created\"""":\""""2018-04-11T10:04:20.518174508Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) COPY file:c84b91c835048a52bb864c1f4662607c56befe3c4b1520b0ea94633103a4554f in /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""14e86ed54896e03f91257436134c46b72b2230612de7487e208d09b2409c9366\"""",\""""parent\"""":\""""f69098d00b0be8a19c2189ff51d0fd286f3d1bba4b02024f5127e3afc65c241d\"""",\""""created\"""":\""""2018-04-11T10:04:18.593424219Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""|7 TINI_VERSION=v0.16.1 agent_port=50000 gid=1000 group=jenkins http_port=8080 uid=1000 user=jenkins /bin/sh -c curl -fsSL https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-static-$(dpkg --print-architecture) -o /sbin/tini \\u0026\\u0026 curl -fsSL https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-static-$(dpkg --print-architecture).asc -o /sbin/tini.asc \\u0026\\u0026 gpg --import /var/jenkins_home/tini_pub.gpg \\u0026\\u0026 gpg --verify /sbin/tini.asc \\u0026\\u0026 rm -rf /sbin/tini.asc /root/.gnupg \\u0026\\u0026 chmod +x /sbin/tini\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""f69098d00b0be8a19c2189ff51d0fd286f3d1bba4b02024f5127e3afc65c241d\"""",\""""parent\"""":\""""8a6f3dcf5ca0ca44988097f598596e70cd0f59d9600c115d65fb35ff330848c2\"""",\""""created\"""":\""""2018-04-11T10:04:13.905006564Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) COPY file:653491cb486e752a4c2b4b407a46ec75646a54eabb597634b25c7c2b82a31424 in /var/jenkins_home/tini_pub.gpg \""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""8a6f3dcf5ca0ca44988097f598596e70cd0f59d9600c115d65fb35ff330848c2\"""",\""""parent\"""":\""""1c0472056a7e929c90a91ff8b93a3e66caad76310b8431a1efe220ad734e066c\"""",\""""created\"""":\""""2018-04-11T10:04:11.747045116Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG TINI_VERSION=v0.16.1\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""1c0472056a7e929c90a91ff8b93a3e66caad76310b8431a1efe220ad734e066c\"""",\""""parent\"""":\""""9e7db4035ee231fa280046b53c0513db0df2c9d49938cc74c7fb195d398ce5fb\"""",\""""created\"""":\""""2018-04-11T10:04:09.646844829Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""|6 agent_port=50000 gid=1000 group=jenkins http_port=8080 uid=1000 user=jenkins /bin/sh -c mkdir -p /usr/share/jenkins/ref/init.groovy.d\""""]}}"""" }, { """"v1Compatibility"""": 
""""{\""""id\"""":\""""9e7db4035ee231fa280046b53c0513db0df2c9d49938cc74c7fb195d398ce5fb\"""",\""""parent\"""":\""""cd5438b31ee369424fea23648dc89429b4cef574734f0933e48516c7ba9caf65\"""",\""""created\"""":\""""2018-04-11T10:04:05.986383436Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) VOLUME [/var/jenkins_home]\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""cd5438b31ee369424fea23648dc89429b4cef574734f0933e48516c7ba9caf65\"""",\""""parent\"""":\""""8e94e7a7be85cbc5b57c4f3e140535e297674e8d30df6e0ed6dbf2128b82935a\"""",\""""created\"""":\""""2018-04-11T10:04:03.98242692Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""|6 agent_port=50000 gid=1000 group=jenkins http_port=8080 uid=1000 user=jenkins /bin/sh -c groupadd -g ${gid} ${group} \\u0026\\u0026 useradd -d \\\""""$JENKINS_HOME\\\"""" -u ${uid} -g ${gid} -m -s /bin/bash ${user}\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""8e94e7a7be85cbc5b57c4f3e140535e297674e8d30df6e0ed6dbf2128b82935a\"""",\""""parent\"""":\""""f28491aa4d16c9db8733a4efb3588ee73941a4e3e1cfb9f5f50293f894307e96\"""",\""""created\"""":\""""2018-04-11T10:04:00.815710832Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENV JENKINS_SLAVE_AGENT_PORT=50000\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""f28491aa4d16c9db8733a4efb3588ee73941a4e3e1cfb9f5f50293f894307e96\"""",\""""parent\"""":\""""5b04eeff8ce581af83220d1ff6f7d89806ddffa65a231b727f94f88ea19d02f0\"""",\""""created\"""":\""""2018-04-11T10:03:58.893891854Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENV JENKINS_HOME=/var/jenkins_home\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""5b04eeff8ce581af83220d1ff6f7d89806ddffa65a231b727f94f88ea19d02f0\"""",\""""parent\"""":\""""b53c255204bc7107ee1d6d118ec9369ff868b049ae612f9927425f187df17b72\"""",\""""created\"""":\""""2018-04-11T10:03:57.021756845Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG agent_port=50000\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""b53c255204bc7107ee1d6d118ec9369ff868b049ae612f9927425f187df17b72\"""",\""""parent\"""":\""""d43650df8b88478fd70464bfdf9812cfcc5c2ae32e753cf62d9c68fe3aacc7bb\"""",\""""created\"""":\""""2018-04-11T10:03:55.096596096Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG http_port=8080\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""d43650df8b88478fd70464bfdf9812cfcc5c2ae32e753cf62d9c68fe3aacc7bb\"""",\""""parent\"""":\""""35011013d34dc72632cd62d92a28a8f789af62553f69feddb4f8f1699b2288c4\"""",\""""created\"""":\""""2018-04-11T10:03:53.140848234Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG gid=1000\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""35011013d34dc72632cd62d92a28a8f789af62553f69feddb4f8f1699b2288c4\"""",\""""parent\"""":\""""d9ae2fc7815c25c54be284aac826ee5fc6506626a9b5e839db1dbff5aed85ec7\"""",\""""created\"""":\""""2018-04-11T10:03:51.085212134Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG uid=1000\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": 
""""{\""""id\"""":\""""d9ae2fc7815c25c54be284aac826ee5fc6506626a9b5e839db1dbff5aed85ec7\"""",\""""parent\"""":\""""edaba7f43f101619ec35d84a5362844daa52078110c37ac76913a116b75bb0a7\"""",\""""created\"""":\""""2018-04-11T10:03:49.08677048Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG group=jenkins\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""edaba7f43f101619ec35d84a5362844daa52078110c37ac76913a116b75bb0a7\"""",\""""parent\"""":\""""27a497071bd1dbbda48bb96b8c4390c76f4b894722330e4f58fad50195326761\"""",\""""created\"""":\""""2018-04-11T10:03:47.139089021Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ARG user=jenkins\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""27a497071bd1dbbda48bb96b8c4390c76f4b894722330e4f58fad50195326761\"""",\""""parent\"""":\""""558e2b91047ab320c5fae50f79befa3e641fff2f7e2af49e7cc0dfbecc16635b\"""",\""""created\"""":\""""2018-04-11T10:03:45.06746326Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c apt-get update \\u0026\\u0026 apt-get install -y git curl \\u0026\\u0026 rm -rf /var/lib/apt/lists/*\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""558e2b91047ab320c5fae50f79befa3e641fff2f7e2af49e7cc0dfbecc16635b\"""",\""""parent\"""":\""""180253c9de444b27748a3818b60ac46af5979f9a2b8f714fbe8ee9d8403e4835\"""",\""""created\"""":\""""2018-03-19T21:23:43.026367652Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c /var/lib/dpkg/info/ca-certificates-java.postinst configure\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""180253c9de444b27748a3818b60ac46af5979f9a2b8f714fbe8ee9d8403e4835\"""",\""""parent\"""":\""""4443d1ea64bf58fcf407017a70cf91b3d2fc25b535f397f3f8d4cbcc21a8def3\"""",\""""created\"""":\""""2018-03-19T21:23:40.069312316Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c set -ex; \\t\\tif [ ! 
-d /usr/share/man/man1 ]; then \\t\\tmkdir -p /usr/share/man/man1; \\tfi; \\t\\tapt-get update; \\tapt-get install -y \\t\\topenjdk-8-jdk=\\\""""$JAVA_DEBIAN_VERSION\\\"""" \\t\\tca-certificates-java=\\\""""$CA_CERTIFICATES_JAVA_VERSION\\\"""" \\t; \\trm -rf /var/lib/apt/lists/*; \\t\\t[ \\\""""$(readlink -f \\\""""$JAVA_HOME\\\"""")\\\"""" = \\\""""$(docker-java-home)\\\"""" ]; \\t\\tupdate-alternatives --get-selections | awk -v home=\\\""""$(readlink -f \\\""""$JAVA_HOME\\\"""")\\\"""" 'index($3, home) == 1 { $2 = \\\""""manual\\\""""; print | \\\""""update-alternatives --set-selections\\\"""" }'; \\tupdate-alternatives --query java | grep -q 'Status: manual'\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""4443d1ea64bf58fcf407017a70cf91b3d2fc25b535f397f3f8d4cbcc21a8def3\"""",\""""parent\"""":\""""ad324fb058ede2aae5c0e928616606c62a4a1cf05fde29bff7a54258ef8df607\"""",\""""created\"""":\""""2018-03-19T21:22:53.380702822Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENV CA_CERTIFICATES_JAVA_VERSION=20170531+nmu1\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""ad324fb058ede2aae5c0e928616606c62a4a1cf05fde29bff7a54258ef8df607\"""",\""""parent\"""":\""""7fbcff09cd7a0d880d8a97db3fe60cc283c0eeff7280031e2fab224d604924b3\"""",\""""created\"""":\""""2018-03-19T21:22:53.161529652Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENV JAVA_DEBIAN_VERSION=8u162-b12-1~deb9u1\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""7fbcff09cd7a0d880d8a97db3fe60cc283c0eeff7280031e2fab224d604924b3\"""",\""""parent\"""":\""""7cf8101307fa1a243c2fb827fea79335afbd5741cf50082302acca5db55261a0\"""",\""""created\"""":\""""2018-03-19T21:22:52.921597489Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENV JAVA_VERSION=8u162\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""7cf8101307fa1a243c2fb827fea79335afbd5741cf50082302acca5db55261a0\"""",\""""parent\"""":\""""a720b859b07e0ada6c73ba160b57de6182c351a951870317963cf2af6fc69d27\"""",\""""created\"""":\""""2018-03-14T11:09:02.54085877Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENV JAVA_HOME=/docker-java-home\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""a720b859b07e0ada6c73ba160b57de6182c351a951870317963cf2af6fc69d27\"""",\""""parent\"""":\""""f9ddd3ece1d40d678cbdbf50a022175acd3ee1f58836eb886a2f44b0ec068523\"""",\""""created\"""":\""""2018-03-14T11:09:02.292291489Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c ln -svT \\\""""/usr/lib/jvm/java-8-openjdk-$(dpkg --print-architecture)\\\"""" /docker-java-home\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""f9ddd3ece1d40d678cbdbf50a022175acd3ee1f58836eb886a2f44b0ec068523\"""",\""""parent\"""":\""""b715162a4a7e7b2637f5442c739a99cc20454be35cb453c05ad16d0c1d62cc9b\"""",\""""created\"""":\""""2018-03-14T11:09:01.580163972Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c { \\t\\techo '#!/bin/sh'; \\t\\techo 'set -e'; \\t\\techo; \\t\\techo 'dirname \\\""""$(dirname \\\""""$(readlink -f \\\""""$(which javac || which java)\\\"""")\\\"""")\\\""""'; \\t} \\u003e /usr/local/bin/docker-java-home \\t\\u0026\\u0026 chmod +x /usr/local/bin/docker-java-home\""""]}}"""" }, { """"v1Compatibility"""": 
""""{\""""id\"""":\""""b715162a4a7e7b2637f5442c739a99cc20454be35cb453c05ad16d0c1d62cc9b\"""",\""""parent\"""":\""""800c0f0cafc88aeedbe69b61ad8edf65106dafb4f52b1782de9120b092071cc4\"""",\""""created\"""":\""""2018-03-14T11:09:00.816087216Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ENV LANG=C.UTF-8\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""800c0f0cafc88aeedbe69b61ad8edf65106dafb4f52b1782de9120b092071cc4\"""",\""""parent\"""":\""""62ccd3d687be9f840b76a54a1e732cba0761f6af13c3c1840a4c534daf293602\"""",\""""created\"""":\""""2018-03-14T11:09:00.593223495Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c apt-get update \\u0026\\u0026 apt-get install -y --no-install-recommends \\t\\tbzip2 \\t\\tunzip \\t\\txz-utils \\t\\u0026\\u0026 rm -rf /var/lib/apt/lists/*\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""62ccd3d687be9f840b76a54a1e732cba0761f6af13c3c1840a4c534daf293602\"""",\""""parent\"""":\""""810dccd4311b51f59ddfbd269bda46dacedec3f27bf217c609e84570d49233be\"""",\""""created\"""":\""""2018-03-13T23:56:55.333999982Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c apt-get update \\u0026\\u0026 apt-get install -y --no-install-recommends \\t\\tbzr \\t\\tgit \\t\\tmercurial \\t\\topenssh-client \\t\\tsubversion \\t\\t\\t\\tprocps \\t\\u0026\\u0026 rm -rf /var/lib/apt/lists/*\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""810dccd4311b51f59ddfbd269bda46dacedec3f27bf217c609e84570d49233be\"""",\""""parent\"""":\""""e373d06b9c7892f565ac0428471923e278834968483972e524a310bf6eb43f67\"""",\""""created\"""":\""""2018-03-13T23:56:22.934435097Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c set -ex; \\tif ! command -v gpg \\u003e /dev/null; then \\t\\tapt-get update; \\t\\tapt-get install -y --no-install-recommends \\t\\t\\tgnupg \\t\\t\\tdirmngr \\t\\t; \\t\\trm -rf /var/lib/apt/lists/*; \\tfi\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""e373d06b9c7892f565ac0428471923e278834968483972e524a310bf6eb43f67\"""",\""""parent\"""":\""""ae4f7e1d7298f3e0bd9e0aabd310d8217afabb81c2b10bd6a9aa20c7c94de182\"""",\""""created\"""":\""""2018-03-13T23:56:19.194216172Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c apt-get update \\u0026\\u0026 apt-get install -y --no-install-recommends \\t\\tca-certificates \\t\\tcurl \\t\\twget \\t\\u0026\\u0026 rm -rf /var/lib/apt/lists/*\""""]}}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""ae4f7e1d7298f3e0bd9e0aabd310d8217afabb81c2b10bd6a9aa20c7c94de182\"""",\""""parent\"""":\""""8aabf8f13bdf0feed398c7c8b0ac24db59d60d0d06f9dc6cb1400de4df898324\"""",\""""created\"""":\""""2018-03-13T22:26:49.547884802Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) CMD [\\\""""bash\\\""""]\""""]},\""""throwaway\"""":true}"""" }, { """"v1Compatibility"""": """"{\""""id\"""":\""""8aabf8f13bdf0feed398c7c8b0ac24db59d60d0d06f9dc6cb1400de4df898324\"""",\""""created\"""":\""""2018-03-13T22:26:49.153534342Z\"""",\""""container_config\"""":{\""""Cmd\"""":[\""""/bin/sh -c #(nop) ADD file:b380df301ccb5ca09f0d7cd5697ed402fa55f3e9bc5df2f4d489ba31f28de58a in / \""""]}}"""" } ], """"signatures"""": [ { """"header"""": { """"jwk"""": { """"crv"""": """"P-256"""", """"kid"""": """"JTGT:L32L:BI2G:TG3A:RLO2:6H6K:OZXC:HFYY:SPZW:QXEZ:XNK3:2KAL"""", """"kty"""": """"EC"""", """"x"""": """"Q3Qr-lNb0qyOiyFBHzF5v4gxgVp_drIszYInemkB464"""", """"y"""": """"oBzQUsRherctDgDVxwOR0zkij_B7GAL9B20PWVtHzfs"""" }, 
""""alg"""": """"ES256"""" }, """"signature"""": """"X6BvXE9thNyPHIvyH_0GE1blPxznEcPbILpB5HBvI2339gSA5t4HAE7GMalgKLyThJbjrNjiq_PQqreFMBpqzA"""", """"protected"""": """"eyJmb3JtYXRMZW5ndGgiOjU4ODg5LCJmb3JtYXRUYWlsIjoiQ24wIiwidGltZSI6IjIwMTgtMTAtMTVUMTM6MDI6MjdaIn0"""" } ] } ERROR(130)-22:27:48-root@int-agent89-mwst9:/var/lib/mesos/slave/store/docker/layers # ls -alh | grep 'fb401ed0b4f9de5534c224811d0dca94b876225c31ddc3cbb0993ad2faf32cff\|bb54e3dc4a692004ece424733d0ff7bbfe930bdc65491776e2f409a461e838f1\|2ef4a3efec8e89b04691b207a4fe3fddd2e3d3a2a539f8e7dc8a645773758e1f\|36bfc452a3f13511e6f9c0345ffac82054286eea40b02263c83ea791d00a22ea\|ea550cbe252ca3ca06771b0e837d1f7cc61c50404f7f1920ed5bc6cc816d8a0a\|c4c140687ce95f2d23202b7efaa543ef7d064b226864fb4a0ae68bef283e074f\|2f04614030c91b184503348484780b62bda952a2905cc1fb035c5a6f371ca239\|a0ad27b653be7a9400d9e46784f897097cf24f157bfc3fb647e49c360b7c12c1\|f9f80b9791fb4dd2e525de37e33e0b730809bed2b2edb7898b802d3fde3d9c08\|0b648b2545d81712d56536a42a0a99d3d78008bf6f1f04f22d140c427b645b76\|1982429c4258750d3f70dc0f1c563e870725c6d807e9444d9785456d626ef556\|35f1271210258c711e78bd483beee0a7fc9c2d4ee12cf78787d7992277c5a957\|12d065bb54a4529a4afddab4e580cb4cb78e509a2c7d6faa7e3967510f783887\|87ea27ae8f877a36014c0eeb3dba7c0a7b29cfc2d779a10bf438dbce8078cc62\|a13b24c2681da6bacf19dd99041ba2921a11d780fd19edc1b6c0d0b982deb730\|39807d50654d8e842f4bf754ef8c4e5b72780dc6403db3e78611e7356c3c2173\|0bd3eec411704b039cfeb3aec6c6e14882dbc8db68561fe0904548ba708394c5\|f7ede5d3afbd0518be79d105f5ccb6a1f96117291300ba3acc32d9007c71d6a9\|2f6f3100494a6050ecd28e7e1a647a3fc1f5e9dbde64a6646dc5e8f418bc7397\|ee31e5ffacfc38c46b9fdad7ecb47ab748d5eb9baf82ea8cf2766df3f9e18cdc\|9fc63748986386223542f05b4c9685481a303ebb3e30603e174e65121906ea55\|e37610cd26b6212ad9670f101e7d8d70cf108097ead058de50d5d7401cad8b22\|84f29eb0d8745c0510f8d48c347676ff1283ec4921bb2e3e7f462681d8e62ba3\|a0d4487b3138fe03804efb8c6aec561dcd75c6dc45d57458b9e46ca1184b49c2\|bfc2e6fd97b4b81249e6ad41a0709d1d62de3c35251e1e07b39a855111c6f465\|c7344d4d0e1cb95c3e531da5c342fc2fb289e4e51334ca2d1b430de241b28722\|662ff9075f47894530c796a8e9b2fafe7f1bc5be7ec38b35d757d442b6833a84\|5f2b4cf9791c5f60b455dd4b847028183d61d7d519fc5c6bd1b6fe50ef119e74\|25792f867e22dae243cd783474ff4eed5d54a323665c33205058d5a6adf545d9\|1ff15242962d34fd623e14d7a02db78036b900bb2e526a74d44fc903477cc9e7\|fc1935bf0e7b1a47c7b5947f1ba2c9347472b19f6a5cf4e0553c7c8dd4dac4c2\|3d5eaaf246c3dd14cb486d1249fc41cdba012693b3d20dafd9e9a395d104f740\|ede8ff5b5cfbe778c731602a3da663b93f3c3304b3a34255412d1631d4b77c18\|b2579c2af8b8b590584e666fc0a0427e6baa3b46bf9a1ec9b05aa37bbf6fe2fc\|d12f6a61c0f00cd2eeb401653c4ec8f52f9fc996514741a6af9e5c57a082f3a8\|fef12017cd9254dbbbb938bfadd8a9401241149c9a52c347373395a52a1c4ebe\|f81de07a754d737714fcaec8c6d6db1656e64d6cca56e347ccebf23776b00617\|9780d937abc8609fd998fecda4c079d6b097b1cb6a714de993329a2f56548133\|72437e315d169d61ae285cc7cb1fecb5ada9249936e77c39ce28d85e7e6d727d\|9fa0879e36daa2a5a08776124775e11a7fdb51ca2fb19407baa7dedad3ef97a8\|764cacaf206bfbeb6b11777e9b65d16ad6350530f559b9f78ee15413548d7749\|7f59fe46c09c4aa9f59ab9268e34571a0bbc0cfa3c0ac4e4d8e55fdf1392bbd5\|35094fea2af3c400c07fc444f7477b90caf1013b87e40696cfa9e57fc0f9c80b\|9d76182ab3259983e8e14d62c9461f4b08404a709f216b805649ac1a448a1fcc\|fa49692d5ea14814e6efec3af9517b31313a7461065aeb184acdb68d5df23196\|afabf13b1f4f211c64b5535617d2ceb53038f76ceb76b74d8c2c0c66f9a5c9a3\|64c769f00fe141263cbb93af8dded7de9c06f1c96010adbb22f4dc6acd0decc9\|0900f664d5ac780bca9e91b18de9f4680eb0eb4842e81cbe26b3a22f3eb8fdec\|d70c19ee5d6
88e37e2fce67f01fd691c2509d45dd7903f68ede5ca78d1b7bdc4\|c94d46c5aef23a2d56d2ac621b24a6778dcfbd81e1af03f2940ae91dcd991a20\|09e94fdaea35187ca8029b96c2180f04615c8e66a8255dc939f16c9429ba003f\|3d98aa3161d75b086c38aee654e4612b83b1bbfdea154785537df95ae157ca5a\|6de02152b7dce054eee77ce934fa5652c2142a8a210060e19872db23d9afdacd\|c178ed9e80695242e5ae0f6611ddce99a857d3e5a295207bc45d1e680d3c1379\|9aed7a6b86d981cfa853eae0e5a716d16be68b4475884e6b275762e8770946c8\|6345f944b3e6d2f6e763b56af436b46d26c5424282ca0d2c57994beb2d5d1707\|0b6d17bc569a59dadd8ce806ecf325bc39e0445b5097b004c4c5771d030a754c\|14f6f33e1ddfb9196b4840cc1fb6de989a15de625581d7774c3629a4cc57e48d\|3f780bea9189f56ebd2e363fe471a428bd75a2972e7cfc7dd997633b4a8eb951\|8384b8fae6e40ab737b32a849d462b1fedf10b34be86a67ead51536b5e278a90\|14e86ed54896e03f91257436134c46b72b2230612de7487e208d09b2409c9366\|f69098d00b0be8a19c2189ff51d0fd286f3d1bba4b02024f5127e3afc65c241d\|8a6f3dcf5ca0ca44988097f598596e70cd0f59d9600c115d65fb35ff330848c2\|1c0472056a7e929c90a91ff8b93a3e66caad76310b8431a1efe220ad734e066c\|9e7db4035ee231fa280046b53c0513db0df2c9d49938cc74c7fb195d398ce5fb\|cd5438b31ee369424fea23648dc89429b4cef574734f0933e48516c7ba9caf65\|8e94e7a7be85cbc5b57c4f3e140535e297674e8d30df6e0ed6dbf2128b82935a\|f28491aa4d16c9db8733a4efb3588ee73941a4e3e1cfb9f5f50293f894307e96\|5b04eeff8ce581af83220d1ff6f7d89806ddffa65a231b727f94f88ea19d02f0\|b53c255204bc7107ee1d6d118ec9369ff868b049ae612f9927425f187df17b72\|d43650df8b88478fd70464bfdf9812cfcc5c2ae32e753cf62d9c68fe3aacc7bb\|35011013d34dc72632cd62d92a28a8f789af62553f69feddb4f8f1699b2288c4\|d9ae2fc7815c25c54be284aac826ee5fc6506626a9b5e839db1dbff5aed85ec7\|edaba7f43f101619ec35d84a5362844daa52078110c37ac76913a116b75bb0a7\|27a497071bd1dbbda48bb96b8c4390c76f4b894722330e4f58fad50195326761\|558e2b91047ab320c5fae50f79befa3e641fff2f7e2af49e7cc0dfbecc16635b\|180253c9de444b27748a3818b60ac46af5979f9a2b8f714fbe8ee9d8403e4835\|4443d1ea64bf58fcf407017a70cf91b3d2fc25b535f397f3f8d4cbcc21a8def3\|ad324fb058ede2aae5c0e928616606c62a4a1cf05fde29bff7a54258ef8df607\|7fbcff09cd7a0d880d8a97db3fe60cc283c0eeff7280031e2fab224d604924b3\|7cf8101307fa1a243c2fb827fea79335afbd5741cf50082302acca5db55261a0\|a720b859b07e0ada6c73ba160b57de6182c351a951870317963cf2af6fc69d27\|f9ddd3ece1d40d678cbdbf50a022175acd3ee1f58836eb886a2f44b0ec068523\|b715162a4a7e7b2637f5442c739a99cc20454be35cb453c05ad16d0c1d62cc9b\|800c0f0cafc88aeedbe69b61ad8edf65106dafb4f52b1782de9120b092071cc4\|62ccd3d687be9f840b76a54a1e732cba0761f6af13c3c1840a4c534daf293602\|810dccd4311b51f59ddfbd269bda46dacedec3f27bf217c609e84570d49233be\|e373d06b9c7892f565ac0428471923e278834968483972e524a310bf6eb43f67\|ae4f7e1d7298f3e0bd9e0aabd310d8217afabb81c2b10bd6a9aa20c7c94de182\|8aabf8f13bdf0feed398c7c8b0ac24db59d60d0d06f9dc6cb1400de4df898324' drwxr-xr-x. 3 root root 40 Oct 15 10:23 8aabf8f13bdf0feed398c7c8b0ac24db59d60d0d06f9dc6cb1400de4df898324 drwxr-xr-x. 3 root root 40 Oct 15 10:23 ae4f7e1d7298f3e0bd9e0aabd310d8217afabb81c2b10bd6a9aa20c7c94de182 ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9321","10/16/2018 05:40:50",5,"Add an optional `vendor` field in `Resource.DiskInfo.Source`. ""This will allow the framework to recover volumes reported by the corresponding CSI plugin across agent ID changes. When an agent changes its ID, all reservation information related to resources coming from a given resource provider will be lost, so frameworks needs an unique identifier to identify if a new volume associated with the new agent ID is the same volume. 
Since CSI volume ID are not unique across different plugins, we will need to add a new {{vendor}} field, which together with the existing {{id}} field can provide the means to globally uniquely identify this source.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0 +"MESOS-9324","10/16/2018 22:52:35",3,"Resource fragmentation: frameworks may be starved of port resources in the presence of large number roles with quota. ""In our environment where there are 1.5k frameworks and quota is heavily utilized, we would experience a severe resource fragmentation issue. Specifically, we observed a large number of port-less offers circulating in the cluster. Thus frameworks that need port resources are not able to launch tasks even if their roles have quota (because currently, we can only set quota for scalar resources, not port range resources). While most of the 1.5k frameworks do not suppress today and we believe the situation will significantly improve once they do. Still, I think there are some improvements the Mesos allocator can make to help. h3. How resource becomes fragmented The origin of these port-less offers stems from quota chopping. Specifically, when chopping an agent to satisfy a role’s quota, we will also hand out resources that this role does not have quota for (as long as it does not break other role’s quota). These “extra resources” certainly includes ALL the remaining port resources on the agent. After this offer, the agent will be left with no port resources even though it still has CPUs and etc. Later, these resources may be offered to other frameworks but they are useless due to no ports. Now we have some “bad offers” in the cluster. h3. How resource fragmentation prolonged A resource offer, once it is declined (e.g. due to no ports), is recovered by the allocator and offered to other frameworks again. Before this happens, it is possible that this offer might be able to merge with either the remaining resources or other declined resources on the same agent. However, it is conceivable that not uncommonly, the declined offer will be hand out again *as-is*. This is especially probable if the allocator makes offers faster than the framework offer response time. As a result, we will observe the circulation of bad offers across different frameworks. These bad offers will exist for a long time before being consolidated again. For how long? *The longevity of the bad offer will be roughly proportional to the number of active frameworks*. In the worse case, once all the active frameworks have (hopefully long) declined the bad offer, the bad offer will have nowhere to go and finally start to merge with other resources on that agent. Note, since the allocator performance has greatly improved in the past several months. The scenario described here could be increasingly common. Also, as we introduce quota limits and hierarchical quota, there will be much more agent chopping, making resource fragmentation even worse. h3. Near-term Mitigations As mentioned above, the longevity of a bad offer is proportional to the active frameworks. Thus framework suppression will certainly help. In addition, from the Mesos side, a couple of mitigation measures are worth considering (other than the long-term optimistic allocation strategy): 1. Adding a defragment interval once in a while in the allocator. For example, each minute or a dozen allocation cycles or so, we will pause the allocation, rescind all the offers and start allocating again. 
This essentially eliminates all the circulating bad offers by giving them a chance to be consolidated. Think of this as a periodic “reboot” of the allocator. 2. Consider chopping non-quota resources as well. Right now, for resources such as ports (or any other resources that the role does not have quota for), all are allocated in a single offer. We could choose to chop these non-quota resources as well. For example, port resources can be distributed proportionally to allocated CPU resources. 3. Provide support for specifying port quantities. With this, we can utilize the existing quota or `min_allocatable_resources` APIs to guarantee a certain number of port resources.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9331","10/17/2018 21:44:44",1,"Some library functions ignore failures from ::close which should probably be handled. ""Multiple functions e.g., in {{stout}} ignore the return value of {{::close}} with the following rationale, I believe this is incorrect in general. Especially when not calling {{::fsync}} after a write operation, the kernel might buffer writes and only trigger write-related failures when the file descriptor is closed, see e.g., the note on error handling [here|http://man7.org/linux/man-pages/man2/close.2.html#NOTES]. We should audit our code to make sure that failures to {{::close}} file descriptors are properly propagated when needed."""," // We ignore the return value of close(). This is because users // calling this function are interested in the return value of // write(). Also an unsuccessful close() doesn't affect the write.",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9332","10/18/2018 09:30:57",3,"Nested container should run as the same user of its parent container by default. ""Currently when launching a debug container, by default Mesos agent will use the executor's user as the debug container's user if the `user` field is not specified in the debug container's `commandInfo` (see [this code|https://github.com/apache/mesos/blob/1.7.0/src/slave/http.cpp#L2559] for details). This is OK for the command task since the command executor's user is same with command task's user (see [this code|https://github.com/apache/mesos/blob/1.7.0/src/slave/slave.cpp#L6068:L6070] for details), so the debug container will be launched as the same user of the task. But for the task in a task group, the default executor's user is same with the framework user (see [this code|https://github.com/apache/mesos/blob/1.7.0/src/slave/slave.cpp#L8959] for details), so in this case the debug container will be launched as the same user of the framework rather than the task. So in a scenario that framework user is a normal user but the task user is root, the debug container will be launched as the normal which is not desired, the expectation is the debug container should run as the same user of the container it debugs.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9333","10/18/2018 18:10:41",3,"Document usage and build of new Mesos CLI ""Stating how to compile and use the Mesos CLI + its limitations (only Mesos containerizer, exec DEBUG follows task-user).""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9334","10/19/2018 11:02:09",5,"Container stuck at ISOLATING state due to libevent poll never returns. 
""We found UCR container may be stuck at `ISOLATING` state:  In the above logs, the state of container `1e5b8fc3-5c9e-4159-a0b9-3d46595a5b54` was transitioned to `ISOLATING` at 09:13:23, but did not transitioned to any other states until it was destroyed due to the executor registration timeout (10 mins). And the destroy can never complete since it needs to wait for the container to finish isolating."""," 2018-10-03 09:13:23: I1003 09:13:23.274561 2355 containerizer.cpp:3122] Transitioning the state of container 1e5b8fc3-5c9e-4159-a0b9-3d46595a5b54 from PREPARING to ISOLATING 2018-10-03 09:13:23: I1003 09:13:23.279223 2354 cni.cpp:962] Bind mounted '/proc/5244/ns/net' to '/run/mesos/isolators/network/cni/1e5b8fc3-5c9e-4159-a0b9-3d46595a5b54/ns' for container 1e5b8fc3-5c9e-4159-a0b9-3d46595a5b54 2018-10-03 09:23:22: I1003 09:23:22.879868 2354 containerizer.cpp:2459] Destroying container 1e5b8fc3-5c9e-4159-a0b9-3d46595a5b54 in ISOLATING state ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9341","10/22/2018 06:52:23",3,"Add non-interactive test(s) for `mesos task exec` ""As a source, we could use the tests in https://github.com/dcos/dcos-core-cli/blob/b930d2004dceb47090004ab658f35cb608bc70e4/python/lib/dcoscli/tests/integrations/test_task.py""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9342","10/22/2018 06:54:18",3,"Add interactive test(s) for `mesos task exec` ""As a source, we could use the tests in https://github.com/dcos/dcos-core-cli/blob/b930d2004dceb47090004ab658f35cb608bc70e4/python/lib/dcoscli/tests/integrations/test_task.py This will require new helper functions to get the input/output of the command.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9343","10/22/2018 06:56:03",3,"Add test(s) for `mesos task attach` on task launched with a TTY ""As a source, we could use the tests in https://github.com/dcos/dcos-core-cli/blob/b930d2004dceb47090004ab658f35cb608bc70e4/python/lib/dcoscli/tests/integrations/test_task.py""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9344","10/22/2018 06:57:18",3,"Add test for `mesos task attach` on task launched without a TTY ""As a source, we could use the tests in https://github.com/dcos/dcos-core-cli/blob/b930d2004dceb47090004ab658f35cb608bc70e4/python/lib/dcoscli/tests/integrations/test_task.py""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9357","10/25/2018 22:35:25",1,"FetcherTest.DuplicateFileURI fails on macos ""I see {{FetcherTest.DuplicateFileURI}} fail pretty reliably on macos, e.g., 10.14. """," ../../src/tests/fetcher_tests.cpp:173 Value of: os::exists(""""two"""") Actual: false Expected: true ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9366","11/01/2018 03:10:16",3,"Test `HealthCheckTest.HealthyTaskNonShell` can hang. 
""In {{HealthCheckTest.HealthyTaskNonShell}} the {{statusRunning}} future is incorrectly checked before being waited: [https://github.com/apache/mesos/blob/d8062f231b9f27889b7cae7a42eef49e4eed79ec/src/tests/health_check_tests.cpp#L673] As a result, if for some arbitrary reason there is only one task status update sent (e.g., {{TASK_FAILED}}), {{statusRunning->state()}} will make the test hang forever: (The line number above are not correct because of additional logging I added to triage this error.)"""," #0 pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185 #1 0x00007fc1d9a9991c in std::condition_variable::wait(std::unique_lock&) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6 #2 0x00005652770d1950 in synchronized_wait () at ../../3rdparty/stout/include/stout/synchronized.hpp:201 #3 0x00007fc1e76ba909 in process::Gate::wait () at ../../../3rdparty/libprocess/src/gate.hpp:50 #4 0x00007fc1e768c01d in process::ProcessManager::wait () at ../../../3rdparty/libprocess/src/process.cpp:3232 #5 0x00007fc1e76917fd in process::wait () at ../../../3rdparty/libprocess/src/process.cpp:3973 #6 0x00007fc1e75ebf11 in process::Latch::await () at ../../../3rdparty/libprocess/src/latch.cpp:63 #7 0x0000565275431ff6 in process::Future::await () at ../../3rdparty/libprocess/include/process/future.hpp:1289 #8 0x0000565275441825 in process::Future::get () at ../../3rdparty/libprocess/include/process/future.hpp:1301 #9 0x0000565275432198 in process::Future::operator-> () at ../../3rdparty/libprocess/include/process/future.hpp:1319 #10 0x0000565275db5ef1 in mesos::internal::tests::HealthCheckTest_HealthyTaskNonShell_Test::TestBody () at ../../src/tests/health_check_tests.cpp:682 #11 0x000056527717296b in testing::internal::HandleSehExceptionsInMethodIfSupported () at googletest-release-1.8.0/googletest/src/gtest.cc:2402 #12 0x000056527716ca6b in testing::internal::HandleExceptionsInMethodIfSupported () at googletest-release-1.8.0/googletest/src/gtest.cc:2438 #13 0x0000565277149b82 in testing::Test::Run () at googletest-release-1.8.0/googletest/src/gtest.cc:2475 #14 0x000056527714a4a8 in testing::TestInfo::Run () at googletest-release-1.8.0/googletest/src/gtest.cc:2656 #15 0x000056527714ab45 in testing::TestCase::Run () at googletest-release-1.8.0/googletest/src/gtest.cc:2774 #16 0x0000565277151d3e in testing::internal::UnitTestImpl::RunAllTests () at googletest-release-1.8.0/googletest/src/gtest.cc:4649 #17 0x0000565277173703 in testing::internal::HandleSehExceptionsInMethodIfSupported () at googletest-release-1.8.0/googletest/src/gtest.cc:2402 #18 0x000056527716d69d in testing::internal::HandleExceptionsInMethodIfSupported () at googletest-release-1.8.0/googletest/src/gtest.cc:2438 #19 0x00005652771508da in testing::UnitTest::Run () at googletest-release-1.8.0/googletest/src/gtest.cc:4257 #20 0x0000565276034020 in RUN_ALL_TESTS () at ../3rdparty/googletest-release-1.8.0/googletest/include/gtest/gtest.h:2233 #21 0x0000565276033ab7 in main () at ../../src/tests/main.cpp:168",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9384","11/13/2018 09:49:01",5,"Resource providers reported by master should reflect connected resource providers ""Currently, the master will remember any resource provider it saw ever and report it e.g., in {{GET_AGENTS}} responses, regardless of whether the resource provider is currently connected or not. This is not very intuitive. 
The master should instead only report resource providers which are currently connected. Agents can still report even disconnected resource providers.""","",0,1,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9386","11/14/2018 16:38:45",5,"Implement Seccomp profile inheritance for POD containers ""Child containers inherit its parent container's Seccomp profile by default. Also, Seccomp profile can be overridden by a Framework for a particular child container by specifying a path to the Seccomp profile. Mesos containerizer persists information about containers on disk via `ContainerLaunchInfo` proto, which includes `ContainerSeccompProfile` proto. Mesos containerizer should use this proto to load the parent's profile for a child container. When a child inherits the parent's Seccomp profile, Mesos agent doesn't have to re-read a Seccomp profile from the disk, which was used for the parent container. Otherwise, we would have to check that a file content hasn't changed since the last time the parent was launched.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9395","11/16/2018 12:10:05",5,"Check failure on `StorageLocalResourceProviderProcess::applyCreateDisk`. ""Observed the following agent failure on one of our staging clusters: """," Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: I1116 11:57:24.641331 26684 http.cpp:1799] Processing GET_AGENT call Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: I1116 11:57:24.650429 26679 http.cpp:1117] HTTP POST for /slave(1)/api/v1/resource_provider from 172.31.8.65:57790 Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: I1116 11:57:24.650629 26679 manager.cpp:672] Subscribing resource provider {""""attributes"""":[{""""name"""":""""lvm-vg-name"""",""""text"""":{""""value"""":""""lvm-double-1540383639""""},""""type"""":""""SCALAR""""},{""""name"""":""""dss-asset-id"""",""""text"""":{""""value"""":""""6AbZV6W2DrK4YgcIR3ICVo""""},""""type"""":""""SCALAR""""}],""""default_reservations"""":[{""""principal"""":""""storage-principal"""",""""role"""":""""dcos-storage"""",""""type"""":""""DYNAMIC""""}],""""id"""":{""""value"""":""""8326e931-41f2-4f45-9174-13fe35c19300""""},""""name"""":""""rp_6AbZV6W2DrK4YgcIR3ICVo"""",""""storage"""":{""""plugin"""":{""""containers"""":[{""""command"""":{""""environment"""":{""""variables"""":[{""""name"""":""""PATH"""",""""type"""":""""VALUE"""",""""value"""":""""/opt/mesosphere/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""""},{""""name"""":""""LD_LIBRARY_PATH"""",""""type"""":""""VALUE"""",""""value"""":""""/opt/mesosphere/lib""""},{""""name"""":""""CONTAINER_LOGGER_DESTINATION_TYPE"""",""""type"""":""""VALUE"""",""""value"""":""""journald+logrotate""""},{""""name"""":""""CONTAINER_LOGGER_EXTRA_LABELS"""",""""type"""":""""VALUE"""",""""value"""":""""{\""""CSI_PLUGIN\"""":\""""csilvm\""""}""""}]},""""shell"""":true,""""uris"""":[{""""executable"""":true,""""extract"""":false,""""value"""":""""""""}],""""value"""":""""echo \""""a *:* rwm\"""" > /sys/fs/cgroup/devices`cat /proc/self/cgroup | grep devices | cut -d : -f 3`/devices.allow; exec ./csilvm -devices=/dev/xvdk,/dev/xvdj -volume-group=lvm-double-1540383639 -unix-addr-env=CSI_ENDPOINT 
-tag=6AbZV6W2DrK4YgcIR3ICVo""""},""""resources"""":[{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":0.1},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""scalar"""":{""""value"""":128.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""scalar"""":{""""value"""":10.0},""""type"""":""""SCALAR""""}],""""services"""":[""""CONTROLLER_SERVICE"""",""""NODE_SERVICE""""]}],""""name"""":""""plugin_6AbZV6W2DrK4YgcIR3ICVo"""",""""type"""":""""io.mesosphere.dcos.storage.csilvm""""}},""""type"""":""""org.apache.mesos.rp.local.storage""""} Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: I1116 11:57:24.690474 26685 provider.cpp:546] Received SUBSCRIBED event Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: I1116 11:57:24.690521 26685 provider.cpp:1492] Subscribed with ID 8326e931-41f2-4f45-9174-13fe35c19300 Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: I1116 11:57:24.690657 26681 status_update_manager_process.hpp:314] Recovering operation status update manager Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: F1116 11:57:24.691496 26682 provider.cpp:3121] Check failed: resource.disk().source().has_profile() != resource.disk().source().has_id() (1 vs. 1) Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: *** Check failure stack trace: *** Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: @ 0x7fecb099e9fd google::LogMessage::Fail() Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: @ 0x7fecb09a082d google::LogMessage::SendToLog() Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: @ 0x7fecb099e5ec google::LogMessage::Flush() Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: @ 0x7fecb09a1129 google::LogMessageFatal::~LogMessageFatal() Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: @ 0x7fecb01654ca mesos::internal::StorageLocalResourceProviderProcess::applyCreateDisk() Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: @ 0x7fecb017c683 mesos::internal::StorageLocalResourceProviderProcess::_applyOperation() Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: @ 0x7fecb017d64a _ZZN5mesos8internal35StorageLocalResourceProviderProcess26reconcileOperationStatusesEvENKUlRKNS0_26StatusUpdateManagerProcessIN2id4UUIDENS0_27UpdateOperationStatusRecordENS0_28UpdateOperationStatusMessageEE5StateEE_clESA_ Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: @ 0x7fecb017dd21 _ZNO6lambda12CallableOnceIFN7process6FutureI7NothingEEvEE10CallableFnINS_8internal7PartialIZN5mesos8internal35StorageLocalResourceProviderProcess26reconcileOperationStatusesEvEUlRKNSB_26StatusUpdateManagerProcessIN2id4UUIDENSB_27UpdateOperationStatusRecordENSB_28UpdateOperationStatusMessageEE5StateEE_ISJ_EEEEclEv Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: @ 0x7fecafa0ce97 _ZNO6lambda12CallableOnceIFvPN7process11ProcessBaseEEE10CallableFnINS_8internal7PartialIZNS1_8internal8DispatchINS1_6FutureI7NothingEEEclINS0_IFSD_vEEEEESD_RKNS1_4UPIDEOT_EUlSt10unique_ptrINS1_7PromiseISC_EESt14default_deleteISP_EEOSH_S3_E_JSS_SH_St12_PlaceholderILi1EEEEEEclEOS3_ Nov 16 11:57:24 
int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: @ 0x7fecb08eec51 process::ProcessBase::consume() Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: @ 0x7fecb09056cc process::ProcessManager::resume() Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: @ 0x7fecb090b186 _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: @ 0x7fecad5d8070 (unknown) Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: @ 0x7fecacdf6e25 start_thread Nov 16 11:57:24 int-mountvolumeagent2-soak112s.testing.mesosphe.re mesos-agent[26663]: @ 0x7fecacb20bad __clone ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9399","11/19/2018 11:11:12",2,"Update 'mesos task list' to only list running tasks ""Doing a {{mesos task list}} currently returns all tasks that have ever been run (not just running tasks). The default behavior should be to return only the running tasks and offer an option to return all of them. To tell them apart, there should be a state field in the table returned by this command.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9406","11/20/2018 01:07:24",2,"Allow for optionally unbundled leveldb from CMake builds. ""Following the example of unbundled libevent and libarchive, we should allow for unbundled leveldb if the user wishes so. For leveldb, this task is not as trivial as one would hope due to the fact that we link leveldb statically. This forces us to satisfy leveldb's strong dependencies against gpertools (tcmalloc) as well as snappy. Alternatively, we would resort into linking leveldb dynamically, solving these issues. ""","",0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9411","11/21/2018 13:46:03",5,"Validation of JWT tokens using HS256 hashing algorithm is not thread safe. ""from the [OpenSSL documentation|https://www.openssl.org/docs/man1.0.2/crypto/hmac.html]: {quote} It places the result in {{md}} (which must have space for the output of the hash function, which is no more than {{EVP_MAX_MD_SIZE}} bytes). If {{md}} is {{NULL}}, the digest is placed in a static array. The size of the output is placed in {{md_len}}, unless it is {{NULL}}. Note: passing a {{NULL}} value for {{md}} to use the static array is not thread safe. {quote} We are calling {{HMAC()}} as follows: Given that this code does not run inside a process, race conditions could occur."""," unsigned int md_len = 0; unsigned char* rc = HMAC( EVP_sha256(), secret.data(), secret.size(), reinterpret_cast(message.data()), message.size(), nullptr, // <----- This is `md` &md_len); if (rc == nullptr) { return Error(addErrorReason(""""HMAC failed"""")); } return string(reinterpret_cast(rc), md_len); ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9419","11/26/2018 23:21:23",3,"Executor to framework message crashes master if framework has not re-registered. ""If the executor sends a framework message after a master failover, and the framework has not yet re-registered with the master, this will crash the master: This is because Framework::send proceeds if the framework is disconnected. 
In the case of a recovered framework, it will not have a pid or http connection yet: https://github.com/apache/mesos/blob/9b889a10927b13510a1d02e7328925dba3438a0b/src/master/master.hpp#L2590-L2610 The executor to framework path does not guard against the framework being disconnected, unlike the status update path: https://github.com/apache/mesos/blob/9b889a10927b13510a1d02e7328925dba3438a0b/src/master/master.cpp#L6472-L6495 vs. https://github.com/apache/mesos/blob/9b889a10927b13510a1d02e7328925dba3438a0b/src/master/master.cpp#L8371-L8373 It was reported that this crash didn't occur for the user on 1.2.0, however the issue appears to present there as well, so we will try to backport a test to see if it's indeed not occurring in 1.2.0."""," W20181105 22:02:48.782819 172709 master.hpp:2304] Master attempted to send message to disconnected framework 03dc2603-acd6-491e-\ 8717-3f03e5ee37f4-0000 (Cook-1.24.0-9299b474217db499c9d28738050b359ac8dd55bb) F20181105 22:02:48.782830 172709 master.hpp:2314] CHECK_SOME(pid): is NONE *** Check failure stack trace: *** *** @ 0x7f09e016b6cd google::LogMessage::Fail() *** @ 0x7f09e016d38d google::LogMessage::SendToLog() *** @ 0x7f09e016b2b3 google::LogMessage::Flush() *** @ 0x7f09e016de09 google::LogMessageFatal::~LogMessageFatal() *** @ 0x7f09df086228 _CheckFatal::~_CheckFatal() *** @ 0x7f09df3a403d mesos::internal::master::Framework::send<>() *** @ 0x7f09df2f4886 mesos::internal::master::Master::executorMessage() *** @ 0x7f09df3b06a4 _ZN15ProtobufProcessIN5mesos8internal6master6MasterEE8handlerNINS1_26ExecutorToFrameworkMessageEJRKNS0\ _7SlaveIDERKNS0_11FrameworkIDERKNS0_10ExecutorIDERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEEJS9_SC_SF_SN_EEEvPS3_MS3\ _FvRKN7process4UPIDEDpT1_ESS_SN_DpMT_KFT0_vE @ 0x7f09df345b43 std::_Function_handler<>::_M_invoke() *** @ 0x7f09df36930f ProtobufProcess<>::consume() *** @ 0x7f09df2e0ff5 mesos::internal::master::Master::_consume() *** @ 0x7f09df2f5542 mesos::internal::master::Master::consume() *** @ 0x7f09e00d9c7a process::ProcessManager::resume() *** @ 0x7f09e00dd836 _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv *** @ 0x7f09dd467ac8 execute_native_thread_routine *** @ 0x7f09dd6f6b50 start_thread *** @ 0x7f09dcc7030d (unknown) // Sends a message to the connected framework. template void Framework::send(const Message& message) { if (!connected()) { LOG(WARNING) << """"Master attempted to send message to disconnected"""" << """" framework """" << *this; // XXX proceeds! } metrics.incrementEvent(message); if (http.isSome()) { if (!http->send(message)) { LOG(WARNING) << """"Unable to send event to framework """" << *this << """":"""" << """" connection closed""""; } } else { CHECK_SOME(pid); // XXX Will crash. master->send(pid.get(), message); } } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9427","11/29/2018 23:10:01",5,"Revisit quota documentation. ""At this point the quota documentation in the docs/ folder has become rather stale. 
It would be good to at least update any inaccuracies and ideally re-write it to better reflect the current thinking.""","",0,0,0,0,1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9434","12/01/2018 03:47:51",2,"Completed framework update streams may retry forever ""Since the agent/RP currently does not GC operation status update streams when frameworks are torn down, it's possible that active update streams associated with completed frameworks may remain and continue retrying forever. We should add a mechanism to complete these streams when the framework becomes completed. A couple options which have come up during discussion: * Have the master acknowledge updates associated with completed frameworks. Note that since completed frameworks are currently only tracked by the master in memory, a master failover could prevent this from working perfectly. * Extend the RP API to allow the GC of particular update streams, and have the agent GC streams associated with a framework when it receives a {{ShutdownFrameworkMessage}}. This would also require the addition of a new method to the status update manager.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0 +"MESOS-9462","12/07/2018 19:04:31",3,"Devices in a container are inaccessible due to `nodev` on `/var/run`. ""A recent [patch|https://reviews.apache.org/r/69086/] (commit ede8155d1d043137e15007c48da36ac5fa0b5124) changes the behavior of how standard device nodes (e.g., /dev/null, etc.) are setup. It uses bind mount (from host) now (instead of mknod). The devices nodes are created under `/var/run/mesos/containers//devices`, and then bind mounted to the container root filesystem. This is problematic for those Linux distros that mount `/var/run` (or `/run`) as `nodev`. For instance, CentOS 7.4: As a result, the `/dev/null` devices in the container will inherit the `nodev` from `/run` on the host This will cause """"Permission Denied"""" error when a process in the container tries to open the device node. You can try to reproduce this issue using Mesos Mini And the, go to Marathon UI (http://localhost:8080), and launch an app using the following config You'll see the task failed with """"Permission Denied"""". The task will run normally if you use `mesos/mesos-mini:master-2018-12-01`"""," [jie@core-dev ~]$ cat /proc/self/mountinfo | grep """"/run\ """" 24 62 0:19 / /run rw,nosuid,nodev shared:23 - tmpfs tmpfs rw,seclabel,mode=755 [jie@core-dev ~]$ cat /etc/redhat-release CentOS Linux release 7.4.1708 (Core) 629 625 0:121 /mesos/containers/49f1da14-d741-4030-994c-0d8ed5093b13/devices/null /dev/null rw,nosuid,nodev - tmpfs tmpfs rw,mode=755 docker run --rm --privileged -p 5050:5050 -p 5051:5051 -p 8080:8080 mesos/mesos-mini:master-2018-12-06 { """"id"""": """"/test"""", """"cmd"""": """"dd if=/dev/zero of=file bs=1024 count=1 oflag=dsync"""", """"cpus"""": 1, """"mem"""": 128, """"disk"""": 128, """"instances"""": 1, """"container"""": { """"type"""": """"MESOS"""", """"docker"""": { """"image"""": """"ubuntu:18.04"""" } } } ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9469","12/11/2018 12:59:31",1,"Mesos does not validate framework-supplied FrameworkIDs ""Since Mesos masters do not persist frameworks (MESOS-1719) we might subscribe schedulers with self-assigned {{FrameworkIDs}}. 
While we cannot confirm that used {{FrameworkIDs}} were indeed assigned by Mesos masters, we should still validate the supplied values to make sure they do not break our internal assumptions (e.g., IDs might be used to construct filesystem paths).""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9471","12/11/2018 22:19:52",3,"Master should track operations on agent default resources. ""Make {{Master::updateSlave()}} add operations that the agent sends and the master doesn't know. Right now only operations from SLRPs are added to the master's in-memory state.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9472","12/11/2018 22:23:04",3,"Unblock operation feedback on agent default resources. ""# Remove {{CHECK}} marked with a TODO in {{Master::updateOperationStatus()}}. # Update {{Master::acknowledgeOperationStatus()}}, remove the CHECK requiring a resource provider ID. # Remove validation in {{Option validate(mesos::scheduler::Call& call, const Option& principal)}}""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9473","12/11/2018 22:24:17",8,"Add end to end tests for operations on agent default resources. ""Making note of particular cases we need to test: * Verify that frameworks will receive OPERATION_GONE_BY_OPERATOR for operations on agent default resources when an agent is marked gone * Verify that frameworks will receive OPERATION_GONE_BY_OPERATOR when they reconcile operations on agents which have been marked gone""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9474","12/12/2018 20:10:47",3,"Master does not respect authorization result for `CREATE_DISK` and `DESTROY_DISK`. ""On our internal cluster with a custom authorizer module we observed the following problem: The authorizer module caches authorization results, and that's why there was only one logged authorization requset. The problem is that, the logged request was for {{CREATE_MOUNT_DISK}}, and the result was {{deny}}, but despite the denial, all {{CREATE_DISK}} operations were processed, however another {{RESERVE}} operation was dropped because of this denial. The bug is that the master pushed a authorization future in the {{futures}} vector in {{Master::accept}} for each {{DESTROY_DISK}}: [https://github.com/apache/mesos/blob/18356bf3f4ac730b4a798261aad042555c4a4834/src/master/master.cpp#L4599-L4601] However, the master never popped and checked the future in {{Master::_accept}}, but go ahead to process the operation: [https://github.com/apache/mesos/blob/18356bf3f4ac730b4a798261aad042555c4a4834/src/master/master.cpp#L5706] The future ended up mismatched with the {{RESERVE}} operation, causing it to be dropped. Updated: a similar bug would happen if we have a validation error for {{LAUNCH_GROUP}}. 
See MESOS-9480."""," I1212 14:16:58.424782 12492 master.cpp:4489] Processing ACCEPT call for offers: [ 98c96586-1980-4007-9651-18dd837f529a-O16233 ] on agent 98c96586-1980-4007-9651-18dd837f529a-S1 at slave(1)@172.31.15.78:5051 (172.31.15.78) for framework 98c96586-1980-4007-9651-18dd837f529a-0009 (storage) I1212 14:16:58.424831 12492 master.cpp:3949] Authorizing principal 'storage-principal' to destroy disk 'disk(allocated: dcos-storage)(reservations: [(DYNAMIC,dcos-storage,storage-principal)])[MOUNT(g4FmoSCHyiqTZWw1nkOd0v9_d17d081f-c138-472d-9e0b-aa6b21e43c7c,profile-b)]:100' I1212 14:16:58.424913 12492 master.cpp:3949] Authorizing principal 'storage-principal' to destroy disk 'disk(allocated: dcos-storage)(reservations: [(DYNAMIC,dcos-storage,storage-principal)])[MOUNT(g4FmoSCHyiqTZWw1nkOd0v9_c62a4e89-f902-4968-96d8-69b9efa5f35a,profile-b)]:100' I1212 14:16:58.424979 12492 master.cpp:3949] Authorizing principal 'storage-principal' to destroy disk 'disk(allocated: dcos-storage)(reservations: [(DYNAMIC,dcos-storage,storage-principal)])[MOUNT(g4FmoSCHyiqTZWw1nkOd0v9_1f490a57-9e41-41a0-b7be-408147281113,profile-b)]:100' I1212 14:16:58.425055 12492 master.cpp:3653] Authorizing principal 'storage-principal' to reserve resources 'disk(allocated: dcos-storage)(reservations: [(DYNAMIC,dcos-storage,storage-principal)])[MOUNT(g4FmoSCHyiqTZWw1nkOd0v9_c60ec637-3f67-4d41-bbc8-448c022c05fb,profile-b)]:100' I1212 14:16:58.425118 12492 master.cpp:3949] Authorizing principal 'storage-principal' to destroy disk 'disk(allocated: dcos-storage)(reservations: [(DYNAMIC,dcos-storage,storage-principal)])[MOUNT(g4FmoSCHyiqTZWw1nkOd0v9_5ce79163-aef2-4fd4-8073-f310dec239bf,profile-b)]:100' I1212 14:16:58.425499 12488 authorizer.cpp:957] dstport=5050 dstip=172.31.5.116 result=deny object="""""""" action=""""DESTROY_MOUNT_DISK"""" uid=""""storage-principal"""" reason="""""""" authorizer=""""mesos-master"""" timestamp=2018-12-12 14:16:58.425453056+00:00 type=audit I1212 14:16:58.426434 12483 master.cpp:5769] Processing DESTROY_DISK operation for volume disk(allocated: dcos-storage)(reservations: [(DYNAMIC,dcos-storage,storage-principal)])[MOUNT(g4FmoSCHyiqTZWw1nkOd0v9_d17d081f-c138-472d-9e0b-aa6b21e43c7c,profile-b)]:100 from framework 98c96586-1980-4007-9651-18dd837f529a-0 .78:5051 (172.31.15.78) I1212 14:16:58.426746 12483 master.cpp:5769] Processing DESTROY_DISK operation for volume disk(allocated: dcos-storage)(reservations: [(DYNAMIC,dcos-storage,storage-principal)])[MOUNT(g4FmoSCHyiqTZWw1nkOd0v9_c62a4e89-f902-4968-96d8-69b9efa5f35a,profile-b)]:100 from framework 98c96586-1980-4007-9651-18dd837f529a-0 .78:5051 (172.31.15.78) I1212 14:16:58.427112 12483 master.cpp:5769] Processing DESTROY_DISK operation for volume disk(allocated: dcos-storage)(reservations: [(DYNAMIC,dcos-storage,storage-principal)])[MOUNT(g4FmoSCHyiqTZWw1nkOd0v9_1f490a57-9e41-41a0-b7be-408147281113,profile-b)]:100 from framework 98c96586-1980-4007-9651-18dd837f529a-0 .78:5051 (172.31.15.78) W1212 14:16:58.427274 12483 master.cpp:2275] Dropping RESERVE operation from framework 98c96586-1980-4007-9651-18dd837f529a-0009 (storage): Not authorized to reserve resources as 'storage-principal' I1212 14:16:58.427366 12483 master.cpp:5769] Processing DESTROY_DISK operation for volume disk(allocated: dcos-storage)(reservations: [(DYNAMIC,dcos-storage,storage-principal)])[MOUNT(g4FmoSCHyiqTZWw1nkOd0v9_5ce79163-aef2-4fd4-8073-f310dec239bf,profile-b)]:100 from framework 98c96586-1980-4007-9651-18dd837f529a-0009 (storage) to agent 
98c96586-1980-4007-9651-18dd837f529a-S1 at slave(1)@172.31.15.78:5051 (172.31.15.78)",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9477","12/13/2018 18:42:16",5,"Documentation for operation feedback ""Review: https://reviews.apache.org/r/69871/""","",0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9480","12/14/2018 18:59:05",2,"Master may skip processing authorization results for `LAUNCH_GROUP`. ""If there is a validation error for {{LAUNCH_GROUP}}, or if there are multiple authorization errors for some of the tasks in a {{LAUNCH_GROUP}}, the master will skip processing the remaining authorization results, which would result in these authorization results being examined by subsequent operations incorrectly: https://github.com/apache/mesos/blob/3ade731d0c1772206c4afdf56318cfab6356acee/src/master/master.cpp#L5487-L5521 ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9485","12/18/2018 05:58:10",5,"Unit test for master operation authorization. ""We should create a unit test for MESOS-9474 and MESOS-9480. To make the test easier, we might want to consider using operation feedback once MESOS-9472 is done.""","",0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9486","12/18/2018 06:03:17",1,"Set up `object.value` for `CREATE_DISK` and `DESTROY_DISK` authorizations. ""We should be defensive and set up {{object.value}} to the role of the resource for authorization actions {{CREATE_BLOCK_DISK}}, {{DESTROY_BLOCK_DISK}}, {{CREATE_MOUNT_DISK}} and {{DESTROY_MOUNT_DISK}} so an old-school authorizer can rely on the field to perform authorization. This behavior is deprecated though, so will be removed once all {{*_WITH_ROLE}} authorization action aliases are removed through MESOS-7073.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9495","12/20/2018 22:17:38",1,"Test `MasterTest.CreateVolumesV1AuthorizationFailure` is flaky. "" This is because we authorize the retried registration before dropping it. 
Full log: [^mesos-ec2-centos-7-CMake.Mesos.MasterTest.CreateVolumesV1AuthorizationFailure-badrun.txt]"""," I1219 22:45:59.578233 26107 slave.cpp:1884] Will retry registration in 2.10132ms if necessary I1219 22:45:59.578615 26107 master.cpp:6125] Received register agent message from slave(463)@172.16.10.13:35739 (ip-172-16-10-13.ec2.internal) I1219 22:45:59.578830 26107 master.cpp:3871] Authorizing agent with principal 'test-principal' I1219 22:45:59.578975 26107 master.cpp:6183] Authorized registration of agent at slave(463)@172.16.10.13:35739 (ip-172-16-10-13.ec2.internal) I1219 22:45:59.579039 26107 master.cpp:6294] Registering agent at slave(463)@172.16.10.13:35739 (ip-172-16-10-13.ec2.internal) with id 85292fcc-b698-4377-9faa-f76b0ccd4ee5-S0 I1219 22:45:59.579540 26107 registrar.cpp:495] Applied 1 operations in 143852ns; attempting to update the registry I1219 22:45:59.580102 26109 registrar.cpp:552] Successfully updated the registry in 510208ns I1219 22:45:59.580312 26109 master.cpp:6342] Admitted agent 85292fcc-b698-4377-9faa-f76b0ccd4ee5-S0 at slave(463)@172.16.10.13:35739 (ip-172-16-10-13.ec2.internal) I1219 22:45:59.580968 26111 slave.cpp:1884] Will retry registration in 23.973874ms if necessary I1219 22:45:59.581447 26111 slave.cpp:1486] Registered with master master@172.16.10.13:35739; given agent ID 85292fcc-b698-4377-9faa-f76b0ccd4ee5-S0 ... I1219 22:45:59.580950 26109 master.cpp:6391] Registered agent 85292fcc-b698-4377-9faa-f76b0ccd4ee5-S0 at slave(463)@172.16.10.13:35739 (ip-172-16-10-13.ec2.internal) with disk(reservations: [(STATIC,role1)]):1024; cpus:2; mem:6796; ports:[31000-32000] I1219 22:45:59.583326 26109 master.cpp:6125] Received register agent message from slave(463)@172.16.10.13:35739 (ip-172-16-10-13.ec2.internal) I1219 22:45:59.583524 26109 master.cpp:3871] Authorizing agent with principal 'test-principal' ... W1219 22:45:59.584242 26109 master.cpp:6175] Refusing registration of agent at slave(463)@172.16.10.13:35739 (ip-172-16-10-13.ec2.internal): Authorization failure: Authorizer failure ... I1219 22:45:59.586944 26113 http.cpp:1185] HTTP POST for /master/api/v1 from 172.16.10.13:47412 I1219 22:45:59.587129 26113 http.cpp:682] Processing call CREATE_VOLUMES /home/centos/workspace/mesos/Mesos_CI-build/FLAG/CMake/label/mesos-ec2-centos-7/mesos/src/tests/master_tests.cpp:9386: Failure Mock function called more times than expected - returning default value. Function call: authorized(@0x7f5066524720 48-byte object ) Returns: Abandoned Expected: to be called once Actual: called twice - over-saturated and active I1219 22:45:59.587761 26113 master.cpp:3811] Authorizing principal 'test-principal' to create volumes '[{""""disk"""":{""""persistence"""":{""""id"""":""""id1"""",""""principal"""":""""test-principal""""},""""volume"""":{""""container_path"""":""""path1"""",""""mode"""":""""RW""""}},""""name"""":""""disk"""",""""reservations"""":[{""""role"""":""""role1"""",""""type"""":""""STATIC""""}],""""scalar"""":{""""value"""":64.0},""""type"""":""""SCALAR""""}]' ... /home/centos/workspace/mesos/Mesos_CI-build/FLAG/CMake/label/mesos-ec2-centos-7/mesos/src/tests/master_tests.cpp:9398: Failure Failed to wait 15secs for response",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9497","12/21/2018 04:33:16",5,"Parallel reads for expensive master v1 read-only calls. 
""Similar to MESOS-9158 - we should make the operator API calls which serve master state perform computation of multiple such responses in parallel to reduce the performance impact on the master actor. Note that this includes the initial expensive SUBSCRIBE payload for the event streaming API, which is less straightforward to incorporate into the parallel serving logic since it performs writes (to track the subscriber) and produces an infinite response, unlike the other state related calls.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9501","12/27/2018 01:51:15",5,"Mesos executor fails to terminate and gets stuck after agent host reboot. ""When an agent host reboots, all of its containers are gone but the agent will still try to recover from its checkpointed state after reboot. The agent will soon discover that all the cgroup hierarchies are gone and assume (correctly) that the containers are destroyed. However, when trying to terminate the executor, the agent will first try to wait for the exit status of its container: https://github.com/apache/mesos/blob/master/src/slave/containerizer/mesos/containerizer.cpp#L2631 Agent dose so by `waitpid` on the checkpointed child process pid. If, after the agent host reboot, a new process with the same pid gets spawned, then the parent will wait for the wrong child process. This could get stuck until the wrongly waited-for process is somehow exited, see `ReaperProcess::wait()`: https://github.com/apache/mesos/blob/master/3rdparty/libprocess/src/reap.cpp#L88-L114 This will block the executor termination as well as future task status update (e.g. master might still think the task is running).""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9502","12/28/2018 06:17:31",8,"IOswitchboard cleanup could get stuck due to FD leak from a race. ""Our check container got stuck during destroy which in turned stucks the parent container. It is blocked by the I/O switchboard cleanup: 1223 18:04:41.000000 16269 switchboard.cpp:814] Sending SIGTERM to I/O switchboard server (pid: 62854) since container 4d4074fa-bc87-471b-8659-08e519b68e13.16d02532-675a-4acb-964d-57459ecf6b67.check-e91521a3-bf72-4ac4-8ead-3950e31cf09e is being destroyed .... 1227 04:45:38.000000 5189 switchboard.cpp:916] I/O switchboard server process for container 4d4074fa-bc87-471b-8659-08e519b68e13.16d02532-675a-4acb-964d-57459ecf6b67.check-e91521a3-bf72-4ac4-8ead-3950e31cf09e has terminated (status=N/A) Note the timestamp. *Root Cause:* Fundamentally, this is caused by a race between *.discard()* triggered by Check Container TIMEOUT and IOSB extracting ContainerIO object. This race could be exposed by overloaded/slow agent process. Please see how this race be triggered below: # Right after IOSB server process is running, Check container Timed out and the checker process returns a failure, which would close the HTTP connection with agent. # From the agent side, if the connection breaks, the handler will trigger a discard on the returned future and that will result in containerizer->launch()'s future transitioned to DISCARDED state. # In containerizer, the DISCARDED state will be propagated back to IOSB prepare(), which stop its continuation on *extracting the containerIO* (it implies the object being cleaned up and FDs(one end of pipes created in IOSB) being closed in its destructor). 
# Agent starts to destroy the container due to its discarded launch result, and asks IOSB to cleanup the container. # IOSB server is still running, so agent sends a SIGTERM. # SIGTERM handler unblocks the IOSB from redirecting (to redirect stdout/stderr from container to logger before exiting). # io::redirect() calls io::splice() and reads the other end of those pipes forever. This issue is *not easy to reproduce unless* on a busy agent, because the timeout has to happen exactly *AFTER* IOSB server is running and *BEFORE* IOSB extracts containerIO.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9504","12/31/2018 18:53:19",3,"Use ResourceQuantities in the allocator and sorter to improve performance. ""In allocator and sorter, we need to do a lot of quantity calculations. Currently, we use the full {{Resources}} type with utilities like {{createScalarResourceQuantities()}}, even though we only care about quantities. Replace {{Resources}} with {{ResourceQuantities}}. See: https://github.com/apache/mesos/blob/386b1fe99bb9d10af2abaca4832bf584b6181799/src/master/allocator/sorter/drf/sorter.hpp#L444-L445 https://reviews.apache.org/r/70061/ With the addition of ResourceQuantities, callers can now just do {{ResourceQuantities.fromScalarResources(r.scalars())}} instead of using {{Resources::createStrippedScalarQuantity()}}, which should actually be a bit more efficient since we only copy the shared pointers rather than construct new `Resource` objects.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9507","01/02/2019 22:17:22",5,"Agent could not recover due to empty docker volume checkpointed files. ""Agent could not recover due to empty docker volume checkpointed files. Please see logs: This might happen after hard reboot. Docker volume isolator uses `state::checkpoint()` function which creates a temporary file, then writes the data, then renames the temporary file to destination file. This function is atomic and supports `fsync` for the data. However, Docker volume isolator does not use `fsync` option for performance reasons, hence the data might be lost if page cache is not synced before reboot. Basically the docker volume is not mounted yet, so the docker volume isolator should skip recovering this volume."""," Nov 12 17:12:00 guppy mesos-agent[38960]: E1112 17:12:00.978682 38969 slave.cpp:6279] EXIT with status 1: Failed to perform recovery: Collect failed: Collect failed: Failed to recover docker volumes for orphan container e1b04051-1e4a-47a9-b866-1d625cda1d22: JSON parse failed: syntax error at line 1 near: Nov 12 17:12:00 guppy mesos-agent[38960]: To remedy this do as follows: Nov 12 17:12:00 guppy mesos-agent[38960]: Step 1: rm -f /var/lib/mesos/slave/meta/slaves/latest Nov 12 17:12:00 guppy mesos-agent[38960]: This ensures agent doesn't recover old live executors. Nov 12 17:12:00 guppy mesos-agent[38960]: Step 2: Restart the agent. Nov 12 17:12:00 guppy systemd[1]: dcos-mesos-slave.service: main process exited, code=exited, status=1/FAILURE Nov 12 17:12:00 guppy systemd[1]: Unit dcos-mesos-slave.service entered failed state. Nov 12 17:12:00 guppy systemd[1]: dcos-mesos-slave.service failed. ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9508","01/03/2019 07:50:52",2,"Official 1.7.0 tarball can't be built on Ubuntu 16.04 LTS. 
""I installed Ubuntu 16.04.5 LTS in a VM and precisely followed the steps in [http://mesos.apache.org/documentation/latest/building/] to build Mesos (fetching the 1.7.0 release). Nevertheless what I get is the following error message: {code:sh} make[4]: Entering directory '/home/max/mesos-1.7.0/build/3rdparty/grpc-1.10.0' DEPENDENCY ERROR The target you are trying to run requires an OpenSSL implementation. Your system doesn't have one, and either the third_party directory doesn't have it, or your compiler can't build BoringSSL. """," make[4]: Entering directory '/home/max/mesos-1.7.0/build/3rdparty/grpc-1.10.0' DEPENDENCY ERROR The target you are trying to run requires an OpenSSL implementation. Your system doesn't have one, and either the third_party directory doesn't have it, or your compiler can't build BoringSSL. max@ubuntu:~/mesos-1.7.0/build$ ../configure checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for g++... g++ checking whether the C++ compiler works... yes checking for C++ compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C++ compiler... yes checking whether g++ accepts -g... yes checking for gcc... gcc checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking whether gcc understands -c and -o together... yes checking whether ln -s works... yes checking for C++ compiler vendor... gnu checking for a sed that does not truncate output... /bin/sed checking for C++ compiler version... 5.4.0 checking for C++ compiler vendor... (cached) gnu checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking for style of include used by make... GNU checking whether make supports nested variables... yes checking dependency style of gcc... gcc3 checking dependency style of g++... gcc3 checking whether to enable maintainer-specific portions of Makefiles... yes checking for ar... ar checking the archiver (ar) interface... ar checking how to print strings... printf checking for a sed that does not truncate output... (cached) /bin/sed checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for fgrep... /bin/grep -F checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking the maximum length of command line arguments... 1572864 checking how to convert x86_64-unknown-linux-gnu file names to x86_64-unknown-linux-gnu format... func_convert_file_noop checking how to convert x86_64-unknown-linux-gnu file names to toolchain format... func_convert_file_noop checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... pass_all checking for dlltool... no checking how to associate runtime and link libraries... printf %s\n checking for archiver @FILE support... @ checking for strip... strip checking for ranlib... 
ranlib checking command to parse /usr/bin/nm -B output from gcc object... ok checking for sysroot... no checking for a working dd... /bin/dd checking how to truncate binary pipes... /bin/dd bs=4096 count=1 checking for mt... mt checking if mt is a manifest tool... no checking how to run the C preprocessor... gcc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for dlfcn.h... yes checking for objdir... .libs checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC -DPIC checking if gcc PIC flag -fPIC -DPIC works... yes checking if gcc static flag -static works... yes checking if gcc supports -c -o file.o... yes checking if gcc supports -c -o file.o... (cached) yes checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no checking how to run the C++ preprocessor... g++ -E checking for ld used by g++... /usr/bin/ld -m elf_x86_64 checking if the linker (/usr/bin/ld -m elf_x86_64) is GNU ld... yes checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking for g++ option to produce PIC... -fPIC -DPIC checking if g++ PIC flag -fPIC -DPIC works... yes checking if g++ static flag -static works... yes checking if g++ supports -c -o file.o... yes checking if g++ supports -c -o file.o... (cached) yes checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking dynamic linker characteristics... (cached) GNU/Linux ld.so checking how to hardcode library paths into programs... immediate configure: creating ./config.lt config.lt: creating libtool checking whether to enable GC of unused sections... no configure: Setting up CXXFLAGS for g++ version >= 4.8 checking whether C++ compiler accepts -fstack-protector-strong... yes checking whether g++ supports C++11 features by default... no checking whether g++ supports C++11 features with -std=c++11... yes checking if compiler needs -Werror to reject unknown flags... no checking for the pthreads library -lpthreads... no checking whether pthreads work without any flags... no checking whether pthreads work with -Kthread... no checking whether pthreads work with -kthread... no checking for the pthreads library -llthread... no checking whether pthreads work with -pthread... yes checking for joinable pthread attribute... PTHREAD_CREATE_JOINABLE checking if more special flags are required for pthreads... no checking for PTHREAD_PRIO_INHERIT... yes configure: Setting up build environment for x86_64 linux-gnu checking for backtrace in -lunwind... no checking for main in -lgflags... no checking for patch... patch checking fts.h usability... yes checking fts.h presence... yes checking for fts.h... yes checking for library containing fts_close... none required checking apr_pools.h usability... yes checking apr_pools.h presence... 
yes checking for apr_pools.h... yes checking for apr_initialize in -lapr-1... yes checking for curl_global_init in -lcurl... yes checking for javac... /usr/bin/javac checking for java... /usr/bin/java checking value of Java system property 'java.home'... /usr/lib/jvm/java-8-openjdk-amd64/jre configure: using JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 checking whether or not we can build with JNI... yes checking for mvn... /usr/bin/mvn checking for javah... /usr/lib/jvm/java-8-openjdk-amd64/bin/javah checking for sasl_done in -lsasl2... yes checking SASL CRAM-MD5 support... yes checking for RAND_poll in -lcrypto... no checking openssl/ssl.h usability... no checking openssl/ssl.h presence... no checking for openssl/ssl.h... no checking svn_version.h usability... yes checking svn_version.h presence... yes checking for svn_version.h... yes checking for svn_stringbuf_create_ensure in -lsvn_subr-1... yes checking svn_delta.h usability... yes checking svn_delta.h presence... yes checking for svn_delta.h... yes checking for svn_txdelta in -lsvn_delta-1... yes checking whether to enable the XFS disk isolator... no checking zlib.h usability... yes checking zlib.h presence... yes checking for zlib.h... yes checking for deflate, gzread, gzwrite, inflate in -lz... yes checking C++ standard library for undefined behaviour with selected optimization level... no checking for python... /usr/bin/python checking for python version... 2.7 checking for python platform... linux2 checking for python script directory... ${prefix}/lib/python2.7/dist-packages checking for python extension module directory... ${exec_prefix}/lib/python2.7/dist-packages checking for python2.7... (cached) /usr/bin/python checking for a version of Python >= '2.1.0'... yes checking for a version of Python >= '2.6'... yes checking for the distutils Python package... yes checking for Python include path... -I/usr/include/python2.7 checking for Python library path... -L/usr/lib -lpython2.7 checking for Python site-packages path... /usr/lib/python2.7/dist-packages checking python extra libraries... -lpthread -ldl -lutil -lm checking python extra linking flags... -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions checking consistency of all components of python development environment... yes checking whether we can build usable Python eggs... cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++ yes checking for an old installation of the Mesos egg (before 0.20.0)... no checking whether to enable new CLI... no checking that generated files are newer than configure... 
done configure: creating ./config.status config.status: creating Makefile config.status: creating mesos.pc config.status: creating src/Makefile config.status: creating 3rdparty/Makefile config.status: creating 3rdparty/libprocess/Makefile config.status: creating 3rdparty/libprocess/include/Makefile config.status: creating 3rdparty/stout/Makefile config.status: creating 3rdparty/stout/include/Makefile config.status: creating 3rdparty/gmock_sources.cc config.status: creating bin/mesos.sh config.status: creating bin/mesos-agent.sh config.status: creating bin/mesos-local.sh config.status: creating bin/mesos-master.sh config.status: creating bin/mesos-slave.sh config.status: creating bin/mesos-tests.sh config.status: creating bin/mesos-agent-flags.sh config.status: creating bin/mesos-local-flags.sh config.status: creating bin/mesos-master-flags.sh config.status: creating bin/mesos-slave-flags.sh config.status: creating bin/mesos-tests-flags.sh config.status: creating bin/gdb-mesos-agent.sh config.status: creating bin/gdb-mesos-local.sh config.status: creating bin/gdb-mesos-master.sh config.status: creating bin/gdb-mesos-slave.sh config.status: creating bin/gdb-mesos-tests.sh config.status: creating bin/lldb-mesos-agent.sh config.status: creating bin/lldb-mesos-local.sh config.status: creating bin/lldb-mesos-master.sh config.status: creating bin/lldb-mesos-slave.sh config.status: creating bin/lldb-mesos-tests.sh config.status: creating bin/valgrind-mesos-agent.sh config.status: creating bin/valgrind-mesos-local.sh config.status: creating bin/valgrind-mesos-master.sh config.status: creating bin/valgrind-mesos-slave.sh config.status: creating bin/valgrind-mesos-tests.sh config.status: creating src/deploy/mesos-daemon.sh config.status: creating src/deploy/mesos-start-agents.sh config.status: creating src/deploy/mesos-start-cluster.sh config.status: creating src/deploy/mesos-start-masters.sh config.status: creating src/deploy/mesos-start-slaves.sh config.status: creating src/deploy/mesos-stop-agents.sh config.status: creating src/deploy/mesos-stop-cluster.sh config.status: creating src/deploy/mesos-stop-masters.sh config.status: creating src/deploy/mesos-stop-slaves.sh config.status: creating include/mesos/version.hpp config.status: creating src/java/generated/org/apache/mesos/MesosNativeLibrary.java config.status: creating mpi/mpiexec-mesos config.status: creating src/examples/java/test-exception-framework config.status: creating src/examples/java/test-executor config.status: creating src/examples/java/test-framework config.status: creating src/examples/java/test-multiple-executors-framework config.status: creating src/examples/java/test-log config.status: creating src/examples/java/v1-test-framework config.status: creating src/java/mesos.pom config.status: creating src/examples/python/test-executor config.status: creating src/examples/python/test-framework config.status: creating src/python/setup.py config.status: creating src/python/cli/setup.py config.status: creating src/python/interface/setup.py config.status: creating src/python/native_common/ext_modules.py config.status: creating src/python/executor/setup.py config.status: creating src/python/native/setup.py config.status: creating src/python/scheduler/setup.py config.status: linking src/python/native_common/ext_modules.py to src/python/executor/ext_modules.py config.status: linking src/python/native_common/ext_modules.py to src/python/scheduler/ext_modules.py config.status: executing depfiles commands config.status: executing libtool 
commands configure: Build option summary: CXX: g++ CXXFLAGS: -g1 -O0 -Wno-unused-local-typedefs -std=c++11 CPPFLAGS: -I/usr/include/subversion-1 -I/usr/include/apr-1 -I/usr/include/apr-1.0 LDFLAGS: LIBS: -lz -lsvn_delta-1 -lsvn_subr-1 -lsasl2 -lcurl -lapr-1 -lrt JAVA_TEST_LDFLAGS: -L/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server -R/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server -Wl,-ljvm JAVA_JVM_LIBRARY: /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server/libjvm.so ",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9509","01/03/2019 18:02:35",5,"Benchmark command health checks in default executor ""TCP/HTTP health checks were extensively scale tested as part of https://mesosphere.com/blog/introducing-mesos-native-health-checks-apache-mesos-part-2/. We should do the same for command checks by default executor because it uses a very different mechanism (agent fork/execs the check command as a nested container) and will have very different scalability characteristics. We should also use these benchmarks as an opportunity to produce perf traces of the Mesos agent (both with and without process inheritance) so that a thorough analysis of the performance can be done as part of MESOS-9513.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-9513","01/08/2019 19:29:59",5,"Investigate command health check performance ""Users have reported performance issues caused by too many command health checks performed too quickly by the default executor. Use of the agent's LAUNCH_NESTED_CONTAINER_SESSION call can impact the containerizer's ability to successfully launch containers in general. We need to investigate this issue and decide on a path forward to improve the performance of command health checks.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9514","01/08/2019 23:12:36",2,"Reviewboard bot fails on verify-reviews.py. ""Seeing this on our Azure based Mesos CI for review requests. This is happening pretty much exactly since we landed https://github.com/apache/mesos/commit/3badf7179992e61f30f5a79da9d481dd451c7c2f#diff-0bcbb572aad3fe39e0e5c3c8a8c3e515"""," Started by timer [EnvInject] - Loading node environment variables. 
Building remotely on dummy-slave-01 (dummy-slave) in workspace /home/jenkins/workspace/mesos-reviewbot > git rev-parse --is-inside-work-tree # timeout=10 Fetching changes from the remote Git repository > git config remote.origin.url https://github.com/apache/mesos # timeout=10 Pruning obsolete local branches Cleaning workspace > git rev-parse --verify HEAD # timeout=10 Resetting working tree > git reset --hard # timeout=10 > git clean -fdx # timeout=10 Fetching upstream changes from https://github.com/apache/mesos > git --version # timeout=10 > git fetch --tags --progress https://github.com/apache/mesos +refs/heads/*:refs/remotes/origin/* --prune > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10 Checking out Revision 3478e344fb77d931f6122980c6e94cd3913c441d (refs/remotes/origin/master) > git config core.sparsecheckout # timeout=10 > git checkout -f 3478e344fb77d931f6122980c6e94cd3913c441d Commit message: """"Sent SIGKILL to I/O switchboard server as a safeguard."""" > git rev-list --no-walk 3478e344fb77d931f6122980c6e94cd3913c441d # timeout=10 [mesos-reviewbot] $ /usr/bin/env bash /tmp/jenkins5023908134863801311.sh git rev-parse HEAD Traceback (most recent call last): File """"/home/jenkins/workspace/mesos-reviewbot/mesos/support/verify-reviews.py"""", line 101, in HEAD = shell(""""git rev-parse HEAD"""") File """"/home/jenkins/workspace/mesos-reviewbot/mesos/support/verify-reviews.py"""", line 97, in shell out = subprocess.check_output(command, stderr=subprocess.STDOUT, shell=True) File """"/usr/lib/python3.5/subprocess.py"""", line 626, in check_output **kwargs).stdout File """"/usr/lib/python3.5/subprocess.py"""", line 708, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command 'git rev-parse HEAD' returned non-zero exit status 128 Build step 'Execute shell' marked build as failure Finished: FAILURE ",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9517","01/09/2019 16:49:01",8,"SLRP should treat gRPC timeouts as non-terminal errors, instead of reporting OPERATION_FAILED. ""1. framework executes a CREATE_DISK operation. 2. The SLRP issues a CreateVolume RPC to the plugin 3. The RPC call times out 4. The agent/SLRP translates non-terminal gRPC timeout errors (DeadlineExceeded) for """"CreateVolume"""" calls into OPERATION_FAILED, which is terminal. 5. framework receives a *terminal* OPERATION_FAILED status, so it executes another CREATE_DISK operation. 6. The second CREATE_DISK operation does not timeout. 7. The first CREATE_DISK operation was actually completed by the plugin, unbeknownst to the SLRP. 8. There's now an orphan volume in the storage system that no one is tracking. Proposed solution: the SLRP makes more intelligent decisions about non-terminal gRPC errors. For example, timeouts are likely expected for potentially long-running storage operations and should not be considered terminal. In such cases, the SLRP should NOT report OPERATION_FAILED and instead should re-issue the **same** (idempotent) CreateVolume call to the plugin to ascertain the status of the requested volume creation. 
Agent logs for the 3 orphan vols above: """," [jdefelice@ec101 DCOS-46889]$ grep -e 3bd1a1a9-43d3-485c-9275-59cebd64b07c agent.log Jan 09 11:10:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:10:27.896306 13189 provider.cpp:1548] Received CREATE_DISK operation 'a1BdfrEhy4ZLSNPZbDrzp1h-0' (uuid: 3bd1a1a9-43d3-485c-9275-59cebd64b07c) Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: E0109 11:11:27.904057 13190 provider.cpp:1605] Failed to apply operation (uuid: 3bd1a1a9-43d3-485c-9275-59cebd64b07c): Deadline Exceeded Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:27.904058 13192 status_update_manager_process.hpp:152] Received operation status update OPERATION_FAILED (Status UUID: 8c1ddad1-4adb-4df5-91fe-235d265a71d8) for operation UUID 3bd1a1a9-43d3-485c-9275-59cebd64b07c (framework-supplied ID 'a1BdfrEhy4ZLSNPZbDrzp1h-0') of framework 'c0b7cc7e-db35-450d-bf25-9e3183a07161-0002' on agent c0b7cc7e-db35-450d-bf25-9e3183a07161-S1 Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:27.904331 13192 status_update_manager_process.hpp:929] Checkpointing UPDATE for operation status update OPERATION_FAILED (Status UUID: 8c1ddad1-4adb-4df5-91fe-235d265a71d8) for operation UUID 3bd1a1a9-43d3-485c-9275-59cebd64b07c (framework-supplied ID 'a1BdfrEhy4ZLSNPZbDrzp1h-0') of framework 'c0b7cc7e-db35-450d-bf25-9e3183a07161-0002' on agent c0b7cc7e-db35-450d-bf25-9e3183a07161-S1 Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:27.947286 13189 slave.cpp:7696] Handling resource provider message 'UPDATE_OPERATION_STATUS: (uuid: 3bd1a1a9-43d3-485c-9275-59cebd64b07c) for framework c0b7cc7e-db35-450d-bf25-9e3183a07161-0002 (latest state: OPERATION_FAILED, status update state: OPERATION_FAILED)' Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:27.947376 13189 slave.cpp:8034] Updating the state of operation 'a1BdfrEhy4ZLSNPZbDrzp1h-0' (uuid: 3bd1a1a9-43d3-485c-9275-59cebd64b07c) for framework c0b7cc7e-db35-450d-bf25-9e3183a07161-0002 (latest state: OPERATION_FAILED, status update state: OPERATION_FAILED) Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:27.947407 13189 slave.cpp:7890] Forwarding status update of operation 'a1BdfrEhy4ZLSNPZbDrzp1h-0' (operation_uuid: 3bd1a1a9-43d3-485c-9275-59cebd64b07c) for framework c0b7cc7e-db35-450d-bf25-9e3183a07161-0002 Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:27.952689 13193 status_update_manager_process.hpp:252] Received operation status update acknowledgement (UUID: 8c1ddad1-4adb-4df5-91fe-235d265a71d8) for stream 3bd1a1a9-43d3-485c-9275-59cebd64b07c Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:27.952725 13193 status_update_manager_process.hpp:929] Checkpointing ACK for operation status update OPERATION_FAILED (Status UUID: 8c1ddad1-4adb-4df5-91fe-235d265a71d8) for operation UUID 3bd1a1a9-43d3-485c-9275-59cebd64b07c (framework-supplied ID 'a1BdfrEhy4ZLSNPZbDrzp1h-0') of framework 'c0b7cc7e-db35-450d-bf25-9e3183a07161-0002' on agent c0b7cc7e-db35-450d-bf25-9e3183a07161-S1 [jdefelice@ec101 DCOS-46889]$ grep -e 4acf1495-1a36-4939-a71b-75ca5aa73657 agent.log Jan 09 11:10:28 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:10:28.452811 13192 provider.cpp:1548] Received CREATE_DISK operation 'a5MU6JqxYpT9IWXM75cwuHO-0' (uuid: 
4acf1495-1a36-4939-a71b-75ca5aa73657) Jan 09 11:11:28 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: E0109 11:11:28.460510 13190 provider.cpp:1605] Failed to apply operation (uuid: 4acf1495-1a36-4939-a71b-75ca5aa73657): Deadline Exceeded Jan 09 11:11:28 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:28.460511 13186 status_update_manager_process.hpp:152] Received operation status update OPERATION_FAILED (Status UUID: e810608b-58ac-47eb-bf19-9abcca6907a2) for operation UUID 4acf1495-1a36-4939-a71b-75ca5aa73657 (framework-supplied ID 'a5MU6JqxYpT9IWXM75cwuHO-0') of framework 'c0b7cc7e-db35-450d-bf25-9e3183a07161-0002' on agent c0b7cc7e-db35-450d-bf25-9e3183a07161-S1 Jan 09 11:11:28 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:28.460793 13186 status_update_manager_process.hpp:929] Checkpointing UPDATE for operation status update OPERATION_FAILED (Status UUID: e810608b-58ac-47eb-bf19-9abcca6907a2) for operation UUID 4acf1495-1a36-4939-a71b-75ca5aa73657 (framework-supplied ID 'a5MU6JqxYpT9IWXM75cwuHO-0') of framework 'c0b7cc7e-db35-450d-bf25-9e3183a07161-0002' on agent c0b7cc7e-db35-450d-bf25-9e3183a07161-S1 Jan 09 11:11:28 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:28.504062 13191 slave.cpp:7696] Handling resource provider message 'UPDATE_OPERATION_STATUS: (uuid: 4acf1495-1a36-4939-a71b-75ca5aa73657) for framework c0b7cc7e-db35-450d-bf25-9e3183a07161-0002 (latest state: OPERATION_FAILED, status update state: OPERATION_FAILED)' Jan 09 11:11:28 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:28.504133 13191 slave.cpp:8034] Updating the state of operation 'a5MU6JqxYpT9IWXM75cwuHO-0' (uuid: 4acf1495-1a36-4939-a71b-75ca5aa73657) for framework c0b7cc7e-db35-450d-bf25-9e3183a07161-0002 (latest state: OPERATION_FAILED, status update state: OPERATION_FAILED) Jan 09 11:11:28 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:28.504159 13191 slave.cpp:7890] Forwarding status update of operation 'a5MU6JqxYpT9IWXM75cwuHO-0' (operation_uuid: 4acf1495-1a36-4939-a71b-75ca5aa73657) for framework c0b7cc7e-db35-450d-bf25-9e3183a07161-0002 Jan 09 11:11:28 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:28.509495 13194 status_update_manager_process.hpp:252] Received operation status update acknowledgement (UUID: e810608b-58ac-47eb-bf19-9abcca6907a2) for stream 4acf1495-1a36-4939-a71b-75ca5aa73657 Jan 09 11:11:28 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:28.509521 13194 status_update_manager_process.hpp:929] Checkpointing ACK for operation status update OPERATION_FAILED (Status UUID: e810608b-58ac-47eb-bf19-9abcca6907a2) for operation UUID 4acf1495-1a36-4939-a71b-75ca5aa73657 (framework-supplied ID 'a5MU6JqxYpT9IWXM75cwuHO-0') of framework 'c0b7cc7e-db35-450d-bf25-9e3183a07161-0002' on agent c0b7cc7e-db35-450d-bf25-9e3183a07161-S1 [jdefelice@ec101 DCOS-46889]$ grep -e ca2bed2f-480e-4d35-af9e-1161a44c5b9b agent.log Jan 09 11:10:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:10:27.458933 13186 provider.cpp:1548] Received CREATE_DISK operation 'a3AvAF97UsHU6zIIPhyGdrY-0' (uuid: ca2bed2f-480e-4d35-af9e-1161a44c5b9b) Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: E0109 11:11:27.469853 13189 provider.cpp:1605] Failed to apply operation (uuid: ca2bed2f-480e-4d35-af9e-1161a44c5b9b): Deadline Exceeded Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal 
mesos-agent[13170]: I0109 11:11:27.469859 13186 status_update_manager_process.hpp:152] Received operation status update OPERATION_FAILED (Status UUID: bb7807e8-dc2f-4f64-b611-d24a1e559317) for operation UUID ca2bed2f-480e-4d35-af9e-1161a44c5b9b (framework-supplied ID 'a3AvAF97UsHU6zIIPhyGdrY-0') of framework 'c0b7cc7e-db35-450d-bf25-9e3183a07161-0002' on agent c0b7cc7e-db35-450d-bf25-9e3183a07161-S1 Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:27.470120 13186 status_update_manager_process.hpp:929] Checkpointing UPDATE for operation status update OPERATION_FAILED (Status UUID: bb7807e8-dc2f-4f64-b611-d24a1e559317) for operation UUID ca2bed2f-480e-4d35-af9e-1161a44c5b9b (framework-supplied ID 'a3AvAF97UsHU6zIIPhyGdrY-0') of framework 'c0b7cc7e-db35-450d-bf25-9e3183a07161-0002' on agent c0b7cc7e-db35-450d-bf25-9e3183a07161-S1 Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:27.513059 13192 slave.cpp:7696] Handling resource provider message 'UPDATE_OPERATION_STATUS: (uuid: ca2bed2f-480e-4d35-af9e-1161a44c5b9b) for framework c0b7cc7e-db35-450d-bf25-9e3183a07161-0002 (latest state: OPERATION_FAILED, status update state: OPERATION_FAILED)' Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:27.513129 13192 slave.cpp:8034] Updating the state of operation 'a3AvAF97UsHU6zIIPhyGdrY-0' (uuid: ca2bed2f-480e-4d35-af9e-1161a44c5b9b) for framework c0b7cc7e-db35-450d-bf25-9e3183a07161-0002 (latest state: OPERATION_FAILED, status update state: OPERATION_FAILED) Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:27.513147 13192 slave.cpp:7890] Forwarding status update of operation 'a3AvAF97UsHU6zIIPhyGdrY-0' (operation_uuid: ca2bed2f-480e-4d35-af9e-1161a44c5b9b) for framework c0b7cc7e-db35-450d-bf25-9e3183a07161-0002 Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:27.518623 13191 status_update_manager_process.hpp:252] Received operation status update acknowledgement (UUID: bb7807e8-dc2f-4f64-b611-d24a1e559317) for stream ca2bed2f-480e-4d35-af9e-1161a44c5b9b Jan 09 11:11:27 ip-10-10-0-28.us-west-2.compute.internal mesos-agent[13170]: I0109 11:11:27.518656 13191 status_update_manager_process.hpp:929] Checkpointing ACK for operation status update OPERATION_FAILED (Status UUID: bb7807e8-dc2f-4f64-b611-d24a1e559317) for operation UUID ca2bed2f-480e-4d35-af9e-1161a44c5b9b (framework-supplied ID 'a3AvAF97UsHU6zIIPhyGdrY-0') of framework 'c0b7cc7e-db35-450d-bf25-9e3183a07161-0002' on agent c0b7cc7e-db35-450d-bf25-9e3183a07161-S1 ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0 +"MESOS-9519","01/10/2019 23:52:17",2,"Unable to build Mesos with CMake on Ubuntu 14.04. 
""Running the following command to build Mesos on Ubuntu 14.04 will lead to the error shown below: The reason is that gRPC's CMake rules does not disable ALPN on systems with OpenSSL 1.0.1."""," OS=ubuntu:14.04 BUILDTOOL=cmake COMPILER=gcc CONFIGURATION='--verbose --enable-libevent --enable-ssl' ENVIRONMENT='GLOG_v=1 MESOS_VERBOSE=1' JOBS=48 nice support/docker-build.sh /mesos/build/3rdparty/grpc-1.10.0/src/grpc-1.10.0/src/core/tsi/ssl_transport_security.cc: In function 'tsi_result ssl_handshaker_extract_peer(tsi_handshaker*, tsi_peer*)': /mesos/build/3rdparty/grpc-1.10.0/src/grpc-1.10.0/src/core/tsi/ssl_transport_security.cc:1011:71: error: 'SSL_get0_alpn_selected' was not declared in this scope    SSL_get0_alpn_selected(impl->ssl, &alpn_selected, &alpn_selected_len);                                                                        ^ /mesos/build/3rdparty/grpc-1.10.0/src/grpc-1.10.0/src/core/tsi/ssl_transport_security.cc: In function 'tsi_result tsi_create_ssl_client_handshaker_factory(const tsi_ssl_pem_key_cert_pair*, const char*, const char*, const char**, uint16_t, tsi_ssl_client_handshaker_factory**)': /mesos/build/3rdparty/grpc-1.10.0/src/grpc-1.10.0/src/core/tsi/ssl_transport_security.cc:1417:73: error: 'SSL_CTX_set_alpn_protos' was not declared in this scope                static_cast(impl->alpn_protocol_list_length))) {                                                                          ^ /mesos/build/3rdparty/grpc-1.10.0/src/grpc-1.10.0/src/core/tsi/ssl_transport_security.cc: In function 'tsi_result tsi_create_ssl_server_handshaker_factory_ex(const tsi_ssl_pem_key_cert_pair*, size_t, const char*, tsi_client_certificate_request_type, const char*, const char**, uint16_t, tsi_ssl_server_handshaker_factory**)': /mesos/build/3rdparty/grpc-1.10.0/src/grpc-1.10.0/src/core/tsi/ssl_transport_security.cc:1557:79: error: 'SSL_CTX_set_alpn_select_cb' was not declared in this scope server_handshaker_factory_alpn_callback, impl);                                                                                ^ make[7]: *** [CMakeFiles/grpc.dir/src/core/tsi/ssl_transport_security.cc.o] Error 1 make[7]: Leaving directory `/mesos/build/3rdparty/grpc-1.10.0/src/grpc-1.10.0-build' make[6]: *** [CMakeFiles/grpc.dir/all] Error 2 make[6]: Leaving directory `/mesos/build/3rdparty/grpc-1.10.0/src/grpc-1.10.0-build' make[5]: *** [CMakeFiles/grpc.dir/rule] Error 2 make[5]: Leaving directory `/mesos/build/3rdparty/grpc-1.10.0/src/grpc-1.10.0-build' make[4]: *** [grpc] Error 2 make[4]: Leaving directory `/mesos/build/3rdparty/grpc-1.10.0/src/grpc-1.10.0-build' make[3]: *** [3rdparty/grpc-1.10.0/src/grpc-1.10.0-stamp/grpc-1.10.0-build] Error 2 make[3]: Leaving directory `/mesos/build' make[2]: *** [3rdparty/CMakeFiles/grpc-1.10.0.dir/all] Error 2 make[2]: *** Waiting for unfinished jobs....",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9523","01/15/2019 17:21:12",5,"Add per-framework allocatable resources matcher/filter. ""Currently, Mesos has a single global flag `min_allocatable_resources` that provides some control over the shape of the offer. But, being a global flag, finding a one-size-fits-all shape is hard and less than ideal. It will be great if frameworks can specify different shapes based on their needs. In addition to extending this flag to be per-framework. It is also a good opportunity to see if it can be more than `min_alloctable` e.g. providing more predicates such as max, (not) contain and etc. 
""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9525","01/16/2019 18:17:58",3,"Agent capability for operation feedback on default resources ""We should add an agent capability to prevent the master from sending operations on agent default resources which request feedback to older agents which are not able to handle them.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9532","01/22/2019 22:21:50",2,"ResourceOffersTest.ResourceOfferWithMultipleSlaves is flaky. "" caused by this commit https://github.com/apache/mesos/commit/07bccc6377a180267d4251897a765acba9fa0c4d"""," 09:48:57 I0114 09:48:57.153340 6468 credentials.hpp:86] Loading credential for authentication from '/tmp/4X6jRy/credential' 09:48:57 E0114 09:48:57.153373 6468 slave.cpp:296] EXIT with status 1: Empty credential file '/tmp/4X6jRy/credential' (see --credential flag) ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9533","01/23/2019 01:00:40",2,"CniIsolatorTest.ROOT_CleanupAfterReboot is flaky. "" It was from this commit https://github.com/apache/mesos/commit/c338f5ada0123c0558658c6452ac3402d9fbec29"""," Error Message ../../src/tests/containerizer/cni_isolator_tests.cpp:2685 Mock function called more times than expected - returning directly. Function call: statusUpdate(0x7fffc7c05aa0, @0x7fe637918430 136-byte object <80-24 29-45 E6-7F 00-00 00-00 00-00 00-00 00-00 3E-E8 00-00 00-00 00-00 00-B8 0E-20 F0-55 00-00 C0-03 07-18 E6-7F 00-00 20-17 05-18 E6-7F 00-00 10-50 05-18 E6-7F 00-00 50-D1 04-18 E6-7F 00-00 ... 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 F0-89 16-E9 58-2B D7-41 00-00 00-00 01-00 00-00 18-00 00-00 0B-00 00-00>) Expected: to be called 3 times Actual: called 4 times - over-saturated and active Stacktrace ../../src/tests/containerizer/cni_isolator_tests.cpp:2685 Mock function called more times than expected - returning directly. Function call: statusUpdate(0x7fffc7c05aa0, @0x7fe637918430 136-byte object <80-24 29-45 E6-7F 00-00 00-00 00-00 00-00 00-00 3E-E8 00-00 00-00 00-00 00-B8 0E-20 F0-55 00-00 C0-03 07-18 E6-7F 00-00 20-17 05-18 E6-7F 00-00 10-50 05-18 E6-7F 00-00 50-D1 04-18 E6-7F 00-00 ... 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 F0-89 16-E9 58-2B D7-41 00-00 00-00 01-00 00-00 18-00 00-00 0B-00 00-00>) Expected: to be called 3 times Actual: called 4 times - over-saturated and active ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0 +"MESOS-9537","01/25/2019 20:21:43",3,"SLRP sends inconsistent status updates for dropped operations. ""The bug manifests in the following scenario: 1. Upon receiving profile updates, the SLRP sends an {{UPDATE_STATE}} to the agent with a new resource version. 2. At the same time, the agent sends an {{APPLY_OPERATION}} to the SLRP with the original resource version. 3. The SLRP asks the status update manager (SUM) to reply with an {{OPERATION_DROPPED}} to the framework because of the resource version mismatch. The status update is required to be acked. Then, it simply discards the operation (i.e., no bookkeeping). 4. The agent finds a missing operation in the {{UPDATE_STATE}} so it sends a {{RECONCILE_OPERATIONS}}. 5. 
The SLRP asks the SUM to reply with an {{OPERATION_DROPPED}} to the agent (without a framework ID set) because it no longer knows about the operation. 6. The SUM returns an error because the latter {{OPERATION_DROPPED}} is inconsistent with the earlier one since it does not have a framework ID.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9538","01/25/2019 22:38:25",3,"Agent `ReconcileOperations` handler should handle operation affecting default resources ""{{Slave::reconcileOperations()}} has to be updated to send {{OPERATION_DROPPED}} for unknown operations that don't have a resource provider ID.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9540","01/28/2019 20:09:54",2,"Support `DESTROY_DISK` on preprovisioned CSI volumes. ""Currently the experimental {{DESTROY_DISK}} operation only applies to {{BLOCK}} or {{MOUNT}} disk resources. We should consider supporting {{DESTROY_DISK}} on {{RAW}} disk resources with source IDs as well. This could be handy in e.g., the following scenario: 1. The framework issued a {{CREATE_DISK}}. 2. The SLRP received {{CREATE_DISK}} and translated it to a {{CreateVolume}} CSI call. 3. While the {{CreateVolume}} was ongoing, the agent was restarted with a new agent ID, causing the SLRP to lose its bookkeeping and start with a new RP ID as well, and hence the {{CREATE_DISK}} operation was lost. 4. The {{CreateVolume}} call succeeded and the new SLRP picked it up as a preprovisioned CSI volume. In the above case, the framework should be able to choose to either """"import"""" the CSI volume through {{CREATE_DISK}}, or directly reclaim the space through {{DESTROY_DISK}}. Currently the framework needs to always import the CSI volume before reclaiming the space.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9542","01/29/2019 12:09:39",5,"Hierarchical allocator check failure when an operation on a shutdown framework finishes ""When a non-speculated operation like e.g., {{CREATE_DISK}} becomes terminal after the originating framework was torn down, we run into an assertion failure in the allocator. With non-speculated operations like e.g., {{CREATE_DISK}} it became possible that operations outlive their originating framework. This was not possible with speculated operations like {{RESERVE}} which were always applied immediately by the master. The master does not take this into account, but instead unconditionally calls {{Allocator::updateAllocation}} which asserts that the framework is still known to the allocator. Reproducer: * register a framework with the master. * add an agent with a resource provider. * let the framework trigger a non-speculated operation like {{CREATE_DISK}}. * tear down the framework before a terminal operation status update reaches the master; this causes the master to e.g., remove the framework from the allocator. * let a terminal, successful operation status update reach the master * 💥  To solve this we should clean up the lifetimes of operations. 
Since operations can outlive their framework (unlike e.g., tasks), we probably need a different approach here."""," I0129 11:55:35.764394 57857 master.cpp:11373] Updating the state of operation 'operation' (uuid: 10a782bd-9e60-42da-90d6-c00997a25645) for framework a4d0499b-c0d3-4abf-8458-73e595d061ce-0000 (latest state: OPERATION_PENDING, status update state: OPERATION_FINISHED) F0129 11:55:35.764744 57925 hierarchical.cpp:834] Check failed: frameworks.contains(frameworkId)",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9544","01/31/2019 02:29:37",5,"SLRP does not clean up destroyed persistent volumes. ""When a persistent volume created on a {{ROOT}} disk is destroyed, the agent will clean up its data: https://github.com/apache/mesos/blob/f44535bca811720fc272c9abad2bc78652d61fe3/src/slave/slave.cpp#L4397 However, this is not the case for PVs on SLRP disks. The agent relies on the SLRP to do the cleanup: https://github.com/apache/mesos/blob/f44535bca811720fc272c9abad2bc78652d61fe3/src/slave/slave.cpp#L4472 But SLRP simply updates its metadata and do nothing: https://github.com/apache/mesos/blob/f44535bca811720fc272c9abad2bc78652d61fe3/src/resource_provider/storage/provider.cpp#L2805 This would lead to data leakage if the framework does not call `CREATE_DISK` but just unreserve it.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9554","02/05/2019 16:33:08",5,"Allocator might skip allocations because a single framework is incapable of receiving certain resources. ""Currently in the hierarchical allocator allocation loops we compute {{available}} resources by taking into account the capabilities of the current framework. Further down in the loop we might then {{break}} out of the iteration under the assumption that no other framework can receive the resources in question. This is only correct if all considered frameworks have identical capabilities.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9555","02/05/2019 22:26:56",3,"Allocator CHECK failure: reservationScalarQuantities.contains(role). ""We recently upgraded our Mesos cluster from version 1.3 to 1.5, and since then have been getting periodic master crashes due to this error: Full stack trace is at the end of this issue description. When the master fails, we automatically restart it and it rejoins the cluster just fine. I did some initial searching and was unable to find any existing bug reports or other people experiencing this issue. We run a cluster of 3 masters, and see crashes on all 3 instances. Right before the crash, we saw a {{Removed agent:...}} log line noting that it was agent 9b912afa-1ced-49db-9c85-7bc5a22ef072-S6 that was removed. I saved the full log from the master, so happy to provide more info from it, or anything else about our current environment. Full stack trace is below. 
"""," Feb 5 15:53:57 ip-10-0-16-140 mesos-master[8414]: F0205 15:53:57.385118 8434 hierarchical.cpp:2630] Check failed: reservationScalarQuantities.contains(role) 294929:Feb 5 15:53:57 ip-10-0-16-140 mesos-master[8414]: I0205 15:53:57.384759 8432 master.cpp:9893] Removed agent 9b912afa-1ced-49db-9c85-7bc5a22ef072-S6 at slave(1)@10.0.18.78:5051 (10.0.18.78): the agent unregistered Feb 5 15:53:57 ip-10-0-16-140 mesos-master[8414]: @ 0x7f87e9170a7d google::LogMessage::Fail() Feb 5 15:53:57 ip-10-0-16-140 mesos-master[8414]: @ 0x7f87e9172830 google::LogMessage::SendToLog() Feb 5 15:53:57 ip-10-0-16-140 mesos-master[8414]: @ 0x7f87e9170663 google::LogMessage::Flush() Feb 5 15:53:57 ip-10-0-16-140 mesos-master[8414]: @ 0x7f87e9173259 google::LogMessageFatal::~LogMessageFatal() Feb 5 15:53:57 ip-10-0-16-140 mesos-master[8414]: @ 0x7f87e8443cbd mesos::internal::master::allocator::internal::HierarchicalAllocatorProcess::untrackReservations() Feb 5 15:53:57 ip-10-0-16-140 mesos-master[8414]: @ 0x7f87e8448fcd mesos::internal::master::allocator::internal::HierarchicalAllocatorProcess::removeSlave() Feb 5 15:53:57 ip-10-0-16-140 mesos-master[8414]: @ 0x7f87e90c4f11 process::ProcessBase::consume() Feb 5 15:53:57 ip-10-0-16-140 mesos-master[8414]: @ 0x7f87e90dea4a process::ProcessManager::resume() Feb 5 15:53:57 ip-10-0-16-140 mesos-master[8414]: @ 0x7f87e90e25d6 _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv Feb 5 15:53:57 ip-10-0-16-140 mesos-master[8414]: @ 0x7f87e6700c80 (unknown) Feb 5 15:53:57 ip-10-0-16-140 mesos-master[8414]: @ 0x7f87e5f136ba start_thread Feb 5 15:53:57 ip-10-0-16-140 mesos-master[8414]: @ 0x7f87e5c4941d (unknown)",0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9557","02/06/2019 18:12:33",2,"Operations are leaked in Framework struct when agents are removed ""Currently, when agents are removed from the master, their operations are not removed from the {{Framework}} structs. We should ensure that this occurs in all cases.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9560","02/07/2019 07:25:52",5,"ContentType/AgentAPITest.MarkResourceProviderGone/1 is flaky ""We observed a segfault in {{ContentType/AgentAPITest.MarkResourceProviderGone/1}} on test teardown. 
"""," I0131 23:55:59.378453 6798 slave.cpp:923] Agent terminating I0131 23:55:59.378813 31143 master.cpp:1269] Agent a27bcaba-70cc-4ec3-9786-38f9512c61fd-S0 at slave(1112)@172.16.10.236:43229 (ip-172-16-10-236.ec2.internal) disconnected I0131 23:55:59.378831 31143 master.cpp:3272] Disconnecting agent a27bcaba-70cc-4ec3-9786-38f9512c61fd-S0 at slave(1112)@172.16.10.236:43229 (ip-172-16-10-236.ec2.internal) I0131 23:55:59.378846 31143 master.cpp:3291] Deactivating agent a27bcaba-70cc-4ec3-9786-38f9512c61fd-S0 at slave(1112)@172.16.10.236:43229 (ip-172-16-10-236.ec2.internal) I0131 23:55:59.378891 31143 hierarchical.cpp:793] Agent a27bcaba-70cc-4ec3-9786-38f9512c61fd-S0 deactivated F0131 23:55:59.378891 31149 logging.cpp:67] RAW: Pure virtual method called @ 0x7f633aaaebdd google::LogMessage::Fail() @ 0x7f633aab6281 google::RawLog__() @ 0x7f6339821262 __cxa_pure_virtual @ 0x55671cacc113 testing::internal::UntypedFunctionMockerBase::UntypedInvokeWith() @ 0x55671b532e78 mesos::internal::tests::resource_provider::MockResourceProvider<>::disconnected() @ 0x7f633978f6b0 process::AsyncExecutorProcess::execute<>() @ 0x7f633979f218 _ZN5cpp176invokeIZN7process8dispatchI7NothingNS1_20AsyncExecutorProcessERKSt8functionIFvvEES9_EENS1_6FutureIT_EERKNS1_3PIDIT0_EEMSE_FSB_T1_EOT2_EUlSt10unique_ptrINS1_7PromiseIS3_EESt14default_deleteISP_EEOS7_PNS1_11ProcessBaseEE_JSS_S7_SV_EEEDTclcl7forwardISB_Efp_Espcl7forwardIT0_Efp0_EEEOSB_DpOSX_ @ 0x7f633a9f5d01 process::ProcessBase::consume() @ 0x7f633aa1a08a process::ProcessManager::resume() @ 0x7f633aa1db06 _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv @ 0x7f633acc9f80 execute_native_thread_routine @ 0x7f6337142e25 start_thread @ 0x7f6336241bad __clone ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9564","02/12/2019 01:19:31",5,"Logrotate container logger lets tasks execute arbitrary commands in the Mesos agent's namespace ""The non-default {{LogrotateContainerLogger}} module allows tasks to configure sandbox log rotation (See http://mesos.apache.org/documentation/latest/logging/#Containers ). The {{logrotate_stdout_options}} and {{logrotate_stderr_options}} in particular let the task specify free-form text, which is written to a configuration file located in the task's sandbox. The module does not sanitize or check this configuration at all. The logger itself will eventually run {{logrotate}} against the written configuration file, but the logger is not isolated in the same way as the task. For both the Mesos and Docker containerizers, the logger binary will run in the same namespace as the Mesos agent. This makes it possible to affect files outside of the task's mount namespace. Two modes of attack are known to be problematic: * Changing or adding entries to the configuration file. Normally, the configuration file contains a single file to rotate: It is trivial to add text to the {{logrotate_stdout_options}} to add a new entry: * Logrotate's {{postrotate}} option allows for execution of arbitrary commands. This can again be supplied with the {{logrotate_stdout_options}} variable. Some potential fixes to consider: * Overwrite the .logrotate.conf files each time. This would give only milliseconds between writing and calling logrotate for a thirdparty to modify the config files maliciously. This would not help if the task itself had postrotate options in its environment variables. 
* Sanitize the free-form options field in the environment variables to remove postrotate or injection attempts like }\n/path/to/some/file\noptions{. * Refactor parts of the Mesos isolation code path so that the logger and IO switchboard binary live in the same namespaces as the container (instead of the agent). This would also be nice in that the logger's CPU usage would then be accounted for within the container's resources."""," /path/to/sandbox/stdout { } /path/to/sandbox/stdout { } /path/to/other/file/on/disk { } /path/to/sandbox/stdout { postrotate rm -rf / endscript } ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9565","02/12/2019 05:56:42",5,"Unit tests for creating and destroying persistent volumes in SLRP. ""The plan is to add/update the following unit tests to test persistent volume destroy: * CreateDestroyDisk * CreateDestroyDiskWithRecovery * CreateDestroyPersistentMountVolume * CreateDestroyPersistentMountVolumeWithRecovery * CreateDestroyPersistentMountVolumeWithReboot * CreateDestroyPersistentBlockVolume * DestroyPersistentMountVolumeFailed * DestroyUnpublishedPersistentVolume * DestroyUnpublishedPersistentVolumeWithRecovery * DestroyUnpublishedPersistentVolumeWithReboot""","",0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9568","02/13/2019 01:37:24",2,"SLRP does not clean up mount directories for destroyed MOUNT disks. ""When staging or publishing a CSI volume, SLRP will create the following mount points for these operations: These directories are cleaned up when the volume is unpublished/unstaged. However, their parent directory, namly {{/csi///mounts/}} is never cleaned up."""," /csi///mounts//staging /csi///mounts//target ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9573","02/13/2019 22:44:42",2,"Agent should not try to recover operation status update streams that haven't been created yet. ""If the agent fails over after having checkpointed a new operation but before the operation status update stream is created, the recovery process will fail. This happens because agent will try to recover the operation status update streams even if it hasn't been created yet. In order to prevent recovery failures, the agent should obtain the ids of the streams to recover by walking the directory in which operation status updates streams are stored. The agent should also garbage collect streams if the checkpointed state doesn't contain a corresponding operation.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9574","02/13/2019 22:48:46",2,"Operation status update streams are not properly garbage collected. ""After successfully handling the acknowledgment of a terminal operation status update for an operation affecting agent's default resources, the agent should garbage collect the corresponding operation status update stream.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9578","02/15/2019 14:20:51",1,"Document per framework minimal allocatable resources in framework development guides ""With MESOS-9523 we introduced fields into {{FrameworkInfo}} to give frameworks a way to express their resource requirements. 
We should document this feature in the framework development guide(s).""","",0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9581","02/16/2019 02:51:16",1,"Mesos package naming appears to be undeterministic. ""Transcribed from slack; https://mesos.slack.com/archives/C7N086PK2/p1550158266006900 It appears there are a number of RPM packages called “mesos-1.7.1-2.0.1.el7.x86_64.rpm” in the wild. I’ve caught specimens with build dates February 1st, 7th and 13th. While it’s somewhat troubling in itself, none of these packages is the one referred to in Yum repository metadata (repos.mesosphere.com), which is a package built today on the 14th, so I can’t install Mesos right now. Could it be that your pipeline is creating a new package with the same verson and release in every nightly build? Repository metadata """," sqlite3 *primary.sqlite """"select name, version, release, strftime('%d-%m-%Y %H:%M', datetime(time_build, 'unixepoch')) build_as_string, rpm_buildhost from packages where name = 'mesos' and version = '1.7.1';"""" mesos|1.7.1|2.0.1|14-02-2019 12:30|ip-172-16-10-254.ec2.internal Packages downloaded while investigating over the past few days Name : mesos Version : 1.7.1 Release : 2.0.1 Architecture: x86_64 Install Date: (not installed) Group : misc Size : 298787793 License : Apache-2.0 Signature : RSA/SHA256, Fri 01 Feb 2019 11:38:47 PM UTC, Key ID df7d54cbe56151bf Source RPM : mesos-1.7.1-2.0.1.src.rpm Build Date : Fri 01 Feb 2019 11:15:17 PM UTC Build Host : ip-172-16-10-11.ec2.internal Relocations : / Packager : dev@mesos.apache.org URL : https://mesos.apache.org/ Summary : Cluster resource manager with efficient resource isolation Description : [snip] Name : mesos Version : 1.7.1 Release : 2.0.1 Architecture: x86_64 Install Date: (not installed) Group : misc Size : 298791347 License : Apache-2.0 Signature : RSA/SHA256, Thu 07 Feb 2019 10:33:06 PM UTC, Key ID df7d54cbe56151bf Source RPM : mesos-1.7.1-2.0.1.src.rpm Build Date : Thu 07 Feb 2019 10:31:02 PM UTC Build Host : ip-172-16-10-4.ec2.internal Relocations : / Packager : dev@mesos.apache.org URL : https://mesos.apache.org/ Summary : Cluster resource manager with efficient resource isolation Description : [snip] Name : mesos Version : 1.7.1 Release : 2.0.1 Architecture: x86_64 Install Date: (not installed) Group : misc Size : 298789309 License : Apache-2.0 Signature : RSA/SHA256, Wed Feb 13 04:35:02 2019, Key ID df7d54cbe56151bf Source RPM : mesos-1.7.1-2.0.1.src.rpm Build Date : Wed Feb 13 04:32:41 2019 Build Host : ip-172-16-10-83.ec2.internal Relocations : / Packager : dev@mesos.apache.org URL : https://mesos.apache.org/ Summary : Cluster resource manager with efficient resource isolation Description :  ",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9582","02/17/2019 09:45:44",2,"Reviewbot jenkins jobs stops validating any reviews as soon as it sees a patch which does not apply ""The reviewbot Jenkins setup fetches all Mesos reviews since some time stamp, filters that list down to reviews which need to be validated, and then one by one validates each of the remaining review requests. In doing that it applies patches with {{support/apply-reviews.py}} which is invoked by shelling out wth a function {{shell}} in {{support/verify-reviews.py}}. If that function sees any error from the shell command {{exit(1)}} is called which immediately terminates the Jenkins job. 
As {{support/apply-reviews.py}} can fail if a patch does not apply cleanly anymore, this means that any review requests which cannot be applied can largely disable reviewbot. We should avoid calling {{exit}} in low-level functions in {{support/verify-reviews.py}} and instead bubble the error up to be handled at a larger scope. It looks like the script was already designed to handle exceptions, which might work much better here.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9594","02/21/2019 21:40:18",3,"Test `StorageLocalResourceProviderTest.RetryRpcWithExponentialBackoff` is flaky. ""Observed on ASF CI: Full log: [^RetryRpcWithExponentialBackoff-badrun.txt] """," /tmp/SRC/src/tests/storage_local_resource_provider_tests.cpp:5027 Failed to wait 1mins for offers ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9605","02/23/2019 02:30:22",2,"mesos/mesos-centos nightly docker image has to include the SHA of the build. ""As a snapshot build, we need to identify the exact HEAD of the branch build. Our current snapshot builds lack this information due to the way the build is set up. The current build identifies itself e.g. when running the agent like this: Note we lack a user in the first output line and the Git SHA altogether. Only tagged builds should commonly lack the SHA as it is not needed. """," $ docker run -it docker.io/mesos/mesos-centos:master-2019-02-15 mesos-slave --work_dir=/tmp --master=127.0.0.1:5050 I0223 02:22:43.317088 1 main.cpp:349] Build: 2019-02-15 22:46:47 by I0223 02:22:43.317643 1 main.cpp:350] Version: 1.7.2 I0223 02:22:43.332036 1 systemd.cpp:240] systemd version `219` detected I0223 02:22:43.332067 1 main.cpp:452] Initializing systemd state E0223 02:22:43.332135 1 main.cpp:461] EXIT with status 1: Failed to initialize systemd: Failed to locate systemd runtime directory: /run/systemd/system I0215 08:28:20.871155 34809 main.cpp:358] Git SHA: dff75bb705dca473a5c4019d9ed6e2d3530e3865 ",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9607","02/25/2019 16:38:20",2,"Removing a resource provider with consumers breaks resource publishing. ""Currently, the agent publishes all resources considered """"used"""" via the resource provider manager whenever it is asked to publish a subportion. If a resource provider with active users (e.g., tasks or even just executors) was removed but a user stays around, this will fail _any resource publishing_ on that node since a """"used"""" resource provider is not subscribed. We should either update the agent code to publish just deltas, or provide a workaround of the same effect in the resource provider manager.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0 +"MESOS-9610","02/26/2019 02:05:09",3,"Fetcher vulnerability - escaping from sandbox ""I have noticed that there is a possibility to exploit the fetcher and overwrite any file on the agent host. Scenario to reproduce: 1) prepare a file with any content and name the file like """"../../../etc/test"""" and archive it. We can use Python and the zipfile module to achieve that: 2) prepare a service that will use our artifact (exploit.zip) 3) run the service. At the end, we will get our file in /etc. As you can imagine, there are a lot of possibilities for how this can be used.    
"""," >>> import zipfile >>> zip = zipfile.ZipFile(""""exploit.zip"""", """"w"""") >>> zip.writestr(""""../../../../../../../../../../../../etc/mariusz_was_here.txt"""", """"some content"""") >>> zip.close() ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9612","02/26/2019 17:14:35",5,"Resource provider manager assumes all operations are triggered by frameworks ""When the agent tries to apply an operation to resource provider resources, it invokes {{ResourceProviderManager::applyOperation}} which in turn invokes {{ResourceProviderManagerProcess::applyOperation}}. That function currently assumes that the received message contains a valid {{FrameworkID}}, Since {{FrameworkID}} is not a trivial proto types, but instead one with a {{required}} field {{value}}, the message composed with the {{frameworkId}} below cannot be serialized which leads to a failure below which in turn triggers a {{CHECK}} failure in the agent's function interfacing with the manager. A typical scenario where we would want to support operator API calls here is to destroy leftover persistent volumes or reservations."""," void ResourceProviderManagerProcess::applyOperation( const ApplyOperationMessage& message) { const Offer::Operation& operation = message.operation_info(); const FrameworkID& frameworkId = message.framework_id(); // `framework_id` is `optional`. ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9613","02/26/2019 21:54:12",3,"Support seccomp `unconfined` option for whitelisting. ""Support seccomp `unconfined` option for whitelisting. Authorization needs to be implemented for this protobuf option.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9616","02/27/2019 23:12:26",2,"`Filters.refuse_seconds` declines resources not in offers. ""The [documentation|http://mesos.apache.org/documentation/latest/scheduler-http-api/#accept] of {{Filters.refuse_seconds}} says: {quote} Also, any of the offer’s resources not used in the ACCEPT call (e.g., to launch a task or task group) are considered declined and might be reoffered to other frameworks, meaning that they will not be reoffered to the scheduler for the amount of time defined by the filter. {quote} Consider an {{ACCEPT}} call with just a {{CREATE}} operation, but no {{LAUNCH}} or {{LAUNCH_GROUP}}. The {{CREATE}} call will generate a persistent volume resource that is *not* in the offer's resources, but it will still not be reoffered to the scheduler for the amount of time defined by the filter. Also, the term *used* is vague here. If we have an {{ACCEPT}} call with a {{CREATE}} on a disk followed by a {{DESTROY}} on the created persistent volume, would the disk be considered *used*?""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9618","02/28/2019 16:58:04",3,"Display quota consumption in the webui. ""Currently, the Roles table in the webui displays allocation and quota guarantees / limits. However, quota """"consumption"""" is different from allocation, in that reserved resources are always considered consumed against the quota. This discrepancy has led to confusion from users. One exampled occurred when an agent was added with a large reservation exceeding the memory quota guarantee. The user sees memory chopping in offers, and since the scheduler didn't want to use the reservation, it can't launch its tasks. 
If consumption is shown in the UI, we should include a tool tip that indicates how consumed is calculated so that users know how to interpret it.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9619","03/01/2019 00:22:30",3,"Mesos Master Crashes with Launch Group when using Port Resources ""Original Issue: [https://lists.apache.org/thread.html/979c8799d128ad0c436b53f2788568212f97ccf324933524f1b4d189@%3Cuser.mesos.apache.org%3E]  When the ports resources is removed, Mesos functions normally (I'm able to launch the task as many times as possible, while it always fails continually). Attached is a snippet of the mesos master log from OFFER to crash. ""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9621","03/01/2019 09:11:45",1,"Mesos failed to build due to error LNK2019 on Windows using MSVC. ""Issue description: Mesos failed to build due to error Could you please take a look? Reproduce steps: ErrorMessage:    """," LNK2019: unresolved external symbol """"public: __cdecl mesos::internal::slave::VolumeGidManager::~VolumeGidManager(void)"""" (??1VolumeGidManager@slave@internal@mesos@@QEAA@XZ) referenced in function """"public: void * __cdecl mesos::internal::slave::VolumeGidManager::`scalar deleting destructor'(unsigned int)""""{color} on Windows using MSVC. It can be first reproduced on mesos master branch [c03e51f|https://github.com/apache/mesos/commit/c03e51f1fe9cc7137635a7fe586fd890f7c7bdae]. # git clone -c core.autocrlf=true https://github.com/apache/mesos D:\mesos\src # Open a VS 2017 x64 command prompt as admin and browse to D:\mesos # cd src # .\bootstrap.bat # cd .. # mkdir build_x64 && pushd build_x64 # cmake ..\src -G """"Visual Studio 15 2017 Win64"""" -DCMAKE_SYSTEM_VERSION=10.0.17134.0 -DENABLE_LIBEVENT=1 -DHAS_AUTHENTICATION=0 -DPATCHEXE_PATH=""""C:\gnuwin32\bin"""" -T host=x64 # msbuild Mesos.sln /p:Configuration=Debug /p:Platform=x64 /maxcpucount:4 /t:Rebuild main.obj : error LNK2019: unresolved external symbol """"public: __cdecl mesos::internal::slave::VolumeGidManager::~VolumeGidManager(void)"""" (??1VolumeGidManager@slave@internal@mesos@@QEAA@XZ) referenced in function """"public: void * __cdecl mesos::internal::slave::VolumeGidManager::`scalar deleting destructor'(unsigned int)"""" (??_GVolumeGidManager@slave@internal@mesos@@QEAAPEAXI@Z) [D:\Mesos\build_x64\src\slave\mesos-agent.vcxproj]    107>D:\Mesos\build_x64\src\mesos-agent.exe : fatal error LNK1120: 1 unresolved externals [D:\Mesos\build_x64\src\slave\mesos-agent.vcxproj]    107>Done Building Project """"D:\Mesos\build_x64\src\slave\mesos-agent.vcxproj"""" (Rebuild target(s)) -- FAILED.     27>Done Building Project """"D:\Mesos\build_x64\src\slave\mesos-agent.vcxproj.metaproj"""" (Rebuild target(s)) -- FAILED.      2>Done Building Project """"D:\Mesos\build_x64\ALL_BUILD.vcxproj.metaproj"""" (Rebuild target(s)) -- FAILED.      1>Done Building Project """"D:\Mesos\build_x64\Mesos.sln"""" (Rebuild target(s)) -- FAILED. Build FAILED. ",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9622","03/01/2019 19:25:37",5,"Refactor SLRP with a CSI volume manager. ""To support both CSI v0 and v1, SLRP needs to be agnostic to CSI versions. This could be achieved by refactoring all CSI volume management code into a CSI volume manager that can be implemented with CSI v0 and v1. Also, the volume state proto needs to be agnostic to CSI spec version as well. 
Design doc: https://docs.google.com/document/d/1LPy839zwFw6UcRhmr65iKeMaHcoj6uUX25yJVbMknlY/edit#heading=h.1iswiwd3imin""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9624","03/01/2019 19:31:18",3,"Bundle CSI spec v1.0 in Mesos. ""We need to bundle both CSI v0 and v1 in Mesos. This requires some redesign of the source code filesystem layout.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9625","03/01/2019 19:36:34",3,"Make `DiskProfileAdaptor` agnostic to CSI spec version. ""To support multiple CSI versions, the {{DiskProfileAdaptor}} module needs to be decoupled from CSI version. Mainly, we'll have to introduce a version-agnostic {{VolumeCapability}}.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9626","03/01/2019 19:40:12",3,"Make SLRP pick the appropriate CSI versions for plugins. ""To detect the CSI version supported by a plugin, we could call {{v1.Probe}}, and fallback to {{v0.Probe}} if the previous call fails. Alternatively, we could introduce a field in {{CSIPluginInfo}} to specify the plugin version to use.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9627","03/01/2019 19:45:25",3,"Test CSI v1 in SLRP unit tests. ""We could add a command line flag in the test CSI plugin to switch to either v0 and v1, and parameterize the CSI version in SLRP unit tests.""","",0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9632","03/04/2019 23:35:23",5,"Refactor SLRP with a CSI service manager. ""The CSI volume manager relies on service containers, which should be agnostic to CSI versions. As the first step of MESOS-9622, we should first refactor SLRP with a CSI service manager that manages service container lifecycles before refactoring it with a CSI volume manager.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9637","03/07/2019 12:44:05",1,"Impossible to CREATE a volume on resource provider resources over the operator API ""Currently the master HTTP handler for operator API {{CREATE}} requests strips away the whole {{DiskInfo}} in any passed resources to calculate the consumed resources. This is incorrect for resource provider disk resources where the {{DiskInfo}} contains information unrelated to the persistence. The handler should remove exclusively information created by the operation.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9639","03/07/2019 20:11:14",1,"Make CSI plugin RPC metrics agnostic to CSI versions. ""Currently SLRP provides per-CSI-call metrics, e.g.: If we are to continue to provide such fine-grained metrics, when operators upgrade their CSI plugins to CSI v1, then SLRP would report another set of metrics for v1, which would be inconvenient to operators. Also the fine-grained metrics are not very useful for operators, as most information are highly correlated to per-operation metrics. So most likely operators would simply aggregate the per-CSI-call metrics for monitoring CSI plugins, and use per-operation metrics to monitor volume creation/destroy/etc. 
So instead of providing such fine-grained metrics, we could just provide a set of aggregated RPC metrics that are agnostic to CSI versions, such as: """," resource_providers/./csi_plugin/rpcs/csi.v0.controller.CreateVolume/successes resource_providers/./csi_plugin/rpcs/csi.v0.node.NodeGetId/errors resource_providers/./csi_plugin/rpcs_pending resource_providers/./csi_plugin/rpcs_finished resource_providers/./csi_plugin/rpcs_failed resource_providers/./csi_plugin/rpcs_cancelled ",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9655","03/15/2019 03:41:39",2,"Improving SLRP tests for preprovisioned volumes. ""We should improve SLRP tests for preprovisioned volumes: 1. Test that {{CREATE_DISK}} fails if the specified profile is unknown. 2. Update test {{AgentRegisteredWithNewId}} to ensure that recovered published volumes can be consumed by new tasks.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9661","03/18/2019 18:12:53",1,"Agent crashes when SLRP recovers dropped operations. ""MESOS-9537 is fixed by persisting dropped operations in SLRP, but the recovery codepath doesn't account for that: [https://github.com/apache/mesos/blob/master/src/resource_provider/storage/provider.cpp#L1278] This caused the agent to crash with the following message during SLRP recovery: """," Reached unreachable statement at /pkg/src/mesos/src/resource_provider/storage/provider.cpp:1283",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9667","03/21/2019 17:03:31",3,"Check failure when executor for task using resource provider resources subscribes before agent is registered ""When an executor for a task using resource provider resources subscribes before the agent has registered with the master, we trigger a fatal assertion. The reason for this failure is that we attempt to publish resources to the resource provider via the resource provider manager, but the resource provider manager is only created once the agent has registered with the master. As a workaround, one can terminate the executors and their tasks, and let the framework relaunch the tasks (provided it supports that). A possible workaround could be to prevent such executors from subscribing until the resource provider manager is available."""," Mar 21 13:42:47 agent1 mesos-agent[17277]: F0321 13:42:46.845535 17295 slave.cpp:8834] Check failed: 'resourceProviderManager.get()' Must be non NULL Mar 21 13:42:47 agent1 mesos-agent[17277]: *** Check failure stack trace: *",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9672","03/22/2019 20:04:31",2,"Docker containerizer should ignore pids of executors that do not pass the connection check. 
""When recovering executors with a tracked pid we first try to establish a connection to its libprocess address to avoid reaping an irrelevant process: https://github.com/apache/mesos/blob/4580834471fb3bc0b95e2b96e04a63d34faef724/src/slave/containerizer/docker.cpp#L1019-L1054 If the connection fails to establish, we should not track its pid: https://github.com/apache/mesos/blob/4580834471fb3bc0b95e2b96e04a63d34faef724/src/slave/containerizer/docker.cpp#L1071 One trouble this might cause is that if the pid is being used by another executor, this could lead to duplicate pid error and lead the agent into a crash loop: https://github.com/apache/mesos/blob/4580834471fb3bc0b95e2b96e04a63d34faef724/src/slave/containerizer/docker.cpp#L1066-L1068""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9677","03/25/2019 15:20:23",3,"RPM packages should be built with launcher sealing ""We should consider enabling launcher sealing in the Mesos RPM packages. Since this feature is built conditionally, it is hard to write e.g., module code against Mesos packages since required functions might be missing (e.g., [https://github.com/dcos/dcos-mesos-modules/commit/8ce70e6cc789054831daa3058647e326b2b11bc9] cannot be linked against the default RPM package anymore). The RPM's target platform centos7 should include a recent enough kernel for this.""","",0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9688","03/28/2019 22:31:05",3,"Quota is not enforced properly when subroles have reservations. ""Note: the discussion here concerns quota enforcement for top-level role, setting quota on sublevel role is not supported. If a subrole directly makes a reservation, the accounting of `roleConsumedQuota` will be off: https://github.com/apache/mesos/blob/master/src/master/allocator/mesos/hierarchical.cpp#L1703-L1705 Specifically, in this formula: `Consumed Quota = reservations + allocation - allocated reservations` The `reservations` part does not account subrole's reservation to its ancestors. If a reservation is made directly for role """"a/b"""", its reservation is accounted only for """"a/b"""" but not for """"a"""". Similarly, if a top role ( """"a"""") reservation is refined to a subrole (""""a/b""""), the current code first subtracts the reservation from """"a"""" and then track that under """"a/b"""". We should make it hierarchical-aware. The """"allocation"""" and """"allocated reservations"""" are both tracked in the sorter where the hierarchical relationship is considered -- allocations are added hierarchically.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9691","03/28/2019 22:53:01",3,"Quota headroom calculation is off when subroles are involved. ""Quota """"availableHeadroom"""" calculation: https://github.com/apache/mesos/blob/6276f7e73b0dbe7df49a7315cd1b83340d66f4ea/src/master/allocator/mesos/hierarchical.cpp#L1751-L1754 is off when subroles are involved. Specifically, in the formula -The """"allocated resources"""" part is hierarchical-aware and aggregate that across all roles, thus allocations to subroles will be counted multiple times (in the case of """"a/b"""", once for """"a"""" and once for """"a/b"""").- Looks like due to the presence of `INTERNAL` node, `roleSorter->allocationScalarQuantities(role)` is *not* hierarchical. Thus this is not an issue. 
(If role `a/b` consumes 1cpu and `a` consumes 1cpu, if we query `roleSorter->allocationScalarQuantities(""""a"""");` It will return 1cpu, which is correct. In the sorter, there are four nodes, root, `a` (internal, 1cpu), `a/.` (leaf, 1cpu), `a/b` (leaf, 1cpu). Query `a` will return `a/.`) The """"total reservations"""" is correct, since today it is """"flat"""" (reservations made to """"a/b"""" are not counted to """"a""""). Thus all reservations are only counted once -- which is the correct semantic here. However, once we fix MESOS-9688 (which likely requires reservation tracking to be hierarchical-aware), we need to ensure that the accounting is still correct. -The """"allocated reservations"""" is hierarchical-aware, thus overlap accounting would occur.- Similar to the `""""allocated resources""""` above, this is also not an issue at the moment. Basically, when calculating the available headroom, we need to ensure """"single-counting"""". Ideally, we only need to look at the root's consumptions."""," available headroom = total resources - allocated resources - (total reservations - allocated reservations) - unallocated revocable resources ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9692","03/29/2019 18:29:43",2,"Quota may be under allocated for disk resources. ""Due to a bug in the resources chopping logic: https://github.com/apache/mesos/blob/1915150c6a83cd95197e25a68a6adf9b3ef5fb11/src/master/allocator/mesos/hierarchical.cpp#L1665-L1668 When chopping different resources with the same name (e.g. vanilla disk and mount disk), we only include one of the resources. For example, if a role has a quota of 100disk, and an agent has 50 vanilla disk and 50 mount disk, the offer will only contain 50 disk (either vanilla or the mount type). The correct behavior should be that both disks should be offered. Since today, only disk resources might have the same name but different meta-data (for unreserved/nonrevocable/nonshared resources -- we only chop this), this bug should only affect disk resources today. The correct code should be: """," if (Resources::shrink(&resource, limitScalar.get())) { targetScalarQuantites[resource.name()] -= limitScalar.get(); // bug result += std::move(resource); } if (Resources::shrink(&resource, limitScalar.get())) { targetScalarQuantites[resource.name()] -= resource.scalar(); // Only subtract the shrunk resource scalar result += std::move(resource); } ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9693","03/29/2019 23:25:02",3,"Add master validation for SeccompInfo. ""1. if seccomp is not enabled, we should return failure if any fw specify seccompInfo and return appropriate status update. 2. at most one field of profile_name and unconfined should be set. better to validate in master""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9695","04/02/2019 08:02:37",2,"Remove the duplicate pid check in Docker containerizer ""In `DockerContainerizerProcess::_recover`, we check if there are two executors use duplicate pid, and error out if we find duplicate pid (see [here|https://github.com/apache/mesos/blob/1.7.2/src/slave/containerizer/docker.cpp#L1068:L1078] for details). 
However I do not see the value this check can give us but it will cause serious issue (agent crash loop when restarting) in rare case (a new executor reuse pid of an old executor), so I think we'd better to remove it from Docker containerizer.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9696","04/03/2019 09:36:09",1,"Test MasterQuotaTest.AvailableResourcesSingleDisconnectedAgent is flaky ""The test {{MasterQuotaTest.AvailableResourcesSingleDisconnectedAgent}} is flaky, especially under additional system load.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9697","04/03/2019 10:40:49",3,"Release RPMs are not uploaded to bintray ""While we currently build release RPMs, e.g., [https://builds.apache.org/view/M-R/view/Mesos/job/Packaging/job/CentOS/job/1.7.x/], these artifacts are not uploaded to bintray. Due to that RPM links on the downloads page [http://mesos.apache.org/downloads/] are broken.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9701","04/04/2019 22:24:11",3,"Allocator's roles map should track reservations. ""Currently, the allocator's {{roles}} map only tracks roles that have allocations or framework subscriptions: https://github.com/apache/mesos/blob/1.7.2/src/master/allocator/mesos/hierarchical.hpp#L531-L535 And we separately track a map of total reservations for each role: https://github.com/apache/mesos/blob/1.7.2/src/master/allocator/mesos/hierarchical.hpp#L541-L547 Confusingly, the {{roles}} map won't have an entry when there is a reservation for a role but no allocations or frameworks subscribed. We should ensure that the map has an entry when there are reservations. Also, we can consolidate the reservation information and framework ids into the same map, e.g.: """," struct Role { hashset frameworkIds; ResourceQuantities totalReservations; }; hashmap roles; ",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9704","04/05/2019 07:06:46",3,"Support docker manifest v2s2 config GC. ""After docker manifest v2s2 support, layer GC is still properly supported. However, the manifest config is not garbage collected. Need to add the config dir to the checkpointed LAYERS_FILE to support config GC.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9710","04/08/2019 21:22:54",3,"Add tests to ensure random sorter performs correct weighted sorting. ""We added tests for the weighted shuffle algorithm, but didn't test that the RandomSorter's sort() function behaves correctly. We should also test that hierarchical weights in the random sorter behave correctly.""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9711","04/09/2019 12:48:45",1,"Avoid shutting down executors registering before a required resource provider. ""If an HTTP-based executor resubscribes after agent failover before a resource provider exposing some of its resources has subscribed itself the agent currently does not know how to inform the resource provider about the existing resource user and shuts the executor down. This is not optimal as the resource provider might subscribe soon, but we fail the task nevertheless. 
We should consider improving on that, e.g., by deferring executor subscription until all providers have resubscribed or their registration timeout is reached, see MESOS-7554.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9712","04/09/2019 15:38:20",3,"StorageLocalResourceProviderTest.CsiPluginRpcMetrics is flaky. ""From an internal CI run: """," [ RUN ] StorageLocalResourceProviderTest.CsiPluginRpcMetrics 06:56:26 I0409 06:56:26.350445 23181 cluster.cpp:176] Creating default 'local' authorizer 06:56:26 malloc_consolidate(): invalid chunk size 06:56:26 *** Aborted at 1554792986 (unix time) try """"date -d @1554792986"""" if you are using GNU date *** 06:56:26 PC: @ 0x7f1cf4481f3b (unknown) 06:56:26 *** SIGABRT (@0x5a8d) received by PID 23181 (TID 0x7f1ce9be8700) from PID 23181; stack trace: *** 06:56:26 @ 0x7f1cf461b8e0 __GI___pthread_rwlock_rdlock 06:56:26 @ 0x7f1cf4481f3b (unknown) 06:56:26 @ 0x7f1cf44832f1 (unknown) 06:56:26 @ 0x7f1cf44c4867 (unknown) 06:56:26 @ 0x7f1cf44cae0a (unknown) 06:56:26 @ 0x7f1cf44cb10e (unknown) 06:56:26 @ 0x7f1cf44cddad (unknown) 06:56:26 @ 0x7f1cf44cf7dd (unknown) 06:56:26 @ 0x7f1cf4a647a8 (unknown) 06:56:26 @ 0x7f1cf88d0805 google::LogMessage::Init() 06:56:26 @ 0x7f1cf88d10ac google::LogMessage::LogMessage() 06:56:26 @ 0x7f1cf752a46a mesos::internal::master::Master::initialize() 06:56:26 @ 0x7f1cf882bd72 process::ProcessManager::resume() 06:56:26 @ 0x7f1cf88303c6 _ZNSt6thread11_State_implISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv 06:56:26 @ 0x7f1cf4a8ee6f (unknown) 06:56:26 @ 0x7f1cf4610f2a (unknown) 06:56:26 @ 0x7f1cf4543edf (unknown) ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9727","04/11/2019 01:07:54",1,"Heartbeat calls from executor to agent are reported as errors ""These HEARTBEAT calls and events were added in MESOS-7564.  HEARTBEAT calls are generated by the executor library, which does not have access to the executor's Framework/Executor IDs. The library therefore uses some dummy values instead, because HEARTBEAT calls do not really require required fields. When the agent receives these dummy values, it returns a 400 Bad Request. It should return 202 Accepted instead.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-9733","04/17/2019 19:16:49",3,"Random sorter generates non-uniform result for hierarchical roles. ""In the presence of hierarchical roles, the random sorter shuffles roles level by level and then pick the active leave nodes using DFS: https://github.com/apache/mesos/blob/7e7cd8de1121589225049ea33df0624b2a1bd754/src/master/allocator/sorter/random/sorter.cpp#L513-L529 This makes the result less random because subtrees are always picked together. For example, random sorting result such as `[a/., c/d, a/b, …]` is impossible. ""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9750","05/01/2019 02:54:19",2,"Agent V1 GET_STATE response may report a complete executor's tasks as non-terminal after a graceful agent shutdown ""When the following steps occur: 1) A graceful shutdown is initiated on the agent (i.e. SIGUSR1 or /master/machine/down). 2) The executor is sent a kill, and the agent counts down on {{executor_shutdown_grace_period}}. 3) The executor exits, before all terminal status updates reach the agent. 
This is more likely if {{executor_shutdown_grace_period}} passes. This results in a completed executor, with non-terminal tasks (according to status updates). When the agent starts back up, the completed executor will be recovered and shows up correctly as a completed executor in {{/state}}. However, if you fetch the V1 {{GET_STATE}} result, there will be an entry in {{launched_tasks}} even though nothing is running. This happens because we combine executors and completed executors when constructing the response. The terminal task(s) with non-terminal updates appear under completed executors. https://github.com/apache/mesos/blob/89c3dd95a421e14044bc91ceb1998ff4ae3883b4/src/slave/http.cpp#L1734-L1756"""," get_tasks { launched_tasks { name: """"test-task"""" task_id { value: """"dff5a155-47f1-4a71-9b92-30ca059ab456"""" } framework_id { value: """"4b34a3aa-f651-44a9-9b72-58edeede94ef-0000"""" } executor_id { value: """"default"""" } agent_id { value: """"4b34a3aa-f651-44a9-9b72-58edeede94ef-S0"""" } state: TASK_RUNNING resources { ... } resources { ... } resources { ... } resources { ... } statuses { task_id { value: """"dff5a155-47f1-4a71-9b92-30ca059ab456"""" } state: TASK_RUNNING agent_id { value: """"4b34a3aa-f651-44a9-9b72-58edeede94ef-S0"""" } timestamp: 1556674758.2175469 executor_id { value: """"default"""" } source: SOURCE_EXECUTOR uuid: """"xPmn\234\236F&\235\\d\364\326\323\222\224"""" container_status { ... } } } } get_executors { completed_executors { executor_info { executor_id { value: """"default"""" } command { value: """""""" } framework_id { value: """"4b34a3aa-f651-44a9-9b72-58edeede94ef-0000"""" } } } } get_frameworks { completed_frameworks { framework_info { user: """"user"""" name: """"default"""" id { value: """"4b34a3aa-f651-44a9-9b72-58edeede94ef-0000"""" } checkpoint: true hostname: """"localhost"""" principal: """"test-principal"""" capabilities { type: MULTI_ROLE } capabilities { type: RESERVATION_REFINEMENT } roles: """"*"""" } } } ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-9759","05/01/2019 20:18:20",2,"Log required quota headroom and available quota headroom in the allocator. ""This would ease the debugging of allocation issues.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9765","05/03/2019 20:03:32",1,"Test `ROOT_CreateDestroyPersistentMountVolumeWithReboot` is flaky. ""Observed a failure on test {{CSIVersion/StorageLocalResourceProviderTest.ROOT_CreateDestroyPersistentMountVolumeWithReboot/v0}} with the following error: The problem is that the task was OOM-killed: This might happen to other SLRP root tests as well."""," unknown file: Failure Unexpected mock function call - returning directly. Function call: statusUpdate(0x7ffea80beb70, @0x7f6027ff6b90 136-byte object ) Google Mock tried the following 6 expectations, but none matched: /home/centos/workspace/mesos/Mesos_CI-build/FLAG/CMake/label/mesos-ec2-centos-7/mesos/src/tests/storage_local_resource_provider_tests.cpp:2552: tried expectation #0: EXPECT_CALL(sched, statusUpdate(&driver, TaskStatusStateEq(TASK_STARTING)))... 
Expected arg #1: task status state eq TASK_STARTING Actual: 136-byte object Expected: to be called once Actual: called once - saturated and active /home/centos/workspace/mesos/Mesos_CI-build/FLAG/CMake/label/mesos-ec2-centos-7/mesos/src/tests/storage_local_resource_provider_tests.cpp:2553: tried expectation #1: EXPECT_CALL(sched, statusUpdate(&driver, TaskStatusStateEq(TASK_RUNNING)))... Expected arg #1: task status state eq TASK_RUNNING Actual: 136-byte object Expected: to be called once Actual: called once - saturated and active /home/centos/workspace/mesos/Mesos_CI-build/FLAG/CMake/label/mesos-ec2-centos-7/mesos/src/tests/storage_local_resource_provider_tests.cpp:2554: tried expectation #2: EXPECT_CALL(sched, statusUpdate(&driver, TaskStatusStateEq(TASK_FINISHED)))... Expected arg #1: task status state eq TASK_FINISHED Actual: 136-byte object Expected: to be called once Actual: called once - saturated and active /home/centos/workspace/mesos/Mesos_CI-build/FLAG/CMake/label/mesos-ec2-centos-7/mesos/src/tests/storage_local_resource_provider_tests.cpp:2630: tried expectation #3: EXPECT_CALL(sched, statusUpdate(&driver, TaskStatusStateEq(TASK_STARTING)))... Expected arg #1: task status state eq TASK_STARTING Actual: 136-byte object Expected: to be called once Actual: called once - saturated and active /home/centos/workspace/mesos/Mesos_CI-build/FLAG/CMake/label/mesos-ec2-centos-7/mesos/src/tests/storage_local_resource_provider_tests.cpp:2631: tried expectation #4: EXPECT_CALL(sched, statusUpdate(&driver, TaskStatusStateEq(TASK_RUNNING)))... Expected arg #1: task status state eq TASK_RUNNING Actual: 136-byte object Expected: to be called once Actual: never called - unsatisfied and active /home/centos/workspace/mesos/Mesos_CI-build/FLAG/CMake/label/mesos-ec2-centos-7/mesos/src/tests/storage_local_resource_provider_tests.cpp:2632: tried expectation #5: EXPECT_CALL(sched, statusUpdate(&driver, TaskStatusStateEq(TASK_FINISHED)))... Expected arg #1: task status state eq TASK_FINISHED Actual: 136-byte object Expected: to be called once Actual: never called - unsatisfied and active I0503 01:04:24.410918 24105 memory.cpp:515] OOM detected for container df47d711-3ee8-4430-b90d-b29bcb90d407 I0503 01:04:24.411118 24105 memory.cpp:555] Memory limit exceeded: Requested: 32MB Maximum Used: 32MB MEMORY STATISTICS: cache 471040 rss 33083392 rss_huge 14680064 mapped_file 0 swap 0 pgpgin 4662 pgpgout 47 pgfault 77439 pgmajfault 0 inactive_anon 471040 active_anon 33083392 inactive_file 0 active_file 0 unevictable 0 hierarchical_memory_limit 33554432 hierarchical_memsw_limit 9223372036854771712 total_cache 471040 total_rss 33083392 total_rss_huge 14680064 total_mapped_file 0 total_swap 0 total_pgpgin 4662 total_pgpgout 47 total_pgfault 77439 total_pgmajfault 0 total_inactive_anon 471040 total_active_anon 33083392 total_inactive_file 0 total_active_file 0 total_unevictable 0 I0503 01:04:24.412195 24105 containerizer.cpp:3147] Container df47d711-3ee8-4430-b90d-b29bcb90d407 has reached its limit for resource [{""""name"""":""""mem"""",""""scalar"""":{""""value"""":32.0},""""type"""":""""SCALAR""""}] and will be terminated I0503 01:04:24.412314 24105 containerizer.cpp:2586] Destroying container df47d711-3ee8-4430-b90d-b29bcb90d407 in RUNNING state ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9766","05/03/2019 20:50:09",3,"/__processes__ endpoint can hang. ""A user reported that the {{/\_\_processes\_\_}} endpoint occasionally hangs. 
Stack traces provided by [~alexr] revealed that all the threads appeared to be idle waiting for events. After investigating the code, the issue was found to be possible when a process gets terminated after the {{/\_\_processes\_\_}} route handler dispatches to it, thus dropping the dispatch and abandoning the future.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9774","05/08/2019 18:54:52",8,"Design client side SSL certificate verification in Libprocess. ""Notes from an offline discussion with [~vinodkone], [~tillt], [~jgehrcke], [~CarlDellar]. * Authentication can happen at the transport and/or at the application layer. There is no real benefit in doing it at both layers. * Authentication at the application layer allows for subsequent authorization. * We would like to have an option to mutually authenticate all components in a Mesos cluster, including external tooling, regardless at which layer, to secure communication channels. * Mutual authentication at the transport layer everywhere can be hard because some components can't or don't want to provide certificates, e.g., a Lua HTTP client reading master's state. * Theoretically, some components, e.g., Mesos masters and agents, can form an ensemble inside which all connections are authenticated on both sides at the transport layer (TLS certificate verification). Practically, it may then be hard to implement communication with the components outside such ensemble, e.g., frameworks, executors, since at least two types of connections/sockets should be distinguished: with and without client certificate verification (Libprocess can't do it now), or all the traffic between the ensemble and outside components should go via a proxy. * An alternative is to combine server side TLS certificate verification with the client side application layer authentication. For that to be secure, we need to implement client authentication for Mesos components, e.g., master with agent, replica with other replica (see MESOS-9638). Plus relax certificate verification option in Libprocess for outgoing connections only. For non-streaming connections a secret connection identifier should be passed by the client to prove they are the entity that has been previously authenticated. * Whatever path we choose, truly secure communication channels will become when separate certificates for Mesos components are used, either signed by a different root CA or using a specific CN/SAN, which can't be obtained by everyone. What needs to be done: * Introduce or adjust the Libprocess flag for verifying certificates for outgoing connections only. * Verify how replicas in the master's replicated log discover other replicas and what harm a rogue replica can do if it tries to join the quorum. Estimate whether master's replicated log can use its own copy of Libprocess. * Implement Mesos master authentication with Mesos agents, MESOS-9638.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9778","05/09/2019 23:00:04",1,"Randomized the agents in the second allocation stage. ""Agents are currently randomized before the 1st allocation stage (the quota allocation stage) but not in the 2nd stage. One perceived issue is that resources on the agents in the front of the queue are likely to be mostly allocated in the 1st stage, leaving only slices of resources available for the second stage. 
Thus we may see consistently low quality offers for role/frameworks that get allocated first in the 2nd stage. Consider randomizing the agents in the second allocation stage.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9779","05/10/2019 23:11:36",1,"`UPDATE_RESOURCE_PROVIDER_CONFIG` agent call returns 404 ambiguously. ""The {{UPDATE_RESOURCE_PROVIDER_CONFIG}} API call returns 404 if the specified resource provider does not exist. However, libprocess also returns 404 when the `/api/v1` route is not set up. As a result, a client will get confused when receiving 404 and wouldn't know the actual state of the resource provider config. We should not overload 404 with different errors. The other codes for client errors returned by this call are: * 400 if the request is not well-formed. * 403 if the call is not authorized. To avoid ambiguity, we could keep 404 to represent that the requested URI does not exist, and use 409 to indicate that based on the current the current agent state, the update request cannot be done because the specified resource provider config does not exist, similar to what a PATCH command would return if certain elements do not exist in the requsted resource (https://www.ietf.org/rfc/rfc5789.txt): Adapting 409 also makes {{UPDATE_RESOURCE_PROVIDER_CONFIG}} symmetric to {{ADD_RESOURCE_PROVIDER_CONFIG}}."""," Conflicting state: Can be specified with a 409 (Conflict) status code when the request cannot be applied given the state of the resource. For example, if the client attempted to apply a structural modification and the structures assumed to exist did not exist (with XML, a patch might specify changing element 'foo' to element 'bar' but element 'foo' might not exist). ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0 +"MESOS-9782","05/13/2019 13:32:19",1,"Random sorter fails to clear removed clients. ""In `RandomSorter::SortInfo::updateRelativeWeights()`, we do not clear the stale `clients` and `weights` vector if the state is dirty. This would result in an allocator crash due to including removed framework and roles in a sorted result e.g. check failure would occur here (https://github.com/apache/mesos/blob/62f0b6973b2268a3305fd631a914433a933c6757/src/master/allocator/mesos/hierarchical.cpp#L1849).""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9785","05/14/2019 22:45:06",2,"Frameworks recovered from reregistered agents are not reported to master `/api/v1` subscribers. ""Currently when an operator subscribes to the {{/api/v1}} master endpoint, it would receive a {{SUBSCRIBED}} event carrying information about all known frameworks, including registered ones and unregistered ones. If an unregistered framework reregisters later, a {{FRAMEWORK_UPDATED}} event would be sent to the operator. However, if an operator subscribes to the {{/api/v1}} master endpoint after a master failover but before any of the frameworks and agents reregisters, {{SUBSCRIBED}} would contain no recovered framework information. 
When a agent with running tasks reregisters later, unregistered frameworks of those tasks will be recovered, but no {{FRAMEWORK_ADDED}} will be sent to the operator, so the operator will receive {{TASK_ADDED}} for those tasks with unknown framework IDs.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9788","05/17/2019 11:21:23",8,"Configurable IPC namespace and shared memory in `namespaces/ipc` isolator ""See [design doc|https://docs.google.com/document/d/10t1jf97vrejUWEVSvxGtqw4vhzfPef41JMzb5jw7l1s/edit?usp=sharing] for the background of this improvement and how we are going to implement it.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9794","05/22/2019 18:30:18",8,"Design doc for container debug endpoint. ""Design doc for container debug endpoint.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9802","05/29/2019 11:31:11",2,"Remove quota role sorter in the allocator. ""Remove the dedicated quota role sorter in favor of using the same sorting between satisfying guarantees and bursting above guarantees up to limits. This is tech debt from when a """"quota role"""" was considered different from a """"non-quota"""" role. However, they are the same, one just has a default quota. The only practical difference between quota role sorter and role sorter now is that quota role sorter ignores the revocable resources both in its total resource pool as well as role allocations. Thus when using DRF, it does not count revocable resources which is arguably the right behavior. By removing the quota sorter, we will have all roles sorted together. When using DRF, in the 1st quota guarantee allocation stage, its share calculation will also include revocable resources.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9803","05/30/2019 04:06:51",2,"Memory leak caused by an infinite chain of futures in `UriDiskProfileAdaptor`. ""Before MESOS-8906, {{UriDiskProfileAdaptor}} only update its promise for watchers if the polled profile matrix becomes larger in size, and this prevents the following code in the {{watch}} function from creating an infinite chain of futures when the profile matrix keeps the same: https://github.com/apache/mesos/blob/fa410f2fb8efb988590f4da2d4cfffbb2ce70637/src/resource_provider/storage/uri_disk_profile_adaptor.cpp#L159-L160 However, the patch of MESOS-8906 removes the size check in the {{notify}} function to allow profile selectors to be updated. As a result, once the watch function is called, the returned future will be chained with a new promise every time a poll is made, hence creating a memory leak. A jemalloc call graph for a 2hr trace is attached.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9806","06/03/2019 01:54:05",5,"Address allocator performance regression due to the addition of quota limits. ""In MESOS-9802, we removed the quota role sorter which is tech debt. However, this slows down the allocator. The problem is that in the first stage, even though a cluster might have no active roles with non-default quota, the allocator will now have to sort and go through each and every role in the cluster. Benchmark result shows that for 1k roles with 2k frameworks, the allocator could experience ~50% performance degradation. There are a couple of ways to address this issue. 
For example, we could make the sorter aware of quota. And add a method, say `sortQuotaRoles`, to return all the roles with non-default quota. Alternatively, an even better approach would be to deprecate the sorter concept and just have two standalone functions e.g. sortRoles() and sortQuotaRoles() that takes in the role tree structure (not yet exist in the allocator) and return the sorted roles. In addition, when implementing MESOS-8068, we need to do more during the allocation cycle. In particular, we need to call shrink many more times than before. These all contribute to the performance slowdown. Specifically, for the quota oriented benchmark `HierarchicalAllocator_WithQuotaParam.LargeAndSmallQuota/2` we can observe 2-3x slowdown compared to the previous release (1.8.1): Current master: QuotaParam/BENCHMARK_HierarchicalAllocator_WithQuotaParam.LargeAndSmallQuota/2 Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with drf sorter Made 3500 allocations in 32.051382735secs Made 0 allocation in 27.976022773secs 1.8.1: HierarchicalAllocator_WithQuotaParam.LargeAndSmallQuota/2 Made 3500 allocations in 13.810811063secs Made 0 allocation in 9.885972984secs""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9807","06/03/2019 17:15:15",5,"Introduce a `struct Quota` wrapper. ""We should introduce: struct Qutota { ResourceQuantities guarantees; ResourceLimits limits; } There are a couple of small hurdles. First, there is already a struct Quota wrapper in """"include/mesos/quota/quota.hpp"""", we need to deprecate that first. Second, `ResourceQuantities` and `ResourceLimits` are right now only used in internal headers. We probably want to move them into public header, since this struct will also be used in allocator interface which is also in the public header. (Looking at this line, the boundary is alreayd breached: https://github.com/apache/mesos/blob/master/include/mesos/allocator/allocator.hpp#L41)""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9813","06/04/2019 22:26:41",3,"Track role consumed quota for all roles in the allocator. ""We are already tracking role consumed quota for roles with non-default quota in the allocator. We should expand that to track all roles' consumptions which will then be exposed through metrics later.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9814","06/04/2019 22:44:19",8,"Implement DrainAgent master/operator call with associated registry actions ""We want to add several calls associated with agent draining: Each field will be persisted in the registry: """," message Call { enum Type { . . . DRAIN_AGENT = 37; DEACTIVATE_AGENT = 38; REACTIVATE_AGENT = 39; } . . . message DrainAgents { message DrainConfig { required AgentID agent = 1; // The duration after which the agent should complete draining. // If tasks are still running after this time, they will // be forcefully terminated. optional Duration max_grace_period = 2; // Whether or not this agent will be removed permanently // from the cluster when draining is complete. optional bool destructive = 3 [default = false]; } repeated DrainConfig drain_config = 1; } message DeactivateAgents { repeated AgentID agents = 1; } message ReactivateAgents { repeated AgentID agents = 1; } } message Registry { . . . message Slave { . . . optional DrainInfo drain_info = 2; } . . . message UnreachableSlave { . . . 
optional DrainInfo drain_info = 3; } } ",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9815","06/04/2019 22:46:02",5,"Deprecate maintenance primitives ""The existing maintenance primitives should be marked deprecated in the protobuf definitions, and the documentation should be mostly removed with links which redirect to the new agent draining feature. The {{updateMaintenanceSchedule()}} handler code path should also be updated to verify that the new agent draining feature is not in use before allowing maintenance schedules to be created.""","",0,0,0,1,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9816","06/04/2019 22:47:16",3,"Add draining state information to master state endpoints ""The response for {{GET_STATE}} and {{GET_AGENTS}} should include the new fields indicating deactivation or draining states: The {{/state}} and {{/state-summary}} handlers should also expose this information."""," message Response { . . . message GetAgents { message Agent { . . . optional bool deactivated = 12; optional DrainInfo drain_info = 13; . . . } } . . . } ",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9817","06/04/2019 22:52:50",3,"Add minimum master capability for draining and deactivation states ""Since we are adding new fields to the registry to represent agent draining/deactivation, we cannot allow downgrades of masters while such features are in use. A new minimum capability should be added to the registry with the appropriate documentation: https://github.com/apache/mesos/blob/663bfa68b6ab68f4c28ed6a01ac42ac2ad23ac07/src/master/master.cpp#L1681-L1688 http://mesos.apache.org/documentation/latest/downgrades/""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9818","06/04/2019 23:06:52",2,"Implement minimal agent-side draining handler ""To unblock other work that can be done in parallel, this ticket captures the implementation of a handler for the {{DrainSlaveMessage}} in the agent which will: * Checkpoint the {{DrainInfo}} * Populate a new data member in the agent with the {{DrainInfo}} ""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9819","06/04/2019 23:09:23",5,"Update the agent's behavior when marked GONE ""Currently, when an agent is marked GONE, the master sends a {{ShutdownMessage}} to the agent, which causes it to shutdown all frameworks and then terminate. As part of the agent draining work, we would like to change this behavior so that instead of terminating, the agent will sleep indefinitely once all frameworks are shut down. This will avoid the issue of a flapping agent process when the agent is managed by an init service like systemd.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9820","06/05/2019 04:46:42",5,"Add `updateQuota()` method to the allocator. ""This is the method that underlies the `UPDATE_QUOTA` operator call. This will allow the allocator to set different values for guarantees and limits. The existing `setQuota` and `removeQuota` methods in the allocator will be deprecated. This will likely break many existing allocator tests. 
We should fix and refactor tests to verify the bursting up to limits feature.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9821","06/05/2019 17:54:00",2,"Agent kills all tasks when draining ""The agent's {{DrainSlaveMessage}} handler should kill all tasks when draining is initiated, specifying a kill policy with a grace period equal to the minimum of the task's grace period and the min_grace_period specified in the drain message.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9822","06/05/2019 17:55:20",2,"Agent recovery code for task draining ""In the case where the agent crashes while it's in the process of killing tasks due to agent draining, it must recover the checkpointed {{DrainInfo}} and kill any tasks which did not have KILL events sent to them already.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9823","06/05/2019 17:56:59",3,"Agent should modify status updates while draining ""While it's draining, the agent should decorate TASK_KILLING and TASK_KILLED status updates with REASON_AGENT_DRAINING. It should also convert TASK_KILLED to TASK_GONE_BY_OPERATOR in the {{mark_gone}} case, ensuring that TASK_GONE_BY_OPERATOR is the state checkpointed to disk.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9831","06/05/2019 22:36:32",2,"Master should not report disconnected resource providers. ""MESOS-9384 attempted to make the master to garbage-collect disconnected resource providers. However, if there are disconnected resource providers but none of the connected ones changes, the following code snippet would make the master ignore the agent update and skip the garbage collection: https://github.com/apache/mesos/blob/2ae1296c668686d234be92b00bd7abbc0a6194b0/src/master/master.cpp#L8186-L8234 The condition to ignore the agent update will be triggered in one of the following conditions: 1. The resource provider has no resource, so the agent's total resource remains the same. 2. When the agent restarts and reregisters, its resource provider resources will be reset. As a result, the master will still keep records for the disconnected resource providers and report them.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9835","06/11/2019 01:42:12",2,"`QuotaRoleAllocateNonQuotaResource` is failing. "" The test is failing because: After agent3 is added, it misses a settle call where the allocation of agent3 is racy. In addition, after https://github.com/apache/mesos/commit/7df8cc6b79e294c075de09f1de4b31a2b88423c8 we now offer nonquota resources on an agent (even that means """"chopping"""") on top of role's satisfied guarantees, the test needs to be updated in accordance with the behavior change."""," [ RUN ] HierarchicalAllocatorTest.QuotaRoleAllocateNonQuotaResource ../../src/tests/hierarchical_allocator_tests.cpp:4094: Failure Value of: allocations.get().isPending() Actual: false Expected: true [ FAILED ] HierarchicalAllocatorTest.QuotaRoleAllocateNonQuotaResource (12 ms) ",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9836","06/11/2019 04:27:55",5,"Docker containerizer overwrites `/mesos/slave` cgroups. ""The following bug was observed on our internal testing cluster. 
The docker containerizer launched a container on an agent: After the container was launched, the docker containerizer did a {{docker inspect}} on the container and cached the pid: [https://github.com/apache/mesos/blob/0c431dd60ae39138cc7e8b099d41ad794c02c9a9/src/slave/containerizer/docker.cpp#L1764] The pid should be slightly greater than 13716. The docker executor sent a {{TASK_FINISHED}} status update around 16 minutes later: After receiving the terminal status update, the agent asked the docker containerizer to update {{cpu.cfs_period_us}}, {{cpu.cfs_quota_us}} and {{memory.soft_limit_in_bytes}} of the container through the cached pid: [https://github.com/apache/mesos/blob/0c431dd60ae39138cc7e8b099d41ad794c02c9a9/src/slave/containerizer/docker.cpp#L1696] Note that the cgroup of {{cpu.shares}} was {{/mesos/slave}}. This was possibly because that over the 16 minutes the pid got reused: It was highly likely that the container itself exited around 06:09:35, way before the docker executor detected and reported the terminal status update, and then its pid was reused by another forked child of the agent, and thus {{cpu.cfs_period_us}}, {{cpu.quota_us}} and {{memory.soft_limit_in_bytes}} of the {{/mesos/slave}} cgroup was mistakenly overwritten."""," I0523 06:00:53.888579 21815 docker.cpp:1195] Starting container 'f69c8a8c-eba4-4494-a305-0956a44a6ad2' for task 'apps_docker-sleep-app.1fda5b8e-7d20-11e9-9717-7aa030269ee1' (and executor 'apps_docker-sleep-app.1fda5b8e-7d20-11e9-9717-7aa030269ee1') of framework 415284b7-2967-407d-b66f-f445e93f064e-0011 I0523 06:00:54.524171 21815 docker.cpp:783] Checkpointing pid 13716 to '/var/lib/mesos/slave/meta/slaves/60c42ab7-eb1a-4cec-b03d-ea06bff00c3f-S2/frameworks/415284b7-2967-407d-b66f-f445e93f064e-0011/executors/apps_docker-sleep-app.1fda5b8e-7d20-11e9-9717-7aa030269ee1/runs/f69c8a8c-eba4-4494-a305-0956a44a6ad2/pids/forked.pid' I0523 06:16:17.287595 21809 slave.cpp:5566] Handling status update TASK_FINISHED (Status UUID: 4e00b786-b773-46cd-8327-c7deb08f1de9) for task apps_docker-sleep-app.1fda5b8e-7d20-11e9-9717-7aa030269ee1 of framework 415284b7-2967-407d-b66f-f445e93f064e-0011 from executor(1)@172.31.1.7:36244 I0523 06:16:17.290447 21815 docker.cpp:1868] Updated 'cpu.shares' to 102 at /sys/fs/cgroup/cpu,cpuacct/mesos/slave for container f69c8a8c-eba4-4494-a305-0956a44a6ad2 I0523 06:16:17.290660 21815 docker.cpp:1895] Updated 'cpu.cfs_period_us' to 100ms and 'cpu.cfs_quota_us' to 10ms (cpus 0.1) for container f69c8a8c-eba4-4494-a305-0956a44a6ad2 I0523 06:16:17.889816 21815 docker.cpp:1937] Updated 'memory.soft_limit_in_bytes' to 32MB for container f69c8a8c-eba4-4494-a305-0956a44a6ad2 # zgrep 'systemd.cpp:98\]' /var/log/mesos/archive/mesos-agent.log.12.gz ... I0523 06:00:54.525178 21815 systemd.cpp:98] Assigned child process '13716' to 'mesos_executors.slice' I0523 06:00:55.078546 21808 systemd.cpp:98] Assigned child process '13798' to 'mesos_executors.slice' I0523 06:00:55.134096 21808 systemd.cpp:98] Assigned child process '13799' to 'mesos_executors.slice' ... 
I0523 06:06:30.997439 21808 systemd.cpp:98] Assigned child process '32689' to 'mesos_executors.slice' I0523 06:06:31.050976 21808 systemd.cpp:98] Assigned child process '32690' to 'mesos_executors.slice' I0523 06:06:31.110514 21815 systemd.cpp:98] Assigned child process '32692' to 'mesos_executors.slice' I0523 06:06:33.143726 21818 systemd.cpp:98] Assigned child process '446' to 'mesos_executors.slice' I0523 06:06:33.196251 21818 systemd.cpp:98] Assigned child process '447' to 'mesos_executors.slice' I0523 06:06:33.266332 21816 systemd.cpp:98] Assigned child process '449' to 'mesos_executors.slice' ... I0523 06:09:34.870056 21808 systemd.cpp:98] Assigned child process '13717' to 'mesos_executors.slice' I0523 06:09:34.937762 21813 systemd.cpp:98] Assigned child process '13744' to 'mesos_executors.slice' I0523 06:09:35.073971 21817 systemd.cpp:98] Assigned child process '13754' to 'mesos_executors.slice' ... ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9837","06/12/2019 11:53:42",5,"Implement `FutureTracker` class along with helper functions. ""Both `track()` and `pending_futures()` helper functions depend on the `FutureTracker` actor. `FutureTracker` actor must be available globally and there must be only one instance of this actor.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9843","06/12/2019 15:35:26",3,"Implement tests for the `containerizer/debug` endpoint. ""Implement tests for container stuck issues and check that the agent's `containerizer/debug` endpoint returns a JSON object containing information about pending operations.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9845","06/13/2019 18:28:45",5,"Add docs for automatic agent draining ""Will probably require: * A separate page describing the feature (in lieu or superceding the maintenance doc) * Updates to the API docs, for master and agent APIs. Any GET_STATE or similar call changes will also be included.""","",0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9846","06/13/2019 18:30:15",3,"Update UI for agent draining ""We should expose the new agent metadata in the web UI: * Drain info * Deactivation state It may also be worth exposing unreachable and gone agents in some way, so that agents do not simply disappear from the UI when they transition to unreachable and/or gone, during or after maintenance.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9847","06/14/2019 00:45:40",5,"Docker executor doesn't wait for status updates to be ack'd before shutting down. ""The docker executor doesn't wait for pending status updates to be acknowledged before shutting down, instead it sleeps for one second and then terminates: This would result in racing between task status update (e.g. TASK_FINISHED) and executor exit. The latter would lead agent generating a `TASK_FAILED` status update by itself, leading to the confusing case where the agent handles two different terminal status updates."""," void _stop() { // A hack for now ... but we need to wait until the status update // is sent to the slave before we shut ourselves down. // TODO(tnachen): Remove this hack and also the same hack in the // command executor when we have the new HTTP APIs to wait until // an ack. 
os::sleep(Seconds(1)); driver.get()->stop(); } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-9849","06/17/2019 17:44:15",5,"Add support for per-role REVIVE / SUPPRESS to V0 scheduler driver. ""Unfortunately, there are still schedulers that are using the v0 bindings and are unable to move to v1 before wanting to use the per-role REVIVE / SUPPRESS calls. We'll need to add per-role REVIVE / SUPPRESS into the v1 scheduler driver.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9852","06/19/2019 17:47:15",3,"Slow memory growth in master due to deferred deletion of offer filters and timers. ""The allocator does not keep a handle to the offer filter timer, which means it cannot remove the timer overhead (in this case memory) when removing the offer filter earlier (e.g. due to revive): https://github.com/apache/mesos/blob/1.8.0/src/master/allocator/mesos/hierarchical.cpp#L1338-L1352 In addition, the offer filter is allocated on the heap but not deleted until the timer fires (which might take forever!): https://github.com/apache/mesos/blob/1.8.0/src/master/allocator/mesos/hierarchical.cpp#L1321 https://github.com/apache/mesos/blob/1.8.0/src/master/allocator/mesos/hierarchical.cpp#L1408-L1413 https://github.com/apache/mesos/blob/1.8.0/src/master/allocator/mesos/hierarchical.cpp#L2249 We'll need to try to backport this to all active release branches.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9856","06/21/2019 17:49:44",3,"REVIVE call with specified role(s) clears filters for all roles of a framework. ""As pointed out by [~asekretenko], the REVIVE implementation in the allocator incorrectly clears decline filters for all of the framework's roles, rather than only those that were specified in the REVIVE call: https://github.com/apache/mesos/blob/1.8.0/src/master/allocator/mesos/hierarchical.cpp#L1392 This should only clear filters for the roles specified in the REVIVE call.""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9860","06/24/2019 17:07:39",2,"Agent should erase DrainInfo when draining complete ""When the agent is in the DRAINING state and it sees that all terminal acknowledgements for completed operations and tasks have been received, it should clear the checkpointed {{DrainInfo}} from disk and from memory so that it no longer believes it is DRAINING. It will then be ready to receive new tasks/operations if it is reactivated.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9861","06/24/2019 23:50:38",1,"Make PushGauges support floating point stats. ""Currently, PushGauges are modeled against counters. Thus it does not support floating point stats. This prevents many existing PullGauges to use it. We need to add support for floating point stat.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1 +"MESOS-9871","06/28/2019 22:24:59",3,"Expose quota consumption in /roles endpoint. 
""As part of exposing quota consumption to users and displaying quota consumption in the ui, we will need to add it to the /roles endpoint (which is currently what the ui uses for the roles table).""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9874","07/01/2019 22:16:35",3,"Add environment variable `MESOS_ALLOCATION_ROLE` to the task/container. ""Set this env var as the role from the task resource. Here is an example: https://github.com/apache/mesos/blob/master/src/master/readonly_handler.cpp#L197 We probably want to set this env from executors, by adding this env to CommandInfo. Mesos and docker containerizers should be supported.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9875","07/02/2019 00:29:15",8,"Mesos did not respond correctly when operations should fail ""For testing persistent volumes with {{OPERATION_FAILED/ERROR}} feedbacks, we sshed into the mesos-agent and made it unable to create subdirectories in {{/srv/mesos/work/volumes}}, however, mesos did not respond any operation failed response. Instead, we received {{OPERATION_FINISHED}} feedback. Steps to recreate the issue: 1. Ssh into a magent. 2. Make it impossible to create a persistent volume (we expect the agent to crash and reregister, and the master to release that the operation is {{OPERATION_DROPPED}}): * cd /srv/mesos/work (if it doesn't exist mkdir /srv/mesos/work/volumes) * chattr -RV +i volumes (then no subdirectories can be created) 3. Launch a service with persistent volumes with the constraint of only using the magent modified above.     Logs for the scheduler for receiving `OPERATION_FINISHED`: (Also see screenshot)   2019-06-27 21:57:11.879 [12768651|rdar://12768651] [Jarvis-mesos-dispatcher-105] INFO c.a.j.s.ServicePodInstance - Stored operation=4g3k02s1gjb0q_5f912b59-a32d-462c-9c46-8401eba4d2c1 and feedback=OPERATION_FINISHED in podInstanceID=4g3k02s1gjb0q on serviceID=yifan-badagents-1   * 2019-06-27 21:55:23: task reached state TASK_FAILED for mesos reason: REASON_CONTAINER_LAUNCH_FAILED with mesos message: Failed to launch container: Failed to change the ownership of the persistent volume at '/srv/mesos/work/volumes/roles/test-2/19b564e8-3a90-4f2f-981d-b3dd2a5d9f90' with uid 264 and gid 264: No such file or directory""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9882","07/04/2019 01:13:18",1,"Mesos.UpdateFrameworkV0Test.SuppressedRoles is flaky. ""Observed in CI, log attached. """," mesos-ec2-ubuntu-14.04-SSL.Mesos.UpdateFrameworkV0Test.SuppressedRoles (from UpdateFrameworkV0Test) Error Message ../../src/tests/master/update_framework_tests.cpp:1117 Mock function called more times than expected - returning directly. Function call: agentAdded(@0x7fb254001c40 32-byte object <90-7A 6C-85 B2-7F 00-00 00-00 00-00 00-00 00-00 01-00 00-00 00-00 00-00 F0-85 00-54 B2-7F 00-00>) Expected: to be called once Actual: called twice - over-saturated and active Stacktrace ../../src/tests/master/update_framework_tests.cpp:1117 Mock function called more times than expected - returning directly. 
Function call: agentAdded(@0x7fb254001c40 32-byte object <90-7A 6C-85 B2-7F 00-00 00-00 00-00 00-00 00-00 01-00 00-00 00-00 00-00 F0-85 00-54 B2-7F 00-00>) Expected: to be called once Actual: called twice - over-saturated and active ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9887","07/10/2019 18:46:26",8,"Race condition between two terminal task status updates for Docker/Command executor. ""h2. Overview Expected behavior: Task successfully finishes and sends TASK_FINISHED status update. Observed behavior: Task successfully finishes, but the agent sends TASK_FAILED with the reason """"REASON_EXECUTOR_TERMINATED"""". In normal circumstances, Docker executor [sends|https://github.com/apache/mesos/blob/0026ea46dc35cbba1f442b8e425c6cbaf81ee8f8/src/docker/executor.cpp#L758] final status update TASK_FINISHED to the agent, which then [gets processed|https://github.com/apache/mesos/blob/0026ea46dc35cbba1f442b8e425c6cbaf81ee8f8/src/slave/slave.cpp#L5543] by the agent before termination of the executor's process. However, if the processing of the initial TASK_FINISHED gets delayed, then there is a chance that Docker executor terminates and the agent [triggers|https://github.com/apache/mesos/blob/0026ea46dc35cbba1f442b8e425c6cbaf81ee8f8/src/slave/slave.cpp#L6662] TASK_FAILED which will [be handled|https://github.com/apache/mesos/blob/0026ea46dc35cbba1f442b8e425c6cbaf81ee8f8/src/slave/slave.cpp#L5816-L5826] prior to the TASK_FINISHED status update. See attached logs which contain an example of the race condition. h2. Reproducing bug 1. Add the following code: to the [`ComposingContainerizerProcess::status`|https://github.com/apache/mesos/blob/0026ea46dc35cbba1f442b8e425c6cbaf81ee8f8/src/slave/containerizer/composing.cpp#L578] and to the [`DockerContainerizerProcess::status`|https://github.com/apache/mesos/blob/0026ea46dc35cbba1f442b8e425c6cbaf81ee8f8/src/slave/containerizer/docker.cpp#L2167]. 2. Recompile mesos 3. Launch mesos master and agent locally 4. Launch a simple Docker task via `mesos-execute`: h2. Race condition - description 1. Mesos agent receives TASK_FINISHED status update and then subscribes on [`containerizer->status()`|https://github.com/apache/mesos/blob/0026ea46dc35cbba1f442b8e425c6cbaf81ee8f8/src/slave/slave.cpp#L5754-L5761]. 2. `containerizer->status()` operation for TASK_FINISHED status update gets delayed in the composing containerizer (e.g. due to switch of the worker thread that executes `status` method). 3. Docker executor terminates and the agent [triggers|https://github.com/apache/mesos/blob/0026ea46dc35cbba1f442b8e425c6cbaf81ee8f8/src/slave/slave.cpp#L6662] TASK_FAILED. 4. Docker containerizer destroys the container. A registered callback for the `containerizer->wait` call in the composing containerizer dispatches [lambda function|https://github.com/apache/mesos/blob/0026ea46dc35cbba1f442b8e425c6cbaf81ee8f8/src/slave/containerizer/composing.cpp#L368-L373] that will clean up `containers_` map. 5. Composing c'zer resumes and dispatches `[status()|https://github.com/apache/mesos/blob/0026ea46dc35cbba1f442b8e425c6cbaf81ee8f8/src/slave/containerizer/composing.cpp#L579]` method to the Docker containerizer for TASK_FINISHED, which in turn hangs for a few seconds. 6. Corresponding `containerId` gets removed from the `containers_` map of the composing c'zer. 7. 
Mesos agent subscribes on [`containerizer->status()`|https://github.com/apache/mesos/blob/0026ea46dc35cbba1f442b8e425c6cbaf81ee8f8/src/slave/slave.cpp#L5754-L5761] for the TASK_FAILED status update. 8. Composing c'zer returns [""""Container not found""""|https://github.com/apache/mesos/blob/0026ea46dc35cbba1f442b8e425c6cbaf81ee8f8/src/slave/containerizer/composing.cpp#L576] for TASK_FAILED. 9. `[Slave::_statusUpdate|https://github.com/apache/mesos/blob/0026ea46dc35cbba1f442b8e425c6cbaf81ee8f8/src/slave/slave.cpp#L5826]` stores TASK_FAILED terminal status update in the executor's data structure. 10. Docker containerizer resumes and finishes processing of `status()` method for TASK_FINISHED. Finally, it returns control to the `Slave::_statusUpdate` continuation. This method [discovers|https://github.com/apache/mesos/blob/0026ea46dc35cbba1f442b8e425c6cbaf81ee8f8/src/slave/slave.cpp#L5808-L5814] that the executor has already been destroyed."""," static int c = 0; if (++c == 3) { // to skip TASK_STARTING and TASK_RUNNING status updates. ::sleep(2); } # cd build ./src/mesos-execute --master=""""`hostname`:5050"""" --name=""""a"""" --containerizer=docker --docker_image=alpine --resources=""""cpus:1;mem:32"""" --command=""""ls"""" ",0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9893","07/16/2019 09:05:38",3,"`volume/secret` isolator should cleanup the stored secret from runtime directory when the container is destroyed ""`volume/secret` isolator writes secret into a file (its filename is a UUID) under `/run/mesos/.secret` when launching container, but it does not clean up that file when the container is destroyed. Over time, the `/run/mesos/.secret` directory may take up all disk space on the partition.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9894","07/16/2019 09:29:25",1,"Mesos failed to build due to fatal error C1083 on Windows using MSVC. ""Mesos failed to build due to fatal error C1083: Cannot open include file: 'slave/volume_gid_manager/state.pb.h': No such file or directory on Windows using MSVC. It can be first reproduced on 6a026e3 reversion on master branch. Could you please take a look at this isssue? Thanks a lot! Reproduce steps: 1. git clone -c core.autocrlf=true https://github.com/apache/mesos D:\mesos\src 2. Open a VS 2017 x64 command prompt as admin and browse to D:\mesos 3. cd src 4. .\bootstrap.bat 5. cd .. 6. mkdir build_x64 && pushd build_x64 7. cmake ..\src -G """"Visual Studio 15 2017 Win64"""" -DCMAKE_SYSTEM_VERSION=10.0.17134.0 -DENABLE_LIBEVENT=1 -DHAS_AUTHENTICATION=0 -DPATCHEXE_PATH=""""C:\gnuwin32\bin"""" -T host=x64 8. msbuild Mesos.sln /p:Configuration=Debug /p:Platform=x64 /maxcpucount:4 /t:Rebuild   ErrorMessage: D:\Mesos\src\include\mesos/docker/spec.hpp(29): fatal error C1083: Cannot open include file: 'mesos/docker/spec.pb.h': No such file or directory D:\Mesos\src\src\slave/volume_gid_manager/state.hpp(21): fatal error C1083: Cannot open include file: 'slave/volume_gid_manager/state.pb.h': No such file or directory D:\Mesos\src\src\slave/volume_gid_manager/state.hpp(21): fatal error C1083: Cannot open include file: 'slave/volume_gid_manager/state.pb.h': No such file or directory    ""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9901","07/23/2019 02:57:41",3,"jsonify uses non-standard mapping for protobuf map fields. ""Jsonify current treats protobuf as a regular repeated field. 
For example, for the schema it will produce: This output cannot be parsed back to proto messages. We need to specialize jsonify for Maps type to get the standard output: """," message QuotaConfig { required string role = 1; map guarantees = 2; map limits = 3; } { """"configs"""": [ { """"role"""": """"role1"""", """"guarantees"""": [ { """"key"""": """"cpus"""", """"value"""": { """"value"""": 1 } }, { """"key"""": """"mem"""", """"value"""": { """"value"""": 512 } } ] } ] } { """"configs"""": [ { """"role"""": """"role1"""", """"guarantees"""": { """"cpus"""": 1, """"mem"""": 512 } } ] } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9906","07/24/2019 10:53:10",1,"Libprocess tests hangs on arm "" https://builds.apache.org/job/Mesos-Buildbot-ARM/BUILDTOOL=cmake,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-java%20--disable-python%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1%20%20MESOS_TEST_AWAIT_TIMEOUT=60secs%20JOBS=16%20GTEST_FILTER=-DiskQuotaTest.SlaveRecovery,label_exp=arm/lastBuild/console"""," 1: [  PASSED  ] 361 tests. 1/3 Test #1: StoutTests .......................   Passed   11.44 sec test 2    Start 2: ProcessTests 2: Test command: /tmp/SRC/build/3rdparty/libprocess/src/tests/libprocess-tests 2: Test timeout computed to be: 9.99988e+06 Build timed out (after 300 minutes). Marking the build as failed. Build was aborted Recording test results ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9907","07/25/2019 17:01:56",1,"Retain agent draining start time in master ""The master should store in memory the last time that a {{DrainSlaveMessage}} was sent to the agent so that this time can be displayed in the web UI. This would help operators determine the expected time at which the agent should transition to DRAINED. We should update the webui to use that time as a starting point and the {{DrainConfig}}'s {{max_grace_period}} to calculate the expected maximum time until the agent is drained.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9908","07/26/2019 00:30:11",5,"Introduce a new agent flag and support docker volume chown to task user. ""Currently, docker volume is always mounted as root, which is not accessible by non-root task users. For security concerns, there are use cases that operator may only allow non-root users to run as container user and docker volume needs to be supported for those non-root users. A new agent flag is needed to make this support configurable, because chown-ing a docker volume may be limited to some use case - e.g., multiple non-root users on different hosts sharing the same docker volume simultaneously. Operators are expected to turn on this flag if their cluster's docker volume is not shared by multiple non-root users.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9909","07/28/2019 10:33:36",2,"Mesos agent crashes after recovery when there is nested container joins a CNI network ""Reproduce steps: 1. Use `mesos-execute` to launch a task group with checkpoint enabled. The task in the task group joins a CNI network `net1` and has health check enabled, and the health check will succeed for the first time, fail for the second time, and succeed for the third time, ... 
The reason that we do health check in this way is that we want to keep generating status updates for this task after recovery.  2. Restart Mesos agent, and then we will see Mesos agent crashes when it handles `TASK_RUNNING` status update triggered by the health check.  """," $ mesos-execute --master=:5050 --task_group=file:///tmp/task_group.json --checkpoint $ cat /tmp/task_group.json { """"tasks"""":[ { """"name"""" : """"test"""", """"task_id"""" : {""""value"""" : """"test""""}, """"agent_id"""": {""""value"""" : """"""""}, """"resources"""": [ {""""name"""": """"cpus"""", """"type"""": """"SCALAR"""", """"scalar"""": {""""value"""": 0.1}}, {""""name"""": """"mem"""", """"type"""": """"SCALAR"""", """"scalar"""": {""""value"""": 32}} ], """"command"""": { """"value"""": """"ip a && sleep 55555"""" }, """"container"""": { """"type"""": """"MESOS"""", """"network_infos"""": [ { """"name"""": """"net1"""" } ] }, """"health_check"""": { """"type"""": """"COMMAND"""", """"command"""": { """"value"""": """"if test -f file; then rm -rf file && exit 1; else touch file && exit 0; fi"""" } } } ] } I0728 16:44:34.485939 3513 slave.cpp:5702] Handling status update TASK_RUNNING (Status UUID: 81fa5c56-4d79-4da4-846a-05e94591728b) for task test in health state healthy of framework 990a6379-5727-4490-9abe-7869ff8a1cf2-0000 F0728 16:44:34.528841 3510 cni.cpp:1462] CHECK_SOME(containerNetwork.networkInfo): is NONE *** Check failure stack trace: *** @ 0x7ffff5000e12 google::LogMessage::Fail() @ 0x7ffff5000d5b google::LogMessage::SendToLog() @ 0x7ffff50006e7 google::LogMessage::Flush() @ 0x7ffff5003dfe google::LogMessageFatal::~LogMessageFatal() @ 0x5555555f90b0 _CheckFatal::~_CheckFatal() @ 0x7ffff372f994 mesos::internal::slave::NetworkCniIsolatorProcess::status() @ 0x7ffff2e16a90 _ZZN7process8dispatchIN5mesos15ContainerStatusENS1_8internal5slave20MesosIsolatorProcessERKNS1_11ContainerIDES8_EENS_6FutureIT_EERKNS_3PIDIT0_EEMSD_FSB_T1_EOT2_ENKUlSt10unique_ptrINS_7PromiseIS2_EESt14default_deleteISO_EEOS6_PNS_11ProcessBaseEE_clESR_SS_SU_ @ 0x7ffff2e20d57 _ZN5cpp176invokeIZN7process8dispatchIN5mesos15ContainerStatusENS3_8internal5slave20MesosIsolatorProcessERKNS3_11ContainerIDESA_EENS1_6FutureIT_EERKNS1_3PIDIT0_EEMSF_FSD_T1_EOT2_EUlSt10unique_ptrINS1_7PromiseIS4_EESt14default_deleteISQ_EEOS8_PNS1_11ProcessBaseEE_JST_S8_SW_EEEDTclcl7forwardISC_Efp_Espcl7forwardIT0_Efp0_EEEOSC_DpOSY_ @ 0x7ffff2e1ff2f _ZN6lambda8internal7PartialIZN7process8dispatchIN5mesos15ContainerStatusENS4_8internal5slave20MesosIsolatorProcessERKNS4_11ContainerIDESB_EENS2_6FutureIT_EERKNS2_3PIDIT0_EEMSG_FSE_T1_EOT2_EUlSt10unique_ptrINS2_7PromiseIS5_EESt14default_deleteISR_EEOS9_PNS2_11ProcessBaseEE_JSU_S9_St12_PlaceholderILi1EEEE13invoke_expandISY_St5tupleIJSU_S9_S10_EES13_IJOSX_EEJLm0ELm1ELm2EEEEDTcl6invokecl7forwardISD_Efp_Espcl6expandcl3getIXT2_EEcl7forwardISG_Efp0_EEcl7forwardISK_Efp2_EEEEOSD_OSG_N5cpp1416integer_sequenceImJXspT2_EEEEOSK_ @ 0x7ffff2e1f75e _ZNO6lambda8internal7PartialIZN7process8dispatchIN5mesos15ContainerStatusENS4_8internal5slave20MesosIsolatorProcessERKNS4_11ContainerIDESB_EENS2_6FutureIT_EERKNS2_3PIDIT0_EEMSG_FSE_T1_EOT2_EUlSt10unique_ptrINS2_7PromiseIS5_EESt14default_deleteISR_EEOS9_PNS2_11ProcessBaseEE_JSU_S9_St12_PlaceholderILi1EEEEclIJSX_EEEDTcl13invoke_expandcl4movedtdefpT1fEcl4movedtdefpT10bound_argsEcvN5cpp1416integer_sequenceImJLm0ELm1ELm2EEEE_Ecl16forward_as_tuplespcl7forwardIT_Efp_EEEEDpOS16_ @ 0x7ffff2e1f20e 
_ZN5cpp176invokeIN6lambda8internal7PartialIZN7process8dispatchIN5mesos15ContainerStatusENS6_8internal5slave20MesosIsolatorProcessERKNS6_11ContainerIDESD_EENS4_6FutureIT_EERKNS4_3PIDIT0_EEMSI_FSG_T1_EOT2_EUlSt10unique_ptrINS4_7PromiseIS7_EESt14default_deleteIST_EEOSB_PNS4_11ProcessBaseEE_JSW_SB_St12_PlaceholderILi1EEEEEJSZ_EEEDTclcl7forwardISF_Efp_Espcl7forwardIT0_Efp0_EEEOSF_DpOS14_ @ 0x7ffff2e1ef11 _ZN6lambda8internal6InvokeIvEclINS0_7PartialIZN7process8dispatchIN5mesos15ContainerStatusENS7_8internal5slave20MesosIsolatorProcessERKNS7_11ContainerIDESE_EENS5_6FutureIT_EERKNS5_3PIDIT0_EEMSJ_FSH_T1_EOT2_EUlSt10unique_ptrINS5_7PromiseIS8_EESt14default_deleteISU_EEOSC_PNS5_11ProcessBaseEE_JSX_SC_St12_PlaceholderILi1EEEEEJS10_EEEvOSG_DpOT0_ @ 0x7ffff2e1ead6 _ZNO6lambda12CallableOnceIFvPN7process11ProcessBaseEEE10CallableFnINS_8internal7PartialIZNS1_8dispatchIN5mesos15ContainerStatusENSA_8internal5slave20MesosIsolatorProcessERKNSA_11ContainerIDESH_EENS1_6FutureIT_EERKNS1_3PIDIT0_EEMSM_FSK_T1_EOT2_EUlSt10unique_ptrINS1_7PromiseISB_EESt14default_deleteISX_EEOSF_S3_E_JS10_SF_St12_PlaceholderILi1EEEEEEclEOS3_ @ 0x7ffff4f0ad6b _ZNO6lambda12CallableOnceIFvPN7process11ProcessBaseEEEclES3_ @ 0x7ffff4ecdb4a process::ProcessBase::consume() @ 0x7ffff4ef79d0 _ZNO7process13DispatchEvent7consumeEPNS_13EventConsumerE @ 0x5555555f9c1e process::ProcessBase::serve() @ 0x7ffff4eca4e8 process::ProcessManager::resume() @ 0x7ffff4ec695e _ZZN7process14ProcessManager12init_threadsEvENKUlvE_clEv @ 0x7ffff4ed5c7f _ZSt13__invoke_implIvZN7process14ProcessManager12init_threadsEvEUlvE_JEET_St14__invoke_otherOT0_DpOT1_ @ 0x7ffff4ed28ae _ZSt8__invokeIZN7process14ProcessManager12init_threadsEvEUlvE_JEENSt15__invoke_resultIT_JDpT0_EE4typeEOS4_DpOS5_ @ 0x7ffff4ef0e2c _ZNSt6thread8_InvokerISt5tupleIJZN7process14ProcessManager12init_threadsEvEUlvE_EEE9_M_invokeIJLm0EEEEDTcl8__invokespcl10_S_declvalIXT_EEEEESt12_Index_tupleIJXspT_EEE @ 0x7ffff4eefea0 _ZNSt6thread8_InvokerISt5tupleIJZN7process14ProcessManager12init_threadsEvEUlvE_EEEclEv @ 0x7ffff4eeece0 _ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZN7process14ProcessManager12init_threadsEvEUlvE_EEEEE6_M_runEv @ 0x7fffe6eb957f (unknown) @ 0x7fffe69cc6db start_thread @ 0x7fffe66f588f clone ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0 +"MESOS-9917","07/31/2019 02:36:44",8,"Store a role tree in the allocator. ""Currently, the client (role and framework) tree for the allocator is stored in the sorter abstraction. This is not ideal. The role/framework tree is generic information that is needed regardless of the sorter used. The current sorter interface and its associated states are tech debts that contribute to performance slowdown and code convolution. We should store a role/framework tree in the allocator.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9919","07/31/2019 18:04:45",5,"Health check performance decreases on large machines ""In recent testing, it appears that the performance of Mesos command health checks decreases dramatically on nodes with large numbers of cores and lots of memory. This may be due to the changes in the cost of forking the agent process on such nodes. 
We need to investigate this issue to understand the root cause.""","",0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9922","08/02/2019 14:04:58",1,"MasterQuotaTest.RescindOffersEnforcingLimits is flaky ""Showed up on ASF CI: https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Buildbot/6657/BUILDTOOL=cmake,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--enable-parallel-test-execution=no,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1%20MESOS_TEST_AWAIT_TIMEOUT=60secs,OS=centos:7,label_exp=(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)&&(!ubuntu-4)&&(!H21)&&(!H23)&&(!H26)&&(!H27)/consoleFull If I understand correctly, the offer with resources on a second slave went to the fiirst framework, because the allocator wasn't aware about the second framework when the second slave was added. """," 3: [ RUN ] MasterQuotaTest.RescindOffersEnforcingLimits 3: I0802 09:57:05.017333 15861 cluster.cpp:177] Creating default 'local' authorizer 3: I0802 09:57:05.029503 15877 master.cpp:440] Master 9dd926f8-c8be-42ad-a1c7-ef0d88a99199 (148706d6d9ee) started on 172.17.0.2:41613 3: I0802 09:57:05.029911 15877 master.cpp:443] Flags at startup: --acls="""""""" --agent_ping_timeout=""""15secs"""" --agent_reregister_timeout=""""10mins"""" --allocation_interval=""""50ms"""" --allocator=""""hierarchical"""" --authenticate_agents=""""true"""" --authenticate_frameworks=""""true"""" --authenticate_http_frameworks=""""true"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""true"""" --authentication_v0_timeout=""""15secs"""" --authenticators=""""crammd5"""" --authorizers=""""local"""" --credentials=""""/tmp/450qM2/credentials"""" --filter_gpu_resources=""""true"""" --framework_sorter=""""drf"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_authenticators=""""basic"""" --http_framework_authenticators=""""basic"""" --initialize_driver_logging=""""true"""" --log_auto_initialize=""""true"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_agent_ping_timeouts=""""5"""" --max_completed_frameworks=""""50"""" --max_completed_tasks_per_framework=""""1000"""" --max_operator_event_stream_subscribers=""""1000"""" --max_unreachable_tasks_per_framework=""""1000"""" --memory_profiling=""""false"""" --min_allocatable_resources=""""cpus:0.01|mem:32"""" --port=""""5050"""" --publish_per_framework_metrics=""""true"""" --quiet=""""false"""" --recovery_agent_removal_limit=""""100%"""" --registry=""""in_memory"""" --registry_fetch_timeout=""""1mins"""" --registry_gc_interval=""""15mins"""" --registry_max_agent_age=""""2weeks"""" --registry_max_agent_count=""""102400"""" --registry_store_timeout=""""100secs"""" --registry_strict=""""false"""" --require_agent_domain=""""false"""" --role_sorter=""""drf"""" --root_submissions=""""true"""" --version=""""false"""" --webui_dir=""""/usr/local/share/mesos/webui"""" --work_dir=""""/tmp/450qM2/master"""" --zk_session_timeout=""""10secs"""" 3: I0802 09:57:05.030527 15877 master.cpp:492] Master only allowing authenticated frameworks to register 3: I0802 09:57:05.030567 15877 master.cpp:498] Master only allowing authenticated agents to register 3: I0802 09:57:05.030601 15877 master.cpp:504] Master only allowing authenticated HTTP frameworks to register 3: I0802 09:57:05.030634 15877 credentials.hpp:37] Loading credentials for authentication from '/tmp/450qM2/credentials' 3: I0802 09:57:05.031009 15877 master.cpp:548] Using default 'crammd5' authenticator 3: I0802 09:57:05.031306 15877 
http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readonly' 3: I0802 09:57:05.031747 15877 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-master-readwrite' 3: I0802 09:57:05.032049 15877 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-master-scheduler' 3: I0802 09:57:05.032627 15877 master.cpp:629] Authorization enabled 3: I0802 09:57:05.033092 15880 hierarchical.cpp:241] Initialized hierarchical allocator process 3: I0802 09:57:05.033552 15892 whitelist_watcher.cpp:77] No whitelist given 3: I0802 09:57:05.052621 15877 master.cpp:2168] Elected as the leading master! 3: I0802 09:57:05.052700 15877 master.cpp:1664] Recovering from registrar 3: I0802 09:57:05.052943 15873 registrar.cpp:339] Recovering registrar 3: I0802 09:57:05.054116 15873 registrar.cpp:383] Successfully fetched the registry (0B) in 1.131008ms 3: I0802 09:57:05.054304 15873 registrar.cpp:487] Applied 1 operations in 73189ns; attempting to update the registry 3: I0802 09:57:05.055423 15873 registrar.cpp:544] Successfully updated the registry in 1.02912ms 3: I0802 09:57:05.055572 15873 registrar.cpp:416] Successfully recovered registrar 3: I0802 09:57:05.057384 15876 hierarchical.cpp:280] Skipping recovery of hierarchical allocator: nothing to recover 3: I0802 09:57:05.057569 15877 master.cpp:1817] Recovered 0 agents from the registry (143B); allowing 10mins for agents to reregister 3: W0802 09:57:05.074198 15861 process.cpp:2877] Attempted to spawn already running process files@172.17.0.2:41613 3: I0802 09:57:05.086482 15880 hierarchical.cpp:1508] Performed allocation for 0 agents in 119003ns 3: I0802 09:57:05.089071 15861 containerizer.cpp:318] Using isolation { environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni } 3: W0802 09:57:05.090075 15861 backend.cpp:76] Failed to create 'overlay' backend: OverlayBackend requires root privileges 3: W0802 09:57:05.090544 15861 backend.cpp:76] Failed to create 'aufs' backend: AufsBackend requires root privileges 3: W0802 09:57:05.090595 15861 backend.cpp:76] Failed to create 'bind' backend: BindBackend requires root privileges 3: I0802 09:57:05.090658 15861 provisioner.cpp:300] Using default backend 'copy' 3: I0802 09:57:05.093798 15861 cluster.cpp:518] Creating default 'local' authorizer 3: I0802 09:57:05.099562 15882 slave.cpp:267] Mesos agent started on (17)@172.17.0.2:41613 3: I0802 09:57:05.099623 15882 slave.cpp:268] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/450qM2/jqdwI0/store/appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""false"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authentication_timeout_max=""""1mins"""" --authentication_timeout_min=""""5secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_destroy_timeout=""""1mins"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/450qM2/jqdwI0/credential"""" --default_role=""""*"""" --disallow_sharing_agent_ipc_namespace=""""false"""" --disallow_sharing_agent_pid_namespace=""""false"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_ignore_runtime=""""false"""" --docker_kill_orphans=""""true"""" 
--docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/450qM2/jqdwI0/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_reregistration_timeout=""""2secs"""" --executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/450qM2/jqdwI0/fetch"""" --fetcher_cache_size=""""2GB"""" --fetcher_stall_timeout=""""1mins"""" --frameworks_home=""""/tmp/450qM2/jqdwI0/frameworks"""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --gc_non_executor_container_sandboxes=""""false"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_command_executor=""""false"""" --http_credentials=""""/tmp/450qM2/jqdwI0/http_credentials"""" --http_heartbeat_interval=""""30secs"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/tmp/SRC/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --memory_profiling=""""false"""" --network_cni_metrics=""""true"""" --network_cni_root_dir_persist=""""false"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --port=""""5051"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --reconfiguration_policy=""""equal"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:1;mem:1024"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/tmp/MasterQuotaTest_RescindOffersEnforcingLimits_Nys8o1"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/MasterQuotaTest_RescindOffersEnforcingLimits_XCzL51"""" --zk_session_timeout=""""10secs"""" 3: I0802 09:57:05.100447 15882 credentials.hpp:86] Loading credential for authentication from '/tmp/450qM2/jqdwI0/credential' 3: I0802 09:57:05.100728 15882 slave.cpp:300] Agent using credential for: test-principal 3: I0802 09:57:05.100751 15882 credentials.hpp:37] Loading credentials for authentication from '/tmp/450qM2/jqdwI0/http_credentials' 3: I0802 09:57:05.101040 15882 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' 3: I0802 09:57:05.101528 15882 disk_profile_adaptor.cpp:78] Creating default disk profile adaptor module 3: I0802 09:57:05.102972 15882 slave.cpp:615] Agent resources: [{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":1.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""scalar"""":{""""value"""":3749337.0},""""type"""":""""SCALAR""""},{""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] 3: I0802 09:57:05.103250 15882 slave.cpp:623] Agent attributes: [ ] 3: I0802 09:57:05.103266 15882 slave.cpp:632] Agent hostname: 148706d6d9ee 3: I0802 09:57:05.104347 15886 task_status_update_manager.cpp:181] Pausing sending task status updates 3: I0802 09:57:05.104373 15878 status_update_manager_process.hpp:379] Pausing operation status 
update manager 3: I0802 09:57:05.107336 15877 state.cpp:67] Recovering state from '/tmp/MasterQuotaTest_RescindOffersEnforcingLimits_XCzL51/meta' 3: I0802 09:57:05.120726 15882 slave.cpp:7444] Finished recovering checkpointed state from '/tmp/MasterQuotaTest_RescindOffersEnforcingLimits_XCzL51/meta', beginning agent recovery 3: I0802 09:57:05.121412 15892 task_status_update_manager.cpp:207] Recovering task status update manager 3: I0802 09:57:05.122179 15882 containerizer.cpp:821] Recovering Mesos containers 3: I0802 09:57:05.122614 15882 containerizer.cpp:1147] Recovering isolators 3: I0802 09:57:05.126801 15882 containerizer.cpp:1186] Recovering provisioner 3: I0802 09:57:05.127704 15876 provisioner.cpp:500] Provisioner recovery complete 3: I0802 09:57:05.140862 15882 hierarchical.cpp:1508] Performed allocation for 0 agents in 107533ns 3: I0802 09:57:05.141611 15878 composing.cpp:339] Finished recovering all containerizers 3: I0802 09:57:05.142130 15878 slave.cpp:7908] Recovering executors 3: I0802 09:57:05.142243 15878 slave.cpp:8061] Finished recovery 3: I0802 09:57:05.143589 15888 task_status_update_manager.cpp:181] Pausing sending task status updates 3: I0802 09:57:05.143630 15886 status_update_manager_process.hpp:379] Pausing operation status update manager 3: I0802 09:57:05.143709 15892 slave.cpp:1351] New master detected at master@172.17.0.2:41613 3: I0802 09:57:05.143868 15892 slave.cpp:1416] Detecting new master 3: W0802 09:57:05.144331 15861 process.cpp:2877] Attempted to spawn already running process version@172.17.0.2:41613 3: I0802 09:57:05.146203 15861 sched.cpp:239] Version: 1.9.0 3: I0802 09:57:05.147024 15887 sched.cpp:343] New master detected at master@172.17.0.2:41613 3: I0802 09:57:05.147194 15887 sched.cpp:408] Authenticating with master master@172.17.0.2:41613 3: I0802 09:57:05.147215 15887 sched.cpp:415] Using default CRAM-MD5 authenticatee 3: I0802 09:57:05.147907 15890 authenticatee.cpp:121] Creating new client SASL connection 3: I0802 09:57:05.148314 15874 master.cpp:10578] Authenticating scheduler-e08ce575-9286-4e6e-9572-c83259dad792@172.17.0.2:41613 3: I0802 09:57:05.148574 15874 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(41)@172.17.0.2:41613 3: I0802 09:57:05.149073 15874 authenticator.cpp:98] Creating new server SASL connection 3: I0802 09:57:05.149623 15874 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 3: I0802 09:57:05.149660 15874 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' 3: I0802 09:57:05.149797 15874 authenticator.cpp:204] Received SASL authentication start 3: I0802 09:57:05.149865 15874 authenticator.cpp:326] Authentication requires more steps 3: I0802 09:57:05.150003 15874 authenticatee.cpp:259] Received SASL authentication step 3: I0802 09:57:05.150120 15874 authenticator.cpp:232] Received SASL authentication step 3: I0802 09:57:05.150152 15874 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '148706d6d9ee' server FQDN: '148706d6d9ee' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 3: I0802 09:57:05.150167 15874 auxprop.cpp:181] Looking up auxiliary property '*userPassword' 3: I0802 09:57:05.150226 15874 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' 3: I0802 09:57:05.150254 15874 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '148706d6d9ee' server FQDN: '148706d6d9ee' SASL_AUXPROP_VERIFY_AGAINST_HASH: false 
SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 3: I0802 09:57:05.150266 15874 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true 3: I0802 09:57:05.150275 15874 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true 3: I0802 09:57:05.150293 15874 authenticator.cpp:318] Authentication success 3: I0802 09:57:05.150494 15875 authenticatee.cpp:299] Authentication success 3: I0802 09:57:05.150573 15874 master.cpp:10610] Successfully authenticated principal 'test-principal' at scheduler-e08ce575-9286-4e6e-9572-c83259dad792@172.17.0.2:41613 3: I0802 09:57:05.150702 15874 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(41)@172.17.0.2:41613 3: I0802 09:57:05.150995 15875 sched.cpp:520] Successfully authenticated with master master@172.17.0.2:41613 3: I0802 09:57:05.151018 15875 sched.cpp:835] Sending SUBSCRIBE call to master@172.17.0.2:41613 3: I0802 09:57:05.151211 15875 sched.cpp:870] Will retry registration in 24.561819ms if necessary 3: I0802 09:57:05.151538 15875 master.cpp:2908] Received SUBSCRIBE call for framework 'default' at scheduler-e08ce575-9286-4e6e-9572-c83259dad792@172.17.0.2:41613 3: I0802 09:57:05.151566 15875 master.cpp:2240] Authorizing framework principal 'test-principal' to receive offers for roles '{ role1 }' 3: I0802 09:57:05.152302 15891 master.cpp:2995] Subscribing framework default with checkpointing disabled and capabilities [ MULTI_ROLE, RESERVATION_REFINEMENT ] 3: I0802 09:57:05.152519 15887 slave.cpp:1443] Authenticating with master master@172.17.0.2:41613 3: I0802 09:57:05.152618 15887 slave.cpp:1452] Using default CRAM-MD5 authenticatee 3: I0802 09:57:05.153172 15887 authenticatee.cpp:121] Creating new client SASL connection 3: I0802 09:57:05.157215 15891 master.cpp:10808] Adding framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0000 (default) at scheduler-e08ce575-9286-4e6e-9572-c83259dad792@172.17.0.2:41613 with roles { } suppressed 3: I0802 09:57:05.160701 15888 hierarchical.cpp:368] Added framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0000 3: I0802 09:57:05.161007 15888 hierarchical.cpp:1508] Performed allocation for 0 agents in 131361ns 3: I0802 09:57:05.161232 15888 sched.cpp:751] Framework registered with 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0000 3: I0802 09:57:05.161291 15888 sched.cpp:770] Scheduler::registered took 43881ns 3: I0802 09:57:05.161705 15891 master.cpp:10578] Authenticating slave(17)@172.17.0.2:41613 3: I0802 09:57:05.161986 15891 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(42)@172.17.0.2:41613 3: I0802 09:57:05.162549 15891 authenticator.cpp:98] Creating new server SASL connection 3: I0802 09:57:05.162926 15891 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 3: I0802 09:57:05.162976 15891 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' 3: I0802 09:57:05.163103 15891 authenticator.cpp:204] Received SASL authentication start 3: I0802 09:57:05.163193 15891 authenticator.cpp:326] Authentication requires more steps 3: I0802 09:57:05.163318 15891 authenticatee.cpp:259] Received SASL authentication step 3: I0802 09:57:05.163466 15881 authenticator.cpp:232] Received SASL authentication step 3: I0802 09:57:05.163518 15881 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '148706d6d9ee' server FQDN: '148706d6d9ee' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: 
false 3: I0802 09:57:05.168320 15881 auxprop.cpp:181] Looking up auxiliary property '*userPassword' 3: I0802 09:57:05.168431 15881 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' 3: I0802 09:57:05.168474 15881 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '148706d6d9ee' server FQDN: '148706d6d9ee' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 3: I0802 09:57:05.168493 15881 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true 3: I0802 09:57:05.168503 15881 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true 3: I0802 09:57:05.168534 15881 authenticator.cpp:318] Authentication success 3: I0802 09:57:05.168964 15881 authenticatee.cpp:299] Authentication success 3: I0802 09:57:05.169320 15881 master.cpp:10610] Successfully authenticated principal 'test-principal' at slave(17)@172.17.0.2:41613 3: I0802 09:57:05.169478 15881 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(42)@172.17.0.2:41613 3: I0802 09:57:05.170004 15881 slave.cpp:1543] Successfully authenticated with master master@172.17.0.2:41613 3: I0802 09:57:05.170763 15881 slave.cpp:1993] Will retry registration in 4.399581ms if necessary 3: I0802 09:57:05.171401 15881 master.cpp:7086] Received register agent message from slave(17)@172.17.0.2:41613 (148706d6d9ee) 3: I0802 09:57:05.171723 15881 master.cpp:4202] Authorizing agent providing resources 'cpus:1; mem:1024; disk:3749337; ports:[31000-32000]' with principal 'test-principal' 3: I0802 09:57:05.177142 15878 slave.cpp:1993] Will retry registration in 37.129856ms if necessary 3: I0802 09:57:05.178035 15878 master.cpp:7079] Ignoring register agent message from slave(17)@172.17.0.2:41613 (148706d6d9ee) as registration is already in progress 3: I0802 09:57:05.178143 15878 master.cpp:7153] Authorized registration of agent at slave(17)@172.17.0.2:41613 (148706d6d9ee) 3: I0802 09:57:05.178246 15878 master.cpp:7265] Registering agent at slave(17)@172.17.0.2:41613 (148706d6d9ee) with id 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S0 3: I0802 09:57:05.179272 15888 registrar.cpp:487] Applied 1 operations in 362994ns; attempting to update the registry 3: I0802 09:57:05.180107 15888 registrar.cpp:544] Successfully updated the registry in 748032ns 3: I0802 09:57:05.184391 15879 master.cpp:7313] Admitted agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S0 at slave(17)@172.17.0.2:41613 (148706d6d9ee) 3: I0802 09:57:05.185465 15879 master.cpp:7358] Registered agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S0 at slave(17)@172.17.0.2:41613 (148706d6d9ee) with cpus:1; mem:1024; disk:3749337; ports:[31000-32000] 3: I0802 09:57:05.186156 15879 hierarchical.cpp:617] Added agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S0 (148706d6d9ee) with cpus:1; mem:1024; disk:3749337; ports:[31000-32000] (allocated: {}) 3: I0802 09:57:05.187729 15879 hierarchical.cpp:1508] Performed allocation for 1 agents in 1.368422ms 3: I0802 09:57:05.187886 15879 slave.cpp:1576] Registered with master master@172.17.0.2:41613; given agent ID 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S0 3: I0802 09:57:05.192725 15876 task_status_update_manager.cpp:188] Resuming sending task status updates 3: I0802 09:57:05.192859 15879 slave.cpp:1611] Checkpointing SlaveInfo to '/tmp/MasterQuotaTest_RescindOffersEnforcingLimits_XCzL51/meta/slaves/9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S0/slave.info' 3: I0802 09:57:05.193363 15871 master.cpp:10393] 
Sending offers [ 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-O0 ] to framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0000 (default) at scheduler-e08ce575-9286-4e6e-9572-c83259dad792@172.17.0.2:41613 3: I0802 09:57:05.193572 15873 status_update_manager_process.hpp:385] Resuming operation status update manager 3: I0802 09:57:05.194280 15873 sched.cpp:934] Scheduler::resourceOffers took 116258ns 3: I0802 09:57:05.194945 15873 hierarchical.cpp:1508] Performed allocation for 1 agents in 375907ns 3: I0802 09:57:05.195372 15879 slave.cpp:1663] Forwarding agent update {""""operations"""":{},""""resource_providers"""":{},""""resource_version_uuid"""":{""""value"""":""""Chjy8C/USQqbsT32vnA1uw==""""},""""slave_id"""":{""""value"""":""""9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S0""""},""""update_oversubscribed_resources"""":false} 3: I0802 09:57:05.196919 15893 master.cpp:8457] Ignoring update on agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S0 at slave(17)@172.17.0.2:41613 (148706d6d9ee) as it reports no changes 3: W0802 09:57:05.205972 15861 process.cpp:2877] Attempted to spawn already running process files@172.17.0.2:41613 3: I0802 09:57:05.207845 15861 containerizer.cpp:318] Using isolation { environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni } 3: W0802 09:57:05.222391 15861 backend.cpp:76] Failed to create 'overlay' backend: OverlayBackend requires root privileges 3: W0802 09:57:05.222438 15861 backend.cpp:76] Failed to create 'aufs' backend: AufsBackend requires root privileges 3: W0802 09:57:05.222460 15861 backend.cpp:76] Failed to create 'bind' backend: BindBackend requires root privileges 3: I0802 09:57:05.222501 15861 provisioner.cpp:300] Using default backend 'copy' 3: I0802 09:57:05.238737 15861 cluster.cpp:518] Creating default 'local' authorizer 3: I0802 09:57:05.249315 15876 hierarchical.cpp:1508] Performed allocation for 1 agents in 408138ns 3: I0802 09:57:05.276511 15872 slave.cpp:267] Mesos agent started on (18)@172.17.0.2:41613 3: I0802 09:57:05.276571 15872 slave.cpp:268] Flags at startup: --acls="""""""" --appc_simple_discovery_uri_prefix=""""http://"""" --appc_store_dir=""""/tmp/450qM2/TDawGY/store/appc"""" --authenticate_http_readonly=""""true"""" --authenticate_http_readwrite=""""false"""" --authenticatee=""""crammd5"""" --authentication_backoff_factor=""""1secs"""" --authentication_timeout_max=""""1mins"""" --authentication_timeout_min=""""5secs"""" --authorizer=""""local"""" --cgroups_cpu_enable_pids_and_tids_count=""""false"""" --cgroups_destroy_timeout=""""1mins"""" --cgroups_enable_cfs=""""false"""" --cgroups_hierarchy=""""/sys/fs/cgroup"""" --cgroups_limit_swap=""""false"""" --cgroups_root=""""mesos"""" --container_disk_watch_interval=""""15secs"""" --containerizers=""""mesos"""" --credential=""""/tmp/450qM2/TDawGY/credential"""" --default_role=""""*"""" --disallow_sharing_agent_ipc_namespace=""""false"""" --disallow_sharing_agent_pid_namespace=""""false"""" --disk_watch_interval=""""1mins"""" --docker=""""docker"""" --docker_ignore_runtime=""""false"""" --docker_kill_orphans=""""true"""" --docker_registry=""""https://registry-1.docker.io"""" --docker_remove_delay=""""6hrs"""" --docker_socket=""""/var/run/docker.sock"""" --docker_stop_timeout=""""0ns"""" --docker_store_dir=""""/tmp/450qM2/TDawGY/store/docker"""" --docker_volume_checkpoint_dir=""""/var/run/mesos/isolators/docker/volume"""" --enforce_container_disk_quota=""""false"""" --executor_registration_timeout=""""1mins"""" --executor_reregistration_timeout=""""2secs"""" 
--executor_shutdown_grace_period=""""5secs"""" --fetcher_cache_dir=""""/tmp/450qM2/TDawGY/fetch"""" --fetcher_cache_size=""""2GB"""" --fetcher_stall_timeout=""""1mins"""" --frameworks_home=""""/tmp/450qM2/TDawGY/frameworks"""" --gc_delay=""""1weeks"""" --gc_disk_headroom=""""0.1"""" --gc_non_executor_container_sandboxes=""""false"""" --help=""""false"""" --hostname_lookup=""""true"""" --http_command_executor=""""false"""" --http_credentials=""""/tmp/450qM2/TDawGY/http_credentials"""" --http_heartbeat_interval=""""30secs"""" --initialize_driver_logging=""""true"""" --isolation=""""posix/cpu,posix/mem"""" --launcher=""""posix"""" --launcher_dir=""""/tmp/SRC/build/src"""" --logbufsecs=""""0"""" --logging_level=""""INFO"""" --max_completed_executors_per_framework=""""150"""" --memory_profiling=""""false"""" --network_cni_metrics=""""true"""" --network_cni_root_dir_persist=""""false"""" --oversubscribed_resources_interval=""""15secs"""" --perf_duration=""""10secs"""" --perf_interval=""""1mins"""" --port=""""5051"""" --qos_correction_interval_min=""""0ns"""" --quiet=""""false"""" --reconfiguration_policy=""""equal"""" --recover=""""reconnect"""" --recovery_timeout=""""15mins"""" --registration_backoff_factor=""""10ms"""" --resources=""""cpus:1;mem:1024"""" --revocable_cpu_low_priority=""""true"""" --runtime_dir=""""/tmp/MasterQuotaTest_RescindOffersEnforcingLimits_TODS3Y"""" --sandbox_directory=""""/mnt/mesos/sandbox"""" --strict=""""true"""" --switch_user=""""true"""" --systemd_enable_support=""""true"""" --systemd_runtime_directory=""""/run/systemd/system"""" --version=""""false"""" --work_dir=""""/tmp/MasterQuotaTest_RescindOffersEnforcingLimits_w13frZ"""" --zk_session_timeout=""""10secs"""" 3: I0802 09:57:05.277412 15872 credentials.hpp:86] Loading credential for authentication from '/tmp/450qM2/TDawGY/credential' 3: I0802 09:57:05.277693 15872 slave.cpp:300] Agent using credential for: test-principal 3: I0802 09:57:05.277715 15872 credentials.hpp:37] Loading credentials for authentication from '/tmp/450qM2/TDawGY/http_credentials' 3: I0802 09:57:05.277998 15872 http.cpp:975] Creating default 'basic' HTTP authenticator for realm 'mesos-agent-readonly' 3: I0802 09:57:05.278468 15872 disk_profile_adaptor.cpp:78] Creating default disk profile adaptor module 3: I0802 09:57:05.279918 15872 slave.cpp:615] Agent resources: [{""""name"""":""""cpus"""",""""scalar"""":{""""value"""":1.0},""""type"""":""""SCALAR""""},{""""name"""":""""mem"""",""""scalar"""":{""""value"""":1024.0},""""type"""":""""SCALAR""""},{""""name"""":""""disk"""",""""scalar"""":{""""value"""":3749337.0},""""type"""":""""SCALAR""""},{""""name"""":""""ports"""",""""ranges"""":{""""range"""":[{""""begin"""":31000,""""end"""":32000}]},""""type"""":""""RANGES""""}] 3: I0802 09:57:05.280186 15872 slave.cpp:623] Agent attributes: [ ] 3: I0802 09:57:05.280205 15872 slave.cpp:632] Agent hostname: 148706d6d9ee 3: I0802 09:57:05.283205 15872 task_status_update_manager.cpp:181] Pausing sending task status updates 3: I0802 09:57:05.283267 15872 status_update_manager_process.hpp:379] Pausing operation status update manager 3: I0802 09:57:05.288424 15872 state.cpp:67] Recovering state from '/tmp/MasterQuotaTest_RescindOffersEnforcingLimits_w13frZ/meta' 3: I0802 09:57:05.289094 15872 slave.cpp:7444] Finished recovering checkpointed state from '/tmp/MasterQuotaTest_RescindOffersEnforcingLimits_w13frZ/meta', beginning agent recovery 3: I0802 09:57:05.289788 15875 task_status_update_manager.cpp:207] Recovering task status update manager 3: I0802 
09:57:05.290593 15872 containerizer.cpp:821] Recovering Mesos containers 3: I0802 09:57:05.290952 15872 containerizer.cpp:1147] Recovering isolators 3: I0802 09:57:05.292505 15885 containerizer.cpp:1186] Recovering provisioner 3: I0802 09:57:05.293473 15885 provisioner.cpp:500] Provisioner recovery complete 3: I0802 09:57:05.294914 15885 composing.cpp:339] Finished recovering all containerizers 3: I0802 09:57:05.295361 15885 slave.cpp:7908] Recovering executors 3: I0802 09:57:05.295472 15885 slave.cpp:8061] Finished recovery 3: W0802 09:57:05.297149 15861 process.cpp:2877] Attempted to spawn already running process version@172.17.0.2:41613 3: I0802 09:57:05.297420 15885 slave.cpp:1351] New master detected at master@172.17.0.2:41613 3: I0802 09:57:05.297458 15880 task_status_update_manager.cpp:181] Pausing sending task status updates 3: I0802 09:57:05.297533 15885 slave.cpp:1416] Detecting new master 3: I0802 09:57:05.297538 15880 status_update_manager_process.hpp:379] Pausing operation status update manager 3: I0802 09:57:05.305006 15887 hierarchical.cpp:1508] Performed allocation for 1 agents in 317611ns 3: I0802 09:57:05.305449 15861 sched.cpp:239] Version: 1.9.0 3: I0802 09:57:05.306335 15882 sched.cpp:343] New master detected at master@172.17.0.2:41613 3: I0802 09:57:05.306438 15882 sched.cpp:408] Authenticating with master master@172.17.0.2:41613 3: I0802 09:57:05.306457 15882 sched.cpp:415] Using default CRAM-MD5 authenticatee 3: I0802 09:57:05.307082 15882 authenticatee.cpp:121] Creating new client SASL connection 3: I0802 09:57:05.307696 15882 master.cpp:10578] Authenticating scheduler-76d82e9d-f02c-4174-894e-be6b1a164320@172.17.0.2:41613 3: I0802 09:57:05.307929 15882 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(43)@172.17.0.2:41613 3: I0802 09:57:05.308429 15876 slave.cpp:1443] Authenticating with master master@172.17.0.2:41613 3: I0802 09:57:05.308714 15882 authenticator.cpp:98] Creating new server SASL connection 3: I0802 09:57:05.309056 15882 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 3: I0802 09:57:05.309088 15882 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' 3: I0802 09:57:05.309224 15882 authenticator.cpp:204] Received SASL authentication start 3: I0802 09:57:05.309303 15882 authenticator.cpp:326] Authentication requires more steps 3: I0802 09:57:05.309432 15882 authenticatee.cpp:259] Received SASL authentication step 3: I0802 09:57:05.309566 15882 authenticator.cpp:232] Received SASL authentication step 3: I0802 09:57:05.309602 15882 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '148706d6d9ee' server FQDN: '148706d6d9ee' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 3: I0802 09:57:05.309623 15882 auxprop.cpp:181] Looking up auxiliary property '*userPassword' 3: I0802 09:57:05.309686 15882 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' 3: I0802 09:57:05.309722 15882 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '148706d6d9ee' server FQDN: '148706d6d9ee' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 3: I0802 09:57:05.309739 15882 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true 3: I0802 09:57:05.309751 15882 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true 3: I0802 09:57:05.309772 15882 
authenticator.cpp:318] Authentication success 3: I0802 09:57:05.310050 15882 authenticatee.cpp:299] Authentication success 3: I0802 09:57:05.310207 15870 master.cpp:10610] Successfully authenticated principal 'test-principal' at scheduler-76d82e9d-f02c-4174-894e-be6b1a164320@172.17.0.2:41613 3: I0802 09:57:05.310354 15870 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(43)@172.17.0.2:41613 3: I0802 09:57:05.310809 15876 slave.cpp:1452] Using default CRAM-MD5 authenticatee 3: I0802 09:57:05.311417 15876 authenticatee.cpp:121] Creating new client SASL connection 3: I0802 09:57:05.311868 15876 master.cpp:10578] Authenticating slave(18)@172.17.0.2:41613 3: I0802 09:57:05.312104 15876 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(44)@172.17.0.2:41613 3: I0802 09:57:05.316658 15876 authenticator.cpp:98] Creating new server SASL connection 3: I0802 09:57:05.317152 15876 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5 3: I0802 09:57:05.317185 15876 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5' 3: I0802 09:57:05.317333 15876 authenticator.cpp:204] Received SASL authentication start 3: I0802 09:57:05.317412 15876 authenticator.cpp:326] Authentication requires more steps 3: I0802 09:57:05.317543 15876 authenticatee.cpp:259] Received SASL authentication step 3: I0802 09:57:05.317685 15876 authenticator.cpp:232] Received SASL authentication step 3: I0802 09:57:05.317723 15876 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '148706d6d9ee' server FQDN: '148706d6d9ee' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 3: I0802 09:57:05.317742 15876 auxprop.cpp:181] Looking up auxiliary property '*userPassword' 3: I0802 09:57:05.317811 15876 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5' 3: I0802 09:57:05.317848 15876 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '148706d6d9ee' server FQDN: '148706d6d9ee' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 3: I0802 09:57:05.317867 15876 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true 3: I0802 09:57:05.317879 15876 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true 3: I0802 09:57:05.317901 15876 authenticator.cpp:318] Authentication success 3: I0802 09:57:05.318174 15876 authenticatee.cpp:299] Authentication success 3: I0802 09:57:05.318508 15876 master.cpp:10610] Successfully authenticated principal 'test-principal' at slave(18)@172.17.0.2:41613 3: I0802 09:57:05.318650 15876 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(44)@172.17.0.2:41613 3: I0802 09:57:05.319175 15876 slave.cpp:1543] Successfully authenticated with master master@172.17.0.2:41613 3: I0802 09:57:05.320288 15873 master.cpp:7086] Received register agent message from slave(18)@172.17.0.2:41613 (148706d6d9ee) 3: I0802 09:57:05.320581 15873 master.cpp:4202] Authorizing agent providing resources 'cpus:1; mem:1024; disk:3749337; ports:[31000-32000]' with principal 'test-principal' 3: I0802 09:57:05.321904 15889 master.cpp:7153] Authorized registration of agent at slave(18)@172.17.0.2:41613 (148706d6d9ee) 3: I0802 09:57:05.322043 15889 master.cpp:7265] Registering agent at slave(18)@172.17.0.2:41613 (148706d6d9ee) with id 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S1 3: 
I0802 09:57:05.322157 15876 slave.cpp:1993] Will retry registration in 13.968289ms if necessary 3: I0802 09:57:05.322919 15873 sched.cpp:520] Successfully authenticated with master master@172.17.0.2:41613 3: I0802 09:57:05.322943 15873 sched.cpp:835] Sending SUBSCRIBE call to master@172.17.0.2:41613 3: I0802 09:57:05.322984 15889 registrar.cpp:487] Applied 1 operations in 419808ns; attempting to update the registry 3: I0802 09:57:05.323137 15873 sched.cpp:870] Will retry registration in 1.840847468secs if necessary 3: I0802 09:57:05.323462 15873 master.cpp:2908] Received SUBSCRIBE call for framework 'default' at scheduler-76d82e9d-f02c-4174-894e-be6b1a164320@172.17.0.2:41613 3: I0802 09:57:05.323485 15873 master.cpp:2240] Authorizing framework principal 'test-principal' to receive offers for roles '{ role1/child }' 3: I0802 09:57:05.324035 15889 registrar.cpp:544] Successfully updated the registry in 973056ns 3: I0802 09:57:05.328315 15873 master.cpp:7313] Admitted agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S1 at slave(18)@172.17.0.2:41613 (148706d6d9ee) 3: I0802 09:57:05.329753 15875 hierarchical.cpp:617] Added agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S1 (148706d6d9ee) with cpus:1; mem:1024; disk:3749337; ports:[31000-32000] (allocated: {}) 3: I0802 09:57:05.331312 15875 hierarchical.cpp:1508] Performed allocation for 1 agents in 1.343027ms 3: I0802 09:57:05.331818 15884 slave.cpp:1576] Registered with master master@172.17.0.2:41613; given agent ID 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S1 3: I0802 09:57:05.332558 15884 slave.cpp:1611] Checkpointing SlaveInfo to '/tmp/MasterQuotaTest_RescindOffersEnforcingLimits_w13frZ/meta/slaves/9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S1/slave.info' 3: I0802 09:57:05.334188 15884 slave.cpp:1663] Forwarding agent update {""""operations"""":{},""""resource_providers"""":{},""""resource_version_uuid"""":{""""value"""":""""S/Rp1HolSa65hmTUbrhp9Q==""""},""""slave_id"""":{""""value"""":""""9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S1""""},""""update_oversubscribed_resources"""":false} 3: I0802 09:57:05.334450 15884 task_status_update_manager.cpp:188] Resuming sending task status updates 3: I0802 09:57:05.334573 15884 status_update_manager_process.hpp:385] Resuming operation status update manager 3: I0802 09:57:05.336449 15873 master.cpp:7358] Registered agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S1 at slave(18)@172.17.0.2:41613 (148706d6d9ee) with cpus:1; mem:1024; disk:3749337; ports:[31000-32000] 3: I0802 09:57:05.336958 15873 master.cpp:2995] Subscribing framework default with checkpointing disabled and capabilities [ MULTI_ROLE, RESERVATION_REFINEMENT ] 3: I0802 09:57:05.349050 15873 master.cpp:10808] Adding framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0001 (default) at scheduler-76d82e9d-f02c-4174-894e-be6b1a164320@172.17.0.2:41613 with roles { } suppressed 3: I0802 09:57:05.351337 15873 master.cpp:10393] Sending offers [ 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-O1 ] to framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0000 (default) at scheduler-e08ce575-9286-4e6e-9572-c83259dad792@172.17.0.2:41613 3: I0802 09:57:05.352591 15873 master.cpp:8457] Ignoring update on agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S1 at slave(18)@172.17.0.2:41613 (148706d6d9ee) as it reports no changes 3: I0802 09:57:05.353605 15881 hierarchical.cpp:368] Added framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0001 3: I0802 09:57:05.355470 15873 sched.cpp:934] Scheduler::resourceOffers took 47676ns 3: I0802 09:57:05.355579 15881 hierarchical.cpp:1508] Performed allocation for 2 agents in 
577092ns 3: I0802 09:57:05.356719 15892 hierarchical.cpp:1508] Performed allocation for 2 agents in 446921ns 3: I0802 09:57:05.354157 15883 sched.cpp:751] Framework registered with 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0001 3: I0802 09:57:05.360374 15883 sched.cpp:770] Scheduler::registered took 75574ns 3: I0802 09:57:05.409284 15875 hierarchical.cpp:1508] Performed allocation for 2 agents in 493987ns 3: I0802 09:57:05.465063 15870 hierarchical.cpp:1508] Performed allocation for 2 agents in 508165ns ........... 3: I0802 09:58:05.034294 15880 hierarchical.cpp:1508] Performed allocation for 2 agents in 557279ns 3: I0802 09:58:05.086269 15876 hierarchical.cpp:1508] Performed allocation for 2 agents in 561517ns 3: I0802 09:58:05.104465 15886 slave.cpp:7359] Current disk usage 16.47%. Max allowed age: 5.146766280423298days 3: I0802 09:58:05.138171 15890 hierarchical.cpp:1508] Performed allocation for 2 agents in 517343ns 3: I0802 09:58:05.190285 15888 hierarchical.cpp:1508] Performed allocation for 2 agents in 564397ns 3: I0802 09:58:05.241955 15889 hierarchical.cpp:1508] Performed allocation for 2 agents in 569996ns 3: I0802 09:58:05.282167 15884 slave.cpp:7359] Current disk usage 16.47%. Max allowed age: 5.146765275367859days 3: I0802 09:58:05.293754 15873 hierarchical.cpp:1508] Performed allocation for 2 agents in 521450ns 3: /tmp/SRC/src/tests/master_quota_tests.cpp:2111: Failure 3: Failed to wait 1mins for offers2 3: I0802 09:58:05.308029 15870 master.cpp:1410] Framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0001 (default) at scheduler-76d82e9d-f02c-4174-894e-be6b1a164320@172.17.0.2:41613 disconnected 3: I0802 09:58:05.308115 15870 master.cpp:3360] Deactivating framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0001 (default) at scheduler-76d82e9d-f02c-4174-894e-be6b1a164320@172.17.0.2:41613 3: /tmp/SRC/src/tests/master_quota_tests.cpp:2106: Failure 3: Actual function call count doesn't match EXPECT_CALL(sched2, offerRescinded(&framework2, _))... 3: Expected: to be called once 3: Actual: never called - unsatisfied and active 3: /tmp/SRC/src/tests/master_quota_tests.cpp:2101: Failure 3: Actual function call count doesn't match EXPECT_CALL(sched2, resourceOffers(&framework2, _))... 
3: Expected: to be called at least once 3: Actual: never called - unsatisfied and active 3: I0802 09:58:05.308616 15870 master.cpp:3337] Disconnecting framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0001 (default) at scheduler-76d82e9d-f02c-4174-894e-be6b1a164320@172.17.0.2:41613 3: I0802 09:58:05.308703 15870 master.cpp:1425] Giving framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0001 (default) at scheduler-76d82e9d-f02c-4174-894e-be6b1a164320@172.17.0.2:41613 0ns to failover 3: I0802 09:58:05.308718 15882 hierarchical.cpp:475] Deactivated framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0001 3: I0802 09:58:05.309415 15890 slave.cpp:924] Agent terminating 3: I0802 09:58:05.310885 15890 master.cpp:1295] Agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S1 at slave(18)@172.17.0.2:41613 (148706d6d9ee) disconnected 3: I0802 09:58:05.310933 15890 master.cpp:3397] Disconnecting agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S1 at slave(18)@172.17.0.2:41613 (148706d6d9ee) 3: I0802 09:58:05.311017 15890 master.cpp:3416] Deactivating agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S1 at slave(18)@172.17.0.2:41613 (148706d6d9ee) 3: I0802 09:58:05.311187 15871 hierarchical.cpp:799] Agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S1 deactivated 3: I0802 09:58:05.312307 15890 master.cpp:12685] Removing offer 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-O1 3: I0802 09:58:05.312551 15885 sched.cpp:960] Rescinded offer 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-O1 3: I0802 09:58:05.312649 15885 sched.cpp:971] Scheduler::offerRescinded took 42088ns 3: I0802 09:58:05.313125 15870 master.cpp:10185] Framework failover timeout, removing framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0001 (default) at scheduler-76d82e9d-f02c-4174-894e-be6b1a164320@172.17.0.2:41613 3: I0802 09:58:05.313181 15870 master.cpp:11184] Removing framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0001 (default) at scheduler-76d82e9d-f02c-4174-894e-be6b1a164320@172.17.0.2:41613 3: I0802 09:58:05.313244 15879 hierarchical.cpp:1218] Recovered cpus(allocated: role1):1; mem(allocated: role1):1024; disk(allocated: role1):3749337; ports(allocated: role1):[31000-32000] (total: cpus:1; mem:1024; disk:3749337; ports:[31000-32000], allocated: {}) on agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S1 from framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0000 3: I0802 09:58:05.313500 15880 slave.cpp:4056] Asked to shut down framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0001 by master@172.17.0.2:41613 3: I0802 09:58:05.313560 15880 slave.cpp:4071] Cannot shut down unknown framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0001 3: I0802 09:58:05.314222 15870 hierarchical.cpp:1432] Allocation paused 3: I0802 09:58:05.315065 15870 hierarchical.cpp:417] Removed framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0001 3: I0802 09:58:05.315119 15870 hierarchical.cpp:1442] Allocation resumed 3: I0802 09:58:05.345412 15874 hierarchical.cpp:1508] Performed allocation for 2 agents in 515090ns 3: I0802 09:58:05.386776 15887 master.cpp:1410] Framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0000 (default) at scheduler-e08ce575-9286-4e6e-9572-c83259dad792@172.17.0.2:41613 disconnected 3: I0802 09:58:05.386883 15887 master.cpp:3360] Deactivating framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0000 (default) at scheduler-e08ce575-9286-4e6e-9572-c83259dad792@172.17.0.2:41613 3: I0802 09:58:05.387162 15874 hierarchical.cpp:475] Deactivated framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0000 3: I0802 09:58:05.387805 15861 slave.cpp:924] Agent terminating 3: I0802 09:58:05.387997 15887 master.cpp:12685] Removing offer 
9dd926f8-c8be-42ad-a1c7-ef0d88a99199-O0 3: I0802 09:58:05.388083 15887 master.cpp:3337] Disconnecting framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0000 (default) at scheduler-e08ce575-9286-4e6e-9572-c83259dad792@172.17.0.2:41613 3: I0802 09:58:05.388149 15887 master.cpp:1425] Giving framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0000 (default) at scheduler-e08ce575-9286-4e6e-9572-c83259dad792@172.17.0.2:41613 0ns to failover 3: I0802 09:58:05.389353 15893 hierarchical.cpp:1218] Recovered cpus(allocated: role1):1; mem(allocated: role1):1024; disk(allocated: role1):3749337; ports(allocated: role1):[31000-32000] (total: cpus:1; mem:1024; disk:3749337; ports:[31000-32000], allocated: {}) on agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S0 from framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0000 3: I0802 09:58:05.392590 15877 master.cpp:10185] Framework failover timeout, removing framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0000 (default) at scheduler-e08ce575-9286-4e6e-9572-c83259dad792@172.17.0.2:41613 3: I0802 09:58:05.392665 15877 master.cpp:11184] Removing framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0000 (default) at scheduler-e08ce575-9286-4e6e-9572-c83259dad792@172.17.0.2:41613 3: I0802 09:58:05.397346 15881 hierarchical.cpp:1508] Performed allocation for 2 agents in 495689ns 3: I0802 09:58:05.405243 15890 hierarchical.cpp:1432] Allocation paused 3: I0802 09:58:05.405948 15890 hierarchical.cpp:417] Removed framework 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-0000 3: I0802 09:58:05.406090 15890 hierarchical.cpp:1442] Allocation resumed 3: I0802 09:58:05.406883 15884 master.cpp:1295] Agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S0 at slave(17)@172.17.0.2:41613 (148706d6d9ee) disconnected 3: I0802 09:58:05.406925 15884 master.cpp:3397] Disconnecting agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S0 at slave(17)@172.17.0.2:41613 (148706d6d9ee) 3: I0802 09:58:05.407009 15884 master.cpp:3416] Deactivating agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S0 at slave(17)@172.17.0.2:41613 (148706d6d9ee) 3: I0802 09:58:05.407382 15871 hierarchical.cpp:799] Agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S0 deactivated 3: I0802 09:58:05.449240 15884 hierarchical.cpp:1508] Performed allocation for 2 agents in 264454ns 3: I0802 09:58:05.486388 15861 master.cpp:1135] Master terminating 3: I0802 09:58:05.487401 15877 hierarchical.cpp:775] Removed all filters for agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S1 3: I0802 09:58:05.487440 15877 hierarchical.cpp:650] Removed agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S1 3: I0802 09:58:05.487804 15877 hierarchical.cpp:775] Removed all filters for agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S0 3: I0802 09:58:05.487824 15877 hierarchical.cpp:650] Removed agent 9dd926f8-c8be-42ad-a1c7-ef0d88a99199-S0 3: I0802 09:58:05.500919 15888 hierarchical.cpp:1508] Performed allocation for 0 agents in 143260ns 3: [ FAILED ] MasterQuotaTest.RescindOffersEnforcingLimits (60493 ms) ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9925","08/06/2019 04:08:29",2,"Default executor takes a couple of seconds to start and subscribe Mesos agent ""When launching a task group, it may take 6 seconds for default executor to start and subscribe Mesos agent: This is obviously too long which may affect the performance of launching task groups."""," # Agent log: I0730 01:18:57.908911 10107 containerizer.cpp:3302] Transitioning the state of container 593f6750-e36d-4838-89c7-34c77b30ba99 from FETCHING to RUNNING I0730 01:19:03.829246 10073 http.cpp:1115] HTTP 
POST for /slave(1)/api/v1/executor from 10.0.49.2:36798 # Executor stderr: Marked '/' as rslave I0730 01:19:03.617830 10438 executor.cpp:206] Version: 1.9.0 I0730 01:19:03.842535 10464 default_executor.cpp:205] Received SUBSCRIBED event ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9930","08/08/2019 19:10:52",3,"DRF sorter may omit clients in sorting after removing an inactive leaf node. ""The sorter assumes inactive leaf nodes are placed at the tail of the children list of a node. However, when collapsing a parent node with a single """"."""" virtual child node, its position may fail to be updated due to a bug in `Sorter::remove()`: This bug would manifest, if (1) we have a/b and a/. (2) deactivate(a), i.e. a/. becomes inactive_leaf (3) remove(a/b) When these happen, a/. will collapse to `a` as an inactive_leaf due to the bug above; however, it will not be placed at the end, resulting in all the clients after `a` not being included in the sort(). Luckily, this should never happen in practice, because only frameworks will get deactivated, and frameworks don’t have sub clients. """," CHECK(child->isLeaf()); .... current->kind = child->kind; ... if (current->kind == Node::INTERNAL) { } ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9932","08/09/2019 20:31:01",3,"Removal of a role from the suppression list should be equivalent to REVIVE. ""[~timcharper] and [~zen-dog] pointed out that removal of a role from the suppression list (e.g. via UPDATE_FRAMEWORK) does not clear filters. This means that schedulers have to issue a separate explicit REVIVE for the roles they want to remove. It seems like these are not the semantics we want, and we should instead be clearing filters upon removing a role from the suppression list.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0 +"MESOS-9934","08/12/2019 19:11:56",3,"Master does not handle returning unreachable agents as draining/deactivated ""The master has two code paths for handling agent reregistration messages, one culminating in {{Master::___reregisterSlave}} and the other in {{Master::}}{{__reregisterSlave}}. The two paths are not continuations of each other. It looks like we missed the double-underscore case in the initial implementation. This is the path that unreachable agents take, when/if they come back to the cluster. The result is that when unreachable agents are marked for draining, they do not get sent the appropriate message unless they are forced to reregister again (i.e. restarted manually).""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9935","08/12/2019 22:23:10",2,"The agent crashes after the disk du isolator supporting rootfs checks. ""This issue was broken by this patch: https://github.com/apache/mesos/commit/8ba0682521c6051b42f33b3dd96a37f4d46a290d#diff-33089e53bdf9f646cdb9317c212eda02 A task can be launched without a disk resource. However, after this patch, if the disk resource does not exist, the agent crashes - because the info->paths only adds an entry 'path' when there is a quota and the quota comes from the disk resource. 
"""," Aug 09 14:54:00 ip-172-12-2-196.us-west-2.compute.internal mesos-agent[15492]: F0809 14:54:00.017730 15498 process.cpp:3057] Aborting libprocess: 'posix-disk-isolator(1)@172.12.2.196:5051' threw exception: _Map_base::at Aug 09 14:54:00 ip-172-12-2-196.us-west-2.compute.internal mesos-agent[15492]: *** Check failure stack trace: *** Aug 09 14:54:00 ip-172-12-2-196.us-west-2.compute.internal mesos-agent[15492]: @ 0x7f65f7d585cd google::LogMessage::Fail() Aug 09 14:54:00 ip-172-12-2-196.us-west-2.compute.internal mesos-agent[15492]: @ 0x7f65f7d5a828 google::LogMessage::SendToLog() Aug 09 14:54:00 ip-172-12-2-196.us-west-2.compute.internal mesos-agent[15492]: @ 0x7f65f7d58163 google::LogMessage::Flush() Aug 09 14:54:00 ip-172-12-2-196.us-west-2.compute.internal mesos-agent[15492]: @ 0x7f65f7d5b169 google::LogMessageFatal::~LogMessageFatal() Aug 09 14:54:00 ip-172-12-2-196.us-west-2.compute.internal mesos-agent[15492]: @ 0x7f65f7cb8dbd process::ProcessManager::resume() Aug 09 14:54:00 ip-172-12-2-196.us-west-2.compute.internal mesos-agent[15492]: @ 0x7f65f7cbe926 _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv Aug 09 14:54:00 ip-172-12-2-196.us-west-2.compute.internal mesos-agent[15492]: @ 0x7f65f3976070 (unknown) Aug 09 14:54:00 ip-172-12-2-196.us-west-2.compute.internal mesos-agent[15492]: @ 0x7f65f3194e25 start_thread Aug 09 14:54:00 ip-172-12-2-196.us-west-2.compute.internal mesos-agent[15492]: @ 0x7f65f2ebebad __clone ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9938","08/13/2019 21:14:22",3,"Standalone container documentation ""We should add documentation for standalone containers.""","",0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9948","08/21/2019 17:37:29",3,"master::Slave::hasExecutor occupies 37% of a 150 second perf sample. ""If you drop the attached perf stacks into flamescope, you can see that mesos::internal::master::Slave::hasExecutor occupies 37% of the overall samples! This function does 3 hashmap lookups, 1 can be eliminated for a quick win. However, the larger improvement here will come from eliminating many of the calls to this function. This was reported by [~carlone].""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9949","08/21/2019 19:50:45",5,"Track allocated/offered in the allocator's role tree. ""Currently the allocator's role tree only tracks the reserved resources for each role subtree. For metrics purposes, it would be ideal to track offered / allocated as well. This requires augmenting the allocator's structs and recoverResources to hold the two categories independently and transition from offered -> allocated as applicable when recovering resources. This might require a slight change to the recoverResources interface.""","",0,0,0,1,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9952","08/22/2019 14:15:49",1,"ExampleTest.DiskFullFramework is slow ""Executing {{ExampleTest.DiskFullFramework}} on my setup takes almost 18s in a not optimized build. This is way too long for a default-enabled test.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9956","08/29/2019 22:52:54",2,"CSI plugins reporting duplicated volumes will crash the agent. 
""The CSI spec requires volumes to be uniquely identifiable by ID, and thus SLRP currently assumes that a {{ListVolumes}} call does not return duplicated volumes. However, if a SLRP uses a non-conforming CSI plugin that reports duplicated volumes, these volumes would corrupt the SLRP checkpoint and cause the agent to crash at the next reconciliation: MESOS-9254 introduces periodic reconciliation which make this problem much easier to manifest."""," F0829 07:13:55.171332 12721 provider.cpp:1089] Check failed: !checkpointedMap.contains(resource.disk().source().id())",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"MESOS-9958","09/02/2019 20:24:40",1,"New CLI is not included in distribution tarball ""The files needed to build the new CLI are not included in distribution tarballs. This makes it impossible to build the CLI from released tarballs, and users have instead build directly from the git sources.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0 +"MESOS-9961","09/04/2019 22:08:15",2,"Agent could fail to report completed tasks. ""When agent reregisters with a master, we don't report completed executors for active frameworks. We only report completed executors if the framework is also completed on the agent: https://github.com/apache/mesos/blob/1.7.x/src/slave/slave.cpp#L1785-L1832""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9964","09/10/2019 14:01:20",8,"Support destroying UCR containers in provisioning state ""Currently when destroying a UCR container, if the container is in provisioning state, we will wait for the provisioner to finish provisioning before we start destroying the container, see [here|https://github.com/apache/mesos/blob/1.9.0/src/slave/containerizer/mesos/containerizer.cpp#L2685:L2693] for details. This may cause the container stuck at destroying, and more seriously it may cause the subsequent containers created from the same image stuck at provisioning state, because if the first container was stuck at pulling the image somehow, the subsequent containers have to wait for the puller to finish the pulling, see [here|https://github.com/apache/mesos/blob/1.9.0/src/slave/containerizer/mesos/provisioner/docker/store.cpp#L341:L345] for details. So we'd better to support destroying the container in provisioning state so that the subsequent containers created from the same image will not be affected.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9965","09/12/2019 23:02:21",1,"agent should not send `TASK_GONE_BY_OPERATOR` if the framework is not partition aware. ""The Mesos agent should not send `TASK_GONE_BY_OPERATOR` if the framework is not partition-aware. We should distinguish the framework capability and send different updates to legacy frameworks. The issue is exposed from here: https://github.com/apache/mesos/blob/f0be23765531b05661ed7f1b124faf96744aa80b/src/slave/slave.cpp#L5803 An example to follow: https://github.com/apache/mesos/blob/f0be23765531b05661ed7f1b124faf96744aa80b/src/master/master.cpp#L9921""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9966","09/13/2019 09:53:59",3,"Agent crashes when trying to destroy orphaned nested container if root container is orphaned as well ""Noticed an agent crash-looping when trying to recover. 
It recognized a container and its nested container as orphaned. When trying to destroy the nested container, the agent crashes. Probably when trying to [get the sandbox path of the root container|https://github.com/apache/mesos/blob/master/src/slave/containerizer/mesos/containerizer.cpp#L2966]. """," 2019-09-09 05:04:26: I0909 05:04:26.382326 89950 linux_launcher.cpp:286] Recovering Linux launcher 2019-09-09 05:04:26: I0909 05:04:26.383162 89950 linux_launcher.cpp:331] Not recovering cgroup mesos/a127917b-96fe-4100-b73d-5f876ce9ffc1/mesos 2019-09-09 05:04:26: I0909 05:04:26.383199 89950 linux_launcher.cpp:343] Recovered container a127917b-96fe-4100-b73d-5f876ce9ffc1.9783e2bb-7c2e-4930-9d39-4225bb6f1b97 2019-09-09 05:04:26: I0909 05:04:26.383216 89950 linux_launcher.cpp:331] Not recovering cgroup mesos/a127917b-96fe-4100-b73d-5f876ce9ffc1/mesos/9783e2bb-7c2e-4930-9d39-4225bb6f1b97/mesos 2019-09-09 05:04:26: I0909 05:04:26.383229 89950 linux_launcher.cpp:343] Recovered container 2ee154e2-3cc4-420a-99fb-065e740f3091 2019-09-09 05:04:26: I0909 05:04:26.383237 89950 linux_launcher.cpp:343] Recovered container a127917b-96fe-4100-b73d-5f876ce9ffc1 2019-09-09 05:04:26: I0909 05:04:26.383249 89950 linux_launcher.cpp:343] Recovered container 2ee154e2-3cc4-420a-99fb-065e740f3091.49fe2bf9-17af-415f-92b6-92a4db619436 2019-09-09 05:04:26: I0909 05:04:26.383260 89950 linux_launcher.cpp:331] Not recovering cgroup mesos/2ee154e2-3cc4-420a-99fb-065e740f3091/mesos 2019-09-09 05:04:26: I0909 05:04:26.383271 89950 linux_launcher.cpp:331] Not recovering cgroup mesos/2ee154e2-3cc4-420a-99fb-065e740f3091/mesos/49fe2bf9-17af-415f-92b6-92a4db619436/mesos 2019-09-09 05:04:26: I0909 05:04:26.383280 89950 linux_launcher.cpp:437] 2ee154e2-3cc4-420a-99fb-065e740f3091.49fe2bf9-17af-415f-92b6-92a4db619436 is a known orphaned container 2019-09-09 05:04:26: I0909 05:04:26.383289 89950 linux_launcher.cpp:437] a127917b-96fe-4100-b73d-5f876ce9ffc1 is a known orphaned container 2019-09-09 05:04:26: I0909 05:04:26.383296 89950 linux_launcher.cpp:437] 2ee154e2-3cc4-420a-99fb-065e740f3091 is a known orphaned container 2019-09-09 05:04:26: I0909 05:04:26.383304 89950 linux_launcher.cpp:437] a127917b-96fe-4100-b73d-5f876ce9ffc1.9783e2bb-7c2e-4930-9d39-4225bb6f1b97 is a known orphaned container 2019-09-09 05:04:26: I0909 05:04:26.383414 89950 containerizer.cpp:1092] Recovering isolators 2019-09-09 05:04:26: I0909 05:04:26.385931 89977 memory.cpp:478] Started listening for OOM events for container a127917b-96fe-4100-b73d-5f876ce9ffc1 2019-09-09 05:04:26: I0909 05:04:26.386118 89977 memory.cpp:590] Started listening on 'low' memory pressure events for container a127917b-96fe-4100-b73d-5f876ce9ffc1 2019-09-09 05:04:26: I0909 05:04:26.386152 89977 memory.cpp:590] Started listening on 'medium' memory pressure events for container a127917b-96fe-4100-b73d-5f876ce9ffc1 2019-09-09 05:04:26: I0909 05:04:26.386175 89977 memory.cpp:590] Started listening on 'critical' memory pressure events for container a127917b-96fe-4100-b73d-5f876ce9ffc1 2019-09-09 05:04:26: I0909 05:04:26.386227 89977 memory.cpp:478] Started listening for OOM events for container 2ee154e2-3cc4-420a-99fb-065e740f3091 2019-09-09 05:04:26: I0909 05:04:26.386248 89977 memory.cpp:590] Started listening on 'low' memory pressure events for container 2ee154e2-3cc4-420a-99fb-065e740f3091 2019-09-09 05:04:26: I0909 05:04:26.386270 89977 memory.cpp:590] Started listening on 'medium' memory pressure events for container 2ee154e2-3cc4-420a-99fb-065e740f3091 2019-09-09 05:04:26: I0909 
05:04:26.386376 89977 memory.cpp:590] Started listening on 'critical' memory pressure events for container 2ee154e2-3cc4-420a-99fb-065e740f3091 2019-09-09 05:04:26: I0909 05:04:26.386694 89921 containerizer.cpp:1131] Recovering provisioner 2019-09-09 05:04:26: I0909 05:04:26.388226 90010 metadata_manager.cpp:286] Successfully loaded 64 Docker images 2019-09-09 05:04:26: I0909 05:04:26.388420 89932 provisioner.cpp:494] Provisioner recovery complete 2019-09-09 05:04:26: I0909 05:04:26.388530 90003 containerizer.cpp:1203] Cleaning up orphan container a127917b-96fe-4100-b73d-5f876ce9ffc1.9783e2bb-7c2e-4930-9d39-4225bb6f1b97 2019-09-09 05:04:26: I0909 05:04:26.388562 90003 containerizer.cpp:2520] Destroying container a127917b-96fe-4100-b73d-5f876ce9ffc1.9783e2bb-7c2e-4930-9d39-4225bb6f1b97 in RUNNING state 2019-09-09 05:04:26: I0909 05:04:26.388576 90003 containerizer.cpp:3187] Transitioning the state of container a127917b-96fe-4100-b73d-5f876ce9ffc1.9783e2bb-7c2e-4930-9d39-4225bb6f1b97 from RUNNING to DESTROYING 2019-09-09 05:04:26: I0909 05:04:26.388640 90003 containerizer.cpp:1203] Cleaning up orphan container a127917b-96fe-4100-b73d-5f876ce9ffc1 2019-09-09 05:04:26: I0909 05:04:26.388650 90003 containerizer.cpp:2520] Destroying container a127917b-96fe-4100-b73d-5f876ce9ffc1 in RUNNING state 2019-09-09 05:04:26: I0909 05:04:26.388659 90003 containerizer.cpp:3187] Transitioning the state of container a127917b-96fe-4100-b73d-5f876ce9ffc1 from RUNNING to DESTROYING 2019-09-09 05:04:26: I0909 05:04:26.388689 90003 containerizer.cpp:1203] Cleaning up orphan container 2ee154e2-3cc4-420a-99fb-065e740f3091.49fe2bf9-17af-415f-92b6-92a4db619436 2019-09-09 05:04:26: I0909 05:04:26.388698 90003 containerizer.cpp:2520] Destroying container 2ee154e2-3cc4-420a-99fb-065e740f3091.49fe2bf9-17af-415f-92b6-92a4db619436 in RUNNING state 2019-09-09 05:04:26: I0909 05:04:26.388706 90003 containerizer.cpp:3187] Transitioning the state of container 2ee154e2-3cc4-420a-99fb-065e740f3091.49fe2bf9-17af-415f-92b6-92a4db619436 from RUNNING to DESTROYING 2019-09-09 05:04:26: I0909 05:04:26.388720 90003 containerizer.cpp:1203] Cleaning up orphan container 2ee154e2-3cc4-420a-99fb-065e740f3091 2019-09-09 05:04:26: I0909 05:04:26.388729 90003 containerizer.cpp:2520] Destroying container 2ee154e2-3cc4-420a-99fb-065e740f3091 in RUNNING state 2019-09-09 05:04:26: I0909 05:04:26.388737 90003 containerizer.cpp:3187] Transitioning the state of container 2ee154e2-3cc4-420a-99fb-065e740f3091 from RUNNING to DESTROYING 2019-09-09 05:04:26: I0909 05:04:26.388783 90003 containerizer.cpp:3026] Container 2ee154e2-3cc4-420a-99fb-065e740f3091.49fe2bf9-17af-415f-92b6-92a4db619436 has exited 2019-09-09 05:04:26: I0909 05:04:26.388837 89929 linux_launcher.cpp:576] Asked to destroy container a127917b-96fe-4100-b73d-5f876ce9ffc1.9783e2bb-7c2e-4930-9d39-4225bb6f1b97 2019-09-09 05:04:26: I0909 05:04:26.388904 89929 linux_launcher.cpp:618] Destroying cgroup '/sys/fs/cgroup/freezer/mesos/a127917b-96fe-4100-b73d-5f876ce9ffc1/mesos/9783e2bb-7c2e-4930-9d39-4225bb6f1b97' 2019-09-09 05:04:26: I0909 05:04:26.389147 89929 linux_launcher.cpp:576] Asked to destroy container 2ee154e2-3cc4-420a-99fb-065e740f3091.49fe2bf9-17af-415f-92b6-92a4db619436 2019-09-09 05:04:26: I0909 05:04:26.389173 89929 linux_launcher.cpp:618] Destroying cgroup '/sys/fs/cgroup/freezer/mesos/2ee154e2-3cc4-420a-99fb-065e740f3091/mesos/49fe2bf9-17af-415f-92b6-92a4db619436' 2019-09-09 05:04:26: I0909 05:04:26.389261 89947 cgroups.cpp:2854] Freezing cgroup 
/sys/fs/cgroup/freezer/mesos/a127917b-96fe-4100-b73d-5f876ce9ffc1/mesos/9783e2bb-7c2e-4930-9d39-4225bb6f1b97 2019-09-09 05:04:26: I0909 05:04:26.389269 89948 cgroups.cpp:2854] Freezing cgroup /sys/fs/cgroup/freezer/mesos/a127917b-96fe-4100-b73d-5f876ce9ffc1/mesos/9783e2bb-7c2e-4930-9d39-4225bb6f1b97/mesos 2019-09-09 05:04:26: I0909 05:04:26.389454 89953 cgroups.cpp:2854] Freezing cgroup /sys/fs/cgroup/freezer/mesos/2ee154e2-3cc4-420a-99fb-065e740f3091/mesos/49fe2bf9-17af-415f-92b6-92a4db619436 2019-09-09 05:04:26: I0909 05:04:26.389530 89956 cgroups.cpp:1242] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/a127917b-96fe-4100-b73d-5f876ce9ffc1/mesos/9783e2bb-7c2e-4930-9d39-4225bb6f1b97/mesos after 166912ns 2019-09-09 05:04:26: I0909 05:04:26.389582 89965 cgroups.cpp:2854] Freezing cgroup /sys/fs/cgroup/freezer/mesos/2ee154e2-3cc4-420a-99fb-065e740f3091/mesos/49fe2bf9-17af-415f-92b6-92a4db619436/mesos 2019-09-09 05:04:26: I0909 05:04:26.389605 89937 cgroups.cpp:1242] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/a127917b-96fe-4100-b73d-5f876ce9ffc1/mesos/9783e2bb-7c2e-4930-9d39-4225bb6f1b97 after 269056ns 2019-09-09 05:04:26: I0909 05:04:26.389679 89964 cgroups.cpp:1242] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/2ee154e2-3cc4-420a-99fb-065e740f3091/mesos/49fe2bf9-17af-415f-92b6-92a4db619436 after 145920ns 2019-09-09 05:04:26: I0909 05:04:26.389761 89963 cgroups.cpp:2872] Thawing cgroup /sys/fs/cgroup/freezer/mesos/a127917b-96fe-4100-b73d-5f876ce9ffc1/mesos/9783e2bb-7c2e-4930-9d39-4225bb6f1b97/mesos 2019-09-09 05:04:26: I0909 05:04:26.389888 89969 cgroups.cpp:1242] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/2ee154e2-3cc4-420a-99fb-065e740f3091/mesos/49fe2bf9-17af-415f-92b6-92a4db619436/mesos after 219136ns 2019-09-09 05:04:26: I0909 05:04:26.389904 89974 cgroups.cpp:2872] Thawing cgroup /sys/fs/cgroup/freezer/mesos/2ee154e2-3cc4-420a-99fb-065e740f3091/mesos/49fe2bf9-17af-415f-92b6-92a4db619436 2019-09-09 05:04:26: I0909 05:04:26.390111 89980 cgroups.cpp:2872] Thawing cgroup /sys/fs/cgroup/freezer/mesos/2ee154e2-3cc4-420a-99fb-065e740f3091/mesos/49fe2bf9-17af-415f-92b6-92a4db619436/mesos 2019-09-09 05:04:26: I0909 05:04:26.390151 89987 cgroups.cpp:1271] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/2ee154e2-3cc4-420a-99fb-065e740f3091/mesos/49fe2bf9-17af-415f-92b6-92a4db619436 after 128us 2019-09-09 05:04:26: I0909 05:04:26.390199 89980 cgroups.cpp:1271] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/2ee154e2-3cc4-420a-99fb-065e740f3091/mesos/49fe2bf9-17af-415f-92b6-92a4db619436/mesos after 47104ns 2019-09-09 05:04:26: I0909 05:04:26.390290 89956 cgroups.cpp:2872] Thawing cgroup /sys/fs/cgroup/freezer/mesos/a127917b-96fe-4100-b73d-5f876ce9ffc1/mesos/9783e2bb-7c2e-4930-9d39-4225bb6f1b97 2019-09-09 05:04:26: I0909 05:04:26.390463 89983 linux_launcher.cpp:650] Destroying cgroup '/sys/fs/cgroup/systemd/mesos/2ee154e2-3cc4-420a-99fb-065e740f3091/mesos/49fe2bf9-17af-415f-92b6-92a4db619436' 2019-09-09 05:04:26: I0909 05:04:26.392710 89995 cgroups.cpp:1271] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/a127917b-96fe-4100-b73d-5f876ce9ffc1/mesos/9783e2bb-7c2e-4930-9d39-4225bb6f1b97 after 2.397184ms 2019-09-09 05:04:26: I0909 05:04:26.394942 89976 containerizer.cpp:2812] Checkpointing termination state to nested container's runtime directory '/var/run/mesos/containers/2ee154e2-3cc4-420a-99fb-065e740f3091/containers/49fe2bf9-17af-415f-92b6-92a4db619436/termination' 2019-09-09 05:04:26: mesos-agent: 
/pkg/src/mesos/3rdparty/stout/include/stout/option.hpp:119: T& Option::get() & [with T = std::basic_string]: Assertion `isSome()' failed. 2019-09-09 05:04:26: *** Aborted at 1568019866 (unix time) try """"date -d @1568019866"""" if you are using GNU date *** 2019-09-09 05:04:26: PC: @ 0x7f8229cc02c7 __GI_raise 2019-09-09 05:04:26: *** SIGABRT (@0x15f32) received by PID 89906 (TID 0x7f820c148700) from PID 89906; stack trace: *** 2019-09-09 05:04:26: @ 0x7f822a066680 (unknown) 2019-09-09 05:04:26: @ 0x7f8229cc02c7 __GI_raise 2019-09-09 05:04:26: @ 0x7f8229cc19b8 __GI_abort 2019-09-09 05:04:26: @ 0x7f8229cb90e6 __assert_fail_base 2019-09-09 05:04:26: @ 0x7f8229cb9192 __GI___assert_fail 2019-09-09 05:04:26: @ 0x7f822d306e33 _ZNR6OptionISsE3getEv.part.137 2019-09-09 05:04:26: @ 0x7f822d317c4f mesos::internal::slave::MesosContainerizerProcess::______destroy() 2019-09-09 05:04:26: I0909 05:04:26.418018 89974 token_retriever.cpp:422] Successfuly acquired token with expiration set at 2019-09-09 09:09:26+00:00 2019-09-09 05:04:26: I0909 05:04:26.418375 89974 token_retriever.cpp:280] Scheduling token refresh tu run at 2019-09-09 09:08:56.041828249+00:00 2019-09-09 05:04:26: @ 0x7f822de72fc1 process::ProcessBase::consume() 2019-09-09 05:04:26: @ 0x7f822de899ac process::ProcessManager::resume() 2019-09-09 05:04:26: @ 0x7f822de8f466 _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv 2019-09-09 05:04:26: @ 0x7f822a840070 (unknown) 2019-09-09 05:04:26: @ 0x7f822a05edd5 start_thread 2019-09-09 05:04:26: @ 0x7f8229d88bfd __clone 2019-09-09 05:04:26: dcos-mesos-slave.service: main process exited, code=killed, status=6/ABRT ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9968","09/17/2019 09:01:14",1,"WWWAuthenticate header parsing fails when commas are in (quoted) realm ""This was discovered when trying to launch the {{[nvcr.io/nvidia/tensorflow:19.08-py3|http://nvcr.io/nvidia/tensorflow:19.08-py3]}} image using the Mesos containerizer. This launch fails with This is because the [header tokenization in libprocess|https://github.com/apache/mesos/blob/master/3rdparty/libprocess/src/http.cpp#L640] can't handle commas in quoted realm values."""," Failed to launch container: Failed to get WWW-Authenticate header: Unexpected auth-param format: 'realm=""""https://nvcr.io/proxy_auth?scope=repository:nvidia/tensorflow:pull' in 'realm=""""https://nvcr.io/proxy_auth?scope=repository:nvidia/tensorflow:pull,push""""' ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9971","09/18/2019 09:46:05",1,"'dist' and 'distcheck' cmake targets are implemented as shell scripts, so fail on Windows/MSVC. ""Mesos failed to build due to error MSB6006: """"cmd.exe"""" exited with code 1 on Windows using MSVC. It can be first reproduced on {color:#24292e}e0f7e2d{color} reversion on master branch. Could you please take a look at this isssue? Thanks a lot! Reproduce steps: 1. git clone -c core.autocrlf=true [https://github.com/apache/mesos] D:\mesos\src 2. Open a VS 2017 x64 command prompt as admin and browse to D:\mesos 3. cd src 4. .\bootstrap.bat 5. cd .. 6. mkdir build_x64 && pushd build_x64 7. cmake ..\src -G """"Visual Studio 15 2017 Win64"""" -DCMAKE_SYSTEM_VERSION=10.0.17134.0 -DENABLE_LIBEVENT=1 -DHAS_AUTHENTICATION=0 -DPATCHEXE_PATH=""""C:\gnuwin32\bin"""" -T host=x64 8. 
msbuild Mesos.sln /p:Configuration=Debug /p:Platform=x64 /maxcpucount:4 /t:Rebuild   ErrorMessage: 67>PrepareForBuild:          Creating directory """"x64\Debug\dist\dist.tlog\"""".        InitializeBuildStatus:          Creating """"x64\Debug\dist\dist.tlog\unsuccessfulbuild"""" because """"AlwaysCreate"""" was specified. 67>C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\VC\VCTargets\Microsoft.CppCommon.targets(209,5): error MSB6006: """"cmd.exe"""" exited with code 1. [D:\Mesos\build_x64\dist.vcxproj] 67>Done Building Project """"D:\Mesos\build_x64\dist.vcxproj"""" (Rebuild target(s)) -- FAILED.  ""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9975","09/19/2019 00:09:05",2,"Sorter may leak clients allocations. ""In MESOS-9015, we allowed resource quantities to change when updating an existing allocation. When the allocation is updated to empty, however, we forget to remove the client in the map in the `sorter::update()` if the `newAllocation` is `empty()`. https://github.com/apache/mesos/blob/master/src/master/allocator/mesos/sorter/drf/sorter.hpp#L382-L384 The above case could happen, for example, when a CSI volume with a stale profile is destroyed, it would be better to convert it into an empty resource since the disk space is no longer available. ""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-9978","09/24/2019 19:29:08",1,"Nvml isolator cannot be disabled which makes it impossible to exclude non-free code ""We currently do not allow disabling of the link against {{libnvml}} which is probably not under a free license. This makes it hard to include Mesos at all in distributions requiring only free licenses, see e.g., https://bugzilla.redhat.com/show_bug.cgi?id=1749383. We should add a configuration time flag to disable this feature completely until we can provide a free replacement.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10007","10/04/2019 17:34:14",1,"Command executor can miss exit status for short-lived commands due to double-reaping. ""Hi, While testing Mesos to see if we could use it at work, I encountered a random bug which I believe happens when a command exits really quickly, when run via the command executor. See the attached test case, but basically all it does is constantly start """"exit 0"""" tasks. At some point, a task randomly fails with the error """"Failed to get exit status for Command"""":      I've had a look at the code, and I found something which could potentially explain it - it's the first time I look at the code so apologies if I'm missing something.  We can see the error originates from `reaped`: [https://github.com/apache/mesos/blob/master/src/launcher/executor.cpp#L1017]   Looking at the code, we can see that the `status_` future can be set to `None` in `ReaperProcess::reap`: [https://github.com/apache/mesos/blob/master/3rdparty/libprocess/src/reap.cpp#L69]         So we could have this if the process has already been reaped (`kill -0` will fail).   
Now, looking at the code path which spawns the process: `launchTaskSubprocess` [https://github.com/apache/mesos/blob/master/src/launcher/executor.cpp#L724]   calls `subprocess`: [https://github.com/apache/mesos/blob/master/3rdparty/libprocess/src/subprocess.cpp#L315]   If we look at the bottom of the function we can see the following: [https://github.com/apache/mesos/blob/master/3rdparty/libprocess/src/subprocess.cpp#L462]         So at this point we've already called `process::reap`.   And after that, the executor also calls `process::reap`: [https://github.com/apache/mesos/blob/master/src/launcher/executor.cpp#L801]         But if we look at the implementation of `process::reap`: [https://github.com/apache/mesos/blob/master/3rdparty/libprocess/src/reap.cpp#L152]     We can see that `ReaperProcess::reap` is going to get called asynchronously.   Doesn't this mean that it's possible that the first call to `reap` set up by `subprocess` ([https://github.com/apache/mesos/blob/master/3rdparty/libprocess/src/subprocess.cpp#L462)|https://github.com/apache/mesos/blob/master/3rdparty/libprocess/src/subprocess.cpp#L462] will get executed first, and if the task has already exited by that time, the child will get reaped before the call to `reap` set up by the executor ([https://github.com/apache/mesos/blob/master/src/launcher/executor.cpp#L801]) gets a chance to run?   In that case, when it runs   would return false, `reap` would set the future to None which would result in this error.  """," 'state': 'TASK_FAILED', 'message': 'Failed to get exit status for Command', 'source': 'SOURCE_EXECUTOR', } else if (status_->isNone()) { taskState = TASK_FAILED; message = """"Failed to get exit status for Command""""; } else { Future> ReaperProcess::reap(pid_t pid) { // Check to see if this pid exists. if (os::exists(pid)) { Owned>> promise(new Promise>()); promises.put(pid, promise); return promise->future(); } else { return None(); } } // We need to bind a copy of this Subprocess into the onAny callback // below to ensure that we don't close the file descriptors before // the subprocess has terminated (i.e., because the caller doesn't // keep a copy of this Subprocess around themselves). process::reap(process.data->pid) .onAny(lambda::bind(internal::cleanup, lambda::_1, promise, process)); return process; // Monitor this process. process::reap(pid.get()) .onAny(defer(self(), &Self::reaped, pid.get(), lambda::_1)); Future> reap(pid_t pid) { // The reaper process is instantiated in `process::initialize`. process::initialize(); return dispatch( internal::reaper, &internal::ReaperProcess::reap, pid); } if (os::exists(pid)) {",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-10010","10/10/2019 00:19:03",5,"Implement an SSL socket for Windows, using OpenSSL directly """""," class WindowsSSLSocketImpl : public SocketImpl { public: // This will be the entry point for Socket::create(SSL). static Try> create(int_fd s); WindowsSSLSocketImpl(int_fd _s); ~WindowsSSLSocketImpl() override; // Overrides for the 'SocketImpl' interface below. // Unreachable. Future connect(const Address& address) override; // This will initialize SSL objects then call windows::connect() // and chain that onto the appropriate call to SSL_do_handshake. Future connect( const Address& address, const openssl::TLSClientConfig& config) override; // These will call SSL_read or SSL_write as appropriate. // As long as the SSL context is set up correctly, these will be // thin wrappers. 
(More details after the code block.) Future recv(char* data, size_t size) override; Future send(const char* data, size_t size) override; Future sendfile(int_fd fd, off_t offset, size_t size) override; // Nothing SSL here, just a plain old listener. Try listen(int backlog) override; // This will initialize SSL objects then call windows::accept() // and then perform handshaking. Any downgrading will // happen here. Since we control the event loop, we can // easily peek at the first few bytes to check SSL-ness. Future> accept() override; SocketImpl::Kind kind() const override { return SocketImpl::Kind::SSL; } } ",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10017","10/22/2019 01:04:45",1,"Log all reverse DNS lookup failures in 'legacy' TLS (SSL) hostname validation scheme. ""There were being logged at VLOG(2): https://github.com/apache/mesos/blob/1.9.0/3rdparty/libprocess/src/openssl.cpp#L859-L860 In the same spirit as MESOS-9340, we'd like to log all networking related errors as warnings and include any relevant information (IP address, etc).""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10026","11/02/2019 23:34:10",13,"Improve v1 operator API read performance. ""Currently, the v1 operator API has poor performance relative to the v0 json API. The following initial numbers were provided by [~Will Mahler] from our state serving benchmark:   |OPTIMIZED - Master (baseline)| | | | | |Test setup|1000 agents with a total of 10000 running tasks and 10000 completed tasks|10000 agents with a total of 100000 running tasks and 100000 completed tasks|20000 agents with a total of 200000 running tasks and 200000 completed tasks|40000 agents with a total of 400000 running tasks and 400000 completed tasks| |v0 'state' response|0.17|1.66|8.96|12.42| |v1 x-protobuf|0.35|3.21|9.47|19.09| |v1 json|0.45|4.72|10.81|31.43| There is quite a lot of variance, but v1 protobuf consistently slower than v0 (sometimes significantly so) and v1 json is consistently slower than v1 protobuf (sometimes significantly so). The reason that the v1 operator API is slower is that it does the following: (1) Construct temporary unversioned state response object by copying in-memory un-versioned state into overall response object. (expensive!) (2) Evolve it to v1: serialize, de-serialize into v1 overall state object. (expensive!) (3) Serialize the overall v1 state object to protobuf or json. (4) Destruct the temporaries (expensive! but is done after response starts serving) On the other hand, the v0 jsonify approach does the following: (1) Serialize the in-memory unversioned state into json, by traversing state and accumulating the overall serialized json. This means that v1 has substantial overhead vs v0, and we need to remove it to bring v1 on-par or better than v0. v1 should serialize directly to json (straightforward with jsonify) or protobuf (this can be done via a io::CodedOutputStream).""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10038","11/21/2019 17:36:37",5,"Implement agent code to listen on a domain socket ""On an agent with executor domain sockets enabled, we need to implement code such that the agent listens for incoming connections on its domain sockets, and creates `Connection` objects through which executor <-> agent v1 communication can happen. 
The existing implementation of the I/O switchboard might give some inspiration on how this can be implemented.""","",0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10041","11/22/2019 20:22:51",1,"Libprocess SSL verification can leak memory ""In {{process::network::openssl::verify()}}, when the SSL hostname validation scheme is set to """"openssl"""", the function can return without freeing an {{X509}} object, leading to a memory leak.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10048","11/27/2019 13:24:34",5,"Update the memory subsystem in the cgroup isolator to set container's memory resource limits and `oom_score_adj` ""Update the memory subsystem in the cgroup isolator to set container’s memory resource limits and `oom_score_adj`""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10063","12/03/2019 02:25:02",2,"Update default executor to call `LAUNCH_CONTAINER` to launch nested containers ""The default executor will be updated to use the LAUNCH_CONTAINER call instead of the LAUNCH_NESTED_CONTAINER call when launching nested containers. This will allow the default executor to set task limits when launching its task containers.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"MESOS-10064","12/04/2019 13:50:57",3,"Accommodate the ""Infinity"" value in JSON ""See [here|https://docs.google.com/document/d/1iEXn2dBg07HehbNZunJWsIY6iaFezXiRsvpNw4dVQII/edit?ts=5de78977#heading=h.ejuvxat6x3eb] for what need to be done for this ticket.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10076","12/23/2019 13:06:03",3,"Cgroups isolator: create nested cgroups ""Update Cgroups isolator to create nested cgroups for a nested container, which supports nested cgroups, during container launch preparation.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10077","12/23/2019 13:09:18",3,"Cgroups isolator: allow updating and isolating resources for nested cgroups ""Allow Cgroups isolator to update and isolate resources for nested cgroups.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10079","12/23/2019 13:16:54",5,"Cgroups isolator: recover nested cgroups ""Update recovery of Cgroups isolator to recover nested cgroups for those nested containers, which were launched in nested cgroups.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10080","12/23/2019 13:18:42",5,"Cgroups isolator: update cleanup logic to support nested cgroups ""Update Cgroups isolator to cleanup a nested cgroup for a nested container taking into account hierarchical layout of cgroups. Lowest nested cgroups should be destroyed first.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10094","02/11/2020 20:58:07",1,"Master's agent draining VLOG prints incorrect task counts. ""This logic is printing the framework counts of these maps rather than the task counts: https://github.com/apache/mesos/blob/4575c9b452c25f64e6c6cc3eddc12ed3b1f8538b/src/master/master.cpp#L6318-L6319 Since these are {{hashmap>}}."""," // Check if the agent has any tasks running or operations pending. 
if (!slave->pendingTasks.empty() || !slave->tasks.empty() || !slave->operations.empty()) { VLOG(1) << """"DRAINING Agent """" << slaveId << """" has """" << slave->pendingTasks.size() << """" pending tasks, """" << slave->tasks.size() << """" tasks, and """" << slave->operations.size() << """" operations""""; return; } ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10095","02/11/2020 22:19:26",1,"Agent draining logging makes it hard to tell which tasks did not terminate. ""When draining an agent, it's hard to tell which tasks failed to terminate. The master prints a count of the tasks remaining (only as VLOG(1) however), but not the IDs: The agent does not print how many or which ones. It would be helpful to at least see which tasks need to be drained when it begins, and possibly, upon each check, which ones remain."""," I1223 13:19:49.021764 30480 master.cpp:6367] DRAINING Agent c0146010-8af6-4a9d-bcdb-99e30a778663-S6 has 0 pending tasks, 1 tasks, and 0 operations ",0,1,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10096","02/11/2020 22:23:05",3,"Reactivating a draining agent leaves the agent in draining state. ""When reactivating an agent that's in the draining state, the master erases it from its draining maps, and erases its estimated drain time. However, it doesn't send any message to the agent, so if the agent is still draining and waiting for tasks to terminate, it will stay in that state, ultimately making any tasks that then get launched get DROPPED due to the agent still being in a draining state. Seems like we should either: * Disallow the user from reactivating if still in draining, or * Send a message to the agent, and have the agent move itself out of draining.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10097","02/13/2020 11:19:52",1,"After HTTP framework disconnects, heartbeater idle-loops instead of being deleted. ""In some cases, Master closes connection of HTTP framework without deleting the heartbeater: https://github.com/apache/mesos/blob/65e18bef2c5ff356ef74bac9aa79b128c5b186d9/src/master/master.cpp#L3323 https://github.com/apache/mesos/blob/65e18bef2c5ff356ef74bac9aa79b128c5b186d9/src/master/master.cpp#L10910 It can be argued that this does not constitute a leak, because old heartbeaters are deleted on reconnection/removal. However, this means that for each disconnected framework there is a ResponseHeartbeaterProcess that performs an idle loop.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10098","02/24/2020 15:23:49",3,"Mesos agent fails to start on outdated systemd. ""Mesos agent refuses to start due to a failure caused by the systemd-specific code: It turns out that some versions of systemd do not set environment variables `LISTEN_PID`, `LISTEN_FDS` and `LISTEN_FDNAMES` to the Mesos agent process, if its systemd unit is [ill-formed|https://github.com/dcos/dcos/pull/6886/files]. If this happens, `listenFdsWithName` returns an empty list, therefore leading to the error above. After fixing the problem with the systemd unit, systemd sets the value for `LISTEN_FDNAMES` taken from the `FileDescriptorName` field. In our case, the env variable is set to `systemd:dcos-mesos-slave`. 
Since the value is expected to be equal to """"systemd:unknown"""" (for the compatibility with older systemd versions), the mismatch of values happens and we see the same error message.  """," E0220 12:03:02.943467 22298 main.cpp:670] EXIT with status 1: Expected exactly one socket with name unknown, got 0 instead ",0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10109","04/03/2020 13:31:31",1,"After failover, master crashes on re-adding an agent with maintenance schedule set. ""Stacktrace: This immediately follows re-adding an agent after master failover. The issue was introduced by this patch: https://reviews.apache.org/r/71428 which didn't account for the fact that `addSlave()` takes as an argument per-framework used resources that potentially can contain frameworks that were not added to allocator yet. (Note that when master re-registers an agent, it first calls addSlave(), and only then calls addFramework() for the frameworks recovered from the agent.)"""," 2020-04-03 08:34:58.007285 +0000 UTC F0403 08:34:58.003100 2717 hierarchical.cpp:2461] Check failed: 'getFramework(frameworkId)' Must be SOME 2020-04-03 08:34:58.007563 +0000 UTC *** Check failure stack trace: *** 2020-04-03 08:34:58.007827 +0000 UTC I0403 08:34:58.003136 2713 master.cpp:1721] Sending register ACK to: overlay-agent@172.16.39.81:5051 2020-04-03 08:34:58.008064 +0000 UTC I0403 08:34:58.003142 2715 master.cpp:9963] Adding framework b4fd9630-674e-4dea-b072-c3c48ccfdd42-0000 (marathon) with roles { } suppressed 2020-04-03 08:34:58.008305 +0000 UTC I0403 08:34:58.004185 2714 master.cpp:7635] Ignoring update on agent b4fd9630-674e-4dea-b072-c3c48ccfdd42-S38 at slave(1)@172.16.6.89:5051 (172.16.6.89) as it reports no changes 2020-04-03 08:34:58.008568 +0000 UTC @ 0x7fb70eda72ad google::LogMessage::Fail() 2020-04-03 08:34:58.010292 +0000 UTC @ 0x7fb70eda9508 google::LogMessage::SendToLog() 2020-04-03 08:34:58.010583 +0000 UTC @ 0x7fb70eda6e43 google::LogMessage::Flush() 2020-04-03 08:34:58.012035 +0000 UTC @ 0x7fb70eda9e49 google::LogMessageFatal::~LogMessageFatal() 2020-04-03 08:34:58.013252 +0000 UTC @ 0x7fb70d94748d _check_not_none<>() 2020-04-03 08:34:58.014963 +0000 UTC @ 0x7fb70d940f84 mesos::internal::master::allocator::internal::HierarchicalAllocatorProcess::generateInverseOffers() 2020-04-03 08:34:58.016681 +0000 UTC @ 0x7fb70d9414a1 mesos::internal::master::allocator::internal::HierarchicalAllocatorProcess::_generateOffers() 2020-04-03 08:34:58.017498 +0000 UTC @ 0x7fb70d94ee32 _ZNO6lambda12CallableOnceIFvPN7process11ProcessBaseEEE10CallableFnINS_8internal7PartialIZNS1_8dispatchI7NothingN5mesos8internal6master9allocator8internal28HierarchicalAllocatorProcessEEENS1_6FutureIT_EERKNS1_3PIDIT0_EEMSL_FSI_vEEUlSt10unique_ptrINS1_7PromiseISA_EESt14default_deleteIST_EES3_E_ISW_St12_PlaceholderILi1EEEEEEclEOS3_ 2020-04-03 08:34:58.020673 +0000 UTC @ 0x7fb70ecf34b1 process::ProcessBase::consume() 2020-04-03 08:34:58.022404 +0000 UTC @ 0x7fb70ed0812b process::ProcessManager::resume() 2020-04-03 08:34:58.023133 +0000 UTC @ 0x7fb70ed0eb36 _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv 2020-04-03 08:34:58.023782 +0000 UTC @ 0x7fb70a9772b0 (unknown) 2020-04-03 08:34:58.024105 +0000 UTC @ 0x7fb70a195e65 start_thread 2020-04-03 08:34:58.024669 +0000 UTC @ 0x7fb709ebe88d __clone ",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10117","04/15/2020 13:56:51",3,"Update the 
`usage()` method of containerizer to set resource limits in the `ResourceStatistics` protobuf message ""In the `ResourceStatistics` protobuf message, there are a couple of issues: # There are already `cpu_limit` and `mem_limit_bytes` fields, but they are actually CPU & memory requests when resources limits are specified for a task. # There is already `mem_soft_limit_bytes` field, but this field seems not set anywhere. So we need to update this protobuf message and also the related containerizer code which set the fields of this protobuf message.""","",0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10120","04/20/2020 11:24:08",1,"Authorization for /logging/toggle and /metrics/snapshot is skipped on Windows. ""Due to path::join without specifying a separator being used to join an URI when looking for the authorization callback: https://github.com/apache/mesos/blob/5e5783d748af17dfb1502df5870a5397879c82f1/3rdparty/libprocess/src/process.cpp#L3845""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"MESOS-10126","05/13/2020 04:02:39",3,"Docker volume isolator needs to clean up the `info` struct regardless the result of unmount operation ""Currently when [DockerVolumeIsolatorProcess::cleanup()|https://github.com/apache/mesos/blob/1.9.0/src/slave/containerizer/mesos/isolators/docker/volume/isolator.cpp#L610] is called, we will unmount the volume first, but if the unmount operation fails we will not remove the container's checkpoint directory and NOT erase the container's `info` struct from `infos`. This is problematic, because the remaining `info` in the `infos` will cause the reference count of the volume is larger than 0, but actually the volume is not being used by any containers. And next time when another container using this volume is destroyed, we will NOT unmount the volume since its reference count will be larger than 1 (see [here|https://github.com/apache/mesos/blob/1.9.0/src/slave/containerizer/mesos/isolators/docker/volume/isolator.cpp#L631:L651] for details) which should be 2, so we will never have chance to unmount this volume. We have this issue since Mesos 1.0.0 release when Docker volume isolator was introduced.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0