diff --git "a/database/tawos/tfidf/STL_tfidf-se.csv" "b/database/tawos/tfidf/STL_tfidf-se.csv" new file mode 100644--- /dev/null +++ "b/database/tawos/tfidf/STL_tfidf-se.csv" @@ -0,0 +1,207 @@ +"issuekey","created","storypoint","context","codesnippet","t_New.Feature","t_Bug","t_Improvement","t_Task","c_Valdator.Server","c_Go.SDK","c_Packaging","c_XO.Family","c_Documentation","c_Validator.Global.State","c_Settings.Family","c_sawtooth.core","c_Validator.Txn.Execution","c_Continuous.Integration","c_REST.API","c_Javascript.SDK","c_Python.SDK","c_PoET","c_Supply_Chain","c_Validator.Journal","c_OMI.Summer.Lab","c_Hyperledger","c_sawtooth.CLI","c_Validator.Networking","c_Burrow.EVM","c_Java.SDK","c_Private.Transaction","c_Design","c_C...SDK","c_Poet2" +"STL-34","05/02/2017 22:08:57",2,"xo create's --wait option does not wait ""When creating a game with xo, it returns before the game is committed, even with --wait: Expected behavior is for the CLI to pause until either an error is returned or the transaction has been committed to a winning block."""," ubuntu@ubuntu-xenial:/project$ xo create game005 --wait Response: { """"link"""": """"http://127.0.0.1:8080/batch_status?id=551a807fd5455c30fb6a5c6e4550c260c1d5e9eac0990712e45e4df92ea01bcb2e98896feb99895ce65bd25c2b8d38a368b204e2d9c984d8a230c6d46001c06e"""" } ",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-35","05/02/2017 22:10:48",1,"Research Go SDK Distribution Model ""Output of this task should be additional stories for distributing the SDK ""","",1,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-39","05/02/2017 22:57:24",1,"Startup without initializing genesis data can be confusing ""During development, it is common to startup a single validator. This requires two steps prior to staring the validator: a) create a validator key with 'sawtooth admin keygen' and b) create data for the genesis block with 'sawtooth admin genesis'. If a user using the single validator forgets step (b), the validator will currently startup and start peering. The validator is essentially waiting for a genesis block before it will start publishing it's own blocks. Peering happens in one thread while the journal is running in it's own threads. (This maybe an imprecise description of the validator's behavior, as the implementation is fairly detailed.) Current output in this situation looks like this: If the user mistakenly forgot the step and is not starting another validator, this debug output does not help. In fact, the 'None' statements look buggy as it looks like str(a) where 'a is None' in python slipped through and got logged. (This is intentional currently, so it's not actually buggy.) Recommend we have the genesis code identify this situation and log an INFO-level message with: 'No genesis block is present, resulting in no initial chain head; block publishing is disabled until a chain head is updated' or similar and removing the logging statements where 'None' occurs (or changing them for clarity)."""," ./bin/validator -vv --public-uri tcp://localhost:8800/ ... 
[19:51:14.832 DEBUG genesis] genesis_batch_file: None [19:51:14.832 DEBUG genesis] chain_head: None False [19:51:14.833 DEBUG genesis] block_chain_id: None [19:51:14.833 DEBUG genesis] Requires genesis: False [19:51:14.834 DEBUG selector_events] Using selector: ZMQSelector [19:51:14.835 DEBUG dispatch] Added send_message function for connection ServerThread [19:51:14.838 INFO chain] Chain controller initialized with chain head: None [19:51:14.838 INFO publisher] Now building on top of block: None ... ",1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-55","05/03/2017 20:08:47",5,"Implement state handling in parallel scheduler branch ""Finish implementation of state handling in the parallel scheduler branch, with appropriate integration with the context manager. At the end of this effort, the parallel scheduler should work well enough that it can be used in place of the serial scheduler. ""","",1,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-56","05/03/2017 20:09:41",1,"Add configuration option to select between schedulers ""Add command line option and file-based configuration to switch between schedulers. Update sawtooth cluster command to enable switching scheduler.""","",1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-61","05/03/2017 20:18:00",1,"Write Transaction Family Spec for XO ""We do not currently have a transaction family specification for xo in our docs. Since we use xo as our example transaction family for tutorial purposes, it would make sense to also provide a transaction family specification. Following the format of the Sawtooth Config or Validator Registry transaction families, write a transaction family spec for the XO Family.""","",1,0,0,0,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-64","05/03/2017 20:27:37",3,"Implement State Delta Subscription Catch-up ""State Export clients can send \{\{last_known_block_ids}} which may be older than the current chain head.  Provide a way to send the successive events between the block id sent and the chain head.""","",1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-65","05/03/2017 20:28:44",2,"Implement Settings Key Naming Strategy ""Addresses are made up of 70 hexadecimal digits, with the first 6 making up a transaction family namespace, and the remaining 64 make up the address of the data within the namespace.  With on-chain settings, the namespace is 000000.  The remainder of the data is made up from a series of hashes of key parts.   In order to allow users to to query for groups of settings, without having to query the whole of the the settings, there needs to be a correspondence between the key parts and the address.     The remaining 32 bytes/64 hex characters of a setting address can be broken up into 4 parts, each corresponding to a part of the setting key.  Each part of the key is hashed with sha256 and the first 16 characters of the hash are used.  For keys with less than 4 parts, the remaining hashes are produced using empty strings.  For keys with more than 4 parts, the last part will be the remaining subkey (that is, 3 individual parts, and the 4 made up of the remain subkey). 
Breaking it into 4 parts limits the granularity of subqueries, for deeply nested settings, but a consistent position for the keys is necessary for subgroup queries to be predictable, without the application developer requiring knowledge of how deeply nested the settings are.""","",1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-66","05/03/2017 20:29:40",3,"Implement State Delta Registration with Fork Resolution ""State Export clients may send {{last_known_block_ids}} which may no longer exist on the chain (e.g. the connected validator had a fork that was discarded).   Implement a way for the validator and subscriber to negotiate for a known block id.""","",1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-82","05/03/2017 20:55:07",1,"Migrate Namespace/Address Documentation to Public Sphinx Doc ""Migrate this document into the appropriate place in the public facing documentation:""","",1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-93","05/04/2017 17:49:58",3,"Update xo tests ""The xo TP and smoke tests don't test very much right now. xo smoke, for instance, verifies that nothing breaks when xo is run, but doesn't do any state checking at all [1]. They should be more extensive. [1] In this sense it's a true smoke test: """"The phrase smoke test comes from electronic hardware testing. You plug in a new board and turn on the power. If you see smoke coming from the board, turn off the power. You don't have to do any more testing.""""""","",1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-97","05/05/2017 00:36:12",1,"Clarify Python REST instructions are for Python 3 ""The instructions for submitting a transaction to the REST API from a python client are Python 3 specific, but do not state the Python version requirement. In Python 2.7, these instructions result in an error: The page should clearly indicate that it is for Python 3."""," Python 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609] on linux2 Type """"help"""", """"copyright"""", """"credits"""" or """"license"""" for more information. >>> import secp256k1 >>> >>> key_handler = secp256k1.PrivateKey() >>> private_key_bytes = key_handler.private_key >>> public_key_bytes = key_handler.pubkey.serialize() /usr/local/lib/python2.7/dist-packages/secp256k1/__init__.py:228: UserWarning: implicit cast from 'char *' to a different pointer type: will be forbidden in the future (check that the types are as you expect; use an explicit ffi.cast() if they are correct) self.ctx, res_compressed, outlen, self.public_key, compflag) >>> >>> public_key_hex = public_key_bytes.hex() Traceback (most recent call last): File """""""", line 1, in AttributeError: 'str' object has no attribute 'hex' ",1,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-103","05/05/2017 15:17:26",2,"In docs, code blocks which include source files have many empty lines ""In our documentation, use of literalinclude results in empty lines at the end of the block (see attached image). """," .. literalinclude:: ../../../protos/batch.proto :language: protobuf :caption: File: protos/batch.proto :linenos: ",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-104","05/05/2017 15:45:45",1,"Fix xo state format to handle hash collisions ""Currently, the xo examples do not handle possible hash collisions. 
Instead, we have warnings like the following in xo python: Since we use xo for our tutorials, we should provide a correct implementation which stores a list of games at the address instead of a single game, thus handling hash collisions properly. Note that this will require an update to all xo TP implementations as well as any impacted documentation."""," # NOTE: Since the game data is stored in a Merkle tree, there is a # small chance of collision. A more correct usage would be to store # a dictionary of games so that multiple games could be store at # the same location. See the python intkey handler for an example # of this. if stored_name != name: raise InternalError(""""Hash collision"""") ",0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-114","05/05/2017 17:36:43",1,"Publish stable docs as default, with docs from master at a sub-url ""Currently, our published documentation is not in-sync with our stable debian packages and stable docker images. This is causing app developer confusion. Instead, we should publish stable documentation at the same time we publish the debian/docker artifacts. We should also generate master as we do now, but in a different URL (similar to what we do for 0.7).""","",1,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-131","05/08/2017 05:02:46",1,"Refactor - create MockConsensusState() ""Create a MockConsensusState() to eliminate duplicate code in most tests in test_consensus.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-132","05/08/2017 05:06:50",3,"PoET Fork Resolver - current & new fork heads are both PoET blocks ""- current fork head and new fork head previous blocks are the same (i.e., they both build on the same block) - current fork wait certificate duration < new fork wait certificate duration - current fork wait certificate duration > new fork wait certificate duration - current fork wait certificate duration == new fork wait certificate duration - current fork header signature > new fork header signature - new fork header signature >= current fork header signature - current fork head and new fork head previous blocks are not the same - current fork consensus state aggregate local mean > new fork consensus state aggregate local mean + new fork wait certificate local mean - current fork consensus state aggregate local mean < new fork consensus state aggregate local mean + new fork wait certificate local mean - current fork consensus state aggregate local mean == new fork consensus state aggregate local mean + new fork wait certificate local mean - current fork header signature > new fork header signature - new fork header signature >= current fork header signature""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-133","05/08/2017 05:08:30",5,"PoET Block Publisher - initialize_block() Failure & Success Cases ""- Verify that it returns False if the validator does not have an entry in the validator registry - Verify that it returns False if the PoET public key in the validator registry is not current - Verify that it returns False if the K (block claim count) test fails - Verify that it returns False if the validator sign up info was committed too late - Verify that it returns False if the C policy (block claim delay) test fails - Verify that it returns False if the Z policy (block claim frequency) test fails - Verify that it returns True if the validator passes all the tests above""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-134","05/08/2017 
05:13:13",2,"PoET Fork Resolver - current & new fork heads are not PoET blocks ""Test conditions: Current fork head is not a PoET block New fork head is not a Poet block - new fork head builds directly on top of it (i.e., the new fork head has the current fork head as its previous block) - new fork head does not build directly on top of it""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-135","05/08/2017 05:19:22",2,"PoET Block Publisher - claim readiness if the wait timer has expired ""Check that PoET Block Publisher claims readiness only when it is time to claim the current candidate blocks - the wait timer has expired. check_publish_block() return self._wait_timer.has_expired(now=time.time())""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-143","05/08/2017 16:43:34",2,"Remove Dependency on kwargs in PoET Enclave Simulator ""The PoET simulator enclave still is dependent upon the old configuration model (i.e., passing all configuration via kwargs dictionary).  This dependency needs to be removed and the PoET simulator needs to be updated to use either the new config transaction family or the new local configuration being work on by [~dan.middleton@intel.com].""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-144","05/08/2017 16:45:34",1,"Make Setting the IAS Enclave Measurement Configurable for PoET consensus ""Currently, the validator registry transaction processor and the PoET enclave simulator use a predetermined enclave measurement to check against the attestation verification report. The SGX enclave checks against its own enclave measurement value. This enclave measurement value needs to be configurable.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-146","05/08/2017 16:47:18",1,"Make Setting the IAS Enclave Basename Configurable for PoET Consensus ""Currently, the validator registry transaction processor and the PoET enclave simulator use a predetermined enclave basename to check against the attestation verification report. The SGX enclave checks against its own enclave basename value. This enclave basename value needs to be configurable.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-147","05/08/2017 16:49:26",3,"Make Setting the IAS REPORT_PUBLIC_KEY_PEM Configurable for PoET Consensus ""Currently, the validator registry transaction processor and the PoET enclave simulator use a predetermined IAS verification key to check attestation verification report signature. This IAS verification key needs to be configurable.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-148","05/08/2017 16:51:20",1,"Remove Signup Information Verification from PoET Enclave Simulator ""Signup information is verified in the validator registry transaction processor, removing the need to verify it in the enclave.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-153","05/08/2017 18:27:03",5,"Create yaml based scheduler test ""With this test (framework) it will be possible to test correctness on any scheduler implementation by writing a yaml file using particular data (The test framework will create transactions and batches and mark transaction as valid or invalid as specified by the yaml). 
The test of correctness is that the correct merkle root is available in the scheduler for the block production/validation component to query.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-155","05/08/2017 19:17:08",1,"Complete rebasing of Python Txn Tutorial ""The Python tutorial/guide for writing a transaction processor (NML-2432) is mainly complete, but is divided into two PR's due to a process anomaly.  Get the commits in the PR's merged into a single PR and clean up.  ""","",1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-157","05/08/2017 19:30:49",1,"App Dev Guide: Writing Transaction Processors-JavaScript ""Partially done fro NML-2436. Use specially written JavaScript code fragments to insert into tutorial template already created.   ""","",1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-159","05/08/2017 20:40:49",2,"Handle Validator Disconnects ""Handle validator disconnect events in the Go SDK correctly. When the validator is restarted, the Go SDK does not re-connect. Expected behavior is that a restart of the validator should not require restarting the TPs such as intkey go.""","",0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-160","05/08/2017 20:59:24",3,"Update Poet WaitTimer and WaitCertificate Tests ""Create mock_poet_enclave_wait_timer and mock_poet_enclave to allow faster testing without using sleeps.  include Jamie on the review.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-185","05/10/2017 20:36:28",1,"xo create should send a friendly response after waiting ""Creating a game with xo currently prints out a raw JSON response from the REST API. It should instead wait for the block to commit, and then print a friendlier XO-specific confirmation.""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-188","05/11/2017 18:30:07",2,"Fix PoET Block Publisher State Persistence ""The PoET block publisher uses one or more class variables to persist state across creation of PoET block publisher objects.  While this works, it is quite ugly when it comes to devising unit tests.  This """"global"""" state needs to be persisted in a better way.  Of highest importance is keeping track of the current PoET public key, which can probably be persisted in the PoET Key Store, an already existing global state persistence used by the PoET block publisher.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-194","05/12/2017 04:33:50",2,"App Dev Guide should not reference git or require git clone of sawtooth-core repository ""Currently, the Application Developer's guide's Environment Setup section for Docker requires the user to clone the git repository. Instead, the guide should provide a link to download the compose file from the documentation.  The cleanest approach would probably be to copy the yaml file into the app dev guide directory within the Makefile, then link to that file. This may be a bit tricky for PDF generation.  In that instance, we will want to link to the file on github or in-line the file within the PDF text itself. Sphinx-doc allows for format conditionals.""","",1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-195","05/12/2017 05:39:22",3,"App Dev Guide flow is confusing when using docker ""The Environment Setup section describes how to startup a sawtooth environment easily with Docker.  This is probably the easiest/best option for most application developers. 
The """"Getting Started with Hyperledger Sawtooth"""" section describes things in a way that would be more appropriate for native Ubuntu installs or vagrant.  For example, when using Docker, you do not manually startup the validator or worry about genesis block creation as it is done for you in the docker compose file. Some of this content should be rewritten and generalized.  Other portions of the content should move into the Ubuntu install section of the Environment Setup guide.  (Similar to how we show docker compose up/down, we could show basic operations when installing via deb packages.)""","",1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-196","05/12/2017 05:47:48",1,"App Dev Guide has duplicate sections for transaction family tutorial ""The Application developer guide has both a top-level section called 'Transaction Family Tutorial' as well as the Python SDK/Transaction Processor Tutorial Python.  There should only be one TF tutorial section for python.""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-197","05/12/2017 05:54:44",2,"App Dev Guide's Transaction Processor Tutorial Python is vagrant specific ""The """"Transaction Processor Tutorial Python"""" contains vagrant-specific instructions, which are invalid in this context (vagrant is a core guide topic).  This includes path references to /project, for example.  This needs to be cleaned up so it makes sense in terms of the environment setup provided in the Application Development Guide.""","",1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-198","05/12/2017 06:00:14",2,"App Dev Guide's Transaction Processor Tutorial lacks xo CLI examples ""The tutorial as it currently exists covers the creation of a transaction processor, for which we have a full implementation to serve as an example. Instead of ending as it does currently, the tutorial should continue by showing some client examples as well, including game play using the xo CLI. This could potentially change the tutorial from a transaction processor tutorial to a more comprehensive application development tutorial covering transaction processor and client-side transaction submission and state viewing.""","",1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-209","05/12/2017 18:58:14",1,"Remove PoET as a Service ""The PoET as a Service code should be removed from the repo.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-213","05/16/2017 01:36:40",1,"Docker compose_client_1 image should initialize user keys ""After starting a network with: and logging into compose_client_1: the user is required to run 'sawtooth keygen' prior to using commands like 'xo': It would be a better experience if keys were pre-generated. Presumably we would not want these keys to be distributed with the image, but rather generated when doing the docker-compose up. 
"""," docker-compose -f docker/compose/sawtooth-demo.yaml up docker exec -it compose_client_1 bash root@7b6f84c67e9e:/# xo create game000 Traceback (most recent call last): File """"/usr/lib/python3/dist-packages/sawtooth_xo/xo_client.py"""", line 43, in __init__ with open(keyfile) as fd: FileNotFoundError: [Errno 2] No such file or directory: '/root/.sawtooth/keys/root.priv' During handling of the above exception, another exception occurred: Traceback (most recent call last): File """"/usr/lib/python3/dist-packages/sawtooth_xo/xo_cli.py"""", line 356, in main_wrapper main() File """"/usr/lib/python3/dist-packages/sawtooth_xo/xo_cli.py"""", line 339, in main do_create(args, config) File """"/usr/lib/python3/dist-packages/sawtooth_xo/xo_cli.py"""", line 274, in do_create keyfile=key_file) File """"/usr/lib/python3/dist-packages/sawtooth_xo/xo_client.py"""", line 47, in __init__ raise IOError(""""Failed to read keys."""") OSError: Failed to read keys. ",1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-219","05/16/2017 15:50:19",2,"Test Javascript SDK client modules in browser ""Test the Javascript SDK Client module in the browser, using at least one of Browserify or Webpack.   * Ensure that the client functionality works via the REST API. * Verify that the signing libraries work in browser ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-223","05/16/2017 20:00:19",5,"Provide State Delta Export Proxy Over Web Sockets ""Extend the Rest API component to provide State Export Deltas over a WebSocket.  Proposed solution here would be to have the REST API component subscribe to all state delta events.   Any client connecting via a websocket would interact in the following way:   # Send last known block id and address prefixes # Receive State Delta Event The rest api would manage the filtering, and the catch-up, but the clients should not need to send unregister messages, as this is simply a consequence of the WS closing.""","",1,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-225","05/16/2017 22:08:55",2,"App Dev Guide needs pointers to pages describing docker proxy configuration ""The application developer guide should provide links to resources which help the app developer configure proxies when using docker-compose. This should cover at least Windows and MacOS proxy configuration, and perhaps Linux.""","",1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-228","05/16/2017 22:16:19",1,"Rename compose file sawtooth-demo.yaml to sawtooth-default.yaml ""The word 'demo' should be avoided in this context; this is our recommended starting point for app developers .""","",1,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-230","05/16/2017 22:31:00",1,"App Dev Guide's docker install section needs more detail ""The current docker installation section provides a bare minimum of links and help. 
While we do not want to document docker itself, we should provide more assistance in finding the right information to get docker and docker compose installed such that the rest of our docker-compose instructions will work.""","",1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-242","05/17/2017 22:08:45",1,"Rename Config Transaction Family ""Rename {{sawtooth-config}} family to something more representative of the fact that it is a reference implementation.""","",0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-245","05/18/2017 02:38:28",1,"PoET Block Publisher - finalize_block() success case ""Check that PoET block finalze_block() only finalizes a block to be claimed when the candidate block is good and should be generated.  Check if the wait_certificate was correctly created.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-246","05/18/2017 02:46:46",1,"It is the BlockPublisher that sends the block out through Gossip, not Consensus.BlcokPublisher ""The journal diagram in [http://intelledger.github.io/architecture/journal.html] has a bug, that diagram shows an arrow from Consensus.BlcokPublisher to Gossip, but it is  is the BlockPublisher that sends the block out through Gossip.""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"STL-247","05/18/2017 04:15:35",1,"Rename the config transaction family to settings transaction family ""The term 'config' is overloaded, in that we have used it for two purposes: 1) off-chain file-based configuration; and 2) on-chain configuration. To assist with this, we could rename the config transaction family to 'settings'. In usual conversation, 'config' would refer to file-based configuration and 'settings' to on-chain settings. This should be discussed and approved by the maintainers prior to implementation.""","",1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-248","05/18/2017 14:49:17",1,"Complete Supplychain Transaction Processor Permission enforcement. ""Review the permission checks and update the transaction handler to enforce proper permission enforcement in the Transaction Handler. The code below has TBD comments in many of the places to update.        class RecordHandler(object): @classmethod def apply(cls, transaction, state): payload = json.loads(transaction.payload.decode()) LOGGER.debug(""""apply payload: %s"""", repr(payload)) tnx_action = payload.get('Action', None) txnrecord_id = payload.get('record_id', None) header = TransactionHeader() header.ParseFromString(transaction.header) tnx_originator = addressing.get_agent_index( addressing.get_agent_id(header.signer_pubkey)) # Retrieve the stored record data if an ID is provided. record_id = txnrecord_id record_store_key = record_id record_store = state_get_single(state, record_store_key) # TBD: the originator should be a registered agent # TBD: if it's not Create then the record should exist # Check Action if tnx_action == 'Create': if txnrecord_id is None: raise InvalidTransaction( 'Record id expected for CreateRecord') record_store = \{} cls.create_record(tnx_originator, record_id, payload, state, record_store) # TBD: If there are parents they should not be final and they should # have accepted applications from the txn originator. # TBD: If it is an ownership transfer then then the record should # not have custodians. # TBD: What if the sensor is already registered? Should we require # an unregister operation? 
elif tnx_action == """"CreateApplication"""": if txnrecord_id is None: raise InvalidTransaction( 'Record id expected for create_application') cls.create_application(tnx_originator, record_id, payload, state, record_store) # TBD: Check for existing application - for now only one at a # time. # TBD: applicationtype is owner or custodian, terms should be # defined # TBD: If app for ownership, then there should be no custodians elif tnx_action == """"AcceptApplication"""": if txnrecord_id is None: raise InvalidTransaction( 'Record id expected for accept_application') cls.accept_application(tnx_originator, record_id, payload, state, record_store) # TBD: must be the record holder # TBD: there must be an open application elif tnx_action == """"RejectApplication"""": if txnrecord_id is None: raise InvalidTransaction( 'Record id expected for reject_application') cls.reject_application(tnx_originator, record_id, payload, state, record_store) # TBD: must be the record holder # TBD: there must be an open application elif tnx_action == """"CancelApplication"""": if txnrecord_id is None: raise InvalidTransaction( 'Record id expected for cancel_application') cls.cancel_application(tnx_originator, record_id, payload, state, record_store) # TBD: must be the application creator # TBD: there must be an open application elif tnx_action == """"Finalize"""": if txnrecord_id is None: raise InvalidTransaction( 'Record id expected for Finalize') cls.finalize_record(tnx_originator, record_id, payload, state, record_store) # TBD: must be the record owner. The only way a custodian can # finalize a record is by making a child. # TBD: record must not be final already else: raise InvalidTransaction('Action \{} is not valid'. format(tnx_action)) # Store the record data back state_put_single(state, record_store_key, record_store) @classmethod def create_record(cls, originator, record_id, payload, state, my_store): sensor_id = payload.get('Sensor', None) sensor_idx = None if sensor_id != None: sensor_idx = addressing.get_sensor_index(sensor_id) record_info = \{} # Owner set below record_info['CurrentHolder'] = originator # Custodians set below record_info['Parents'] = payload.get('Parents', None) record_info['Timestamp'] = payload.get('Timestamp') record_info['Sensor'] = sensor_idx record_info['Final'] = False record_info['ApplicationFrom'] = None record_info['ApplicationType'] = None record_info['ApplicationTerms'] = None record_info['ApplicationStatus'] = None record_info['EncryptedConsumerAcccessible'] = None record_info['EncryptedOwnerAccessible'] = None my_store['RecordInfo'] = record_info my_store['StoredTelemetry'] = payload.get('Telemetry', \{}) my_store['DomainAttributes'] = payload.get('DomainAttributes', \{}) # Determine if this record has parents has_parents = record_info['Parents'] != None and \ len(record_info['Parents']) > 0 # If there are parents update Owner and Custodian depending on the # ApplicationType if has_parents: # Use the first parent # TBD: how to handle multiple parents? If there are no custodians # then it seems straight forward to transfer to the new owner. # Maybe if there are custodians then all but the first need to be # held by the owner and we handle it based on the first parent. # One thing that could be useful here is to be able to combine # multiple parents and keep the owner/custodians the same - right # now we can only add a custodian or pop the stack. 
parent_id = record_info['Parents'][0] parent_store = state_get_single(state, parent_id) if parent_store['RecordInfo']['ApplicationType'] == """"Owner"""": # Transfer ownership - in this case there should be # no custodians. assert len(parent_store['RecordInfo']['Custodians']) == 0 record_info['Owner'] = originator record_info['Custodians'] = [] else: # Transfer custodianship record_info['Owner'] = \ parent_store['RecordInfo']['Owner'] record_info['Custodians'] = \ list(parent_store['RecordInfo']['Custodians']) # Check the next to last element of the Custodians array. If it # is the new holder, then this is a 'pop' operation. It's also # a pop if here is one custodian and the applicant is the # owner. is_pop = False if len(record_info['Custodians']) > 1 and \ record_info['Custodians'][-2] == originator: is_pop = True elif len(record_info['Custodians']) == 1 and \ record_info['Owner'] == originator: is_pop = True if is_pop: record_info['Custodians'].pop() else: record_info['Custodians'].append(originator) else: # No parents, just create a new record record_info['Owner'] = originator record_info['Custodians'] = [] # If there are parents mark them as final. if has_parents: for parent in record_info['Parents']: parent_store = state_get_single(state, parent) parent_store['RecordInfo']['Final'] = True state_put_single(state, parent, parent_store) # Remove the record from the former owner - even if this # is a custodian transfer we need to store the new # record ID with the owner. AgentHandler.remove_record_owner(state, parent_store['RecordInfo'][""""Owner""""], parent) # Remove the previous holder AgentHandler.remove_record_holder(state, parent_store['RecordInfo'][""""CurrentHolder""""], parent) # Remove the accepted application from the new owner AgentHandler.remove_accepted_application(state, parent_store['RecordInfo']['ApplicationFrom'], parent) # Record the owner of the new record in the agent AgentHandler.add_record_owner(state, record_info[""""Owner""""], record_id, record_info[""""Owner""""] == record_info[""""CurrentHolder""""]) # Record the new record holder in the agent AgentHandler.add_record_holder(state, record_info[""""CurrentHolder""""], record_id) # Register the sensor if sensor_id != None: if state_get_single(state, sensor_idx) != None: sensor_store = state_get_single(state, sensor_idx) else: sensor_store = \{} sensor_store[""""Record""""] = record_id sensor_store[""""Name""""] = sensor_id state_put_single(state, sensor_idx, sensor_store) @classmethod def create_application(cls, originator, record_id, payload, state, my_store): record_info = my_store['RecordInfo'] # Agent ID who initiated the application record_info['ApplicationFrom'] = originator # custodian or owner record_info['ApplicationType'] = payload['ApplicationType'] # Should be encrypted? record_info['ApplicationTerms'] = payload['ApplicationTerms'] # To indicate acceptance (or not) of the application. record_info['ApplicationStatus'] = """"Open"""" # Record the new application in the current holder AgentHandler.add_open_application(state, record_info['ApplicationFrom'], record_info['CurrentHolder'], record_id) @classmethod def accept_application(cls, originator, record_id, payload, state, my_store): # Mark the application as accepted. After this the new # owner/custodian is able to make a new record with this # record as the parent. 
record_info = my_store['RecordInfo'] record_info['ApplicationStatus'] = """"Accepted"""" # Record the accepted application in the new holder AgentHandler.remove_open_application(state, record_info['ApplicationFrom'], record_info['CurrentHolder'], record_id) AgentHandler.add_accepted_application(state, record_info['ApplicationFrom'], record_id, record_info['Sensor']) @classmethod def reject_application(cls, originator, record_id, payload, state, my_store): # Mark the application as rejected. record_info = my_store['RecordInfo'] record_info['ApplicationStatus'] = """"Rejected"""" # Record the rejected application in the agent AgentHandler.remove_open_application(state, record_info['ApplicationFrom'], record_info['CurrentHolder'], record_id) @classmethod def cancel_application(cls, originator, record_id, payload, state, my_store): # Mark the application as cancelled. record_info = my_store['RecordInfo'] record_info['ApplicationStatus'] = """"Cancelled"""" # Record the cancelled application in the agent AgentHandler.remove_open_application(state, record_info['ApplicationFrom'], record_info['CurrentHolder'], record_id) @classmethod def finalize_record(cls, originator, record_id, payload, state, my_store): record_info = my_store['RecordInfo'] # TBD: check that there are no custodians before finalizing record_info['Final'] = True # Remove the record from the agent # TBD: handle any pending applications assert record_info['Owner'] == originator assert record_info['CurrentHolder'] == originator AgentHandler.remove_record_owner(state, originator, record_id) AgentHandler.remove_record_holder(state, originator, record_id)""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"STL-252","05/19/2017 17:50:58",2,"Write OMI TP spec/defintion "" Use Cases - Cataloging, attributing and distributing live DJ mixes -- Register Artist/DJ -- Register Song/Mix/Asset -- Reference Other Songs/Assets in New Asset - Commercializing mixtapes built from original material and back catalogs -- Register Artist -- Register Song/Asset -- Reference Other Songs/Assets in New Asset - Compensating musicians for visual works using their songs as data -- Provide Payment information for Songs/Assets (Who/How Much) - Identifying individuals for their contribution to single tracks in new works -- Register Individuals -- Reference Individuals as Contributors In Songs/Assets Transaction Actions - Register Entity/Identity (Artist / DJ / Individuals) -- Fields --- Name - Update by Owner/Registrar - Register Asset (Song/Mix/Compilation/Visual Work) -- Fields (Not all are req'd) -- 'Owner' of Asset (Could be label/PRO/etc.) -- Title of Work -- Type (Recording/Composition/Mix/Visual/etc.) -- ISRC, ISWC, IPI, ISNI -- Label -- Songwriter / Composer -- Publisher -- Reference Entities as Contributors In Songs/Assets -- Reference Other Assets in New Asset --- Entirely new works have no references --- Derivatives have a single reference --- Mixes/Compilations have multiple references -- Provide Payment Information for Songs/Assets --- Contact Information --- OR -- Addresses and Royalty Split/Amounts -- splits to contributors and publishers. -- Update Fields by Owner/Registrar""","",0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0 +"STL-253","05/19/2017 17:52:36",5,"Implement the OMI transaction Processor ""This can be done on top of the supplychain transaction processor or as it's own TP, which ever path is more expedient.    
""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0 +"STL-254","05/19/2017 17:53:16",5,"OMI Domain Specific Javascript Library ""Write a domain specific JS library to enable easy access to the validator data store and submission of Transactions to the validator. This should include a tutorial and sufficient documentation to enable easy use. This library should run in the broswer environment or under node.js""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0 +"STL-255","05/19/2017 17:55:43",2,"OMI Plan for deployment ""Create plan and collateral necessary to support the OMI Summer labs. This may include standing up servers to host validators, creating Docker compose files to allow the participants to run validators with the OMI TP (potentially local copies they have modified).    ""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0 +"STL-256","05/19/2017 17:57:22",2,"Plan & document training and training collateral ""What do the participants need to know? How do they bring up and interact with a tests system?    Generate any tutorials documentation and guides the OMI Summer Lab participants may need in order to use the OMI TP.     ""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0 +"STL-258","05/19/2017 18:06:18",5,"Package OMI transaction processor ""Package transaction processor in deb or docker as appropriate to execute on plan for deployment. This includes appropriate documentation.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0 +"STL-259","05/19/2017 18:07:48",2,"Package OMI domain-specific Javascript ""Package javascript library for in-browser use as appropriate to execute on the plan for deployment.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0 +"STL-260","05/19/2017 19:04:28",2,"Finalize design of api-key based authorization ""* What format will the key be in? * What endpoints and query parameters will be added? * What configuration options will be available?""","",1,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-269","05/20/2017 00:03:24",5,"Add Supplychain rest interface ""Add a domain specific rest api. Implement in accordance with the Spec writtent in STL-295 Associated python client issue is STL-491.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"STL-270","05/20/2017 00:04:22",2,"Supplychain Transaction Processor Spec ""Write a transaction processor spec""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"STL-271","05/20/2017 00:06:22",1,"Update the Supplychain TP to handle Hash collisions. ""Currently the the supplychain TP assumes all hashes are unique, and will overwrite items in the case of a collision. It needs to be updated to follow the standard pattern of using a ordered dictionary or list at each address to allow for collisions. ""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"STL-272","05/20/2017 00:08:31",1,"Update the Supplychain TP to use ProtoBuf for global state representation ""Currently the Supplychain TP uses ProtoBuf as global state storage. Protobuf is a better choice due to it's conciseness and repeat-ability. Updated as per comment below. ""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"STL-277","05/20/2017 21:37:10",3,"Implement basic intkey show, list, inc, dec, set CLI commands ""The intkey command currently lacks the ability to generate specific inc/dec/set transactions as well as the ability to view the current intkey namespace. 
Propose the following be added to the intkey CLI: intkey inc NAME intkey dec NAME intkey set NAME VALUE intkey list intkey show NAME ""","",1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-279","05/20/2017 21:50:27",1,"Update pylint support to 1.7.x and un-pin current pylint=1.6.5 in docker images ""The release of pylint 1.7.x broke builds. As a workaround, we pinned the docker pip installs at 1.6.5 so builds continue to work. Linting in the vagrant environment is currently broken as it installs 1.7.x which the code does not pass. At the completion of this task: - sawtooth-core will support the latest pylint 1.7.x - pylint will no longer be pinned to a specific version withing docker or vagrant ""","",1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-281","05/20/2017 22:02:16",3,"Publish JS SDK to NPM registry ""In order to simplify development for app developers, publish the Javascript SDK to the NPM registry.  This will allow user to simple run {{npm install -S sawtooth-sdk}} ""","",1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-286","05/22/2017 04:04:43",1,"xo show returns 404 when game doesn't exist ""xo show gives an unfriendly message when a game does not exist: % ./bin/xo show game000 Error: Error 404: Not Found Instead, this should give """"Error: no such game: game000"""".""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-289","05/22/2017 15:30:23",3,"Create Supplychain python client and CLI ""Create a command line cli to allow for submission SC transactions and query of SC State. This should also include a document that provides detailed instructions on the usages and walks thru an example use case. ""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"STL-291","05/22/2017 15:41:55",1,"Move Curve ZMQ keys out of core.py to local config files ""The hardcoded 'public network instance' Curve ZMQ keys should be moved out of the code into a local config file (depends on the completion of local config stories). Moving the keys to local config would allow a basic configuration of a secured network based on sideband sharing of a single network keypair to all participating nodes. Make sure that the choice of configuration plays nicely with the planned design for a stronger 'per-node' key scheme.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0 +"STL-297","05/22/2017 15:58:21",5,"Design a mechanism for permissioned access to a network based on validator keys and on-chain config ""Validators should be able to determine whether messages delivered to them should be handled or dropped based on a set of identities stored in the config namespace. Permissioning rules could determine the roles an identity is able to play on the network (e.g. connect, peer, publish blocks, forward blocks/batches). Message handlers should examine incoming messages against the permissioning rules and the current configuration and either permit, drop, or respond with an error to the sending node. In certain cases (e.g. attempting to peer without permission), it may make sense to forcibly close the connection.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0 +"STL-301","05/22/2017 16:52:04",5,"Implement integration test for REST API behind Apache Proxy ""Implement an integration test for the REST API, such that an instance of the REST API sits behind an Apache server, which acts as a proxy.   
Use this test to verify: * the a the rest api properly responds and produces links that are relative to the proxy, not localhost * the rest api functions behind a proxy with HTTP basic auth""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-302","05/22/2017 17:08:33",2,"Update sawtooth CLI to support HTTP Basic Auth ""Add support to the sawtooth CLI for a {{--user}} and {{--password}} optional arguments for specifying basic auth login information. This assumes that HTTP basic auth has been configured on a proxy.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0 +"STL-303","05/22/2017 17:11:47",2,"Update Intkey CLI to support Basic Auth ""Add support to the Intkey CLI for a {{--user}} and {{--password}} optional arguments for specifying basic auth login information.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-304","05/22/2017 17:14:25",2,"Update XO CLI to support Basic Auth ""Add support to the XO CLI for a {{--user}} and {{--password}} optional arguments for specifying basic auth login information.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-305","05/22/2017 17:29:51",2,"Extend integration test to include HTTPS proxy ""Extend the integration test built in STL-301 to include verifying support for proxies running HTTPS.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-307","05/22/2017 18:09:57",1,"Resolve Fossology documentation licensing ""HL Fossology report flagged unlicensed documentation files.  Have pointed out to HL that other major Apache-licensed projects do not license individual document files. Awaiting response.  Once resolved, issue JIRA item(s) as needed.""","",0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-308","05/22/2017 19:27:56",2,"Document Apache proxy REST API setup ""In the System Administrator's Guide, document the process of putting the REST API behind an Apache proxy for authorization.""","",1,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-310","05/22/2017 19:52:02",3,"Enter Documentation Epics & Stories into JIRA ""We need to fill out the JIRA epics and stories we have planned for documentation. At a minimum this include architecture and app dev guide.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0 +"STL-311","05/22/2017 20:15:13",1,"Add Previous Block Id to State Delta Event Message ""In order to improve fork resolution and lost events, include the {{previous_block_id}} in the StateDeltaEvent protobuf message. This will allow subscribers/clients to request the previous block's state deltas if the haven't missed or dropped the original event. ""","",0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-316","05/24/2017 20:45:58",1,"Update Supply chain records to use native addressing ""Supplychain records should use serial numbers or some other natural addressing scheme to look up items. In the case of raw materials this should be a lot number or maybe even the sensor tag or nfc tag attached to the item upon harvesting. ""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"STL-317","05/24/2017 20:49:47",1,"Refactor the Application to allow it to be tracked ""Encapsulate the Application to be an object in the Record, that is either not set or the current object application.  Possible distinct object in it's own namespace, that is referenced by both the Agent(applicant) and the Record.  
Extend the Record to track the history of applications and the results of those applications.     ""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"STL-322","05/25/2017 16:23:59",1,"Update Namespaces and Address Config example ""The config transaction family setting address scheme needs to be updated to match the more complex method of converting a key to an address. Suggest using the intkey family scheme as the simple example. See the config transaction family spec for details on the scheme""","",0,1,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-327","05/30/2017 21:39:56",1,"Fix --enclave_module command-line argument in PoET CLI ""PoET CLI commands/sub-commands should not have underscores.  Specifically `–poet_enclave` should be `--poet-enclave`.  Fix PoET CLI as well as all scripts and documentation.""","",1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-334","06/01/2017 04:16:37",1,"App Dev Guide: Introductory text for ""Installing and Running Sawtooth"" section ""The """"Installing and Running Sawtooth"""" section needs introductory text prior to the sections specific to docker and ubuntu. This introductory text should describe what the section covers and the choices available to the reader.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-335","06/01/2017 04:19:24",2,"App Dev Guide: section ""Using Sawtooth with Docker"" ""Complete """"Using Sawtooth with Docker"""" section with roughly the following TOC: *Using Sawtooth with Docker* """," - Introductory Text (in parent section) - Install Docker Engine and Docker Compose - Environment Setup - Download the Docker Compose File - Starting Sawtooth - Stopping Sawtooth - Logging into the Client Container - Using the CLI Commands - Creating and Submitting Transactions # intkey workload, sawtooth submit - Viewing the Block Chain # sawtooth list/show blocks, transactions - Viewing Global State # sawtooth list/show state - Connecting to the REST API - From the Client Container - From the Host Operating System - Container Overview # show log in to each, show what's running - The Client Container - The Validator Container - The REST API Container - The Config Transaction Processor Container - The Intkey Transaction Processor Container - The Xo Transaction Processor Container - Viewing Log Files ",0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-338","06/02/2017 03:50:40",2,"Handle Ethereum nonces correctly ""Currently they are ignored except when creating contract accounts. Below is a partial list of the additional work that needs to be done: * Whenever a contract is called, the nonce of the account associated with the contract must be incremented. * If a transaction submitted contains a nonce that does not match the (sender’s?) account, the transaction is invalid. * Burrow stores nonces as an int64 but since the nonce is a monotonically increasing value starting at 0, it probably makes more sense for this to be a uint64. 
Currently a type cast is performed when translating between the transaction payload and the EVM call.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-340","06/02/2017 03:58:29",3,"Research all the Ethereum/Burrow layers and decide where to ultimately position Sawtooth ""* Research how Solidity contracts are actually used by developers * Research and identify all the layers between front-end developers, Solidity developers, Ethereum and Burrow, and the EVM * Decide where it is best to position the Sawtooth Burrow-EVM integration""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-342","06/02/2017 22:40:33",2,"Refactor XO Python for improved documentation readability ""The current structure of XO python is very difficult to consume, from a learning perspective. Refactor the transaction handler to improve readability for a new sawtooth app developer.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-350","06/05/2017 05:51:15",2,"Add PoET SGX Enclave Code to sawtooth-core Repository ""The PoET SGX enclave code currently lives in a separate repository.  It should be moved to the sawtooth-core repository under `consensus/poet` according to the repository layout specified in Google doc.""","",1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-352","06/05/2017 05:53:27",2,"Update PoET SGX Build to Use Existing Debian Packages ""The PoET SGX enclave code requires the json-c and cryptopp libraries as dependencies.  Currently the code is manually built and libraries and headers are copied to a defined directory.  There exist pre-built Debian packages for crypto++ and json-c.  Migrate the code and build for PoET SGX to use these libraries.""","",1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-353","06/05/2017 05:56:41",1,"Supply All Code/Licenses to Tom for Verification ""In order to get approval for releasing the PoET SGX code for open source, all licenses for libraries and tools used need to be submitted to IP/license verifiction system.""","",1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-354","06/05/2017 05:58:03",1,"Update All PoET SGX Code License Headers ""All current PoET SGX enclave code has the standard Intel license text header.  The header needs to be updated to be open source friendly version of Intel license header.""","",1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-356","06/05/2017 06:04:49",3,"Create docker Container for Building PoET SGX Enclave ""A docker container needs to be created for building the PoET SGX enclave.  This includes automatic installation of all necessary required components (i.e., PoET SGX SDK, etc.)""","",1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-366","06/05/2017 15:03:51",2,"Enter App Dev Guide stories into JIRA based on TOC ""Based on the TOC described in STL-329, open up JIRA stories based on section. Work with the team to figure out the right granularity of the JIRA stories.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-367","06/05/2017 15:05:31",2,"Send Errors Back via REST API ""Currently, if a transaction fails, the REST API reports that the transaction is PENDING. This is misleading to the end-user. 
This should return a response with more information: """," { """"data"""": { """""""": { """"status"""": """"PENDING|COMMITED|REJECTED"""", """"detail"""": """""""" | null } } } ",0,0,1,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-368","06/05/2017 15:06:32",2,"Review App Dev Guide TOC ""Review the TOC for the App Dev Guide with the team and integrate any changes into the epic.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-369","06/05/2017 15:07:44",1,"Update TpProcessResponse to include Error message ""Add fields to the TpProcessResponse to include * {{string message}} * {{bytes data}} where message is the string message of the exception, and data is extended exception information. Both fields are optional.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-370","06/05/2017 15:11:00",2,"Update Python SDK to pass error information in responses ""Update the Python SDK to take the current exception messages and set the values on the TpProcessResponse message. This depends on STL-369's completion.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-371","06/05/2017 15:13:38",3,"Cache Invalid Transactions for Client Communication ""Cache invalid transactions to be reported back to the transaction submitter when requesting batch status.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-372","06/05/2017 15:28:51",2,"Enter Architecture Doc TOC into Epic description ""Determine the key entries and structure of the Architecture Document ToC and add to the Architecture Documentation Epic as predecessor to subsequent stories to create content.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-373","06/05/2017 15:29:36",2,"Enter Architecture Doc stories into JIRA based on TOC ""Create stories for building content for the Architecture Document based upon the agreed-upon ToC.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-374","06/05/2017 15:31:22",2,"Enter System Admin's Guide TOC into Epic description ""System Administartor's Guide -Recommended Set-up -Diagrams & Descriptions -Networking -Deployment -Upgrading -SGX -Installing from Repository -Ansible -Install files & locations -Validator Key Definition -Starting with System D -Validator Genesis -Log Configuration -Overview -Default -Configuration -Examples -Using a Proxy Server to Authorize the REST API -Forwarding URL Info with Headers -Apache Proxy Setup Guide -Monitoring & Troubleshooting ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-377","06/05/2017 16:00:14",1,"Update Java SDK to pass error information in responses ""Update the Java SDK to take the current exception message and set the values on the TpProcessResponse message. This depends on STL-369's completion.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0 +"STL-378","06/05/2017 16:01:24",2,"Update JavaScript SDK to pass error information in responses ""Update the Python SDK to take the current exception message and set the values on the TpProcessResponse message. This depends on STL-369's completion.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-379","06/05/2017 16:04:40",2,"Update Go SDK to pass error information in responses ""Update the Go SDK to take the current exception message and set the values on the TpProcessResponse message. 
This depends on STL-369's completion.""","",0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-384","06/06/2017 00:07:09",1,"[Documentation] Environment Setup - Problem adding to /etc/apt/sources.list ""Following the steps at [https://intelledger.github.io/app_developers_guide/environment_setup.html#env-ubuntu] to set up my newly created Ubuntu 16.04.02 server, I ran into the following issue: {{*tkuhrt@sawtooth*:*~*$ sudo echo """"deb http://repo.sawtooth.me/ubuntu/0.8/stable xenial universe"""" >> /etc/apt/sources.list}} {{-bash: /etc/apt/sources.list: Permission denied}} Instead, I had to manually edit /etc/apt/sources.list  ""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-385","06/06/2017 00:13:10",1,"[Documentation] Environment Setup - Sudo required ""Following [https://intelledger.github.io/app_developers_guide/intro_to_sawtooth.html#create-genesis-block] after setting up my environment on Ubuntu 16.04.02 using [https://intelledger.github.io/app_developers_guide/environment_setup.html#env-ubuntu,] I received the following error because I did not use sudo, which was not mentioned as being required on the page. {{*tkuhrt@sawtooth*:*~*$ sawtooth admin genesis}} {{Generating /var/lib/sawtooth/genesis.batch}} {{Traceback (most recent call last):}} {{  File """"/usr/lib/python3/dist-packages/sawtooth_cli/main.py"""", line 150, in main_wrapper}}{{    main()}} {{  File """"/usr/lib/python3/dist-packages/sawtooth_cli/main.py"""", line 126, in main}}{{    do_admin(args)}} {{  File """"/usr/lib/python3/dist-packages/sawtooth_cli/admin.py"""", line 25, in do_admin}}{{    do_genesis(args)}} {{  File """"/usr/lib/python3/dist-packages/sawtooth_cli/admin_command/genesis.py"""", line 75, in do_genesis}}{{    with open(genesis_file, 'wb') as out_file:}} {{PermissionError: [Errno 13] Permission denied: '/var/lib/sawtooth/genesis.batch'}}   {{*tkuhrt@sawtooth*:*~*$ ls -la /var/lib/sawtooth/}} {{total 8}} {{drwxr-xr-x  2 sawtooth sawtooth 4096 May 16 20:25 *.*}} {{drwxr-xr-x 41 root     root     4096 Jun  5 14:37 *..*}}   {{*tkuhrt@sawtooth*:*~*$ sudo sawtooth admin genesis}} {{Generating /var/lib/sawtooth/genesis.batch}}""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-386","06/06/2017 00:19:19",1,"[Documentation] App Developer Guide - Unable to create keys to run validator ""Using [https://intelledger.github.io/app_developers_guide/intro_to_sawtooth.html#start-validator,] I was unable to create keys due to an incorrect path being specified.  In addition, after talking with Shawn Amundson, it seems that the wrong command was specified.
{{*tkuhrt@sawtooth*:*~*$ sawtooth keygen --key-dir /home/ubuntu/sawtooth/keys/ validator}} {{Error: no such directory: /home/ubuntu/sawtooth/keys/}}   {{*tkuhrt@sawtooth*:*~*$ mkdir -p /home/tkuhrt/sawtooth/keys}} {{*tkuhrt@sawtooth*:*~*$ sawtooth keygen --key-dir /home/tkuhrt/sawtooth/keys validator}} {{writing file: /home/tkuhrt/sawtooth/keys/validator.priv}} {{writing file: /home/tkuhrt/sawtooth/keys/validator.pub}} However, after talking with Shawn, he recommended that I use the following command: {{*tkuhrt@sawtooth*:*~*$ sudo sawtooth admin keygen}} {{[sudo] password for tkuhrt:}} {{writing file: /etc/sawtooth/keys/validator.priv}} {{writing file: /etc/sawtooth/keys/validator.pub}}""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-387","06/06/2017 00:24:36",1,"[Documentation] Creating and Submitting Transactions - Unable to submit transactions ""Following: [https://intelledger.github.io/app_developers_guide/intro_to_sawtooth.html#creating-and-submitting-transactions] on my Ubuntu 16.04.02 install, I do not know how to submit transactions using intkey load. *tkuhrt@sawtooth*:*~*$ intkey load -f batches.intkey [15:32:46 WARNING load] Unable to connect to """"http://localhost:8080"""": make sure URL is correct [15:32:46 WARNING load] Unable to connect to """"http://localhost:8080"""": make sure URL is correct [15:32:47 WARNING load] Unable to connect to """"http://localhost:8080"""": make sure URL is correct [15:32:47 WARNING load] Unable to connect to """"http://localhost:8080"""": make sure URL is correct [15:32:47 WARNING load] Unable to connect to """"http://localhost:8080"""": make sure URL is correct [15:32:47 WARNING load] Unable to connect to """"http://localhost:8080"""": make sure URL is correct [15:32:47 WARNING load] Unable to connect to """"http://localhost:8080"""": make sure URL is correct [15:32:47 WARNING load] Unable to connect to """"http://localhost:8080"""": make sure URL is correct [15:32:47 WARNING load] Unable to connect to """"http://localhost:8080"""": make sure URL is correct [15:32:47 WARNING load] Unable to connect to """"http://localhost:8080"""": make sure URL is correct [15:32:47 WARNING load] Unable to connect to """"http://localhost:8080"""": make sure URL is correct batches: 1001 batch/sec: 3262.4823347214206 I tried using -U with the following addresses as obtained from previous steps, but I still did not see the transactions loading: -U tcp://127.0.0.1:40000 (currently running transaction processor) -U tcp://127.0.0.1:8800 (currently running validator) Which address should I send to?  Regardless of address specified, nothing happened on the validator window.""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-393","06/07/2017 18:15:21",2,"Publish PoET Specification ""Revise format / content. Correct errors like zTest algorithm (one value is initialized and never gets updated).""","",1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-394","06/07/2017 22:14:31",2,"Validator Registry does not remove Validator ID on re-enrollment ""consensus.poet.families.sawtooth_validator_registry.validator_registry.processor.handler._update_validator_state(...) Manages structures associated with validator admission. On re-enrolling an existing anti-sybil ID (EPID), with a new Validator ID (OPK), the previous Validator ID will not be expunged from state, and the \{anti-sybil ID: Validator ID} map will not be updated appropriately. 
Correct the method to remove the old Validator ID record from state and update the \{anti-sybil ID: Validator ID} map to include the new Validator ID.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-396","06/08/2017 00:06:39",3,"Write Spec for Private UTXO Transaction Processor. ""Create a Sawtooth-style spec based on the private UTXO transaction document. ""","",1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0 +"STL-403","06/10/2017 19:50:19",2,"Update context manager to enforce address characteristics ""The context manager should enforce that gets and sets only use valid addresses: those having 70 lowercase hex characters.""","",0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-423","06/14/2017 14:19:58",2,"Create section for Introduction to the XO Transaction Family ""Create new or reuse material to complete the following sections:    - What is Xo (to inform users that have not heard of the game)    - How to play Xo (How to use the client)    - Xo Transaction Family Specification (include link to existing spec)""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-438","06/14/2017 14:28:29",1,"Remove Javascript example from “Development without an SDK” ""* Remove template, replace with a simple RST * Evaluate whether the text needs to be tweaked accordingly""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-444","06/14/2017 20:38:17",3,"Set-up LR30-SGX ""Set up 5 machines.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-446","06/14/2017 20:49:39",2,"Determine all the validator stat metrics to collect ""The output of this work should be a list of metrics the validator could/should log. Descriptions of the metrics can be brief. It does not need to include OS metrics, as those are collected with Telegraf outside of the validator process. This list does not need to be 100% complete - we can add metrics later. However, it does need to be close enough to inform the design of the stats/metrics library; in particular, we need to know the different types of metrics. For example, we know some metrics are additive (such as number of blocks published) and some might be set (current length of chain), while others might be an average (txn rate).""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-448","06/15/2017 14:31:47",2,"Design EVM permissioning model ""Design a model for setting Burrow-EVM account permissions within Sawtooth that can be integrated with the Burrow-EVM permission model (i.e., the code that already exists within the Burrow-EVM that is currently turned off). Check with the Burrow team to see if this scheme has changed in v0.17.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-449","06/15/2017 14:37:44",3,"Implement EVM permissions ""Implement the permissions model defined in STL-448""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-450","06/15/2017 14:49:20",1,"Define Burrow-EVM transaction receipts ""After a transaction is executed, additional “substate” information should be stored about the transaction execution including: * Return Value from the EVM (see below). * Any logs that were produced (see below). * The address of the contract that was created, if applicable.
* Any other information that can be stored from: [https://github.com/ethereum/wiki/wiki/JSON-RPC#eth_gettransactionreceipt] This task should determine what information needs to be stored in the transaction receipts and define how the receipts will be stored so they can be searched and indexed efficiently.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0 +"STL-451","06/15/2017 14:55:14",0,"Implement stored EVM return values ""Duplicate of STL-459""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-452","06/15/2017 15:04:18",3,"Implement logging event handler within Burrow-EVM ""A set of LOG instructions are defined by the EVM which can be used to place information into a separate “substate”. It is intended that these logs be stored in a manner that is easily indexed and searched. Smart contract languages have adopted the convention of using these LOG instructions to set up event pub/sub patterns. The LOG opcodes in the burrow-evm rely on an event-handling system that is part of the burrow project. This will likely need to be preserved in order to use the EVM as it exists in the burrow project. This event handling should be preserved if possible and used to forward events to the Sawtooth event system.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-453","06/15/2017 15:24:40",1,"Update txn inputs/outputs to enable contract chaining ""Currently, contract chaining is not possible because only the address of the sender account and receiver account are passed as inputs and outputs when submitting a transaction. To support contract chaining, it should be made possible to pass the burrow-EVM namespace prefix as inputs and output when submitting a transaction. The inputs/outputs are set here: [https://github.com/hyperledger/sawtooth-core/blob/master/families/burrow_evm/src/sawtooth_burrow_evm/client/client.go#L60] [https://github.com/hyperledger/sawtooth-core/blob/master/families/burrow_evm/src/sawtooth_burrow_evm/client/client.go#L106]  ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-454","06/15/2017 15:29:00",2,"Add receipt lookup to seth ""Add features to seth to: # Lookup a transaction receipt # When waiting for a submitted transaction to commit, print its receipt""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-455","06/15/2017 15:36:47",5,"Design the Burrow-EVM JSON-RPC ""Design the Burrow-EVM JSON-RPC server so that it: # Is concurrent and handles requests efficiently # Can be easily extended with new methods, as we will most likely support multiple JSON-RPC specs as projects diverge # Reuses existing libraries (see https://github.com/tendermint/go-rpc)""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-459","06/15/2017 15:42:04",3,"Implement Seth transaction receipts ""Create Seth transaction receipts""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-460","06/15/2017 20:01:49",3,"Python SDK: Add signing/encoder client functionality ""The client features of the Python SDK should be updated to match the functionality of the JavaScript SDK. Specifically, this means adding a module for creating/encoding Batches and Transactions, as well as a signing module which can sign, verify, generate private keys, and generate public keys. Decisions must be made about whether or not the root level signing module already satisfies those requirements, or should be replaced by this SDK.
Add any appropriate unit tests for the new functionality.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-470","06/18/2017 15:59:00",2,"Review and Update BlockValidator Batch and Transaction validation ""Remove the completeness check (that is enforced by the Completer), and verify that duplicate transaction and batch checking is implemented and working correctly.""","",0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"STL-471","06/18/2017 16:52:57",2,"Debug post to batches&wait= not returning promptly for the SupplyChain client ""When the supplychain client posts to the REST API batches endpoint with a wait parameter, it does not return until the timeout expires even though the batch commits.    Details from the Slack chat about the issue:   ```Creating latest_validator_1 ... done Creating latest_restapi_1 ... done Creating latest_supplychain_unit_test_1 ... done Attaching to latest_validator_1, sawtooth-tp_settings-default, latest_tp_supplychain_python_1, latest_restapi_1, latest_supplychain_unit_test_1 validator_1 | writing file: /etc/sawtooth/keys/validator.priv validator_1 | writing file: /etc/sawtooth/keys/validator.pub validator_1 | creating key directory: /root/.sawtooth/keys validator_1 | writing file: /root/.sawtooth/keys/my_key.priv validator_1 | writing file: /root/.sawtooth/keys/my_key.pub validator_1 | Generated config-genesis.batch validator_1 | Processing config-genesis.batch... validator_1 | Generating /var/lib/sawtooth/genesis.batch validator_1 | [00:19:01.534 INFO path] Skipping path loading from non-existent config file: /etc/sawtooth/path.toml validator_1 | [00:19:01.535 INFO validator] Skipping validator config loading from non-existent config file: /etc/sawtooth/validator.toml validator_1 | [00:19:01.535 INFO keys] Loading signing key: /etc/sawtooth/keys/validator.priv validator_1 | [00:19:01.537 INFO cli] config [path]: config_dir = """"/etc/sawtooth"""" validator_1 | [00:19:01.538 INFO cli] config [path]: key_dir = """"/etc/sawtooth/keys"""" validator_1 | [00:19:01.538 INFO cli] config [path]: data_dir = """"/var/lib/sawtooth"""" validator_1 | [00:19:01.538 INFO cli] config [path]: log_dir = """"/var/log/sawtooth"""" validator_1 | [00:19:01.538 WARNING cli] Network key pair is not configured, Network communications between validators will not be authenticated or encrypted.
validator_1 | [00:19:01.538 DEBUG core] database file is /var/lib/sawtooth/merkle-00.lmdb validator_1 | [00:19:01.539 DEBUG core] state delta store file is /var/lib/sawtooth/state-deltas-00.lmdb validator_1 | [00:19:01.541 DEBUG core] block store file is /var/lib/sawtooth/block-00.lmdb validator_1 | [00:19:01.548 DEBUG selector_events] Using selector: ZMQSelector validator_1 | [00:19:01.549 INFO interconnect] Listening on tcp://eth0:4004 validator_1 | [00:19:01.549 DEBUG dispatch] Added send_message function for connection ServerThread validator_1 | [00:19:01.551 DEBUG genesis] genesis_batch_file: /var/lib/sawtooth/genesis.batch validator_1 | [00:19:01.551 DEBUG genesis] block_chain_id: not yet specified validator_1 | [00:19:01.551 INFO genesis] Producing genesis block from /var/lib/sawtooth/genesis.batch validator_1 | [00:19:01.552 DEBUG genesis] Adding 1 batches validator_1 | [00:19:01.567 DEBUG executor] no transaction processors registered for processor type sawtooth_settings: 1.0: application/protobuf validator_1 | [00:19:01.574 INFO executor] Waiting for transaction processor (sawtooth_settings, 1.0, application/protobuf) validator_1 | [00:19:01.611 DEBUG interconnect] ServerThread receiving TP_REGISTER_REQUEST message: 125 bytes validator_1 | [00:19:01.617 INFO processor_handlers] registered transaction processor: connection_id=f099cda6b11521e5d8669f59dc2d227a3066ef682512d8d738e0006c91ae7aaf19b017664f2cd1ab78ffeafdb99754a5b4321cec78203a81c73b2ddf749f5679, family=sawtooth_settings, version=1.0, encoding=application/protobuf, namespaces=['000000'] validator_1 | [00:19:01.621 DEBUG interconnect] ServerThread sending TP_PROCESS_REQUEST to b'2bd9fdded71b4ba1' validator_1 | [00:19:01.630 DEBUG interconnect] ServerThread sending TP_REGISTER_RESPONSE to b'2bd9fdded71b4ba1' validator_1 | [00:19:01.647 DEBUG interconnect] ServerThread receiving TP_STATE_GET_REQUEST message: 177 bytes validator_1 | [00:19:01.655 DEBUG tp_state_handlers] GET: [('000000a87cb5eafdcca6a8cde0fb0dec1400c5ab274474a6aa82c12840f169a04216b7', None)] validator_1 | [00:19:01.660 DEBUG interconnect] ServerThread sending TP_STATE_GET_RESPONSE to b'2bd9fdded71b4ba1' validator_1 | [00:19:01.676 DEBUG interconnect] ServerThread receiving TP_STATE_GET_REQUEST message: 177 bytes validator_1 | [00:19:01.677 DEBUG tp_state_handlers] GET: [('000000a87cb5eafdcca6a8cde0fb0dec1400c5ab274474a6aa82c1918142591ba4e8a7', None)] validator_1 | [00:19:01.680 DEBUG interconnect] ServerThread sending TP_STATE_GET_RESPONSE to b'2bd9fdded71b4ba1' validator_1 | [00:19:01.683 DEBUG interconnect] ServerThread receiving TP_STATE_GET_REQUEST message: 177 bytes validator_1 | [00:19:01.699 DEBUG tp_state_handlers] GET: [('000000a87cb5eafdcca6a8cde0fb0dec1400c5ab274474a6aa82c12840f169a04216b7', None)] validator_1 | [00:19:01.704 DEBUG interconnect] ServerThread sending TP_STATE_GET_RESPONSE to b'2bd9fdded71b4ba1' validator_1 | [00:19:01.706 DEBUG interconnect] ServerThread receiving TP_STATE_SET_REQUEST message: 293 bytes validator_1 | [00:19:01.707 DEBUG tp_state_handlers] SET: ['000000a87cb5eafdcca6a8cde0fb0dec1400c5ab274474a6aa82c12840f169a04216b7'] validator_1 | [00:19:01.713 DEBUG interconnect] ServerThread sending TP_STATE_SET_RESPONSE to b'2bd9fdded71b4ba1' validator_1 | [00:19:01.715 DEBUG interconnect] ServerThread receiving TP_PROCESS_RESPONSE message: 69 bytes validator_1 | [00:19:01.720 DEBUG interconnect] message round trip: TP_PROCESS_RESPONSE 0.0964360237121582 validator_1 | [00:19:01.722 DEBUG genesis] Produced state hash 
63250521e8448d4aabe007198b2be3cdf2a498c30a745514e8fc81d5a63b0d3c for genesis block. validator_1 | [00:19:01.726 INFO genesis] Genesis block created: f57d5892(0, S:63250521, P:00000000) validator_1 | [00:19:01.726 DEBUG chain_id_manager] writing block chain id validator_1 | [00:19:01.727 DEBUG genesis] Deleting genesis data. validator_1 | [00:19:01.727 DEBUG selector_events] Using selector: ZMQSelector validator_1 | [00:19:01.728 INFO interconnect] Listening on tcp://eth0:8800 validator_1 | [00:19:01.728 DEBUG dispatch] Added send_message function for connection ServerThread validator_1 | [00:19:01.732 INFO chain] Chain controller initialized with chain head: f57d5892(0, S:63250521, P:00000000) validator_1 | [00:19:01.734 INFO publisher] Now building on top of block: f57d5892(0, S:63250521, P:00000000) validator_1 | [00:19:01.891 DEBUG interconnect] ServerThread receiving TP_REGISTER_REQUEST message: 144 bytes validator_1 | [00:19:01.893 INFO processor_handlers] registered transaction processor: connection_id=cb5834ba71895c24d3883423cc71365b2b55ef2c2ab01cda03cc3ca51d3efc1fb4d40ff658c3557628e4b4d065e6422c4547ebe030dee300cf1d58c0dd7e9d5e, family=sawtooth_supplychain, version=0.5, encoding=application/protobuf, namespaces=['160343', '466f14', '8728e8'] validator_1 | [00:19:01.893 DEBUG interconnect] ServerThread sending TP_REGISTER_RESPONSE to b'763b0695f1394257' tp_supplychain_python_1 | [00:19:01.882 DEBUG selector_events] Using selector: ZMQSelector tp_supplychain_python_1 | [00:19:01.895 INFO core] register attempt: OK sawtooth-tp_settings-default | [00:19:00 DEBUG selector_events] Using selector: ZMQSelector restapi_1 | [00:19:01.984 DEBUG selector_events] Using selector: EpollSelector sawtooth-tp_settings-default | [00:19:01 INFO core] register attempt: OK sawtooth-tp_settings-default | [00:19:01 DEBUG core] received message of type: TP_PROCESS_REQUEST restapi_1 | [00:19:01.985 INFO rest_api] Creating handlers for validator at tcp://validator:4004 restapi_1 | [00:19:01.987 INFO rest_api] Starting REST API on restapi:8080 sawtooth-tp_settings-default | [00:19:01 INFO handler] Setting setting sawtooth.settings.vote.authorized_keys changed from None to 02845303a294114701414a14dd9f3ca54be691523c94ca1074417241daa3a5ad66 supplychain_unit_test_1 | INFO:test_supplychain_integration:_agent_create: 0256f9f2891a05594b932353b4e56a20bec120a7bffed83eb37dd7705587070332 supplychain_unit_test_1 | INFO:root:agent_create 0256f9f2891a05594b932353b4e56a20bec120a7bffed83eb37dd7705587070332 16034371e3ee7dcce5b5ad8fc29fa478bb337a26cc8665744a0443f6ae19e75d020158 supplychain_unit_test_1 | INFO:sawtooth_supplychain.client:http://restapi:8080/batches?wait=10 supplychain_unit_test_1 | INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): restapi restapi_1 | [00:19:02.447 INFO rest_api] Request abfade: """"POST /batches?wait=10"""" from 172.18.0.6 restapi_1 | [00:19:02.448 DEBUG route_handlers] Sending CLIENT_BATCH_SUBMIT_REQUEST request to validator validator_1 | [00:19:02.450 DEBUG interconnect] ServerThread receiving CLIENT_BATCH_SUBMIT_REQUEST message: 1042 bytes validator_1 | [00:19:02.454 DEBUG interconnect] ServerThread sending TP_PROCESS_REQUEST to b'763b0695f1394257' tp_supplychain_python_1 | [00:19:02.455 DEBUG core] received message of type: TP_PROCESS_REQUEST tp_supplychain_python_1 | [00:19:02.455 DEBUG handler] SupplyChainHandler.apply action: AGENT_CREATE tp_supplychain_python_1 | [00:19:02.456 DEBUG handler] _agent_create: 
0256f9f2891a05594b932353b4e56a20bec120a7bffed83eb37dd7705587070332 16034371e3ee7dcce5b5ad8fc29fa478bb337a26cc8665744a0443f6ae19e75d020158 validator_1 | [00:19:02.457 DEBUG interconnect] ServerThread receiving TP_STATE_GET_REQUEST message: 177 bytes validator_1 | [00:19:02.457 DEBUG tp_state_handlers] GET: [('16034371e3ee7dcce5b5ad8fc29fa478bb337a26cc8665744a0443f6ae19e75d020158', None)] validator_1 | [00:19:02.458 DEBUG interconnect] ServerThread sending TP_STATE_GET_RESPONSE to b'763b0695f1394257' validator_1 | [00:19:02.461 DEBUG interconnect] ServerThread receiving TP_STATE_SET_REQUEST message: 259 bytes validator_1 | [00:19:02.462 DEBUG tp_state_handlers] SET: ['16034371e3ee7dcce5b5ad8fc29fa478bb337a26cc8665744a0443f6ae19e75d020158'] validator_1 | [00:19:02.464 DEBUG interconnect] ServerThread sending TP_STATE_SET_RESPONSE to b'763b0695f1394257' validator_1 | [00:19:02.468 DEBUG interconnect] ServerThread receiving TP_PROCESS_RESPONSE message: 69 bytes validator_1 | [00:19:02.469 DEBUG interconnect] message round trip: TP_PROCESS_RESPONSE 0.01513814926147461 validator_1 | [00:19:02.554 INFO publisher] Claimed Block: 641d07a6(1, S:98a859f6, P:f57d5892) validator_1 | [00:19:02.555 INFO publisher] Block publishing is suspended until new chain head arrives. validator_1 | [00:19:02.555 DEBUG chain] Block received: 641d07a6(1, S:98a859f6, P:f57d5892) validator_1 | [00:19:02.556 INFO chain] Starting block validation of : 641d07a6(1, S:98a859f6, P:f57d5892) validator_1 | [00:19:02.558 DEBUG interconnect] ServerThread sending TP_PROCESS_REQUEST to b'763b0695f1394257' tp_supplychain_python_1 | [00:19:02.559 DEBUG core] received message of type: TP_PROCESS_REQUEST tp_supplychain_python_1 | [00:19:02.560 DEBUG handler] SupplyChainHandler.apply action: AGENT_CREATE tp_supplychain_python_1 | [00:19:02.561 DEBUG handler] _agent_create: 0256f9f2891a05594b932353b4e56a20bec120a7bffed83eb37dd7705587070332 16034371e3ee7dcce5b5ad8fc29fa478bb337a26cc8665744a0443f6ae19e75d020158 validator_1 | [00:19:02.562 DEBUG interconnect] ServerThread receiving TP_STATE_GET_REQUEST message: 177 bytes validator_1 | [00:19:02.562 DEBUG tp_state_handlers] GET: [('16034371e3ee7dcce5b5ad8fc29fa478bb337a26cc8665744a0443f6ae19e75d020158', None)] validator_1 | [00:19:02.563 DEBUG interconnect] ServerThread sending TP_STATE_GET_RESPONSE to b'763b0695f1394257' validator_1 | [00:19:02.566 DEBUG interconnect] ServerThread receiving TP_STATE_SET_REQUEST message: 259 bytes validator_1 | [00:19:02.566 DEBUG tp_state_handlers] SET: ['16034371e3ee7dcce5b5ad8fc29fa478bb337a26cc8665744a0443f6ae19e75d020158'] validator_1 | [00:19:02.567 DEBUG interconnect] ServerThread sending TP_STATE_SET_RESPONSE to b'763b0695f1394257' validator_1 | [00:19:02.569 DEBUG interconnect] ServerThread receiving TP_PROCESS_RESPONSE message: 69 bytes validator_1 | [00:19:02.571 INFO dev_mode_consensus] Choose new fork 641d07a6: New fork head switches consensus to DevMode validator_1 | [00:19:02.571 INFO chain] on_block_validated: 641d07a6(1, S:98a859f6, P:f57d5892) validator_1 | [00:19:02.572 INFO chain] Chain head updated to: 641d07a6(1, S:98a859f6, P:f57d5892) validator_1 | [00:19:02.572 INFO publisher] Now building on top of block: 641d07a6(1, S:98a859f6, P:f57d5892) validator_1 | [00:19:02.575 DEBUG chain] Verify descendant blocks: 641d07a6(1, S:98a859f6, P:f57d5892) ([]) validator_1 | [00:19:02.575 DEBUG state_delta_processor] Publishing state delta from 641d07a6(1, S:98a859f6, P:f57d5892) validator_1 | [00:19:02.575 INFO chain] Finished block validation of: 
641d07a6(1, S:98a859f6, P:f57d5892) validator_1 | [00:19:02.576 DEBUG interconnect] message round trip: TP_PROCESS_RESPONSE 0.011431694030761719 validator_1 | [00:19:12.454 DEBUG interconnect] ServerThread sending CLIENT_BATCH_SUBMIT_RESPONSE to b'3ac55e5d8d8b469d' restapi_1 | [00:19:12.455 DEBUG route_handlers] Received CLIENT_BATCH_SUBMIT_RESPONSE response from validator with status OK restapi_1 | [00:19:12.457 INFO rest_api] Response abfade: 201 status, 175B size, in 10.009s supplychain_unit_test_1 | INFO:test_supplychain_integration:\{ supplychain_unit_test_1 | """"link"""": """"http://restapi:8080/batches?id=c22a771cd7125d440f62dc4849ddfe4ca23100af7775517f635b5a436e2789516fc02691c06e793fb47057ae3276ffd1d5469c78f3f46afd8ad04718da88b332"""" supplychain_unit_test_1 | } supplychain_unit_test_1 | .set username: root supplychain_unit_test_1 | set url: 127.0.0.1:8080 supplychain_unit_test_1 | writing file: /root/.sawtooth/keys/root.priv supplychain_unit_test_1 | writing file: /root/.sawtooth/keys/root.addr supplychain_unit_test_1 | supplychain_unit_test_1 | -------------------------------------------------------------------Removing latest_supplychain_unit_test_1 ... done Removing latest_restapi_1 ... done Removing sawtooth-tp_settings-default ... done Removing latest_tp_supplychain_python_1 ... done Removing latest_validator_1 ... done Removing network latest_default Add Comment Click to expand inline 131 lines cintel [5:21 PM] @zac-intel there you go, there are still some of my debug logs in there. I am tracking down an issue with my address generation. mitchell-intel [5:39 PM] Oh well, I tried cintel [5:44 PM] :slightly_smiling_face: zac-intel [7:54 PM] @cintel Everything appears to be working in these logs [7:55] Unless I'm missing something [7:55] restapi_1                | [00:19:02.447 INFO     rest_api] Request  abfade: """"POST /batches?wait=10"""" from 172.18.0.6 restapi_1                | [00:19:02.448 DEBUG    route_handlers] Sending CLIENT_BATCH_SUBMIT_REQUEST request to validator validator_1              | [00:19:02.450 DEBUG    interconnect] ServerThread receiving CLIENT_BATCH_SUBMIT_REQUEST message: 1042 bytes [7:56] validator_1              | [00:19:02.575 INFO     chain] Finished block validation of: 641d07a6(1, S:98a859f6, P:f57d5892) validator_1              | [00:19:02.576 DEBUG    interconnect] message round trip: TP_PROCESS_RESPONSE 0.011431694030761719 validator_1              | [00:19:12.454 DEBUG    interconnect] ServerThread sending CLIENT_BATCH_SUBMIT_RESPONSE to b'3ac55e5d8d8b469d' restapi_1                | [00:19:12.455 DEBUG    route_handlers] Received CLIENT_BATCH_SUBMIT_RESPONSE response from validator with status OK restapi_1                | [00:19:12.457 INFO     rest_api] Response abfade: 201 status, 175B size, in 10.009s (edited) [7:58] Ah [7:58] I see, the response came back 10 seconds after the block was validated cintel [8:00 PM] yes, that is exactly what I am seeing. 
In these logs I submitted with http://restapi:8080/batches?wait=10 zac-intel [8:05 PM] Works fine on master, using intkey and the sawtooth CLI [8:07] Although someone is sending a bizarre number of state requests [8:07] It's not going through the REST API [8:09] I guess that's just the Intkey TP doing it's thing zac-intel [8:12 PM] added this Plain Text snippet: Validator Logs [03:06:21.468 DEBUG interconnect] ServerThread receiving CLIENT_BATCH_SUBMIT_REQUEST message: 3038 bytes [03:06:21.473 DEBUG interconnect] ServerThread sending CLIENT_BATCH_SUBMIT_RESPONSE to b'f8e79be9bbca42b0' [03:06:21.475 DEBUG interconnect] ServerThread sending TP_PROCESS_REQUEST to b'102ca0fcdc2848c8' . . . Add Comment Click to expand inline 23 lines zac-intel [8:13 PM] added this Plain Text snippet: REST API Logs [03:06:21.465 INFO rest_api] Request a7d001: """"POST /batches"""" from 127.0.0.1 [03:06:21.466 DEBUG route_handlers] Sending CLIENT_BATCH_SUBMIT_REQUEST request to validator [03:06:21.477 DEBUG route_handlers] Received CLIENT_BATCH_SUBMIT_RESPONSE response from validator with status OK [03:06:21.477 INFO rest_api] Response a7d001: 202 status, 311B size, in 0.012s [03:06:21.487 INFO rest_api] Request a7d0db: """"POST /batch_status?wait=10"""" from 127.0.0.1 Add Comment Click to expand inline 8 lines zac-intel [8:14 PM] added this Plain Text snippet: CLI Commands $ intkey create_batch Writing to batches.intkey... $ sawtooth batch submit -f batches.intkey --wait 10 batches: 2, batch/sec: 48.52917729684074 All batches committed in 0.157849 sec Add Comment zac-intel [8:14 PM] Let me double check and make sure the POST wait works as well cintel [8:15 PM] There is not wait parameter on that post in my logs it reports as 'restapi_1                | [00:19:02.447 INFO     rest_api] Request  abfade: """"POST /batches?wait=10"""" from 172.18.0.6' [8:15] oops I missed the 2nd post in your logs. [8:16] That is to batch_status though, not batches [8:17] an off topic question, why is batch_status a post and not a get? zac-intel [8:17 PM] Yes, CLI uses a wait on batch_status [8:17] Sam mechanism though [8:18] And I just did a POST manually and it worked fine [8:18] $ curl --request POST \ >     --header """"Content-Type: application/octet-stream"""" \ >     --data-binary @batches.intkey \ >     """"http://localhost:8080/batches?wait=10"""" \{  """"link"""": """"http://localhost:8080/batches?id=9a8d32b69d4a14ce49e40c411bdc8ed8befa966acd26a71959fde8355963d34c17fe51e9aa04a011f9bd1d1b6435c9c55637b4d248010adb94efecdf9d3b4325,5435e3cf7f9e2486fe62fed8757a36ec5426dffef010d55a2cdb053ba5df8d5b38deab0bcb8c89bda0710cd6c7dc8fff0f3ea93e81a1f6842a7243bd57e0ba95"""" } [8:19] [03:16:49.300 INFO     rest_api] Request  079ccf: """"POST /batches?wait=10"""" from 127.0.0.1 [03:16:49.302 DEBUG    route_handlers] Sending CLIENT_BATCH_SUBMIT_REQUEST request to validator [03:16:49.657 DEBUG    route_handlers] Received CLIENT_BATCH_SUBMIT_RESPONSE response from validator with status OK [03:16:49.658 INFO     rest_api] Response 079ccf: 201 status, 306B size, in 0.358s [8:20] Side note: /batch_status accepts both a GET and a POST [8:20] The GET accepts a list of ids in a comma-separated query parameter id=a,b,c [8:21] However, since our id's are 128 characters, and URLs top out at 2048, that limits you to requesting the status of about 15 batches at a time cintel [8:21 PM] got it. 
zac-intel [8:21 PM] So the POST is a work around [8:22] you send a JSON body [""""a"""",""""b"""",""""c""""] and you can check on as many ids as you like [8:23] It is semantically inaccurate though [8:23] Anywho, your problem is weird [8:24] Your TP shouldn't be able to touch the wait mechanism [8:24] I would pull from master and make sure you have pristine validator code [8:25] If that's not it, maybe something weird with Docker? [8:25] I dunno though, by the logs the validator is clearly just hanging out, not doing anything [8:26] It's not like it's getting lost in some Docker middle-man or something cintel [8:33 PM] yeah, I have a lot of docker wrangling going on. [8:33] I agree my TP should not be able to affect that mechanism. [8:34] I'll shoot you a link to by branch once I get it cleaned up a bit more. zac-intel [8:36 PM] If you want to take a look in your validator, the relevant code is here:  • https://github.com/hyperledger/sawtooth-core/blob/master/validator/sawtooth_validator/journal/block_store.py#L138  • https://github.com/hyperledger/sawtooth-core/blob/master/validator/sawtooth_validator/state/client_handlers.py#L515  • https://github.com/hyperledger/sawtooth-core/blob/master/validator/sawtooth_validator/state/client_handlers.py#L534""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"STL-476","06/19/2017 02:32:54",3,"Handle txn in a predecessor's batch failing by replay of subsequent txns ""For the parallel-scheduler, if [a, b, c] are a batch and [d, e, f] are a batch and d implicitly depends on a, if a is valid, but c is invalid, d and any subsequent txns should be re-run as if 'a' hadn't been run. This will cause more txns than are in the scheduler to be run, but there should be only 1 final txn_result per txn. After this task there should be unittests and yaml based scheduler tests for this functionality and the parallel scheduler should be passing them.""","",0,0,1,0,0,0,0,0,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-478","06/19/2017 04:46:40",2,"Create Package for IAS Client ""The IAS client, because it is used by both the SGX enclave and the IAS proxy, needs to be put into its own package.""","",1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-479","06/19/2017 04:48:14",3,"Create Unit Tests for IAS Client ""The IAS client, once moved to its own package, needs its own unit tests to verify that it behaves properly in the face of network errors (timeouts, HTTP error return codes, etc.).""","",1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-480","06/19/2017 04:50:05",2,"Create Package for IAS Proxy ""Once the PoET SGX code has been committed to the repository, the IAS proxy needs its own separate package.""","",1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-484","06/19/2017 05:00:33",2,"Investigate SGX SDK Simulator ""Determine and document the design of what would be needed to remove the standalone PoET simulator and instead use the SGX SDK simulator.""","",1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-485","06/19/2017 13:44:04",2,"Update yaml scheduler tester to test gets from state ""To find errors and provide more of a guarantee that the state """"seen"""" by the txn processor is correct in the parallel case, it would be preferable, while running the scheduler using the tester's run_scheduler method, to 'get' values that were inputs for the txn and assert that they are correct.
When processing the yaml, collect the valid 'set' values to be asserted when run_scheduler is called.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-486","06/19/2017 13:48:36",3,"Update the ParallelScheduler to fail-fast on a batch ""Since batches are atomic, it would be preferable to not run any txns that don't need to be run. So after a txn fails, provide an INVALID_BATCH status to the TransactionExecutionResult. Provide unit-tests.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-487","06/19/2017 13:57:13",5,"Add another run_scheduler method to yaml scheduler tester ""The order of next_transaction, set_transaction_result, finalize, and complete can vary widely, with only some orderings being possible.  The run_scheduler method is currently greedy on next_transaction, getting the next possible transaction after each subsequent set_transaction_result. Another method should be written that is greedy on set_transaction_result.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-488","06/19/2017 13:59:30",3,"Write more yaml scheduler tests ""These tests should include namespace inputs and outputs, mixed sizes of batches, batches that have predecessor transactions that are valid, with subsequent transactions in a batch that fail.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-495","06/19/2017 18:17:13",2,"Implement BLOCKHASH, TIMESTAMP, and NUMBER EVM opcodes ""These opcodes currently do not work correctly. The information needed to implement them can be obtained by querying the special addresses defined by the BlockInfo transaction family.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-497","06/19/2017 20:28:33",3,"Define Database Schema ""Define the database schema to support State Delta Subscription for the supply chain family. Minimally, there needs to be a table {{block}} which tracks the history of blocks: The remaining tables should support [slowly changing dimensions type 2|https://en.wikipedia.org/wiki/Slowly_changing_dimension#Type_2:_add_new_row]. This means that the tables should include, in addition to the domain-specific fields, the following: where id is a generated primary key, and indexes should be created on the natural key of the rows. """," block_id char(128), block_num integer, state_root_hash char(64) id BIGSERIAL CONSTRAINT pk_setting PRIMARY KEY, start_block_num integer, end_block_num integer ",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"STL-498","06/19/2017 21:05:22",5,"Create State Delta Subscriber ""Create a delta state subscriber that will consume state values for the SupplyChain transaction family and store them in a database. This task involves creating the subscriber and processing the events to insert values. This client should handle disconnects and reconnects of the validator, delta catch-up, as well as fork resolution.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0 +"STL-499","06/19/2017 23:11:44",3,"Update ParallelScheduler to handle explicit dependencies ""If a txn has explicit dependencies, it should first look to see if that txn is before it* in the scheduler, and then, only if it is, use that explicitly dependent txn as a gate on it being the next transaction. Because 'intkey workload' uses dependencies, the parallel scheduler won't work with intkey workload until this is done.
* We are relying on the completer to never allow the dependency to be after that batch with the transaction that lists the dependency.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-506","06/22/2017 15:07:21",3,"Debug scheduler never becoming complete ""On AWS often, and in test_poet_smoke.yaml sometimes, block validation within the validator fails to get a complete schedule; scheduler.complete(block=True) is called and never returns.""","",0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-509","06/27/2017 00:05:19",1,"intkey and xo commands produce bad output without arguments ""{{docker exec -it sawtooth-client-default bash}} {{root@bdd0cd71cfd1:/# xo}} {{Error: invalid command: None}} {{root@bdd0cd71cfd1:/# intkey}} {{<_io.TextIOWrapper name='' mode='w' encoding='ANSI_X3.4-1968'> Error: invalid command: None}} {{It would be nice if intkey and xo would give usage messages if a required positional argument is missing.}}""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0 +"STL-510","06/27/2017 15:55:46",2,"Modify App Dev Guide ubuntu section to not require root user ""The App Dev guide should not assume all commands need to be run w/root. Instead, it should provide sudo when appropriate and run services as sawtooth and user commands as the regular user. ""","",1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-536","07/05/2017 22:33:52",2,"Rename cli commands ""Various commands for 0.8 were added with the assumption that we would at some point rename them for consistency across the project. Now seems an appropriate time to pick the final names and do the renaming. Commands will use a '-' separator instead of '_', since this is more consistent with Linux commands in general. The client commands remain largely unchanged, with intkey_jvm being renamed to jvmsc (since it is an example of JVM smart contracts). The client commands: sawtooth poet intkey jvmsc xo noop The validator components will be prefixed with 'sawtooth-': sawtooth-validator sawtooth-rest-api Transaction processors will be renamed to conform to the general format of <family>-tp[-<language>]. This organization is intended to sort the CLI commands by family instead of function. The commands become: intkey-tp-go intkey-tp-java intkey-tp-javascript intkey-tp-python jvmsc-tp xo-tp-go xo-tp-javascript xo-tp-python noop-tp-go supplychain-tp poet-validator-registry-tp""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-541","07/07/2017 14:47:09",2,"Remove MustEncode() and MustDecode() from Go SDK ""Remove the MustEncode() and MustDecode() methods from the Go SDK and replace all calls with hex.EncodeToString() and hex.DecodeString() from the """"encoding/hex"""" Go library. These methods cause panics instead of handling errors gracefully, which is no longer desired. Handle the error returned from DecodeString().  ""","",0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-542","07/07/2017 14:53:07",2,"Add option to wait for transactions to commit in Burrow-EVM client ""The Burrow-EVM client only checks that transactions were accepted by the REST API. It does not check to see if the transaction was committed or what the error was. An argument should be added to client methods which, if set, will cause the client to block until: 1. A timeout has been reached, at which point it should return an error 2. The transaction has been committed, at which point it should return normally 3.
There was a problem committing the transaction, at which point it should return a useful error. Update the integration tests to use this feature once implemented.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-543","07/07/2017 14:55:20",2,"Add --wait flag to seth commands ""Add a --wait flag to seth commands that submit transactions which sets the wait option for the client.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-545","07/07/2017 16:40:59",2,"Write Transaction Family Specification for Benchmark ""Write the transaction family specification for the benchmark workload selected for the benchmarking activity. ""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-546","07/07/2017 16:41:16",5,"Implement benchmark Transaction Processor & Workload client ""Based on the transaction family specification, implement the benchmark transaction processor. Also implement a command-line client workload generator capable of generating and submitting workload consistent with the design intent of the benchmark.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-554","07/07/2017 16:58:54",5,"Design domain specific transaction submission endpoint with server side signing ""Provide the design for an endpoint which can: * Accept domain-specific transaction content. For example, this could be submitted in JSON. * Create a transaction body and batch for submission * Sign the transaction using server-side stored keys. The design is open as to the method by which these keys are associated with the submitter. Most likely, this will also require one or more authentication endpoints.""","",0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-563","07/07/2017 19:48:35",3,"Design Burrow-EVM event subscriptions for Solidity events ""Output is a professional design doc describing how to use Sawtooth events to support Solidity events.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-564","07/07/2017 19:50:38",3,"Research and document non-standard RPC calls to support ""Output is a list of additional RPC calls added to the JSON-RPC design doc""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-565","07/07/2017 19:56:03",3,"Design private key storage security for seth/RPC accounts ""Both seth and the JSON-RPC depend on storing private keys to sign transactions. These keys are stored as plaintext. Encryption should be used to store these keys more securely. Output is a design doc that describes how keys will be secured.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-566","07/07/2017 19:56:54",3,"Implement key storage security for seth ""The extent of the design described in milestone 2 is probably too broad for the time allowed to complete this task in the Q3 plan. A simpler version should be implemented here, or the task should be moved to Q4 and given an appropriate amount of time to complete it.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-571","07/07/2017 20:03:29",0,"Implement Sawtooth generic transaction receipts and events per design ""Decomposed""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-573","07/07/2017 20:04:57",3,"Update seth to use JSON-RPC or REST-API ""The seth client currently interacts with the REST API.
With the creation of the seth-rpc server, it would be nice to add support to the seth client for interacting with the seth-rpc server.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0 +"STL-578","07/07/2017 20:23:14",2,"remove wif transcoding ""# transparently change the internal representation so it loads wif but thereafter remains native. or # remove wif entirely and store raw hex to disk. or # change the import/export to default to pem.""","",0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-579","07/07/2017 20:50:23",3,"Update the SerialScheduler to fail-fast on a batch ""No transactions after the first failed txn in a batch need to be run, so the subsequent txns should be marked as neither valid nor invalid, and should not be run.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-580","07/07/2017 20:53:20",3,"Merge parallel-scheduler branch into master branch ""Deal with merge conflicts during the process of getting the parallel-scheduler branch merged into master.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-581","07/07/2017 20:57:14",3,"Update the SerialScheduler to handle explicit dependencies ""After an explicit dependency fails, the scheduler should mark transactions that listed this txn as a dependency as failed, also.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-603","07/11/2017 18:26:01",3,"Update sdks' State classes to have a delete method ""Each SDK needs to support deletes from state. The form, similar to gets and sets, should be .delete([add1, add2, add3]).   Needs to be added to Go and Javascript at the least. ""","",0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,1,0,0,1,0 +"STL-610","07/18/2017 22:27:22",1,"Update validator registry handler to use state delete method ""[https://github.com/hyperledger/sawtooth-core/blob/master/consensus/poet/families/sawtooth_validator_registry/validator_registry/processor/handler.py#L133] Clears state at the specified address.  Instead, make use of the new state delete feature. Anticipated in  [https://github.com/hyperledger/sawtooth-core/pull/718]  ""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-612","07/24/2017 18:55:55",3,"Update HL overview deck ""Update/Finalize Hyperledger's slide deck on Sawtooth""","",1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-629","08/07/2017 15:18:01",2,"Migrate docs to hyperledger.org from legacy URL ""[~amundson] has a new URL and process in mind.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-670","09/08/2017 16:42:30",0,"Change default REST API port to 8008 ""Currently the Sawtooth REST API defaults to `localhost:8080`, which is common enough that it is likely to conflict with many web services developers may be running. This default should be changed to `localhost:8008`, which is less likely to cause collisions, is still a common HTTP port, and happens to be exactly twice the validator's default port. This task includes changing _every_ default in sawtooth-core which expects to find the REST API at `localhost:8080`. This will include many CLIs and possibly some other components as well.""","",0,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0 +"STL-807","10/25/2017 20:08:30",3,"Add unschedule_incomplete_batches() to Scheduler ""In order to create blocks when the consensus module deems a node to have """"won the block"""", the scheduler needs to discard any unprocessed batches.
Otherwise, the remaining batches' processing time may extend past the window of opportunity for claiming the block. Add a {{scheduler.unschedule_incomplete_batches()}} function, which takes any uncompleted transactions and removes them from the schedule. The flow should be:  """," scheduler.unschedule_incomplete_batches() scheduler.finalize() scheduler.complete(block=True)",1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"STL-858","11/16/2017 21:54:52",1,"Wait option on sawtooth batch submit CLI fails ""batch submit works fine for this batch but adding the --wait option throws a stack trace.   sawtooth batch submit -f poet-settings.batch --url http://rest-api:8080 --wait batches: 1,  batch/sec: 53.31448691385644 Traceback (most recent call last):(   File """"/project/sawtooth-core/cli/sawtooth_cli/main.py"""", line 160, in main_wrapper     main()   File """"/project/sawtooth-core/cli/sawtooth_cli/main.py"""", line 144, in main     do_batch(args)   File """"/project/sawtooth-core/cli/sawtooth_cli/batch.py"""", line 152, in do_batch     do_batch_submit(args)   File """"/project/sawtooth-core/cli/sawtooth_cli/batch.py"""", line 275, in do_batch_submit     if all(s.status == 'COMMITTED' for s in statuses):   File """"/project/sawtooth-core/cli/sawtooth_cli/batch.py"""", line 275, in     if all(s.status == 'COMMITTED' for s in statuses): AttributeError: 'dict' object has no attribute 'status'""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0 +"STL-905","11/29/2017 19:08:48",2,"Relax PoET registration freshness requirement ""Validator registration requires a nonce referencing the latest block. That is overly restrictive. The freshness goal is to make it unlikely that a validator was revoked from the EPID group at the time of registration. Something on the order of 10 blocks should still be more than sufficient. The current harm from the 1 block requirement is that it's very likely that a validator's signup will fail in proportion to the load on the network. i.e. the signup transaction can't be guaranteed to commit within 1 block of the validator creating the registration transaction.   Update code and Spec.""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-967","01/03/2018 21:23:00",2,"Add logging guidelines to Contributor's Guide ""Should also include reaching consensus on these guidelines""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-968","01/03/2018 21:25:00",3,"Remove batches from block before broadcast after publishing ""Remove batches from block (excluding injected batches) before broadcast after publishing""","",1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0 +"STL-969","01/03/2018 21:25:54",1,"Add metric for rate of rejected batches due to back pressure ""* batches rejected per node (count) * batches rejected per node (gauge)""","",1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0 +"STL-970","01/03/2018 21:39:28",3,"Implement command to prune the block store from a given block. 
""Example: {{sawadm prune }}""","",1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0 +"STL-994","01/13/2018 23:54:06",1,"Log full batch and transaction ids instead of truncating ""Example of a bad message that truncates the batch id: Jan  9 11:11:13 ec2-184-72-66-148.compute-1.amazonaws.com [psimlr7-node7]    DEBUG                     Thread-54                        chain Invalid batch f6fae4cf encountered during verification of block 52bdda9387194b6e71e25a07bfe1a8f59154a07ef959dc1531ce7ecef4d81f031799d610b227c4c0b610e952a379b4ee2f37acd5a436be1b61abf0562b53a2f9 (block_num:55, state:5aee65406b9ac2f442fb710bbd58f9148489359f8378b90c6d670768828e5beb, previous_block_id:83de41121c8801ec4bfd5228ece9c9cb2ed0c63fb871aae721ad7554fe747121615f09a9ce2344701945bb47b4f541453d9fc38c9475b4194b43ffc89e708925)""","",0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0 +"STL-1013","01/18/2018 02:08:23",2,"Add QUEUE_FULL response type to transaction processor messages ""This should add a new message protos and add support to the validator for handling this type of message.""","",1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1036","01/31/2018 22:14:00",1,"Add in-process transactions metric ""Add a metric to the executor to track the number of transactions that have been sent to a transaction processor and are awaiting a transaction response. This should be incremented whenever a new transaction process request is submitted and decremented whenever a response is received.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1037","01/31/2018 22:17:27",1,"Log at error level when an INTERNAL_ERROR is returned from a transaction processor ""The executor should log whenever a transaction processor sends a TpProcessResponse with status INTERNAL_ERROR. This should be logged at the ERROR level. Additional information about the transaction should also be logged for diagnosis.""","",0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1076","02/23/2018 18:12:20",2,"XO Family specification missing delete operation; clarify name fields; fix range ""Defect in version 1.0.1 The XO Transaction family specification should include the transaction action `delete` The `name` field of the transactions and the state element should indicate that is the name of the game and that is how the record is keyed. The implication being that all game names must be unique  The range of values for a `space` should be [1,9] inclusive.""","",0,1,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1119","03/13/2018 16:18:28",1,"sawtooth-rest-api CLI has old URL in --help text ""{{}}Wrong:      -B BIND, --bind BIND identify host and port for API to run on default: http://localhost:8080) Should be:      ... (default: http://localhost:8008)""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0 +"STL-1122","03/20/2018 15:56:05",1,"Wrong default URL (8800) in example validator config file ""The file sawtooth-core/validator/packaging/validator.toml.example has the wrong default URL. 
Currently:  {{     bind = [}} {{     """"network:tcp://127.0.0.1:8800"""",}} {{     """"component:tcp://127.0.0.1:4004""""}} {{     ]}} Should be:  {{     bind = [}} {{     """"network:tcp://127.0.0.1:8008"""",}} {{     """"component:tcp://127.0.0.1:4004""""}} {{     ]}}""","",0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1165","04/12/2018 18:20:19",3,"App Dev Docs for Transaction Processors is out of date ""The Transaction Processor tutorials for both Python and Javascript (unsure about Java) use the 0.8 API for instantiating the TransactionHandler. They erroneously include encoding, which is no longer used, and for JS, list family versions as a string. It should be an array. [https://sawtooth.hyperledger.org/docs/core/releases/latest/_autogen/sdk_TP_tutorial_js.html#the-xohandler-class] [https://sawtooth.hyperledger.org/docs/core/releases/latest/_autogen/sdk_TP_tutorial_python.html#the-xotransactionhandler-class] It might be worth giving the rest of the document a once-over for other 0.8 syntax as well.""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1243","05/16/2018 22:24:03",1,"Improve consistency of SDK Documentation cover page ""On the SDK API cover page (link below) the python sections are labeled """"processor package"""" and """"sawtooth_signing"""" package, and each has lots of sub-bullets, while the other SDK sections are labeled """"Transaction Processor"""" and """"Signing"""" and have no sub-bullets. Recommendation is to name sections consistently - it is difficult to tell what the different formats mean, e.g. whether the Python SDK is different from the other two in some important way. [https://sawtooth.hyperledger.org/docs/core/releases/1.0/sdks.html]  ""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1303","06/13/2018 21:13:24",3,"Missing information: Setting on-chain permissions for a transaction family ""Restore the 1.0.4 information in """"Configuring the List of Transaction Families"""": [https://sawtooth.hyperledger.org/docs/core/releases/latest/app_developers_guide/docker.html#configuring-the-list-of-transaction-families] The SysAdmin guide probably needs an expanded version of this information.  ""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1306","06/14/2018 21:19:49",2,"Document the process for generating the Curve ZMQ key pair ""Add details on how to generate a new key pair for Curve ZMQ to ensure network encryption.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1350","07/12/2018 20:34:15",1,"Correct ""Transactions and Batches"" diagram ""Update with the current contents of protos/batch.proto and protos/transaction.proto.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1351","07/12/2018 21:05:29",5,"SUPERSEDED: Document consensus API - new arch doc & engine steps ""Describe the new consensus API and consensus engines. * Write a new Consensus Interface section for the Architecture Guide * Update the consensus-related steps in the Application Developer's Guide and System Administrator's Guide.
Source information: * Hyperledger blog post: [https://www.hyperledger.org/blog/2018/05/24/one-year-later-interoperability-standardization-shine-at-consensus]  * Adam Ludvik's presentation, """"Sawtooth Consensus engines"""" (for AMS hackathon 2018) * Consensus RFC: [https://github.com/hyperledger/sawtooth-rfcs/pull/4] ""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1352","07/12/2018 23:02:41",3,"Fix Journal section: Redo intro, delete obsolete content ""This includes deleting the Journal chapter itself.  The remaining (valid) sections will be reorganized and possibly retitled (covered in a separate story).""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1362","07/18/2018 23:46:24",1,"Update ""Sawtooth Architecture"" picture with consensus changes ""The consensus changes (consensus engine and proxy) affect this Sawtooth architecture picture: https://sawtooth.hyperledger.org/docs/core/nightly/master/_images/arch-sawtooth-overview.svg""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1364","07/23/2018 23:09:20",1,"Fix title of ""Genesis Operation"" ""Change to """"The Genesis Process"""" (the word """"operation"""" is confusing).""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1414","09/04/2018 22:21:00",5,"Gather changes for Sys Admin Guide update ""Work with sysadmin experts to identify errors, out-of-date information, and areas of confusion in the current System Administrator's Guide.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1415","09/04/2018 22:28:13",1,"Compare SGX and non-SGX procedures in Sys Admin Guide ""Work with sysadmin and Sawtooth experts to determine which steps in the SGX procedure are valid for a non-SGX Sawtooth node.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1433","09/13/2018 16:44:13",1,"Clarify Apache setup proc in 'Using a Proxy Server [for] REST API' ""Edit content to clarify information.   Note: The technical updates are covered in STL-1447: * Change paths from `/tmp` to a less temporary location. * For password-file-creation step, explain how the password is hashed. * For `openssl` step, add a link to `letsencrypt.org`. * Change Apache and Sawtooth component startup steps to use `systemctl`.  ""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1434","09/13/2018 16:59:20",3,"Update config steps for non-SGX proc in Sys Admin Guide ""Determine the necessary config steps (based on the ref info config section) and add them to the SGX and non-SGX procedures. 
Rework the existing """"Configuring Sawtooth"""" section to clarify that it's reference information; move it to the end of the chapter.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1435","09/13/2018 17:05:27",1,"Add new consensus steps to SGX + non-SGX procedures ""* Download and install consensus engine(s) * Configure consensus in validator config file (new bind options) * Start consensus engine with other Sawtooth components  ""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1436","09/13/2018 18:30:47",1,"Determine how to handle SGX procedure in Sys Admin Guide ""Research the existence (or lack) of a consensus engine for PoET SGX and update or remove the SGX procedure accordingly.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1443","09/14/2018 00:33:02",5,"Correct + reorg non-SGX install/config proc in Sys Admin Guide ""Collect the existing (separate) sections into one procedure. Reorganize into the correct order. Add missing steps: Generating keys, configuring services, etc. (Other JIRAs cover the missing steps for Sawtooth config and new consensus engine info.)""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1459","09/27/2019 09:09:27",1,"Update config steps for SGX proc in Sys Admin Guide ""Make changes discovered in STL-1434.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1530","04/29/2019 10:33:14",1,"poet-sgx uses version 2 of the remote attestation protocol, which is no longer supported by Intel ""In the poet-sgx consensus, during the remote attestation protocol, the requests for get_signature_revocation_lists and post_verify_attestation are made for v2 of the attestation protocol, which is no longer supported. A simple fix is to change the path to v3 in order to use the correct version.""","",0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1 +"STL-1553","05/21/2019 15:57:29",2,"Add consensus engine info to Sysadmin + App Dev Guides ""Provide admin-specific information about the consensus interface for the System Administrator's Guide, with links to the dynamic consensus overview in the Introduction (and other consensus-related information as appropriate).  This info should summarize how to choose and configure a consensus engine. For the App Dev Guide, add links to this information. ------------------------ PREVIOUS CONTENT Provide appdev-specific information about the consensus interface for the Application Developer's Guide, with links to the consensus interface topic in the Architecture Guide.  For the App Dev Guide, this info should summarize how to choose and configure a consensus engine. ------------------------ ORIGINAL CONTENT Describe new consensus interface (API, consensus engines, SDKs) based on the RFC, presentations, and other info sources. This information is needed for the PBFT documentation: must briefly define/explain the Sawtooth consensus interface in one place, so that the consensus engine steps in the procedures can link to this information. Consider using the current """"Dynamic Consensus Algorithm"""" section in the Sawtooth introduction, and replacing the intro's section with a shorter summary. Also document consensus fallback mechanism for existing chains on networks upgraded from 1.0. 
Implementing PR: [https://github.com/hyperledger/sawtooth-core/pull/2056]""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1606","06/04/2019 18:40:07",1,"Add consensus endpoint to Ubuntu NW proc in App Dev Guide ""The Prerequisites section in """"Using Ubuntu for a Sawtooth Test Network"""" needs to include the ``consensus`` bind setting and mention the default (5050). [https://sawtooth.hyperledger.org/docs/core/nightly/master/app_developers_guide/ubuntu_test_network.html#prerequisites]""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1607","06/04/2019 21:55:35",2,"Write initial Which Consensus? topic ""Create a single topic that consolidates and summarizes consensus information (requirements, settings, tips, etc.) for devmode, PoET, PBFT, and Raft. Ideally, this topic should explain the pros/cons and best-practice tips for when to use each type of consensus. This high-level topic may require several rounds of reviews and rewrites to fine-tune the information. This story covers the initial version and first round of reviews.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1629","06/17/2019 23:10:30",1,"SysAdmin Guide: Add ownership + permission info to config file topics ""The topics that describe how to copy or download the example config file templates should provide the required/recommended ownership and permission info: Owner root, group sawtooth, and permissions 640. Note: The need for this info was discovered during Chime testing.    ""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1630","06/17/2019 23:49:11",1,"Doc: Double-quote each key in sawtooth.consensus.pbft.members ""Add new info to the App Dev and Sys Admin Guides: Each key in sawtooth.consensus.pbft.members must be surrounded with double quotes. If the setting is changed on the command line (with sawset proposal create), you must use single quotes around the entire string to protect the double quotes from the shell. Note: This issue was discovered during Chime testing.""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1631","06/18/2019 00:48:41",1,"INVALID: Add Identity TP info/steps to App Dev Guide network procs ""The Identity transaction processor is needed in the example network procedures in the App Dev Guide. Clarify info in all network procedures (Docker, Kubernetes, and Ubuntu). In the Ubuntu proc, add the steps to start, stop, and configure the Identity TP. Note: The problem was discovered during Chime testing.  ""","",0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1659","07/03/2019 18:59:18",1,"Delete App Dev AWS procedure from Chime & master docs ""The AWS Sawtooth version is currently 1.1.4.  There are no (known) plans to update it to Chime/1.2. Remove """"Using AWS for a Single Sawtooth Node""""  from the 1.2 and master docs. * [https://sawtooth.hyperledger.org/docs/core/nightly/1-2/app_developers_guide/aws.html] * [https://sawtooth.hyperledger.org/docs/core/nightly/master/app_developers_guide/aws.html]   Note: The Bumper/1.1 docs will continue to include the 1.1 version of the AWS procedure """"Using AWS with Sawtooth"""". 
* Release 1.1.5: [https://sawtooth.hyperledger.org/docs/core/releases/1.1.5/app_developers_guide/aws.html] * Nightly 1.1 doc build: [https://sawtooth.hyperledger.org/docs/core/nightly/1-1/app_developers_guide/aws.html]""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1665","07/12/2019 15:38:05",1,"Docs: Fix genesis block key info & notes ""Correct info on key needed to change settings - it's not necessarily the validator key.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 +"STL-1667","07/16/2019 16:40:46",2,"Change Ubuntu procs to use user key instead of validator key ""In the App Dev and Sys Admin Guides, the Ubuntu single-node and multi-node procedures create the genesis block with the validator's key. The preferred method is to use the user's key.""","",0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0