diff --git "a/database/tawos/deep/XD_deep-se.csv" "b/database/tawos/deep/XD_deep-se.csv" new file mode 100644--- /dev/null +++ "b/database/tawos/deep/XD_deep-se.csv" @@ -0,0 +1,812 @@ +"issuekey","created","title","description","storypoint" +"XD-7","04/12/2013 06:43:52","Tuple data structure","The tuple data structure should be backward compatible in functionality for use in spring batch. Porting over FieldSet tests in spring batch to use the tuple data structure is one way help ensure that compatibility.",1 +"XD-10","04/12/2013 06:44:56","Reactor based http ingestion","When there is support for boostrapping a http server in the reactor project, and inbound SI adapter and associated XD source module should be created.",5 +"XD-22","04/16/2013 11:20:36","Create Module base abstractions","A module groups together a collection of spring configuration files.",0 +"XD-24","04/16/2013 11:23:51","Create pipes and filters DSL for ingestion","Initial simple handcoded implementation for straight through pipe and filter model, e.g. a | b | c",2 +"XD-28","04/17/2013 08:48:08","Create simple gague service","A gauge just stores a number. Implementations for in-memory and redis.",1 +"XD-29","04/17/2013 08:49:15","Create rich gauge service","A rich gauge stores a number and also rmd, min, max. Implementations for in-memory and redis.",5 +"XD-30","04/17/2013 08:50:13","Create a simple counter service","A simple counters can increment/decrement a number. Implementations for in-memory and redis.",1 +"XD-31","04/17/2013 08:51:18","Create field-value counters","A field-value counter is useful for bar chart graphs, Strings on x-axis and count on y-axis. Maps well to zset in redis. Implementations for in-memory and redis.",5 +"XD-32","04/17/2013 11:22:10","Create base Channel Registry abstraction","Define the ChannelRegistry interface. ",0 +"XD-33","04/17/2013 11:24:06","Implement LocalChannelRegistry","This should be usable within a single JVM process. Lives within shared application context of the process. ",0 +"XD-37","04/22/2013 09:48:16","Gradle based multi-project build","multi project build. - look to Spring Framework for source of starting point.",2 +"XD-43","05/06/2013 07:35:36","Metric repositories should support Spring Data CrudRepository interface","This provides common CRUD behavior and a shared interface that can be useful in testing scenarios. ",5 +"XD-44","05/06/2013 07:39:10","Redis based repositories should use a NamingStrategy class to calculate the name of the key to use for persistence","RedisCounterRepository and RedisGaugeRepository have duplicated code that needs to be factored out into a one place. One such duplication is the determination of the key name to use for persistence. 
This should be abstracted out into a strategy helper class.",1 +"XD-45","05/06/2013 07:44:41","Remove the expiry of keys in Redis based repositories","There is duplicated code in Redis based repositories that related to expiry behavior, move into a common shared helper class and/or base class.",1 +"XD-50","05/07/2013 08:37:05","Add tap support to DIRT","syntax: {code} tap @ somechannel --key=value | somecounter {code} ",2 +"XD-53","05/08/2013 14:07:47","Design and document desired high level DSL for configuring data processing in XD","Start to explore how the DSL can cover both advanced (non-linear) spring integration flows as well as spring batch jobs.",13 +"XD-54","05/08/2013 14:17:42","XD Metrics backed Message Counter","A Spring Integration based @ServiceActivator that counts the number of messages using the Spring XD metrics support",1 +"XD-55","05/08/2013 14:18:45","SI ServiceActivator for an XD Metrics backed Field Value Counter","A Spring Integration based @ServiceActivator that counts the occurrence of field names, from either a tuple data structure or a POJO, using the Spring XD metrics support.",3 +"XD-56","05/08/2013 14:35:44","Switch to use Lettuce driver for Redis","Replace the use of Jedis with Lettuce as it has higher performance",2 +"XD-58","05/09/2013 14:02:50","build.gradle doesn't handle a small handful of libraries","Trying to build spring-xd for the first resulted in lots of errors inside STS (I had an empty .m2 repo).",1 +"XD-59","05/10/2013 10:27:04","Tuple should support storing nested tuples","Nested tuple structures shoudl be supported, getTuple(int index), getTuple(String name)",1 +"XD-60","05/11/2013 10:41:08","Saving a metric (Counter, Gauge..) with an existing name should throw an exception","The difference between saving a new metric and updating an existing one needs to be defined. Suggest that if we try to save when an existing counter is already in the database to throw exception, such as DataIntegrityViolationException.",1 +"XD-61","05/13/2013 08:47:41","Create distributable artifact that contains server application and start/stop scripts","The gradle application task should get us most of the way to create a distributable artifact akin to what you see when downloading tomcat/jetty etc. Now there is a launch task task(launch, dependsOn: 'classes', type: JavaExec) { main = 'org.springframework.xd.dirt.stream.StreamServer' classpath = sourceSets.test.runtimeClasspath } The same main should be referenced in the application plugin, a task to create a .zip distributable is needed. Ideally would be nice to 1. download .zip 2. unzip 3. cd spring-xd/bin 4. xdserver start and gracefully shutdown later with 5. xdserver stop I don't know if we can/should bundle redis, I think we should bundle it. The scripts can be for unix/linux and for windows. Discuss a brew based install as well. ",8 +"XD-62","05/13/2013 10:02:00","Use the tuple data structure to process data in a spring batch step ","Do not require a POJO in order to do end-to-end processing in a batch step.",5 +"XD-65","05/13/2013 10:26:16","Gemfire Sink to update a gemfire cache.","Update a gemfire region.",2 +"XD-67","05/14/2013 13:06:23","Submit a brew-based install for Spring XD","- Host the Spring XD distributable zip somewhere that is accessible by external http request. - Create brew formula for Spring XD install while specifying redis as dependency. - starting up stream server upon successful brew install couple of questions: - should we name the brew task springxd? 
(name not taken yet) - should we start the stream server as part of the brew install process? - should we specify redis as a recommended dependency? user can pass in 'brew install springxd --without-redis' to skip redis installation. by default, 'brew install springxd' will install redis as well.",8 +"XD-68","05/16/2013 23:46:42","Export of data from HDFS to a relational database","Based on a single process running a Spring Batch job, support the ETL of data from HDFS to a RDBMS",8 +"XD-69","05/16/2013 23:47:47","Export of data from HDFS to MongoDB","Based on a single process Spring Batch job, ETL of data from HDFS to MongoDB.",5 +"XD-71","05/17/2013 01:34:15","Remove UUID from Tuple class or replace with more efficient implementation","The Java UUID class is known not to be the fasted implementation available. See https://github.com/stephenc/eaio-uuid and http://mvnrepository.com/artifact/com.eaio.uuid/uuid for high perf impls. ",1 +"XD-72","05/17/2013 09:24:13","Provide a http source","stream should be able to ingest data from http ",5 +"XD-100","05/20/2013 23:28:13","Rename Tuple class in spring-xd-tuple","The Tuple classes in Reactor follow the more traditional data structure concept of Tuples, an immutable fixed length sequence of values where each value can have different types. They are ordered and can often be access by index. An example in a static language is the Tuple class found in .NET http://msdn.microsoft.com/en-us/library/system.tuple.aspx or in Scala http://www.tutorialspoint.com/scala/scala_tuples.htm Using this standard definition of a Tuple, they do not support named values. There is also a different tuple class instance for each length, e.g. Tuple, Tuple. The Tuple class in XD is more like a record or named tuple. Python has a named tuple concept - http://docs.python.org/2/library/collections.html#collections.namedtuple and http://stackoverflow.com/questions/1490413/languages-that-allow-named-tuples shows that other languages use the term 'Record' for a 'named tuple' - Haskell, Standard ML, OCaml, and F#. http://en.wikibooks.org/wiki/F_Sharp_Programming/Tuples_and_Records#Defining_Records So boiling it all down, to avoid conflicts of names, and also to open up the possibility of using Reactor tuples as keys (instead of strings for names), we should change the name to either NamedTuple or Record. ATM, there is no direct relationship between Reactor's Tuple and NamedTuple (such as inheritance) and so probably Record is the way to go. ",2 +"XD-102","05/22/2013 23:35:26","Create XDContainer class to start stream server","Provide optional command line arg to embed the container launcher, aka - xd-admin server. 
XDContainer.sh --embeddAdmin",1 +"XD-103","05/22/2013 23:37:24","Create XDAdmin server to start container launcher","This will launch the RedisContainerLauncher, in future will be able to select from a variety of middleware options.",1 +"XD-104","05/23/2013 00:37:43","Add README to be included in root directory of distribution","should explain basic layout of the distribution",1 +"XD-105","05/23/2013 00:38:49","Add LICENSE to be included in root directory of distribution","should contain apache licence",1 +"XD-106","05/23/2013 01:22:03","Container server does not log a message that it has started or stopped successfully","$ ./xd-container processing module 'Module [name=file, type=sink]' from group 'tailtest' with index: 1 processing module 'Module [name=tail, type=source]' from group 'tailtest' with index: 0 Logging of 'processing module' should have log level, time..",1 +"XD-108","05/23/2013 01:27:49","Build script should not package 'spring-xd-dirt' scripts ","We are packaging separate scripts to start XDAdmin and XDContainer. The Gradle application plugin will generate an unwanted 'spring-xd-dirt' scripts, this should be removed from the bin directory when creating a distribution zip.",1 +"XD-111","05/23/2013 09:39:54","Create final distribution zip across multiple projects","The final directory structure should look like /xd /redis /gemfire inside the XD directory /xd/bin - which has xd-container and xd-admin scripts /xd/lib inside the gemfire directory /gemfire/bin - has the gemfire-server script /gemfire/lib inside the redis directory is /redis/redis-latest-v.x.y.z.tar /redis/README /readis/install-redis - script that does the basic 4 commands to install redis. There should be a gradle task that runs after the distZip task, that will take the contents of different project directories, script diretories and 'redis-binary' directories and creates the final layout for the distribution.",5 +"XD-114","05/23/2013 12:38:32","Add install script for Redis","This assumes the redis source tar is available under $rootDir/redis/redis-2.6.13.tar.gz The install script does the following: - Check the platform OS & arch - unzip the tar, compile the sources",2 +"XD-117","05/23/2013 13:36:27","add spring-integration-groovy to container dependencies","This will enable the use of groovy scripts within modules.",1 +"XD-119","05/23/2013 15:22:22","HDFS sink should default to hdfs://localhost:8020","The current default is hdfs://localhost:9000 but most new distributions/installs use 8020",1 +"XD-122","05/24/2013 17:19:59","XD scripts need to have spring-integration milestone versions updated","Spring-integration version is changed to 3.0.0.M2 and since we manually create the XD scripts, they still point to the 3.0.0.BUILD-SNAPSHOT version. As discussed, we also need to have a better strategy on updating the lib directory inside the XD scripts.",2 +"XD-123","05/24/2013 17:41:26","XD scripts lib path needs to be dynamic","We currently have the manually created XD scripts. This makes it difficult to maintain as the lib path is error prone with the changes. We need to make sure that the properties such as lib path etc., are dynamically updated.",3 +"XD-124","05/28/2013 07:58:40","Clean shutdown of redis in xd-container","Need to shutdown cleanly, no exception messages are shown. 
Order of components in the stream should be shut down from 'first to last' (opposite of creation)",2 +"XD-126","05/28/2013 08:33:40","Documentation for sources, sinks, modules should define which attributes are required and which optional","This will eventually be supplied by the admin server, but for now write it up by hand in the documentation",2 +"XD-128","05/28/2013 08:36:05","Create TCP sink module","Based off SI tcp inbound adapter. This will allow for event fowarding.",3 +"XD-132","05/28/2013 10:47:39","Profile support for modules","To allow for groups of beans to be defined or not in the container that runs a module. When deploying a stream (e.g. via the REST API), it should be possible to also provide profile names. Then those would apply to any modules within that particular stream deployment.",8 +"XD-133","05/28/2013 10:53:20","Fail Sonar CI build if there are any package tangles violated.","Similar to what would show up on structure101 reports.",1 +"XD-134","05/28/2013 10:59:31","Investigate link checking tool for user guide","Asciidoc/doctor might have one as part of it toolchain",2 +"XD-136","05/28/2013 11:03:29","Documentation that points on how to install hadoop","Pointers to other documentation on how to install hadoop. ",3 +"XD-139","05/28/2013 11:33:26","Update README.txt to include instructions on how to build","Building XD should not be part of the out first out of the box experience, but we should include some instructions on what targets are available, such as distXD.",2 +"XD-140","05/28/2013 12:32:46","Parameterize syslog Source; Add Support for TCP","The syslog source currently is hard-coded to use udp on port 11111. Need to parameterize the port and provide an option to use TCP.",2 +"XD-141","05/28/2013 13:18:17","install-redis script should not use relative path to determine redis source dist","Currently, the install-redis script uses relative path to determine redis source dist file. Since this is error prone, we need to fix it.",1 +"XD-142","05/28/2013 15:11:05","StreamServer Context Lifecycle Issues","The {{ModuleDeployer}} calls {{getBeansOfType}} before the context has had its {{PropertySourcesPlaceholderConfigurer}} attached. This can cause issues with {{FactoryBean}} s with placeholders in constructor args because the unresolved placeholder is used when the {{FactoryBean}} is pre-instantiated to determine the type of object it will serve up.",8 +"XD-143","05/28/2013 16:26:55","Create externalized property file to support connectivity to redis","We need to have an externalized property file(under xd/conf/) for the xd-container & admin scripts to use as options. ",3 +"XD-146","05/29/2013 12:14:17","Change TCP Source/Sink to Use Profiles","Currently, the TCP source/sink use specific beans for the serializer/deserializer options; when profiles are available, they should be used to avoid having to declare a bean of each type.",0 +"XD-147","05/29/2013 12:45:24","Remove use of application plugin for redis project","Currently redis project uses application plugin to bundle distribution. This also includes 'java plugin' which causes java specific build behavior on this project. We should try removing the use of application plugin and use something similar or custom tasks that does the bundling. ",2 +"XD-150","05/30/2013 00:06:37","Publish Spring XD final distribution zip as part of Bamboo artifactory plugin","Currently, Bamboo's gradle artifactory plugin has the artifacts configured to projects target(build) directory 'archives'. 
We need to have a way to set the final distribution archive as one of the gradle 'configurations' in our build.gradle and refer it inside bamboo artifacts.",2 +"XD-151","05/30/2013 08:17:45","Add Redis binaries for Windows","Presently, Spring XD does not ship Windows binaries for Redis. However, Microsoft is actively working [1] on supporting Redis on Windows. You can download Windows Redis binaries from: https://github.com/MSOpenTech/redis/tree/2.6/bin/release [1] http://blogs.msdn.com/b/interoperability/archive/2013/04/22/redis-on-windows-stable-and-reliable.aspx",2 +"XD-152","05/30/2013 08:23:50","Create rich gauge module","Spring config for rich gauge plus message handler to coerce a numeric or string payload to a double.",5 +"XD-153","05/30/2013 08:26:06","create a gauge module","Spring config for simple gauge plus message handler to process message. There is some code common to RichGaugeHandler to coerce the payload to a double that should be refactored for reuse.",5 +"XD-154","05/30/2013 08:32:19","Provided console output of started server","Shouldn't we have something like a ContextRefreshedEvent Listener and output some informational messages to the console, so the user knows the Container is up (Which contexts. Maybe even print a link to the docs))? Maybe even some simple ascii art (for demos)? Right now it looks somewhat barren. Redis provides something similar. This may even go hand in hand to provided a better configuration model (storing common config parameters centrally) {code} _____ _ __ _______ / ____| (_) \ \ / / __ \ | (___ _ __ _ __ _ _ __ __ _ \ V /| | | | \___ \| '_ \| '__| | '_ \ / _` | > < | | | | ____) | |_) | | | | | | | (_| | / . \| |__| | |_____/| .__/|_| |_|_| |_|\__, | /_/ \_\_____/ | | __/ | |_| v1.0.0.M1 |___/ eXtreme Data Using Redis at localhost:6379 The Server (PID: 12345) is now ready on http://myserver:123/streams Documentation: https://github.com/SpringSource/spring-xd/wiki {code}",2 +"XD-155","05/30/2013 10:26:00","Add a groovy script processor module","A processor module that accepts either the location of a groovy script resource or an inline script (string). Also some discussion about a default classpath location for scripts. ",5 +"XD-156","05/30/2013 10:35:14","Create config support for Redis","We would like to have Redis driven from a config property file under XD_HOME.",2 +"XD-159","05/30/2013 12:46:58","Parameter parsing does not work if an argument contains '--'.","Parameter parsing does not work if an argument contains '--'. For example: {code} ... | transform --expression=42 | transform --expression=--payload |... {code} Also, I was surprised that this worked.. {code} | transform --expression=new StringBuilder(payload).reverse() | {code} ... but this didn't... {code} | transform --expression='new StringBuilder(payload).reverse()' | {code} I think we need to tokenize the argument (with ' if contains spaces) and remove any surrounding '...' from the result. This means if someone wants a SpEL literal they would have to use something like {code}--expression=''Hello, world!''{code} resulting in a SpEL literal 'Hello, world!'",1 +"XD-164","05/31/2013 08:46:30","Validate processing modules declare the required channels","Validate that modules have required channels declared according to their type. Currently the stream deployer accepts processors with no input, but the stream doesn't complete. 
We should fail earlier and more loudly.",2 +"XD-166","05/31/2013 15:58:00","Create config support based on channel registry type","We need to have the XD container & admin reading the registry specific property based on the registry type selected. From Mark F, on one of the code review comments: Maybe rather than having redis, rabbit, etc. properties all within a container.properties we should rely upon naming conventions instead. Specifically, we could have a single configurable property for the type of channel registry (""redis"", ""rabbit"", or ""local"" being possible values), and then we could use something like: ",3 +"XD-170","06/03/2013 10:50:44","Home wiki page improvements","Add more structure, more easily find the reference guide. The style that is here https://github.com/snowplow/snowplow/wiki is nice. ",2 +"XD-176","06/03/2013 12:19:38","Support exponential moving average in RichGauge ","This could easily be supported in the existing gauge by adding a setAlpha method to RichGaugeService and adding the extra parameter ""alpha"" to the gauge data (https://en.wikipedia.org/wiki/Exponential_moving_average). If not set it would default to the current behaviour (simple mean), otherwise it would calculate the exponential moving average in place of the mean.",2 +"XD-178","06/03/2013 13:55:23","DefaultContainer should have a default constructor that generates a UUID","The current incrementAndGet approach based off redis will not easily be applicable in local model deployment",1 +"XD-179","06/03/2013 13:57:55","Have three startup scripts, xd-singlenode, xd-admin, and xd-container","The xd-singlenode script will launch a main application that creates both the admin node (to process http admin requests) and the container node (to execute modules for data processing) within in the same process the xd-admin script will launch a main application that creates only the admin node (remove current embeddedContainer options) the xd-container script will launch a main application that creates only the container node (as it is now)",1 +"XD-180","06/03/2013 14:04:09","The command line for xd-admin and xd-container to support an additional option, pipeProtocol, that is used to determine the middleware for sending admin requests and data between processing steps","The name 'pipeProtocol' is tentative. 1. The command line scripts for xd-admin and xd-container would support a --pipeProtocol option, with the default being to use Redis. (Otherwise use xd-singlenode). 2. The xd-admin and xd-container scripts will use the value of pipeProtocol to set the java system property xd.pipeProtocol when launching the app. ",1 +"XD-181","06/03/2013 14:07:37","Update launcher.xml to have protocol independent beans defined and an import statement to load protocol specific defintiions from a system property defined location.","launcher.xml can make use of the system property xd.pipeProtocol inside an import statement. This determines which version of the XD infrastructure to load, for example what ChannelRegistry implementation, Local or Redis based, or specific message listener containers. File name conventions should be used, so if the option passed in from the command line is --pipeProtocol localChannel then the XML filename looked for has the 'Protocol' suffix applied, e.g. localChannelProtocol, and is loaded via the classpath. Redis and Local will not be the only options, other implementations will be provided in the future, e.g. 
Rabbit, and the user may be able to provide their own implementations of these infrastructure classes (an advanced task). ",3 +"XD-182","06/03/2013 14:09:02","Create redisProtocol.xml that will load all the Redis specific implementations to suppor the XD container runtime and administration","The redis specific beans that are defined in the current launcher.xml should move into this configuration file. ",3 +"XD-185","06/03/2013 15:27:19","Refactor StreamServer to an interface and create Redis and Local implementations","The current StreamServer depends on RedisStreamDeployer. Call this RedisStreamServer and extract interface to allow alternate implementations",2 +"XD-186","06/03/2013 15:31:02","Create a pipe protocol independent StreamDeployer","Create StreamDeployer that does not depend on an adapter implementation",2 +"XD-187","06/03/2013 18:32:06","Create XD script for xd-single node","This script will launch XD admin along with the module container. As part of this implementation, we will also remove the embedded options for XD admin & container scripts.",2 +"XD-190","06/04/2013 05:14:13","Cleanup embedded container story","The --embeddedX options are a bit confusing in code right now, as the Admin can embed the Container and vice-versa. I guess we should only keep the Admin>Container side of things.",1 +"XD-192","06/04/2013 20:18:57","Update getting started documentation to use xd-singlenode start script.","With the new option of starting without requiring redis, the getting started documentation should reflect this easier way to start processing data.",2 +"XD-193","06/05/2013 05:55:48","Need more unique resource locations for XD internal configuration","Currently internal config files are in META-INF/spring with fairly generic names. To avoid potential collisions if users add their own configuration in the classpath, we should have a more unique location, e.g. META-INF/spring/xd",3 +"XD-198","06/06/2013 08:50:29","Documentation for developing streams in the IDE needs to mention including scripts dir to project classpath","{{curl -X POST -d ""time --interval=3 | transform | log"" http://localhost:8080/streams/test}} results in the following stack trace in the DEBUG log. It's apparently benign, but ugly... 
{code} 2013-06-06 10:43:36,875 [task-scheduler-1] DEBUG: org.springframework.scripting.support.ResourceScriptSource - class path resource [transform.groovy] could not be resolved in the file system - current timestamp not available for script modification check java.io.FileNotFoundException: class path resource [transform.groovy] cannot be resolved to URL because it does not exist at org.springframework.core.io.ClassPathResource.getURL(ClassPathResource.java:177) at org.springframework.core.io.AbstractFileResolvingResource.lastModified(AbstractFileResolvingResource.java:170) at org.springframework.scripting.support.ResourceScriptSource.retrieveLastModifiedTime(ResourceScriptSource.java:101) at org.springframework.scripting.support.ResourceScriptSource.getScriptAsString(ResourceScriptSource.java:79) at org.springframework.integration.scripting.RefreshableResourceScriptSource.(RefreshableResourceScriptSource.java:46) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:147) at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:121) at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:280) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1035) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:939) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:485) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:271) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:126) at org.springframework.beans.factory.support.ConstructorResolver.resolveConstructorArguments(ConstructorResolver.java:616) at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:148) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1035) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:939) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:485) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:271) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:126) at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1360) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1118) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:589) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:925) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:472) at org.springframework.xd.module.SimpleModule.start(SimpleModule.java:97) at org.springframework.xd.dirt.module.ModuleDeployer.deployModule(ModuleDeployer.java:120) at org.springframework.xd.dirt.module.ModuleDeployer.handleMessageInternal(ModuleDeployer.java:108) at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:73) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.springframework.expression.spel.support.ReflectiveMethodExecutor.execute(ReflectiveMethodExecutor.java:69) at org.springframework.expression.spel.ast.MethodReference.getValueInternal(MethodReference.java:84) at org.springframework.expression.spel.ast.CompoundExpression.getValueInternal(CompoundExpression.java:57) at org.springframework.expression.spel.ast.SpelNodeImpl.getTypedValue(SpelNodeImpl.java:102) at org.springframework.expression.spel.standard.SpelExpression.getValue(SpelExpression.java:102) at org.springframework.integration.util.AbstractExpressionEvaluator.evaluateExpression(AbstractExpressionEvaluator.java:126) at org.springframework.integration.util.MessagingMethodInvokerHelper.processInternal(MessagingMethodInvokerHelper.java:230) at org.springframework.integration.util.MessagingMethodInvokerHelper.process(MessagingMethodInvokerHelper.java:129) at org.springframework.integration.handler.MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:73) at org.springframework.integration.handler.ServiceActivatingHandler.handleRequestMessage(ServiceActivatingHandler.java:67) at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:137) at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:73) at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:115) at 
org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:102) at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:178) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:149) at org.springframework.integration.core.MessagingTemplate.doSend(MessagingTemplate.java:304) at org.springframework.integration.core.MessagingTemplate.send(MessagingTemplate.java:165) at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:92) at org.springframework.integration.x.redis.RedisQueueInboundChannelAdapter.access$4(RedisQueueInboundChannelAdapter.java:1) at org.springframework.integration.x.redis.RedisQueueInboundChannelAdapter$ListenerTask.run(RedisQueueInboundChannelAdapter.java:110) at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:53) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) {code} ",1 +"XD-201","06/06/2013 10:40:52","Fix XD scripts on windows","Currently the XD scripts are broken in windows. ",2 +"XD-206","06/06/2013 16:36:59","XD AdminMain & ContainerMain should check xd.home property from scripts","Currently, the system property xd.home is set as JVM_OPTS (via SPRING_XD_ADMIN_OPTS) into xd-admin & xd-container scripts. Inside the ContainerMain & AdminMain, we need to check if this system property is set and use it. It seems like, this check is missing now.",1 +"XD-210","06/06/2013 21:21:33","If output directory does not exist for a file sink, by default allow it to be created","There shouldn't be a need to do a mkdir -p before sending data to a file sink.",1 +"XD-212","06/07/2013 08:25:52","Add http port command line option to AdminMain","Currently StreamServer has setPort, but no way for end user to set it. ",2 +"XD-214","06/07/2013 20:56:00","Create documentation on the general DSL syntax","The asciidoc wiki should have a section (included in the _Sidebar.asciidoc as well) that describes the general usage of the DSL syntax.",3 +"XD-215","06/09/2013 11:42:27","Add authentication information to twittersearch source doc","Since the changes for XD-202, twittersearch requires authentication. Need to update the docs to reflect this.",1 +"XD-221","06/10/2013 10:33:28","Links in asciidoctor generated HTML+docbook documentation are broken","The issue arises because the link:document[Label] asciidoc macro is meant for ""external documents"" and creates {{}} in docbook / {{}} in html, whereas we want {{}} / {{}} resp. We also want it to continue working in github live view. 
I guess what could work is to have the macro (either override the link macro or create our own if github supports that) that looks like : {{link:document#anchor[Label]}} (the #anchor works out of the box in asciidoc and should work in github) but override it for the html and docbook backends to render to the correct form. The thing is, there are several ways to create/override macros (and templates they render to), some of which make sense to our setup: - having asciidoc.conf in the directory of the processed document (http://asciidoc.org/userguide.html#X27) - having docbook.conf/html.conf in the directory of the processed document (http://asciidoc.org/userguide.html#X27) - defining macros using attributes (http://asciidoc.org/userguide.html#_setting_configuration_entries) I tried all of those, but to no avail. These DO WORK with plain asciidoc, but not with our toolchain. Don't know if the problem is with asciidocTOR or with the gradle wrapper though. ",2 +"XD-222","06/10/2013 10:37:53","Add docs for Deleting a simple stream.","curl -X DELETE http://localhost:8080/streams/ticktock",1 +"XD-226","06/11/2013 22:15:18","Cleanup and Optimize gradle tasks to bundle spring-xd distribution","We need to cleanup some of the duplicate gradle tasks that bundle spring-xd distributions. Currently, distXD does the copy of distributions from ""spring-xd-dirt"", ""redis"" and ""spring-xd-gemfire-server"" projects into ""$rootDir/dist/spring-xd"". And, the task ""zipXD"" makes the zip archive. These tasks should be combined with the ""distZip"" & ""docZip"" tasks. We also need to remove the duplicate artifacts configuration from these tasks.",2 +"XD-227","06/13/2013 14:19:51","Add jetty-util-6.1.26.jar and jsr311-api-1.1.1.jar as required jars so they will be on the XD classpath","This is needed for the use of the webhdfs:// scheme to talk to HDFS over http.",1 +"XD-228","06/14/2013 16:26:07","Missing '=' in example of http stream","In documentation attached to M1, in Streams/Introduction section, there's {noformat} http --port 8091 | file --dir=/tmp/httpdata/ {noformat} while it should be: {noformat} http --port=8091 | file --dir=/tmp/httpdata/ {noformat} missing ""{{=}}"" in {{http}}",1 +"XD-236","06/17/2013 09:51:52","Create an Aggregate Counter","An aggregate counter rolls up counts into discrete time buckets. There is an existing POC implementation in Java based off the library https://github.com/thheller/timed-counter The README there has a good description of the desired feature set.",8 +"XD-244","06/17/2013 12:19:32","Create a Trigger","h2. Narrative As the XD system, I need to be able to execute a job (or potentially a stream) based on a given condition (time, data existence, etc). This story is intended is for a local trigger implementation but remote triggers will also need to exist. h2. Acceptance Criteria # Implement the ability to register a time based trigger {{trigger }} for example # Implement the ability to register a file existence based trigger {{trigger }} for example # Implement the ability to execute a job via an anonymous trigger: {{job_name @ }} # Implement the ability to execute a job via a job via the previously registered trigger: {{job_name @ trigger_name}} ",8 +"XD-245","06/17/2013 12:31:25","Deploy Batch Jobs on XD","h2. Narrative As a developer, I need a way to deploy job configurations as well as the related custom code to XD. h2. 
Acceptance Criteria # Provide the ability to register jobs that have been deployed as modules via something like {{curl -d ""job"" http://localhost:8080/streams/myJob}} where job is the name of the job definition located in /modules/job and myJob is the name of the resulting registered job # Confirm that both ""regular"" jobs and Spring Hadoop based jobs can be packaged/run.",8 +"XD-247","06/18/2013 06:21:57","Need to be able to specify password for Redis","Running on Cloud Foundry (and other managed environments) we need to be able to specify a Redis password in addition to host and port.",2 +"XD-248","06/18/2013 06:28:28","Provide a Spring Shell implementation for XD","Need to create a basic Spring Shell implementation to provide easier access to the XD REST API via an XD REST API Client library. ",0 +"XD-255","06/18/2013 06:44:30","Set up a project for XD Shell","Set up a basic Spring Shell project for XD Shell",3 +"XD-257","06/18/2013 06:47:07","Create the base implementation for XDCommands for the shell","This is the basic setup of the commands file - no specific command implementations",3 +"XD-270","06/19/2013 09:02:23","The HDFS Sink should support a file naming strategy to distinguish between file currently being written and completed files","A file that is in the process of being written to should have a customized suffix added to the name, e.g. 'temp'. Once the file is closed, the suffix is removed and replaced with another value - default value can be dependent on the serialization format used, but can be customized",8 +"XD-271","06/19/2013 09:04:47","The HDFS Sink should support a number of rollover options","A strategy to roll over files that allows the user to choose between 1) the size of the file 2) the number of events/items in the file 3) an idle timeout value that if exceeded, will close the file",8 +"XD-272","06/19/2013 09:06:44","A rotation file policy based on time","A strategy that will automaticaly roll over files based time of day. For example New files will be created every hour, or every 6 hours etc. The directory for files can also be rotated so that directory structures such as /data/{year}/{month}/{day} can easily be supported with a minimum of configuration. ",0 +"XD-273","06/19/2013 09:09:30","File name should support common date and time format strings","The file name should allow the use of date and time patterns, either JDK or Joda (TBD).",0 +"XD-274","06/19/2013 09:11:34","Headers in a Message that will indicate which HDFS file the data should be stored in.","Based on message processing, a header in a Message can be added that contains the output file name. This will work together with the hdfs writer module so it can read the header and write the contents of the message to the specified file. ",0 +"XD-275","06/19/2013 09:13:18","Support for in-memory grouping/aggregation of data before writing to HDFS","This should be an optimization, to be verified, that aggregating data in memory, for example at the size of a HDFS block (64Mb often) will result in increased performance vs. not aggregating data for writes.",0 +"XD-276","06/19/2013 09:14:34","Investigate throughput performance writing to HDFS","This could be an optimization, to be verified, that delegating the writing operations to Reactor (e.g. with a backing ringbuffer implementation) will increase the throughput performance. 
Other strategies, such as threads to handle writes to individual files concurrently, should be investigated.",0 +"XD-277","06/19/2013 09:15:30","Support writing to HDFS text file using the BZip2Codec","The BZip2 codec is splittable, making it a common choice.",0 +"XD-279","06/19/2013 09:16:23","Support writing to HDFS text file using the LZO codec","note, the LZO codes are GPL-licensed, so can't be included in the distribution. It is splittable, which makes it a good candidate for writing without any additional data file container structure such as sequence or avro files.",0 +"XD-280","06/19/2013 09:19:35","Support writing to HDFS using the Snappy codec","snappy codec can be included in the distribution. for more info http://blog.cloudera.com/blog/2011/09/snappy-and-hadoop/ Depends on using a file container format such as sequence or avro files.",0 +"XD-281","06/19/2013 09:21:03","Support writing to HDFS using a custom codec","The classname for the codec would be used to instantiate it. Note, the ReflectionUtils or CompressionCodeFactory should be used to be efficient.",0 +"XD-285","06/19/2013 09:31:37","Provide a strategy interface to obtain the key used when writing SequenceFiles ","the key used in writing key-value pairs should be able to be specified declaratively.",0 +"XD-287","06/19/2013 09:32:43","Support writing to HDFS using Protocol Buffers","See https://github.com/kevinweil/elephant-bird",0 +"XD-288","06/19/2013 09:33:03","Support writing to HDFS using Thrift","See https://github.com/kevinweil/elephant-bird",0 +"XD-290","06/19/2013 12:12:55","Redis backed container's RedisQueueInboundChannelAdapter is not performant","Currently, the RedisQueueInboundChannelAdapter has blocking operation when pulling the messages out of redis queue and this is not performant. There are few ideas from the discussion to make it better: 1) Get more items from the redis queue per connection 2) We will also have compression of messages(at the channel registry) before being sent to the redis queue We also need to investigate what redis connection strategy makes the RedisQueueInboundAdapter better. ",2 +"XD-291","06/19/2013 12:45:28","HTTP Source still listens on port 9000 after removal.","Steps to reproduce: 1. curl -d ""http | log"" http://localhost:8080/streams/testHttp 2. curl -X DELETE http://localhost:8080/streams/testHttp 3. curl -d ""http | log"" http://localhost:8080/streams/testHttp org.jboss.netty.channel.ChannelException: Failed to bind to: 0.0.0.0/0.0.0.0:9000",1 +"XD-295","06/21/2013 05:51:34","redis.properties values ignored","The container application loads {{redis.properties}}, but for some reason the values are ignored, and defaults are used instead. Repro steps: # Unpack Spring XD 1.0.0.M1 to a machine with no running Redis instance # Change /xd/config/redis.properties to specify a different hostname # Run /xd/bin/xd-container # Observe error about inability to connect to Redis on localhost Workaround * Pass -Dredis.hostname={desired IP} as a JVM parameter",1 +"XD-296","06/21/2013 08:13:31","Add log config file to gemfire in final distro","The changes for XD-144 mean that log4j files are no longer in the library jars. 
The admin server already has a logging configuration which should be activated by the startup scripts, but the separate gemfire app doesn't.",1 +"XD-302","06/24/2013 14:06:17","User wants ability to create a mock source","To send a pre-set message to process(es)",8 +"XD-303","06/24/2013 14:08:03","User wants ability to create a in-process sink or tap","So that we can validate the message content in the stream",8 +"XD-304","06/24/2013 14:09:14","User wants ability to test processors","Be able to point to the processor xml file, e.g. modules/processors/transformer.xml, and have access to a source channel that drives messages into the processor and a output channel where output messages are send. The outbound channel is queue backed. Test sending JSON to a processor module that uses Tuples. ",8 +"XD-305","06/24/2013 14:09:58","User wants ability to test sinks","Handled by 1245",8 +"XD-306","06/24/2013 14:10:49","User wants ability to test sources","Examples: 1. Be able to start the rabbitmq source just by pointing to modules/source/rabbit.xml, pass in some property file for parameters to be replaced, and outgoing message is placed in a in-memory queue backed channel for use with assertions to verify functionality. 2. Test for as many source types as is 'reasonable', e.g. MQTT/TCP testing might be harder than say rabbitmq. 3. Test that sending json, results in media-type header is set to json 4. Test that sending POJO, "" POJO 5. Test that sending Tuple, "" Tuple 6. Test that sending raw bytes, "" raw bytes ",8 +"XD-315","06/25/2013 07:41:05","Package Shell ""binary"" next to xd-admin and xd-container","The shell should be an 'executable' delivered out of the box in much the same way that xd-container and xd-admin are right now. If we follow how redis/mongo distribut the shell, it sits side by side with the other binaries",3 +"XD-316","06/25/2013 08:30:23","Create a common exception framework for XD","Need to capture exceptions from the various projects that make up XD and wrap them in XD Specific exceptions. An example of this is when leaving out the channels in the module definitions, we see NoSuchBeanExceptions and IllegalArgumentExceptions thrown based on which module and what channel is missing. ",5 +"XD-340","06/26/2013 14:21:29","Create script to extract table data from JSON based on a given HAWQ table structure","We should be able to write a script that can examine the table structure for a given HAWQ table and then extract the data from JSON without the custom script we are using now.",8 +"XD-341","06/27/2013 08:14:03","Document JMX features","Document jmx command line options and refer to jolokia",2 +"XD-342","06/27/2013 13:50:13","Fix classpath error caused by multiple conflicting servlet-api jars","There is some conflicting Servlet API jars on the claspath that needs cleanup. 
Building and running with xd-singlenode script gave this error: Jun 27, 2013 3:18:16 PM org.apache.coyote.http11.AbstractHttp11Processor process SEVERE: Error processing request java.lang.NoSuchMethodError: javax.servlet.ServletContext.getEffectiveSessionTrackingModes()Ljava/util/Set; at org.apache.catalina.connector.CoyoteAdapter.postParseRequest(CoyoteAdapter.java:674) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:402) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589) at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:680) ",3 +"XD-343","06/28/2013 07:34:54","Investigate JMX object naming of deployed modules and inbound/outbound channel adapters.","The object naming is still not ideal for XD since SI conventions add some noise. Likely need to design and implement a custom naming strategy",5 +"XD-347","07/01/2013 03:53:10","Investigate Redis connection timeout issues when running performance test","With the performance test run, the numbers (messages sent/received per second) keep varying as there are ""redis client connection timeout exceptions"" (Caused by: org.jboss.netty.channel.ConnectTimeoutException: connection timed out) at both redis inbound/outbound channel adapters as I increase the total number of messages being processed (max. 10K/second). Some of the exception messages for the review: 1) With connection pool (at Redis outbound): Caused by: org.springframework.data.redis.connection.PoolException: Could not get a resource from the pool; nested exception is com.lambdaworks.redis.RedisException: Unable to connect at org.springframework.data.redis.connection.lettuce.DefaultLettucePool.getResource(DefaultLettucePool.java:95) at org.springframework.data.redis.connection.lettuce.DefaultLettucePool.getResource(DefaultLettucePool.java:36) at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory.createLettuceConnector(LettuceConnectionFactory.java:318) at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory.getConnection(LettuceConnectionFactory.java:109) at org.springframework.data.redis.core.RedisConnectionUtils.doGetConnection(RedisConnectionUtils.java:81) at org.springframework.data.redis.core.RedisConnectionUtils.getConnection(RedisConnectionUtils.java:53) at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:157) at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:137) at org.springframework.data.redis.core.AbstractOperations.execute(AbstractOperations.java:84) at org.springframework.data.redis.core.DefaultListOperations.leftPush(DefaultListOperations.java:71) at org.springframework.data.redis.core.DefaultBoundListOperations.leftPush(DefaultBoundListOperations.java:67) at org.springframework.xd.perftest.redis.outbound.RedisQOutboundChannelAdapter.handleMessageInternal(RedisQOutboundChannelAdapter.java:71) at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:73) ... 
17 more Caused by: com.lambdaworks.redis.RedisException: Unable to connect at com.lambdaworks.redis.RedisClient.connect(RedisClient.java:176) at com.lambdaworks.redis.RedisClient.connectAsync(RedisClient.java:139) at org.springframework.data.redis.connection.lettuce.DefaultLettucePool$LettuceFactory.makeObject(DefaultLettucePool.java:252) at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1181) at org.springframework.data.redis.connection.lettuce.DefaultLettucePool.getResource(DefaultLettucePool.java:93) ... 29 more Caused by: org.jboss.netty.channel.ConnectTimeoutException: connection timed out: localhost/127.0.0.1:6379 at org.jboss.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:137) at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83) at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312) at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42) 2) Without connection pool (at Redis inbound): Caused by: com.lambdaworks.redis.RedisException: Unable to connect at com.lambdaworks.redis.RedisClient.connect(RedisClient.java:176) at com.lambdaworks.redis.RedisClient.connectAsync(RedisClient.java:139) at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory.createLettuceConnector(LettuceConnectionFactory.java:321) ... 12 more Caused by: org.jboss.netty.channel.ConnectTimeoutException: connection timed out: localhost/127.0.0.1:6379 at org.jboss.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:137) at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83) at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312) at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42) ... 3 more",2 +"XD-348","07/01/2013 08:22:24","Trigger - Add support for fixed-delay interval","Trigger - Add support for fixed-delay interval",1 +"XD-368","07/03/2013 11:11:27","Improve connection handling in RedisAggregateCounterService.","This is currently too chatty. It should be possible to use a single connection for each ""increment"" operation.",2 +"XD-396","07/08/2013 09:49:24","Add section to documentation that shows command line options available for each server","This should likely be in the ""start the runtime"" section of Getting Started section.",1 +"XD-397","07/08/2013 09:52:18","Document Monitoring & Management Features","This section should discuss what is exposed via JMX, how you can view it in JConsole, and how you can view it over http via Jolikia. in particular showing how some existing metrics for inbound message channel adapters or the 'inbound' channel of the stream, that indicate the number of messages processed per section. ",2 +"XD-398","07/08/2013 09:53:45","Update Getting Started chapter to use Shell commands instead of curl","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#getting-started ",1 +"XD-399","07/08/2013 09:54:28","Update Getting Started chapter to include a section on starting the shell.","The chapter on how to start up the shell should ocme right after ""start the runtime"" and before ""create the stream""",1 +"XD-400","07/08/2013 10:00:53","Update Streams Chapter to use shell commands instead of curl","the current streams chapter http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#streams shows creation and deleting streams using CURL - switch to use shell. 
Also add listing of a stream. there is also an example of creating a stream, this should be replaced as well. ",1 +"XD-401","07/08/2013 10:02:43","Create a shell command to post data to an http port for use with the http source module","the current streams chapter http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#streams shows using curl to post some data to a http source module, curl -d ""hello"" http://localhost:9000 create a shell command so curl doesn't have to be used. https://github.com/SpringSource/rest-shell has a command already developed for this.",1 +"XD-403","07/08/2013 16:28:33","Update Sources section to use Shell commands instead of curl","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#http ",1 +"XD-404","07/08/2013 16:36:08","Update documentation section ""Running in Distributed Mode"" to show use of RabbitMQ in addition to Redis","The documentation in the Running in Distributed Mode chapter should discuss that the distributed runtime can use essentially any middleware to communicate between nodes. This functionality is provided by the core ChannelRegistry abstraction. A new intro paragraph shoul convey that it isn't a 'redis' only or 'rabbitmq' only system. There should be ""Installing RabbitMQ"" and ""Starting RabbitMQ"" sections to match those for Redis. ""Starting Spring XD in Distributed Mode"" should cover how to configure the system to select to use Redis or Rabbit.",3 +"XD-405","07/08/2013 17:14:39","Update Sources tail section to use Shell commands instead of curl","http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#tail",1 +"XD-406","07/08/2013 17:15:16","Update Sources twitter search section to use Shell commands instead of curl","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#twittersearch",1 +"XD-407","07/08/2013 17:16:12","Update Sources Gemfire CQ section to use Shell commands instead of curl ","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#gemfire-cq",1 +"XD-408","07/08/2013 17:16:43","Update Source Syslog section to use Shell commands instead of curl ","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#syslog",1 +"XD-409","07/08/2013 17:17:49","Update Sources TCP section to use Shell commands instead of curl ","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#tcp",1 +"XD-410","07/08/2013 17:27:35","Update Processors Filter & JSon Filed Value Filter section to use Shell commands instead of curl ","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#filter http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#json-value-filter",1 +"XD-411","07/08/2013 17:28:44","Update Processors Transform section to use Shell commands instead of curl ","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#transform",1 +"XD-412","07/08/2013 17:29:33","Update Processors JSON Field Extractor section to use Shell commands instead of curl ","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#json-field-extractor",1 +"XD-413","07/08/2013 17:30:05","Update Processors Script section to use Shell commands instead of curl ","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#script",1 +"XD-414","07/08/2013 17:31:49","Update Sink's Log section to use Shell commands instead of curl ","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#log_sinks",1 +"XD-415","07/08/2013 17:32:27"," 
Update Sink's File section to use Shell commands instead of curl ","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#file_sinks",1 +"XD-416","07/08/2013 17:33:30","Update Sink's HDFS section to use Shell commands instead of curl ","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#hdfs",1 +"XD-417","07/08/2013 17:34:20","Update Sink's TCP section to use Shell commands instead of curl ","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#tcp_sinks",1 +"XD-418","07/08/2013 17:34:56","Update Sink's GemFire section to use Shell commands instead of curl ","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#gemfire ",1 +"XD-419","07/08/2013 17:38:57","Taps introduction section should show use of shell to create a real stream and a real tap using the shell","See http://static.springsource.org/spring-xd/docs/1.0.0.M1/reference/html/#taps The existing docs should be updated to show a real stream being created with a filter and/or transformer and then a tap that goes to logging. The shell syntax to stop/undeploy a tap should be shown here as well, since the lifecycle is discussed.",2 +"XD-429","07/09/2013 00:06:10","Document time source","The time source is used in some examples, but it isn't documented explicitly, e.g. the --interval option in seconds.",1 +"XD-432","07/09/2013 11:05:15","User wants to configure MessageBus","XD-162 requires registering message converters with the ChannelRegistry. The end user needs to configure this statically, as the Spring configuration is not exposed.",0 +"XD-433","07/09/2013 11:40:19","Homogenize Container Initialization Failures ","If Redis is not running, the container fails to initialize in {{ContainerMain.launch()}} because the connection factory attempts to eagerly connect. If RabbitMQ is not running, the container fails to initialize in {{AbstractContainerLauncher.launch()}}. Make the failure behavior consistent from a user perspective and add a spring-retry {{RetryTemplate}} to retry container startup.",5 +"XD-434","07/09/2013 11:42:15","Consider removing the Topic/Queues when deleting the Stream","As a user, I'd like to have the option to delete the queues/topics, so we can include an _optional_ attribute as part of the stream destroy command to also clean up the associated queues/topics. *Notes:* * Spring-AMQP {{RabbitAdmin}} now has a {{getQueueProperties()}} method which returns the number of consumers, so it may be possible to use it for this purpose. * Consider the possibility of _producers_ and/or _queues_ still containing data * Consider, even after the topics/queues are cleaned up, what to do with the fanout exchange *Some Further Thoughts* * Consider using the upcoming Spring AMQP REST API {{RabbitManagementTemplate}}; if the timing is not right, we could temporarily invoke the rabbit REST API directly. * Should be optional; perhaps via {{stream destroy foo --clean}} * Should this be done by the admin? Or, via a new plugin handling module undeployments - in the rabbit case, undeploying a consumer would check for us being the last consumer and remove the queue/binding/exchange; since we undeploy left->right, everything can be cleaned up on the consumer side.
* A third option would be new methods on the bus, {{cleanConsumer}} etc., invoked by the {{StreamPlugin}} * The downside of doing it on the admin is that it wouldn't necessarily know which rabbit cluster a stream was deployed to - so it probably has to happen on the container - even so, we'd need the admin url(s) for the cluster.",5 +"XD-436","07/10/2013 07:23:03","Decouple transport from DIRT","Currently spring-xd-dirt has direct dependencies on Redis and Rabbit. Consider moving transport-dependent classes to separate jars with ""runtime"" dependencies",0 +"XD-445","07/10/2013 18:04:11","Add support to set the read timeout for http requests","We need to have the ability to set the read timeout for http requests. This is already implemented here: https://github.com/SpringSource/rest-shell/",1 +"XD-449","07/11/2013 15:26:53","The user needs the ability to set up a misfire policy for a Trigger","2 options are: 1) Fire the trigger immediately - Launch the job when the trigger can gather the resources necessary to start the job 2) Do nothing - Ignore this job fire time. This scenario can occur if XD is down or resources (threads) are not available at the time a job is to be launched. ",1 +"XD-469","07/12/2013 11:54:27","Upgrade to spring-data-hadoop 1.0.1.RC1","spring-data-hadoop 1.0.1.RC1 provides flavors for commonly used Hadoop distros/versions and we should make use of that.",1 +"XD-470","07/12/2013 11:55:37","Create JDBC sink","We need a JDBC sink for writing to HAWQ (using int-jdbc:outbound-channel-adapter and the postgresql JDBC driver) ",3 +"XD-471","07/12/2013 11:57:37","Batching JDBC channel adapter","We need a batching JDBC channel adapter (int-jdbc:outbound-channel-adapter is not batching statements AFAICT) ",8 +"XD-472","07/12/2013 11:58:59","Add spring-xd-hadoop distro specific sub-projects","We need to modify the build, adding two sub-projects for spring-xd-hadoop: one for hadoop 1.1.2 and one for phd1 (Pivotal HD), to pull in transitive dependencies for the correct Hadoop distro",8 +"XD-473","07/12/2013 12:01:25","Modify startup script of xdadmin/xdcontainer to allow specifying hadoop distro to use","We need to modify the startup script to use hadoop 1.1.2 as the default, or phd1 when specified with --hadoopDistro=phd1",5 +"XD-474","07/12/2013 12:09:31","Create JSON to tab-delimited text transformer script","We need a generic script that can do JSON to tab-delimited text transformation for data written to HDFS/HAWQ external tables. Users should be able to specify columns/fields to be included.",8 +"XD-475","07/12/2013 12:14:54","Avro sink for HDFS","We need a sink that can write data in Avro serialized format. This story is for investigating what we would need to do to support that. The Spring Integration Kafka adapter provides Avro support for Kafka.",8 +"XD-478","07/15/2013 05:35:17","Add accepted type logic to module","A module can declare one or more payload types it will accept. This will inform the runtime regarding automatic payload conversion. This can be done in the module XML configuration and processed by StreamPlugin",3 +"XD-479","07/15/2013 05:45:57","Add conversion support to ChannelRegistrar and ChannelRegistry ","Implements automatic conversion. Provide APIs on the channel registry to register payload conversion. Includes Redis and Rabbit transports.",8 +"XD-480","07/15/2013 07:10:45","In certain scenarios a job can be redeployed more than once","In a scenario where we are using the same job definition, i.e. Job.xml, and we create Job Instance Foo. 
If I create and deploy Foo2 using Job.xml I will see only 2 job definitions (correct), but I will see the job run 3 times. If I create Foo3 & deploy, I will see 3 job definitions (correct), but the jobs will run 5 times. ",2 +"XD-485","07/15/2013 09:34:36","Create stories to enable the use of Spring Shell's 2.0 branch testing facilities ","We need a few steps: 1. Investigate if we need to move off the Spring Shell 1.0 dependency, e.g. need to use code in the Spring Shell 2.0 branch 2. If we need to use code in the Spring Shell 2.0 branch, we need to release a Spring Shell 1.1 M1 release with appropriate code changes. Create stories related to the Shell release. 3. Determine and document the basic recipe for doing integration tests. 4. Create stories to provide integration tests for each existing command",5 +"XD-496","07/15/2013 11:50:59","Disable Collection to Object conversion in DefaultTuple","DefaultFormattingConversionService provides Collection -> Object conversion which will produce the first item if the target type matches. Here, this results in an unfortunate side effect: getTuple(List list) would return a Tuple, which is misleading. In this case it is preferable to treat it as an error if the argument is not a Tuple.",3 +"XD-504","07/17/2013 12:14:23","Add ""How to Build Spring-XD"" instructions to the documentation","We need to determine where this information could fit in. It can be either in the ""README"" at the project home page or the ""Getting started"" wiki page.",1 +"XD-514","07/18/2013 13:40:10","Create proper test coverage for Controllers","Create proper test coverage for Controllers",8 +"XD-522","07/19/2013 21:31:48","Cannot create tap if you have already tried to create an invalid one of same name","From the shell: {code} > stream create --name aaa --definition ""time|log"" Created new stream 'aaa' > tap create --name aa --definition ""tap aaa | log"" Command failed org.springframework.xd.rest.client.impl.SpringXDException: XD111E:(pos 8): Unexpected token. Expected 'dot(.)' but was 'pipe(|)' tap aaa | log >tap create --name aa --definition ""tap aaa . log"" Command failed org.springframework.xd.rest.client.impl.SpringXDException: There is already a tap named 'aa' {code} Looks like the first tap was created even though there was a parse error. And so the second attempt to create the tap failed due to an existing tap.",1 +"XD-540","07/24/2013 11:48:47","Broadcast Undeploy Requests","Use an 'undeploy' topic to broadcast undeploy requests to all containers. Applies to Redis and Rabbit transports, not local. Also, rename {{ModuleDeploymentRequest}} to {{ModuleOperationRequest}} with an enum {{DEPLOY}}, {{UNDEPLOY}}.",5 +"XD-583","07/30/2013 09:38:18","Dispatcher Has No Subscriber Error when posting a message to a stream","This has been observed intermittently with the Redis transport by myself and others when sending a message to a valid stream. Not sure how to recreate it yet. 11:27:10,082 ERROR ThreadPoolTaskScheduler-1 redis.RedisQueueInboundChannelAdapter:126 - Error sending message org.springframework.integration.MessageDeliveryException: Dispatcher has no subscribers for channel 'org.springframework.context.support.GenericApplicationContext@3f73865d.input'. 
",5 +"XD-584","07/30/2013 09:49:20","Parsing stream definition with parameter containing single quotes not working","The documented gemfire-cq example (https://github.com/springsource/spring-xd/wiki/Sources#wiki-gemfire-cq) fails: xd:>stream create --name cqtest --definition ""gemfire-cq --query=""Select * from /Stocks where symbol= 'VMW'"" | file"" You cannot specify option 'name' when you have also specified '' in the same command xd:>stream create --name cqtest --definition ""gemfire-cq --query=Select * from /Stocks where symbol=' VMW' | file"" 10:01:46,249 WARN Spring Shell client.RestTemplate:524 - POST request for ""http://localhost:8080/str eams"" resulted in 400 (Bad Request); invoking error handler Command failed org.springframework.xd.rest.client.impl.SpringXDException: XD115E:(pos 26): unexpected data in stream definition '*' gemfire-cq --query=Select * from /Stocks where symbol='VMW' | file ^ ",0 +"XD-585","07/30/2013 10:28:55","Deploying with twittersearch source throws Jackson ClassDefNotFound exception","The upgrade to Jackson 2.2 included the following change to the build script {code} project('spring-xd-dirt') { description = 'Spring XD DIRT' configurations { [runtime,testRuntime]*.exclude group: 'org.codehaus.jackson' } {code} Spring social twitter template depends on these classes ",1 +"XD-592","08/01/2013 12:44:17","Problems with advanced tapping","Start of a test program that can be placed in StreamCommandTests: {code} @Test public void testTappingAndChannels() { executeStreamCreate(""myhttp"",""http --port=9314 | transform --expression=payload.toUpperCase() | log"",true); executeStreamCreate(""tap"",""tap @myhttp.1 | log"",true); executeStreamCreate(""tap_new"",""tap myhttp.1 > log"",true); executeCommand(""http post --data Dracarys! --target http://localhost:9314""); // TODO verify both logs output DRACARYS! } {code} In the test program see two taps. One using the older style and one using the newer style and '>' so that there is no real tap module source, the log module just gets its input channel wired directly to myhttp.1 (the output of transform). They should be doing the same thing. However when run the output for tap_new is missing, all I see is: {code} 11:39:36,055 WARN New I/O worker #28 logger.tap:141 - DRACARYS! 11:39:36,059 WARN New I/O worker #28 logger.myhttp:141 - DRACARYS! {code} No errors are reported, there is just no output for tap_new.",8 +"XD-593","08/02/2013 08:44:21","Add ""counter delete"" shell command","Add ""counter delete"" shell command. This also requires implementation of DELETE rest end point at CountersController.",1 +"XD-594","08/02/2013 10:54:56","Create list/delete commands for all the metrics","We need to add list/delete commands for the metrics: InMemoryAggregateCounter FieldValueCounter Gauge RichGauge Currently, the AbstractMetricsController class has the delete method to delete the metric from the repository. 
We can probably use the same for all the metrics.",2 +"XD-595","08/02/2013 11:11:46","Fix wiki documentation to use xd shell command prompt to read ""xd:>""","We need to fix the github wiki to use the xd shell command prompt ""xd:>"".",1 +"XD-596","08/02/2013 12:01:57","Add CONTRIBUTING.md file","Add a CONTRIBUTING.md file, using the Spring Integration file as the basis.",1 +"XD-613","08/06/2013 11:14:09","Deployed streams should be restarted on container start","When using the Redis store, streams stored as deployed should be redeployed on container restart.",5 +"XD-614","08/06/2013 12:14:44","Conversion Enhancements","Content-Type during transport transit is not the same as the content-type within modules. ""Real"" transports always use byte[] which may contain raw byte[] from a source, a byte[] converted from a String (which may or may not already contain JSON), or a byte[] containing JSON converted by the transport on the outbound side. The transport needs to convey which of these was applied on the outbound side so it can properly reconstruct the message. Retain any content-type header that already exists in the message, and restore it. For Rabbit, use normal SI/Rabbit headers to convey this information. For Redis, add the information to the byte[].",8 +"XD-621","08/07/2013 08:30:14","Set Default Hadoop Name Node for Shell","Currently, you have to set the default name node every time you start the shell. We should do 2 things: - Provide a default name node: hdfs://localhost:8020 - Should we provide some form of persistence? It kind of sucks that you have to re-specify the name node every time the shell starts up {code} xd:>hadoop fs ls / You must set fs URL before run fs commands {code} ",2 +"XD-624","08/07/2013 11:58:16","Use External Connection Factory in TCP Syslog Source","A WARN log is emitted because the embedded connection factory does not get an application event publisher. Will be fixed in SI M3 (INT-3107).",1 +"XD-631","08/08/2013 07:17:54","Pluralize test classes in package org.springframework.xd.shell.command","The classes under test are pluralized. Therefore, the test classes themselves should reflect that. E.g. rename *JobCommandTests* to *JobCommandsTests* as it tests class *JobCommands*. Please check all tests in that package for correct naming.",1 +"XD-634","08/08/2013 09:34:46","Fix guava dependency for hadoop20 and phd1 ","Spring XD currently ships with Guava 12.0 while Hadoop 2.0.5 and Pivotal HD 1.0 depend on 11.0.2 - this could lead to classpath problems if we include both.",3 +"XD-640","08/08/2013 15:12:17","Cannot start xd-container with the --hadoopDistro option","Trying to use xd-container with PHD, and therefore need to start with --hadoopDistro. I get the following error: $ bin/xd-container --hadoopDistro phd1 17:11:20,305 ERROR main server.ContainerMain:59 - ""--hadoopDistro"" is not a valid option ",3 +"XD-643","08/08/2013 17:03:17","Map column names with underscore to camelCase style keys for JDBC sink","We need to add support for matching column names with underscores like ""user_name"" and mapping them to camel case style keys like ""userName"" in the JdbcMessagePayloadTransformer.",3 +"XD-650","08/09/2013 09:48:15","Eclipse build path error after running gradle -> refresh source folders in Eclipse","After running gradle -> refresh source folders on the spring-xd-module project in Eclipse, there is an error because the {{src/test/java}} folder is missing. 
The solution is to add a placeholder file.",5 +"XD-654","08/09/2013 12:10:26","Support explicit named channel creation with configurable settings via the REST API and Shell","Support pubsub named channels… The story could be a bit more general though: enable channel creation (with configurable settings) via the REST API and shell >namedchannel create foo --domain PUBSUB",0 +"XD-658","08/09/2013 13:36:44","Update to Spring-Data-Redis 1.1.0.M2","Remove the {{NoOpRedisSerializer}} and use the non-serialization feature of M2.",2 +"XD-663","08/09/2013 16:59:59","Use correct FS_DEFAULT_NAME_KEY constant based on Hadoop version used","We keep getting the following warning: WARN Spring Shell conf.Configuration:817 - fs.default.name is deprecated. Instead, use fs.defaultFS We should switch to using the runtime value of the FS_DEFAULT_NAME_KEY constant based on the Hadoop version used.",3 +"XD-665","08/09/2013 18:13:03","AggregateCounter display command options with ""lastHours"" and ""lastDays""","It would be nice to have ""lastHours"" and ""lastDays"" options for the aggregatecounter display command.",1 +"XD-686","08/12/2013 16:24:33","Support Named Taps (or Similar)","Provide some syntax allowing multiple tap points to be directed to a named channel. e.g. tap foo.4 > namedTap tap bar.2 > namedTap or :tap.foo > counter",8 +"XD-699","08/14/2013 17:14:00","Handling tap operations on a tap that has reference to a deleted stream","Trying to undeploy/destroy a tap that references an already deleted stream fails with the following exception: Command failed org.springframework.xd.rest.client.impl.SpringXDException: XD116E:(pos 4): unrecognized stream reference ''. As expected, the StreamConfigParser's lookupStream fails to find the stream name, as the stream doesn't exist in the repository. In this scenario, what is a better way to handle the tap operations? Should we undeploy the tap when the stream is destroyed? (though I don't see an easy way to find the taps that use a specific stream).",2 +"XD-724","08/20/2013 11:52:52","Test source module in isolation","Register the module under test and deploy the module Verify output across all transports Examples Be able to start the rabbitmq source just by pointing to modules/source/rabbit.xml, pass in some property file for parameters to be replaced, and the outgoing message is placed in an in-memory queue-backed channel for use with assertions to verify functionality. Test that sending JSON results in the media-type header being set to JSON Test that sending POJO -> POJO Test that sending Tuple -> Tuple Test that sending a (JSON) String -> String Test that sending raw bytes -> raw bytes ",5 +"XD-725","08/20/2013 11:54:20","Test processor module in isolation","Register the module under test and have access to a source channel that drives messages into the processor and an output channel where output messages are sent. Examples Built-in Message conversion: send JSON to a processor module that accepts Tuples. ",5 +"XD-726","08/20/2013 11:54:52","Test sink module in isolation","Register the module under test Send a message to the sink using a test source and verify the sink contents - this requires checking an external resource - depends on the sink ",5 +"XD-747","08/23/2013 06:31:10","Bootstrap XD on Yarn","1. How should the XD Yarn application be packaged and bootstrapped? 2. Where should the code live? Within XD itself or a separate repo?",1 +"XD-748","08/23/2013 06:34:55","Interacting with XD on Yarn","1. How do we talk to the XD instance(s) on Yarn? 2. 
There is a REST interface whose location can be exposed either via the resource manager or the appmaster 3. Technically the appmaster could also expose an interface, which could either be a proxy for the XD REST interface or a dedicated interface implementation (i.e. Thrift or Spring Integration)",1 +"XD-749","08/23/2013 06:38:07","Comm protocol for appmaster","We need to be able to talk to the appmaster, which will control the whole XD Yarn app. 1. Choose the implementation: Thrift? Spring Integration? Something else? ",1 +"XD-750","08/23/2013 06:41:25","Container and Grid Control","1. We'll need a system which gives better control of what yarn/xd containers are out there and what the status of those containers is. 2. We also need grouping of containers in order to choose, prioritize and scale tasks. 3. We need heartbeating of the grid nodes. Hadoop Yarn itself doesn't give enough tools to know if a container is ""alive"".",1 +"XD-751","08/23/2013 06:53:08","XD UI on Yarn","Technically speaking, if we want to integrate the XD UI with Hadoop tools, we should do it so that the proxy on the resource manager works with the XD UI. From the Hadoop Yarn resource manager's point of view this proxied URL is the application's tracking URL (which is registered when the application is deployed).",1 +"XD-752","08/23/2013 18:50:20","Restrict Job launcher with more than one batch job configured in job module","Currently the Job launcher launches all the batch jobs configured in the job module. Please refer to ModuleJobLauncher's executeBatchJob(). This makes the JobRegistry register multiple batch jobs under the same Spring XD job name (group name). Also, it is understood that having multiple job configurations under the same config XML is uncommon.",2 +"XD-754","08/25/2013 07:57:35","Fix Class/Package Tangle Introduced by XD-353","{{container}} and {{event}}. {{XDContainer}} references and is referenced by {{ContainerStartedEvent}} (and stopped). https://sonar.springsource.org/drilldown/measures/7173?metric=package_cycles&rids%5B%5D=7717 ",1 +"XD-755","08/26/2013 09:26:34","Reactor Environment Improvements","Use a profile or similar to only include the {{Environment}} conditionally (currently in module-common.xml). Also Jon Brisbin one thing to keep in mind: we talked about having a properties file for XD that configured the RingBuffer et al in a non-default way Jon Brisbin e.g. no event loop Dispatchers…a ThreadPoolDispatcher with a large thread pool size (50 threads? 100?)…and maybe even two RingBufferDispatchers: input and output Jon Brisbin so we might want to change from strictly a default Environment bean to an EnvironmentFactoryBean with a specific configuration…thinking about it now I maybe should add a namespace element for the Environment",3 +"XD-773","08/28/2013 09:26:50","Tab support inconsistent for http post","When doing *xd:> http post* and pressing the *tab* key, one should get a list of available options. Right now nothing happens. I have to press *--* and then tab to get the options. 
Interestingly, this works for *stream create* + *tab* key",2 +"XD-776","08/28/2013 15:46:41","Shell: Remove ""taps list"" command","We should only allow ""tap list"" - currently ""tap list"" AND ""taps list"" are allowed but ""tap list"" does not show up under help.",1 +"XD-788","08/29/2013 11:22:21","Add Integration Tests to run JobCommands Tests against all transports","Similar to ChannelRegistry: - AbstractChannelRegistryTests that has the real tests - subclasses for each impl provide the registry to be tested Thus one test can run against multiple transports.",8 +"XD-804","09/04/2013 08:15:33","Add Named Channel API","We need an abstraction in place to retrieve messages from a ""named channel"" programmatically. Right now there is no implementation-agnostic way of doing this (such as receiveMessage(), queueSize()). This could be quite useful for integration tests of streams. E.g. to do more focused tests without resorting to ""temp-files"" and non-essential sinks or sources etc. - e.g. {code} :routeit > router --expression=payload.contains('a')?':foo':':bar' {code}",8 +"XD-805","09/04/2013 08:18:41","Get notified when created named channel ""is ready""","For testing purposes it would be super-helpful if there were a hook to get notified when a named channel is up and running. In current tests one may have to resort to ""Thread.sleep"".",8 +"XD-807","09/04/2013 11:56:50","Shell: Standardize counter name parameter","The parameters for the counter name are inconsistent between ""Aggregate Counter"" and ""Field Value Counter"": --counterName versus --name",2 +"XD-808","09/04/2013 20:37:58","Update to spring-data-hadoop 1.0.1.RELEASE","This might mean we should adjust our hadoopDistro options to the ones supported in the new release - hadoop12 (default), cdh4, hdp13, phd1 and hadoop21",3 +"XD-819","09/06/2013 14:01:09","Add Service Activator Processor","Would be nice to have a ServiceActivator Processor available so that if one had an existing Spring bean they could simply describe the bean id and method name - without going through the full complexity of creating a processing module.",3 +"XD-842","09/12/2013 08:35:19","Add back classifier = 'dist' to distZip build target","Add back ""classifier = 'dist'"" to the distZip build target - it was accidentally removed.",1 +"XD-847","09/14/2013 10:45:13","Revise the available hadoopDistro options","We should adjust our --hadoopDistro options to the ones supported in the new spring-data-hadoop 1.0.1.RELEASE - hadoop12 (default), cdh4, hdp13, phd1, hadoop20 This includes updating the wiki pages",5 +"XD-849","09/16/2013 05:22:16","Gemfire modules should support connection via locator","The gemfire modules currently accept server host and port. Provide an option to specify a locator host and port",2 +"XD-850","09/16/2013 08:29:32","JAR version mismatches","Looks like there are some version mismatch issues with the build/packaging of the XD components. Looking in xd/lib I see the following which looks suspicious: mqtt-client-0.2.1.jar mqtt-client-1.0.jar jackson-core-asl-1.9.13.jar jackson-mapper-asl-1.9.12.jar spring-integration-core-3.0.0.M3.jar spring-integration-http-2.2.5.RELEASE.jar spring-data-commons-1.6.0.M1.jar spring-data-commons-core-1.4.0.RELEASE.jar ",3 +"XD-872","09/19/2013 21:02:09","Make in-memory meta data stores persistent","Just wanted to create a story for this - so we can consider whether this should be addressed. In at least 2 modules we use non-persisted state. 
We may want to consider making them persistent: *Twitter Search* uses an in-memory *MetadataStore* that keeps track of the twitter ids. There exists a corresponding issue for Spring Integration: ""Create a Redis-backed MetadataStore"" See: https://jira.springsource.org/browse/INT-3085 *File Source*'s File Inbound Channel Adapter uses an AcceptOnceFileListFilter, which uses an in-memory Queue to keep track of duplicate files. ",8 +"XD-873","09/19/2013 21:26:09","File Source - Provide option to pass on File object","This story may need to be broken into several stories. Particularly for Batch scenarios, one may not want to run a ""file-to-string-transformer"" on the payload file in the file source but rather handle/pass the file reference itself (local SAN etc.) - e.g. in case somebody drops a 2GB file, or in scenarios where one wants to push those large files into HDFS and run hadoop jobs on the data. This is important for Batch Jobs as they need to access the file itself for the reader. We need to *keep in mind the various transports we support*. Not sure how Kryo handles file serialization. I would think we only need the File Meta Data to be persisted, not the file-data itself (make that configurable??). ",8 +"XD-874","09/19/2013 21:33:03","For file based item reader jobs, step/job completion message should have name of file sent on named channel","It looks like we don't handle deletion of source files currently. We should provide some support for that - Maybe there is a way to tie into Spring Integration's PseudoTransactionManager support: http://docs.spring.io/spring-integration/api/org/springframework/integration/transaction/PseudoTransactionManager.html The *File Source* should possibly also support File archival functionality (But that might also be a dedicated processor?). Not sure where we want to set the semantic boundaries for the File Source. ",8 +"XD-885","09/20/2013 20:24:05","Add Batch Job Listeners Automatically","Add Batch Job Listeners Automatically * Each major listener category should send notifications to its own channel (StepExecution, Chunk, Item etc.) * Add attribute to disallow automatic adding of listeners",8 +"XD-892","09/23/2013 12:16:21","Spring Batch Behavior change from M2 to M3","In M3, the batch job behavior has changed. In M2, it was much easier to create and invoke a batch job. In M3, a trigger is required. Figuring that change out isn't a big deal, but the behavior of this batch job in M3 throws a stack trace, yet it executes. In M2, this same batch job runs fine with no stack trace. Logs are attached. I can't see a difference in the container log property files from M2 to M3. Turning the log settings down will suppress the traces, but I was not expecting the traces since they did not show up in M2. Stream Definitions: job create --name pdfLoadBatchJob --definition ""batch-pdfload --inputPath='LOCAL_PDF_PATH' --hdfsPath='REMOTE_HDFS_PATH'"" stream create --name pdfloadtrigger --definition ""trigger > job:pdfLoadBatchJob""",1 +"XD-897","09/24/2013 08:44:15","The HDFS Sink should support copying File payloads","We should support *java.io.File* payloads in order to support non-textual file and large text file payloads being uploaded to HDFS. Currently text file payloads are converted to a text stream in memory, and non-String payloads are converted to JSON first, using an ""object-to-json-transformer"". Ultimately we need to support streams such as ""file | hdfs"" where the actual payload being copied to HDFS is not necessarily JSON or textual. 
Need to be able to support headers in the message that will indicate which HDFS file the data should be stored in. ",8 +"XD-901","09/24/2013 15:54:33","Wrong Jetty Util on classpath for WebHdfs","We currently include jetty-util-6.1.26.jar but we need to add the correct jar for different distributions - PHD uses jetty-util-7.6.10.v20130312.jar. Need to check the hadoop-hdfs dependencies for the distros and add jetty-util-* to the jar copy for each distro ",3 +"XD-904","09/27/2013 01:36:26","Fix hardcoded redis port from tests","kparikh-mbpro:spring-xd kparikh$ grep -r 6379 * | grep java spring-xd-analytics/src/test/java/org/springframework/xd/analytics/metrics/common/RedisRepositoriesConfig.java: cf.setPort(6379); spring-xd-analytics/src/test/java/org/springframework/xd/analytics/metrics/integration/GaugeHandlerTests.java: cf.setPort(6379); spring-xd-analytics/src/test/java/org/springframework/xd/analytics/metrics/integration/RichGaugeHandlerTests.java: cf.setPort(6379); spring-xd-dirt/src/test/java/org/springframework/xd/dirt/listener/RedisContainerEventListenerTest.java: cf.setPort(6379); ",1 +"XD-908","09/30/2013 04:31:48","Add aggregate counter query by number of points","It should be possible to supply a start or end date (or none for the present), plus a ""count"" value for the number of points required (i.e. after or prior to the given time).",3 +"XD-912","09/30/2013 11:50:06","Support for registering custom message converters","Users need to register custom message converters used by modules.",5 +"XD-917","10/01/2013 11:30:32","Make the parser aware of message conversion configuration","Enhance the stream parser to take message conversion into account in order to validate or automatically configure converters. For example: {noformat:nopanel=true} source --outputType=my.Foo | sink --inputType=some.other.Bar is likely invalid since XD doesn't know how to convert Foo->Bar. {noformat}",8 +"XD-919","10/02/2013 07:04:05","Remove json parameter from twittersearch source","The json parameter is no longer required. Use --outputType=application/json instead",2 +"XD-928","10/08/2013 09:33:15","Refactor src/test/resources in Dirt","* In the testmodules.source ** Rename source-config to packaged-source ** Rename source-config to packaged-source-no-lib * All xml files should be prefixed with test, i.e. testsource, testsink * Make sure all tests pass with the new configuration",1 +"XD-930","10/08/2013 10:40:56","Return rounded interval values from aggregate counter queries","The aggregate counter query result currently returns the interval that is passed in, whether it is aligned with the bucket resolution requested or not. It would be more intuitive if the time values returned are rounded (down) to the resolution of the query (i.e. whole minutes, hours, days or whatever).",2 +"XD-931","10/08/2013 14:40:24","Format option to display runtime module properties in shell","The runtime module properties require a format option when displayed in the Shell. Based on the PR (https://github.com/spring-projects/spring-xd/pull/340), the module properties are stored as String and displayed as is. ",2 +"XD-939","10/09/2013 12:18:03","Make Runtime modules listing by ContainerId pageable","The RuntimeContainersController (from PR#340) returns the list of runtime modules. 
Instead we need to make it pageable.",2 +"XD-955","10/14/2013 10:08:27","Update Jobs documentation to include ""job launch"" command","This is currently missing and probably supersedes some of the stuff that's in there now.",1 +"XD-974","10/21/2013 14:23:12","The HDFS Sink should support compressing files as they are copied","Get a java.io.File and copy it into HDFS. Could be text or binary. Write compressed with Hadoop and third-party codecs (see XD-277, XD-279). Should initially support: - bzip2 - LZO ",8 +"XD-981","10/21/2013 21:11:57","Missing guava-11.0.2.jar dependency for hadoop distros","We used to have a shared guava-11.0.2.jar dependency in the lib dir. That's no longer there, so hadoop distros that require this now fail (at least any hadoop 2.0.x based ones) We should also upgrade to current Hadoop versions (Hadoop 2.2 stable)",3 +"XD-990","10/22/2013 15:41:21","The HDFS Store Library should support writing text with delimiter","Support writing lines of text separated by a delimiter Support writing a CSV (comma-separated values), TSV (tab-separated values), No compression",8 +"XD-991","10/22/2013 15:45:55","The HDFS Store Library should support compression when writing text","Need to support writing text in compressed format. Should initially support: - bzip2 - LZO",8 +"XD-992","10/22/2013 15:50:23","The HDFS Store Library should support writing to Sequence Files","Support for writing Sequence Files Without Compression Need a means to specify the key/value to be used ",8 +"XD-993","10/22/2013 15:52:11","The HDFS Store Library should support compression when writing to Sequence Files","Support for using compression when writing Sequence Files Either block or record-based compression. ",8 +"XD-994","10/22/2013 16:18:16","The HDFS Sink should support writing POJOs to HDFS using Parquet","Writing POJOs using the Kite SDK ",8 +"XD-998","10/23/2013 09:54:37","Add documentation for gemfire cache-listener source","Need some sample usage, docs for https://github.com/spring-projects/spring-xd/tree/master/modules/source/gemfire ",1 +"XD-1005","10/23/2013 22:15:20","UI: User should be able to filter the list of executions on the execution tab","On clicking the “Executions” tab, the user should see the list of all batch job executions. There should be options to filter job executions by a few criteria such as “Job name”, “execution time”, etc. ",3 +"XD-1006","10/23/2013 22:18:22","UI: User should be able to view job detail from a specific job execution at Job Executions page","On clicking the ""details"" link on a job execution row, the user should see the job details. The job detail page will show all the information about the job, whereas the table listing of jobs on the Execution tab may have omitted some columns or aggregated values to convey information more easily.",3 +"XD-1007","10/23/2013 23:09:48","UI: User should be able to see step execution info in a table below job detail","On clicking the job detail page, we should display all the step executions associated with the specific job execution in a table view.",3 +"XD-1016","10/25/2013 13:55:41","Provide an option to pretty print JSON output","Probably the cleanest approach is to provide a properties file in the xd config directory that enables this globally, e.g., json.pretty.print=true. 
This will require some refactoring of the ModuleTypeConversion plugin, i.e., use DI in streams.xml",3 +"XD-1039","11/07/2013 02:10:12","Composed of Composed fails at stream deployment time","Although composition of a module out of an already composed module seems to work at the 'module compose' level, trying to deploy a stream with that more complex module fails with: at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589) at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:312) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:724) Caused by: java.lang.IllegalArgumentException: each module before the last must provide 'output' at org.springframework.util.Assert.notNull(Assert.java:112) at org.springframework.xd.module.CompositeModule.initialize(CompositeModule.java:132) at org.springframework.xd.dirt.module.ModuleDeployer.deploy(ModuleDeployer.java:234) at org.springframework.xd.dirt.module.ModuleDeployer.deployModule(ModuleDeployer.java:224) at org.springframework.xd.dirt.module.ModuleDeployer.handleCompositeModuleDeployment(ModuleDeployer.java:180) at org.springframework.xd.dirt.module.ModuleDeployer.handleMessageInternal(ModuleDeployer.java:129) at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:73) ... 63 more",5 +"XD-1041","11/08/2013 09:12:37","Upgrade to Spring for Apache Hadoop 1.0.2.RELEASE and Pivotal HD 1.1","Make sure the sinks and jobs work against Pivotal HD 1.1",3 +"XD-1045","11/09/2013 12:56:25","Create project for model that is common between client and server","This would eliminate dependencies that are currently in the codebase, such as: * RESTModuleType and ModuleType enums * ModuleOption and DetailedModuleDefinitionResource.Option ",5 +"XD-1047","11/10/2013 12:41:11","Allow Aggregate Counter to use timestamp field in data.","Currently the aggregate counter aggregates by the current time. However, the data may already have a timestamp in it (e.g. streams from activity events on a website). It would be useful as an alternative approach to be able to specify this field to aggregate on. This would have the following benefits: 1) The aggregate counts would be more accurate as they would reflect the actual event times and not have any lag from an intermediate messaging system they might have passed through. 2) If for whatever reason XD is down, comes back up and starts pulling queued messages from the messaging system, the aggregate counter will reflect the correct event time. Currently you would get a gap and then a spike as a backlog of messages would get allocated to the current aggregate count. 3) Old data could be rerun through XD still creating the correct aggregate counts. Configuration would be something like stream create --name mytap --definition ""tap:mystream > aggregatecounter --name=mycount --timestampField=eventtime"" Without the timestampField option it would behave as it does currently. ",5 +"XD-1048","11/10/2013 13:19:29","Extend aggregate counter to dynamically aggregate by field values in addition to time.","This would be a combination of the existing aggregate counter and field value counter functionality. For example, if the stream data was for car purchases, some fields might be colour, make and model. 
When analysing the aggregate data I don't just want to know how many were sold on Monday, but how many of each make or how many of each colour, or how many of a particular colour, make AND model. This would allow a dashboard-type client to 'drill down' into each dimension or combination of dimensions (in real time, without executing batch queries against the raw data). Ideally the aggregate counter would be specified as stream create --name mytap --definition ""tap:mystream > aggregatecounter --name=mycount --fieldNames=colour,make,model"" The keys would be dynamically created according to the field values in each record (i.e. in a similar way to the field value counter, you would not need to predefine field values) and keys would be created for all combinations of the fields specified, e.g. the record { ""colour"":""silver"" , ""make"":""VW"" , ""model"" : ""Golf"" } would increment the following key counters (in addition to the existing time buckets) colour:silver make:VW model:Golf colour:silver.make:VW colour:silver.model:Golf make:VW.model:Golf colour:silver.make:VW.model:Golf i.e. the actual keys would look something like aggregatecounters.mycount.make:VW.model:Golf.201307 etc. This may seem like it would generate a lot of key combinations, but in practice the data generated will still be massively less than the raw data, and keys will only be created if that combination occurs in a time period. Also some fields may be dependent on each other (such as make and model in the above example), so the number of possibilities for those composite keys would be a lot less than the number of one times the number of the other. ",5 +"XD-1060","11/12/2013 08:28:43","Add support for Hortonworks Data Platform 2.0","(apologies if a ticket already exists for this, but I didn't see one) I spun up the Hortonworks Data Platform 2.0 sandbox, but see it isn't supported by Spring XD yet. How hard would it be to add these distros in? Is it just a matter of dropping in a lib folder for hadoop22 and/or hdp20, and allowing those options to be passed in via the --hadoopDistro option? I'm currently trying to work through the following tutorial, but using the HDP 2.0 sandbox instead of the 1.3 sandbox: http://hortonworks.com/hadoop-tutorial/using-spring-xd-to-stream-tweets-to-hadoop-for-sentiment-analysis/ Thanks!",1 +"XD-1061","11/12/2013 12:55:40","Upgrade asciidoctor-gradle-plugin to 0.7.0","Looks like we need to spend a cycle on Asciidoc - as we still have the author-tag issue - I thought we could simply upgrade the asciidoctor-gradle-plugin to 0.7.0 (currently 0.4.1) but that breaks the docs being generated.",2 +"XD-1072","11/15/2013 11:00:46","Add bridge module","Add a bridge module per XD-956 to support definitions like topic:foo > queue:bar. Convenient for testing for XD-1066",1 +"XD-1080","11/18/2013 20:17:33","Make deploy=false the default when creating a new job","The automatic deployment of the job makes it harder to understand the lifecycle of the job and also does not allow for the opportunity to define any additional deployment metadata for how that job runs, e.g. is it partitioned, etc.",1 +"XD-1097","11/19/2013 03:20:56","Redo Hadoop distribution dependency management","The way we now include various Hadoop distributions is cumbersome to maintain. Need a better way of managing and isolating these dependencies at a module level rather than the container level.",8 +"XD-1103","11/20/2013 14:03:34","JDBC sink is broken - looks like some config options got booted","The JDBC sink is broken. 
Simple ""time | jdbc"" results in: org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [insert into test (payload) values(?)]; nested exception is java.sql.SQLSyntaxErrorException: user lacks privilege or object not found: TEST Looks like some config options got clobbered during bootification. ",5 +"XD-1104","11/21/2013 04:01:11","Create Shell Integration test fixture for jdbc related sink","Would be nice to have some kind of regression testing on the jdbc sink, as it becomes more prominent in XD. Use of an in memory db where we expose eg a JdbcTemplate to assert state",5 +"XD-1105","11/21/2013 04:03:31","Add some test coverage to mqtt modules","Even though it may be hard to come up with a mqtt broker, an easy test that should be automated is somesource | mqtt --topic=foo with mqtt --topics=foo | somesink And asserting that what is emitted to somesource ends up in somesink. ",3 +"XD-1108","11/22/2013 07:47:00","Restore lax command line options","Restore --foo=bar as well as --foo bar Validation of values should be done as a separate story",2 +"XD-1112","11/25/2013 05:37:30","Add port scan (and ability to disable) to container launcher","Spring Boot support port scanning if you set server.port=0 (and disable with -1), so we could make that the default for the container node.",0 +"XD-1114","11/25/2013 07:03:47","Investigate dropped Module Deployment Requests","We have observed in unit tests (see AbstractSingleNodeStreamIntegrationTests) that(Redis/SingleNode) occasionally fail. The root cause must be investigated further but there is some evidence to suggest that the control messages (ModuleDeploymentRequests) are not always received and handled by the ModuleDeployer. This does not produce an error but results in runtime stream failures. This problem may be resolved as part of the planned Deployment SPI but is being tracked here until we are certain that it has been resolved.",5 +"XD-1115","11/25/2013 07:36:43","We no longer validate the --hadoopDistro options in the xd scripts","We no longer validate the --hadoopDistro options in the xd scripts. Seem sthe classes doing this validation were removed for boot. We do this validation in the xd-shell script",3 +"XD-1122","11/26/2013 01:46:43","Add jmxPort to list of coerced cmd line options","Following merge of XD-1109. See discussion at https://github.com/spring-projects/spring-xd/commit/eaf886eab3b2ef07da55575029ccabb2c8a36af9#commitcomment-4701947",2 +"XD-1132","12/01/2013 12:08:41","JMS Module - add support for TOPICS","As a Spring XD user I need to listen on a JMS Topic and ingest the messages, so I can process the messages. Currently the module only allows for Queues",2 +"XD-1147","12/06/2013 09:30:28","Allow alternate transports to be used within a stream","Need to clarify if this means alternate transports within a stream, e.g source |[rabbit] | processor |[redis]| sink or specifying that a stream use an alternate transport to the one configured for the container. 
",0 +"XD-1155","12/11/2013 12:21:10","The lib directory for hadoop12 contains mix of hadoop versions","This causes issues depending on which version of the core/common jar gets loaded first - like: xd:>hadoop fs ls -ls: Fatal internal error java.lang.UnsupportedOperationException: Not implemented by the DistributedFileSystem FileSystem implementation   at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:213)   at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2401)   at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2411)   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)   at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:166)   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:224)   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:207)   at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)   at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)   at org.springframework.xd.shell.hadoop.FsShellCommands.run(FsShellCommands.java:412)   at org.springframework.xd.shell.hadoop.FsShellCommands.runCommand(FsShellCommands.java:407)   at org.springframework.xd.shell.hadoop.FsShellCommands.ls(FsShellCommands.java:110)   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)   at java.lang.reflect.Method.invoke(Method.java:606)   at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:191)   at org.springframework.shell.core.SimpleExecutionStrategy.invoke(SimpleExecutionStrategy.java:64)   at org.springframework.shell.core.SimpleExecutionStrategy.execute(SimpleExecutionStrategy.java:48)   at org.springframework.shell.core.AbstractShell.executeCommand(AbstractShell.java:127)   at org.springframework.shell.core.JLineShell.promptLoop(JLineShell.java:483)   at org.springframework.shell.core.JLineShell.run(JLineShell.java:157)   at java.lang.Thread.run(Thread.java:724) ",3 +"XD-1159","12/13/2013 08:09:46","Add a MongoDB Sink","This should be quite straightforward, since the Spring Data Mongo jars are already included. We have this working by just adding the attached sink context file and the spring-integration-mongodb jar. (This works for JSON string streams, but a mongo converter probably needs added to support Tuple conversion) ",5 +"XD-1160","12/13/2013 11:44:21","Standardize naming and unit for options across modules","We should standardize on the options between modules: idleTimeout - timeout rolloverSize - rollover Also, need to standardize on unit used for timeout - should this be s or ms? 
",8 +"XD-1161","12/13/2013 13:33:27","Re-deployment of hdfs sink reuses filename of first deployment","Need to check for existing files with the same file counter",3 +"XD-1162","12/13/2013 17:07:49","Column option of JDBC sink should not convert underscore to property name.","Current implementation of column option of JDBC sink convert underscore to java property name. If database column contains underscore, there is no way to store data. So JdbcMessagePayloadTransformer should not use JdbcUtils.convertUnderscoreNameToPropertyName even if column contains ""_"".",1 +"XD-1170","12/16/2013 16:23:51","Splunk module is broken","Splunk sink module doesn't work at all. It throws java.lang.VerifyError exception like following. nested exception is java.lang.VerifyError: class org.springframework.integration.splunk.outbound.SplunkOutboundChannelAdapter overrides final method onInit.()V This is because SplunkOutputChannelAdapter refers old spring integration jar, but recent AbstractReplyProducingMessageHandler (which SplunkOutputChannelAdapter extends) set final to onInit method. Hence it doesn't work. SplunkOutboundChannelAdapter should be fixed to not override onInit method and replace the jar file spring-integration-splunk-1.0.0.M1.jar.",2 +"XD-1176","12/18/2013 10:42:19","Update to spring-data-hadoop 2.0.0.M4","Update dependencies to spring-data-hadoop 2.0.0.M4",1 +"XD-1182","12/19/2013 11:42:39","Update to spring-data-hadoop 2.0.0.M5","Update to spring-data-hadoop 2.0.0.M5 when it is released and remove the temporary DatasetTemplateAllowingNulls in spring-xd-hadoop We should also review the supported hadoop distros - think we should support anything that is current/stable: - hadoop12 - hadoop22 - phd1 (PHD 1.1) - hdp13 - hdp20 - cdh4 ",3 +"XD-1190","12/27/2013 07:20:33","Setup precedence order for module properties' property resolver","The PropertyResolver needs to follow the below precedence order on PropertySources when resolving the module properties: From lowest to the highest order, 0 application.yml 1 applicaiton.yml fragment 2 property placeholders 2a property placeholder under 'shared' config directory 2b property placeholder under module/(source/sink/processor)/config directory 3. environment variables 4. system properties 5. command line ",5 +"XD-1191","12/30/2013 08:18:57","JDBC sink destroys existing table","The jdbc sink deletes existing table and creates a single column payload one even if properties file has 'initializeDatabase=false'",3 +"XD-1217","01/10/2014 07:14:47","twittersearch and twitterstream should support compatible formats","Currently twitterstream emits native twitter json whereas twittersearch uses SI/Spring Social and emits spring social Tweet types. This makes it difficult to replace twitter sources and reuse XD stream definitions. This requires coordination with SS 1.1.0 and SI 4.0 GA releases. NOTE: I think it's a good idea to continue to support native twitter JSON, keep as an option for twitterstream, but the default should be Tweet types.",5 +"XD-1220","01/10/2014 12:30:46","Batch jobs should use application.yml provided connection as default","Batch jobs should use application.yml provided connection as default. They now have their own configuration in batch-jdbc.properties. 
This config needs to account for any changes made to application.yml settings so the data is written to the batch metadata database by default.",5 +"XD-1228","01/14/2014 22:07:42","Provide an easy, prescriptive means to perform unit and basic stream integration tests.","AbstractSingleNodeStreamDeploymentIntegrationTests is the basis of 'state of the art' testing for a stream that allows you to get a reference to the input and output channel of the stream http | filter | transform | file. One can send messages to the channel after the http module but before filter, and one can retrieve the messages that were sent to the channel after the transform module but before file. The current implementation inside AbstractSingleNodeStreamDeploymentIntegrationTests can be improved in terms of ease of use for end-users. The issue is to create as simple a way as possible for a user to test their processing modules/stream definitions without having to actually do a real integration test by sending data to the input module. Either as a separate issue or as part of this one, the documentation https://github.com/spring-projects/spring-xd/wiki/Creating-a-Processor-Module should be updated to explicitly show how to use this issue's test functionality. ",8 +"XD-1240","01/14/2014 22:58:40","Add to Acceptance Test EC2 CI build plan a stage that uses XD distributed mode with rabbit","See https://quickstart.atlassian.com/download/bamboo/get-started/bamboo-elements ""Stages are comprised of one or more Jobs, which run in parallel"" We would like the tests across the rabbit and redis transports to occur in parallel. ",8 +"XD-1241","01/14/2014 22:59:58","Add to Acceptance Test EC2 job a stage that uses XD distributed mode with redis","See https://quickstart.atlassian.com/download/bamboo/get-started/bamboo-elements ""Stages are comprised of one or more Jobs, which run in parallel"" We would like the tests across the rabbit and redis transports to occur in parallel.",8 +"XD-1245","01/15/2014 07:04:56","Develop basic acceptance test application to exercise a basic XD-EC2 deployment from CI","Create a first pass at an acceptance test app for a stream definition of http | log. This will involve creating two new projects in XD: 1. spring-xd-integration-test 2. spring-xd-acceptance-tests #1 will contain generally useful utility methods for acceptance tests, such as sending data over http, obtaining and asserting JMX values of specific modules. #2 will contain tests that use #1 to test the various out-of-the-box modules provided in XD.",5 +"XD-1252","01/17/2014 04:43:43","Allow processor script variables to be passed as module parameters","Currently, if we want to bind values to script variables we need to put them in a properties file like so: xd:> stream create --name groovyprocessortest --definition ""http --port=9006 | script --location=custom-processor.groovy --properties-location=custom-processor.properties | log"" Ideally it should be: xd:> stream create --name groovyprocessortest --definition ""http --port=9006 | script --location=custom-processor.groovy --foo=bar --baz=boo | log"" ",5 +"XD-1255","01/20/2014 09:00:31","Create assertion to get count of messages processed by a specific module in a stream","The modules are exposed via JMX and in turn exposed over http via jolokia. See https://jira.springsource.org/browse/XD-343. This issue is to develop a helper method that, given a stream id and/or module name, asserts that the number of messages processed after sending stimulus messages is as expected. e.g. 
int originalCount = getCount(""testStream"", ""file""); //do stuff that generates 100 messages assertCount(""testStream"", ""file"", 100, originalCount); For now we can assume we know where the modules are located by assuming we have only one container deployed.",5 +"XD-1256","01/21/2014 02:49:06","Running XD as service","It is useful to configure the operating system so that it will start Spring XD automatically on boot. For example, in Linux it would be great if the Spring XD distro contained an init.d script to run it as a service. A typical init.d script gets executed with arguments such as ""start"", ""stop"", ""restart"", ""pause"", etc. In order for an init.d script to be started or stopped by init during startup and shutdown, the script needs to handle at least the ""start"" and ""stop"" arguments. ",2 +"XD-1267","01/24/2014 15:33:07","Improve configuration option handling","There are inconsistencies in our current approach for handling module options (using a property file for defaults vs. classes has different behavior in terms of overriding with system properties). Need to rationalize the behavior.",0 +"XD-1270","01/27/2014 20:51:07","Add states to the deployment of stream","Improve how the state of the stream is managed. A deploy command moves the stream from the undeployed state to the deploying state. If all modules in the stream are successfully deployed, the stream state is ‘deployed’. If one or more module deployments failed, the stream state is failed. Any modules that were successfully deployed are still running. Sending an undeploy command will stop all modules of the stream and return the stream to the undeployed state. For the individual modules that failed, we will be able to find out which ones failed. Not yet sure if we can try to redeploy just those parts of the stream that failed. See the [design doc|https://docs.google.com/a/gopivotal.com/document/d/1kWtoH_xEF1wMklzQ8AZaiuhBZWIlpCDi8G9_hAP8Fgc/edit#heading=h.2rk74f16ow4i] for more details. Story points for this issue are the total of all the story points for the subtasks.",20 +"XD-1273","01/28/2014 06:02:59","The use of labelled modules and taps needs more explanation","https://github.com/spring-projects/spring-xd/wiki/Taps mentions this but the explanation needs more elaboration and an example, e.g. mystream -> ""http | flibble: transform --expression=payload.toUpperCase() | file"" ""tap:stream:mystream.flibble > transform --expression=payload.replaceAll('A','.') | log"");",1 +"XD-1282","01/30/2014 06:41:13","Add caching to ModuleOptionsMetadataResolver","Will likely involve having the module identity (type+name) be part of the OptionsMetadata identity/cache key",5 +"XD-1296","02/10/2014 01:10:41","Few integration tests fail if JMX is enabled","If JMX is enabled, some of the integration tests fail. This is similar to what we see in XD-1295. One example of this case is the test classes that extend StreamTestSupport. In StreamTestSupport, the @BeforeClass has this line: moduleDeployer = containerContext.getBean(ModuleDeployer.class); When JMX is enabled, the IntegrationMBeanExporter creates a JdkDynamicProxy for the ModuleDeployer (since it is of type MessageHandler) and thereby the above line to get the bean by the implementing class type (ModuleDeployer) fails. There are a few other places where we refer to the implementing classes in getBean(). Looks like we need to fix those as well. 
",2 +"XD-1300","02/10/2014 19:01:06","Handling boolean type module option properties defaults in option metadata","There are few boolean type module option properties whose default values are specified in the module definitions than their corresponding ModuleOptionsMetaData. Also, when using boolean we need to have module option using primitive type boolean than Boolean type. Currently, these are some of the module options that require this change: ""initializeDatabase"" in modules filejdbc, hdfsjdbc job modules, aggregator processor module, jdbc sink module ""restartable"" in all the job modules ""deleteFiles"" in filejdbc, filepollhdfs job modules ",3 +"XD-1301","02/11/2014 08:54:05","MBeans are not destroyed if stream is created and destroyed with no delay","Problem: The container that the stream was deployed to, will not allow new streams to be deployed. Once the error occurs, the only solution is to terminate the XD Container and restart it. To reproduce create a stream foo and destroy the stream, then create the stream foo again. This best done programmatically, taking the same steps using the ""shell"" may not reproduce the problem. i.e. if you put a Sleep of 1-2 seconds between the destroy and the next create, it works fine ",5 +"XD-1307","02/12/2014 03:53:20","Use HATEOAS Link templates","HATEOAS 0.9 introduced some support for templated links. This should be leveraged to properly handle eg /streams/{id} instead of using string concatenation",5 +"XD-1309","02/12/2014 06:08:55","JSR303 validation of options interferes with dsl completion","When using a JSR303 annotated class for module options, the binding failures should be bypassed, as they interfere with completion proposals. ",5 +"XD-1310","02/12/2014 07:56:20","Misleading error message when trying to restart a job exec","Disregard the missing date that is caused by another problem. Here is the setup: {noformat} xd:>job execution list Id Job Name Start Time Step Execution Count Status -- -------- -------------------------------- -------------------- --------- 13 foo Europe/Paris 0 STARTING 12 foo 2014-02-12 15:39:46 Europe/Paris 1 FAILED 11 foo 2014-02-12 15:39:29 Europe/Paris 1 COMPLETED 10 foo 2014-02-12 15:38:36 Europe/Paris 1 COMPLETED 9 foo 2014-02-12 15:38:21 Europe/Paris 1 COMPLETED 8 foo Europe/Paris 0 STARTING 7 foo 2014-02-12 15:25:41 Europe/Paris 1 COMPLETED 6 foo 2014-02-12 15:25:04 Europe/Paris 1 FAILED 5 foo 2014-02-12 15:14:32 Europe/Paris 1 FAILED 4 foo 2014-02-12 15:14:13 Europe/Paris 1 FAILED 3 foo 2014-02-12 15:13:54 Europe/Paris 1 FAILED 2 foo 2014-02-12 15:13:18 Europe/Paris 1 FAILED 1 foo 2014-02-12 15:12:58 Europe/Paris 1 FAILED 0 foo 2014-02-12 15:11:44 Europe/Paris 1 FAILED xd:>job execution restart --id 12 Command failed org.springframework.xd.rest.client.impl.SpringXDException: Job Execution 12 is already running. {noformat} while the server exception is a bit better: {noformat} Caused by: org.springframework.batch.core.repository.JobExecutionAlreadyRunningException: A job execution for this job is already running: JobInstance: id=11, version=0, Job=[foo] at org.springframework.batch.core.repository.support.SimpleJobRepository.createJobExecution(SimpleJobRepository.java:120) {noformat} I'd argue we should not speak in terms of execution ids if possible, but rather in terms of job names ",1 +"XD-1311","02/12/2014 07:58:43","Job execution list should mention jobs that have been deleted","Create a job, execute it a couple of times, destroy it and then invoke job execution list. 
The job name column should mention that a job is defunct (even though a job with the same name could have been re-created in the interim).",3
+"XD-1312","02/12/2014 08:01:19","Job execution restart fails with NPE","Create a job, launch it but make it fail (e.g. filejdbc with a missing file). job execution list => it's there, as FAILED. Good. job execution restart ==> fails with NPE:
{noformat}
16:59:42,160 ERROR http-nio-9393-exec-7 rest.RestControllerAdvice:191 - Caught exception while handling a request
java.lang.NullPointerException
  at org.springframework.batch.core.job.AbstractJob.execute(AbstractJob.java:351)
  at org.springframework.batch.core.launch.support.SimpleJobLauncher$1.run(SimpleJobLauncher.java:135)
  at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
  at org.springframework.batch.core.launch.support.SimpleJobLauncher.run(SimpleJobLauncher.java:128)
  at sun.reflect.GeneratedMethodAccessor157.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
  at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
  at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
  at org.springframework.batch.core.configuration.annotation.SimpleBatchConfiguration$PassthruAdvice.invoke(SimpleBatchConfiguration.java:117)
  at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
  at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
  at com.sun.proxy.$Proxy39.run(Unknown Source)
  at org.springframework.batch.admin.service.SimpleJobService.restart(SimpleJobService.java:179)
  at org.springframework.xd.dirt.plugins.job.DistributedJobService.restart(DistributedJobService.java:77)
  at org.springframework.xd.dirt.rest.BatchJobExecutionsController.restartJobExecution(BatchJobExecutionsController.java:146)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at org.springfram
{noformat}",5
+"XD-1313","02/12/2014 08:02:49","Commands that start a job should return a representation of the JobExecution","See the discussion at https://github.com/spring-projects/spring-xd/pull/572.",0
+"XD-1314","02/12/2014 09:05:27","Create XD .zip distribution for YARN","Create an XD .zip distribution for YARN: add an additional sub-project to the spring-xd repo for building the xd-YARN.zip; link it into the main build file; produce a new artifact spring-xd-v-xyz-yarn.zip as part of the nightly CI process -- there will then be 2 artifacts, the main xd.zip distribution and xd-yarn.zip. It does not include any Hadoop distribution libraries, but it does include the spring-hadoop jars for Apache22 'unflavored'. ",3
+"XD-1316","02/12/2014 11:51:29","UI: Fix E2E test warning","When running E2E tests, the following warning may be observed:
{code}
Running ""karma:e2e"" (karma) task
INFO [karma]: Karma v0.10.9 server started at http://localhost:7070/_karma_/
INFO [launcher]: Starting browser PhantomJS
TypeError: Cannot read property 'verbose' of undefined
  at enableWebsocket
    (/Users/hillert/dev/git/spring-xd/spring-xd-ui/node_modules/grunt-connect-proxy/lib/utils.js:101:18)
  at Object.utils.proxyRequest [as handle] (/Users/hillert/dev/git/spring-xd/spring-xd-ui/node_modules/grunt-connect-proxy/lib/utils.js:109:5)
  at next (/Users/hillert/dev/git/spring-xd/spring-xd-ui/node_modules/grunt-contrib-connect/node_modules/connect/lib/proto.js:193:15)
  at Object.livereload [as handle] (/Users/hillert/dev/git/spring-xd/spring-xd-ui/node_modules/grunt-contrib-connect/node_modules/connect-livereload/index.js:147:5)
  at next (/Users/hillert/dev/git/spring-xd/spring-xd-ui/node_modules/grunt-contrib-connect/node_modules/connect/lib/proto.js:193:15)
  at Function.app.handle (/Users/hillert/dev/git/spring-xd/spring-xd-ui/node_modules/grunt-contrib-connect/node_modules/connect/lib/proto.js:201:3)
  at Server.app (/Users/hillert/dev/git/spring-xd/spring-xd-ui/node_modules/grunt-contrib-connect/node_modules/connect/lib/connect.js:65:37)
  at Server.EventEmitter.emit (events.js:98:17)
  at HTTPParser.parser.onIncoming (http.js:2108:12)
  at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:121:23)
  at Socket.socket.ondata (http.js:1966:22)
  at TCP.onread (net.js:525:27)
{code}",2
+"XD-1319","02/13/2014 03:27:49","Allow mixins of ModuleOptionsMetadata","A lot of modules have similar options. Moreover, job modules often have options that belong to at least two domains (e.g. jdbc + hdfs). I think that by using FlattenedCompositeModuleOptionsMetadata, we could come up with a way to combine several options POJOs into one. Something like:
public class JdbcHdfsOptionsMetadata {
    @OptionsMixin
    private JdbcOptionsMetadata jdbc;
    @OptionsMixin
    private HdfsOptionsMetadata hdfs;
}
This would expose e.g. ""driverClass"" as well as ""rolloverSize"" as top-level options. Values could actually be injected into the fields, so that e.g. custom validation could occur (standard validation for the mixin class would occur by default).",5
+"XD-1320","02/13/2014 07:53:12","Make Batch Job Restarts Work with Distributed Nodes","Job restart fails with NPE. See the PR for XD-1090: https://github.com/spring-projects/spring-xd/pull/572 ",0
+"XD-1321","02/13/2014 11:45:14","Add XD deployment for YARN","Add YARN-specific code based on Janne's prototyping: add YARN Client and AppMaster implementations and startup config files, including shell scripts to deploy XD to YARN. Test that it works on the Apache 2.2 distribution. We can modify config files, and everything should be possible to override by providing command-line args or env variables: ./xd-yarn-deploy --zipFile /tmp/spring-xd-yarn.zip --config /tmp/spring-xd-yarn.yml ",8
+"XD-1322","02/13/2014 11:47:07","Add way to provide module config options for XD on YARN","There seems to be some intersection between the work for this issue and the rationalization of how module properties are handled. There will be changes to configuration/property management support such that each module (source, sink, etc.) will also be able to be overridden in spring-xd.yml (or wherever -Dspring.config.location points to). The HDFS sink module, for example, will have default values based on its OptionsMetadata and will be of the form ..