docker - Kubernetes / Spring Cloud Data Flow stream: spring.cloud.stream.bindings.output.destination is ignored by producer


I'm trying to run a "hello, world" Spring Cloud Data Flow stream based on the simple example explained at http://cloud.spring.io/spring-cloud-dataflow/. I'm able to create a simple source and sink and run it on a local SCDF server using Kafka, and up to here everything is correct: messages are produced and consumed in the topic specified by SCDF.

Now I'm trying to deploy it in a private cloud based on the instructions listed at http://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/current-snapshot/reference/htmlsingle/#_getting_started. With this deployment I'm able to deploy the simple "time | log" out-of-the-box stream with no problems, but my example fails: the producer does not write to the topic specified when the pod is created (for instance, spring.cloud.stream.bindings.output.destination=ntest33.nites-source9) but to the topic "output". I have a similar problem with the sink component, which creates and expects messages in the topic "input".

I created the stream definition using the dashboard:

nsource1 | log 

and the container args for the source are:

--spring.cloud.stream.bindings.output.producer.requiredGroups=ntest34 --spring.cloud.stream.bindings.output.destination=ntest34.nsource1 
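Just to be explicit about what I expect these args to do: they should be equivalent to putting the following in the source's application.properties (a sketch using the standard Spring Cloud Stream binding properties; "output" is the binding name from the Source interface):

spring.cloud.stream.bindings.output.destination=ntest34.nsource1
spring.cloud.stream.bindings.output.producer.requiredGroups=ntest34

so the producer should publish to the Kafka topic ntest34.nsource1 instead of the default topic named after the binding ("output").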

Code snippet of the source component:

package xxxx;

import java.text.SimpleDateFormat;
import java.util.Date;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.core.MessageSource;
import org.springframework.messaging.support.GenericMessage;

@SpringBootApplication
@EnableBinding(Source.class)
public class HelloNitesApplication {

    public static void main(String[] args) {
        SpringApplication.run(HelloNitesApplication.class, args);
    }

    @Bean
    @InboundChannelAdapter(value = Source.OUTPUT)
    public MessageSource<String> timerMessageSource() {
        return () -> new GenericMessage<>("hello " + new SimpleDateFormat().format(new Date()));
    }
}
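For completeness, the sink component that shows the same problem is basically the mirror image. A sketch of it (package and class names here are placeholders, mine differ):

package xxxx;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

@SpringBootApplication
@EnableBinding(Sink.class)
public class HelloNitesSinkApplication {

    public static void main(String[] args) {
        SpringApplication.run(HelloNitesSinkApplication.class, args);
    }

    // Reads from whatever topic is bound to the "input" binding; this is the
    // binding I try to redirect with spring.cloud.stream.bindings.input.destination
    @StreamListener(Sink.INPUT)
    public void handle(String message) {
        System.out.println("Received: " + message);
    }
}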

and in the logs I can see clearly:

2017-04-07T09:44:34.596842965Z 2017-04-07 09:44:34,593 INFO main o.s.i.c.DirectChannel:81 - Channel 'application.output' has 1 subscriber(s).

The question is: how do I override the topic where messages must be produced/consumed, or which attributes and values do I have to use to make this work on Kubernetes?
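For reference, the binding properties I expect to control this (standard Spring Cloud Stream properties; the topic names below are just the ones from my test) are:

--spring.cloud.stream.bindings.output.destination=ntest34.nsource1   (source/producer side)
--spring.cloud.stream.bindings.input.destination=ntest34.nsource1    (sink/consumer side)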

UPDATE: I have a similar problem using RabbitMQ:

2017-04-07T12:56:40.435405177Z 2017-04-07 12:56:40.435 INFO 7 --- [ main] o.s.integration.channel.DirectChannel : Channel 'application.output' has 1 subscriber(s).

The problem was the Docker image. I still don't know the details, but using the Dockerfile indicated at https://spring.io/guides/gs/spring-boot-docker/ instantiated 2 processes in the Docker container: one with the parameters and another without them, and the one without the parameters was the one that stayed up and was therefore the one being used.

The solution was to replace

entrypoint [ "sh", "-c", "java $java_opts -djava.security.egd=file:/dev/./urandom -jar /app.jar" ] 

with

entrypoint [ "java", "-jar", "/app.jar" ] 

and it started working. There must be a reason why the example indicated the first ENTRYPOINT and why 2 processes were created, but that reason is still beyond my understanding.
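My working theory (just an assumption on my part, not something I verified in the Docker documentation): with the sh -c form, sh is started as the container's main process and java as its child, which would explain the 2 processes, and the container args that Kubernetes appends are handed to sh as positional parameters rather than appended to the java command line, so the --spring.cloud.stream.* properties never reach the application. The exec form above forwards the args directly to java. If JAVA_OPTS is still needed, a sketch of a Dockerfile that keeps the shell but still forwards the container args would be:

# base image and jar name below are placeholders for my actual ones
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ADD app.jar /app.jar
ENV JAVA_OPTS=""
# exec makes java replace sh as the container's main process, and "$@" forwards
# the container args; the trailing "--" only serves as $0 for the -c script
ENTRYPOINT [ "sh", "-c", "exec java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar \"$@\"", "--" ]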

