[ES_master] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]

Description

Hi guys,
I am seeing lots of DEBUG messages in the ES log, and none of the new documents are being stored in Elasticsearch.

[2014-06-15 10:24:13,772][INFO ][node ] [LogStash master] initializing ...
[2014-06-15 10:24:13,777][INFO ][plugins ] [LogStash master] loaded [], sites [head, paramedic]
[2014-06-15 10:24:15,991][INFO ][node ] [LogStash master] initialized
[2014-06-15 10:24:15,991][INFO ][node ] [LogStash master] starting ...
[2014-06-15 10:24:16,083][INFO ][transport ] [LogStash master] bound_address {inet/0:0:0:0:0:0:0:0:9300}, publish_address {inet/172.22.104.140:9300}
[2014-06-15 10:24:19,114][INFO ][cluster.service ] [LogStash master] new_master [LogStash master][Gd8Pe6TwREGBOtOQr0wogQ][logstash.improve][inet/172.22.104.140:9300], reason: zen-disco-join (elected_as_master)
[2014-06-15 10:24:19,138][INFO ][discovery ] [LogStash master] ID_logs/Gd8Pe6TwREGBOtOQr0wogQ
[2014-06-15 10:24:19,188][INFO ][http ] [LogStash master] bound_address {inet/0:0:0:0:0:0:0:0:9200}, publish_address {inet/172.22.104.140:9200}
[2014-06-15 10:24:19,937][INFO ][gateway ] [LogStash master] recovered [38] indices into cluster_state
[2014-06-15 10:24:19,941][INFO ][node ] [LogStash master] started
[2014-06-15 10:25:19,623][DEBUG][action.bulk ] [LogStash master] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2014-06-15 10:25:19,623][DEBUG][action.bulk ] [LogStash master] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2014-06-15 10:25:19,623][DEBUG][action.bulk ] [LogStash master] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2014-06-15 10:25:19,624][DEBUG][action.bulk ] [LogStash master] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
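As far as I understand, these action.bulk observer timeouts usually mean that bulk requests were queued waiting for a cluster-state block to clear (for example the cluster still being red/blocked while the 38 recovered indices allocate their shards) and gave up after the default 1m. A quick way to check this is the cluster health API; a minimal sketch in Python, assuming the node answers on the default localhost:9200:

# Minimal check of cluster health and pending cluster tasks
# (assumption: ES reachable on localhost:9200).
import json
import urllib.request

BASE = "http://localhost:9200"

def get(path):
    with urllib.request.urlopen(BASE + path, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

health = get("/_cluster/health")
print("status:", health["status"])                      # green / yellow / red
print("unassigned shards:", health["unassigned_shards"])
print("pending tasks:", get("/_cluster/pending_tasks")["tasks"])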

I have ES + Logstash running on one VM. These are the service parameters:
493 1676 19.5 11.8 53384924 1201244 ? Sl 12:47 1:47 /usr/bin/java -Xms256m -Xmx1g -Xss256k -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Delasticsearch -Des.pidfile=/var/run/elasticsearch/elasticsearch.pid -Des.path.home=/usr/share/elasticsearch -cp :/usr/share/elasticsearch/lib/elasticsearch-1.2.1.jar:/usr/share/elasticsearch/lib/:/usr/share/elasticsearch/lib/sigar/ -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch org.elasticsearch.bootstrap.Elasticsearch
root 2561 23.6 2.5 3131640 256564 pts/0 Sl 12:55 0:22 /usr/bin/java -Xmx1024m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -jar /opt/logstash/logstash-1.4.1/vendor/jar/jruby-complete-1.7.11.jar -I/opt/logstash/logstash-1.4.1/lib /opt/logstash/logstash-1.4.1/lib/logstash/runner.rb agent --config /etc/logstash/conf.d --log /var/log/logstash/logstash.log -w 1

Please advise what might be causing the trouble with Elasticsearch.
Regards
Sergey

Activity

Sergey Zemlyanoy
June 15, 2014, 10:57 AM

Along with the ES messages above I also get some in the Logstash log, though I'm not sure whether they are related to this issue:

{:timestamp=>"2014-06-15T12:46:40.259000+0200", :message=>"Failed to flush outgoing items", :outgoing_count=>100, :exception=>#<Errno::ECONNREFUSED: Connection refused - Connection refused>, :backtrace=>["org/jruby/ext/socket/RubySocket.java:201:in `connect_nonblock'", "/opt/logstash/logstash-1.4.1/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/connection.rb:156:in `connect'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/logstash-1.4.1/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/connection.rb:139:in `connect'", "/opt/logstash/logstash-1.4.1/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/agent.rb:406:in `connect'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/logstash-1.4.1/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/pool.rb:48:in `fetch'", "/opt/logstash/logstash-1.4.1/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/agent.rb:403:in `connect'", "/opt/logstash/logstash-1.4.1/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/agent.rb:319:in `execute'", "/opt/logstash/logstash-1.4.1/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/agent.rb:217:in `post!'", "/opt/logstash/logstash-1.4.1/lib/logstash/outputs/elasticsearch_http.rb:218:in `post'", "/opt/logstash/logstash-1.4.1/lib/logstash/outputs/elasticsearch_http.rb:213:in `flush'", "/opt/logstash/logstash-1.4.1/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1339:in `each'", "/opt/logstash/logstash-1.4.1/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/logstash-1.4.1/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:193:in `buffer_flush'", "/opt/logstash/logstash-1.4.1/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:159:in `buffer_receive'", "/opt/logstash/logstash-1.4.1/lib/logstash/outputs/elasticsearch_http.rb:191:in `receive'", "/opt/logstash/logstash-1.4.1/lib/logstash/outputs/base.rb:86:in `handle'", "(eval):106:in `initialize'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/logstash-1.4.1/lib/logstash/pipeline.rb:266:in `output'", "/opt/logstash/logstash-1.4.1/lib/logstash/pipeline.rb:225:in `outputworker'", "/opt/logstash/logstash-1.4.1/lib/logstash/pipeline.rb:152:in `start_outputs'"], :level=>:warn}
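The Errno::ECONNREFUSED comes from the elasticsearch_http output, i.e. Logstash could not open a TCP connection to the ES HTTP port (9200 by default) when flushing its buffered events. The connectivity part can be checked outside Logstash with a plain socket probe; a small sketch in Python, assuming the default host and port (adjust if the output is configured differently):

# Probe the HTTP endpoint the elasticsearch_http output talks to
# (assumption: default localhost:9200).
import socket

HOST, PORT = "localhost", 9200

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print("TCP connect to %s:%d OK - ES HTTP port is listening" % (HOST, PORT))
except OSError as err:
    # A "Connection refused" here matches the Logstash warning above.
    print("TCP connect to %s:%d failed: %s" % (HOST, PORT, err))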

Suyog Rao
February 7, 2015, 12:25 AM

Hi, this looks like an issue in the Elasticsearch cluster. Can you please debug it at that level?
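For example, looking at shard allocation on the ES side would be a good start; a small sketch in Python that lists shards which are not STARTED via the cat API (assuming the node is reachable on localhost:9200):

# List shards that are not STARTED (assumption: ES 1.x cat API on localhost:9200).
import urllib.request

URL = "http://localhost:9200/_cat/shards?h=index,shard,prirep,state"

with urllib.request.urlopen(URL, timeout=10) as resp:
    for line in resp.read().decode("utf-8").splitlines():
        if "STARTED" not in line:
            print(line)  # e.g. UNASSIGNED or INITIALIZING shards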

Assignee

Logstash Developers

Reporter

Sergey Zemlyanoy

Labels

None