Logstash randomly stops processing UDP syslog messages

Description

Hi,
I’m trying to use Logstash to aggregate and parse logs from network equipment, but I’ve run into strange behavior: Logstash randomly freezes and stops processing new log messages (with no apparent relation to uptime; it can happen 1 hour after start, or 20 hours after). I have a fresh CentOS 6.5 installation, SELinux is disabled, I’m using the official Oracle JRE 1.7.0_45, and the limits for the maximum number of open files and the maximum locked-in-memory address space are set to 65536 and unlimited, respectively. I’ve tried running Logstash as a non-privileged user as well as root, with no difference. The same behavior occurs with 1.2.2. I’ve attached the debug logs and the contents of the config file.
I would appreciate any help.

Config file

input {
  udp {
    port => 5140
    tags => [ "syslog" ]
    debug => true
  }
}
filter {
  grok {
    patterns_dir => "/etc/logstash/patterns/"
    match => [ "message", "^<%{POSINT:cisco_syslogpri}>%{NONNEGINT}:%{SPACE}%{NONNEGINT:cisco_seqnum}:%{SPACE}%{ENCISCOTIMESTAMP:cisco_timestamp}:%{SPACE}%%{CISCOTAG:cisco_tag}:%{SPACE}%{GREEDYDATA:cisco_message}" ]
    add_tag => [ "cisco" ]
  }
  ruby {
    code => "event['cisco_priority'] = Integer(event['cisco_syslogpri']) - 184"
  }
}
output {
  elasticsearch {
    embedded => true
  }
}
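The debug logs attached with this report show the failure mode: when grok fails to match, the event is tagged `_grokparsefailure` and `cisco_syslogpri` is never set, so `Integer(nil)` in the ruby filter raises a TypeError that kills the filter worker. A defensive rewrite of the inline script, sketched here as a standalone function over plain hashes (standing in for `LogStash::Event` objects), would skip the conversion when the field is absent:

```ruby
# Hypothetical defensive version of the inline ruby filter code.
# Integer(nil) raises TypeError, so only convert when the field is present.
def compute_priority(event)
  pri = event['cisco_syslogpri']
  event['cisco_priority'] = Integer(pri) - 184 if pri
  event
end

# Simulated events (plain hashes standing in for LogStash::Event):
matched   = { 'cisco_syslogpri' => '188' }
unmatched = { 'tags' => ['_grokparsefailure'] }

compute_priority(matched)    # sets cisco_priority to 188 - 184 = 4
compute_priority(unmatched)  # leaves the event untouched instead of raising
```

In the actual config this would correspond to guarding the `code` string itself, e.g. only running the conversion `if event['cisco_syslogpri']`, or wrapping the ruby filter in a conditional on the `cisco` tag that grok adds on success.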

Logstash run command

/usr/bin/java -Des.path.data=/data/ -jar /usr/share/java/logstash.jar agent -vv -f /etc/logstash -l /var/log/logstash/logstash.log -p /usr/share

Debug logs

{:timestamp=>"2013-12-17T16:53:23.729000+0100", :message=>"filter received", :event=>#<LogStash::Event:0x1a63b91a @cancelled=false, @data={"message"=>"<188>4357: 004603: Dec 17 16:53:22.715 CET: %FW-4-TCP_OoO_SEG: Deleting session as expected TCP segment with seq:859306019 has not arrived even after 25 seconds - session 192.168.130.21:1024 to 10.255.0.1:2000 on zone-pair SECURITY_INSIDE-WAN class CMAP_WAN", "@timestamp"=>"2013-12-17T15:53:23.728Z", "@version"=>"1", "tags"=>["syslog"], "host"=>"192.168.128.1"}>, :level=>:info, :file=>"(eval)", :line=>"18"}
{:timestamp=>"2013-12-17T16:53:23.730000+0100", :message=>"Running grok filter", :event=>#<LogStash::Event:0x1a63b91a @cancelled=false, @data={"message"=>"<188>4357: 004603: Dec 17 16:53:22.715 CET: %FW-4-TCP_OoO_SEG: Deleting session as expected TCP segment with seq:859306019 has not arrived even after 25 seconds - session 192.168.130.21:1024 to 10.255.0.1:2000 on zone-pair SECURITY_INSIDE-WAN class CMAP_WAN", "@timestamp"=>"2013-12-17T15:53:23.728Z", "@version"=>"1", "tags"=>["syslog"], "host"=>"192.168.128.1"}>, :level=>:debug, :file=>"java/logstash.jar!/logstash/filters/grok.rb", :line=>"288"}
{:timestamp=>"2013-12-17T16:53:23.732000+0100", :message=>"Event now: ", :event=>#<LogStash::Event:0x1a63b91a @cancelled=false, @data={"message"=>"<188>4357: 004603: Dec 17 16:53:22.715 CET: %FW-4-TCP_OoO_SEG: Deleting session as expected TCP segment with seq:859306019 has not arrived even after 25 seconds - session 192.168.130.21:1024 to 10.255.0.1:2000 on zone-pair SECURITY_INSIDE-WAN class CMAP_WAN", "@timestamp"=>"2013-12-17T15:53:23.728Z", "@version"=>"1", "tags"=>["syslog", "_grokparsefailure"], "host"=>"192.168.128.1"}>, :level=>:debug, :file=>"java/logstash.jar!/logstash/filters/grok.rb", :line=>"308"}
{:timestamp=>"2013-12-17T16:53:23.733000+0100", :message=>"Exception in filterworker", "exception"=>#<TypeError: can't convert nil into Integer>, "backtrace"=>["org/jruby/RubyKernel.java:499:in `Integer'", "(ruby filter code):1:in `register'", "org/jruby/RubyProc.java:271:in `call'", "file:/usr/share/java/logstash.jar!/logstash/filters/ruby.rb:38:in `filter'", "(eval):40:in `initialize'", "org/jruby/RubyProc.java:271:in `call'", "file:/usr/share/java/logstash.jar!/logstash/pipeline.rb:255:in `filter'", "file:/usr/share/java/logstash.jar!/logstash/pipeline.rb:196:in `filterworker'", "file:/usr/share/java/logstash.jar!/logstash/pipeline.rb:136:in `start_filters'"], :level=>:error, :file=>"java/logstash.jar!/logstash/pipeline.rb", :line=>"205"}
{:timestamp=>"2013-12-17T16:53:23.734000+0100", :message=>"Plugin is finished", :plugin=><LogStash::Filters::Grok patterns_dir=>["/etc/logstash/patterns/"], match=>{"message"=>"^<%{POSINT:cisco_syslogpri}>%{NONNEGINT}:%{SPACE}%{NONNEGINT:cisco_seqnum}:%{SPACE}%{ENCISCOTIMESTAMP:cisco_timestamp}:%{SPACE}%%{CISCOTAG:cisco_tag}:%{SPACE}%{GREEDYDATA:cisco_message}"}, add_tag=>["cisco"], tag_on_failure=>["_grokparsefailure"]>, :level=>:info, :file=>"java/logstash.jar!/logstash/plugin.rb", :line=>"59"}
log4j, [2013-12-17T17:06:37.945] DEBUG: org.elasticsearch.indices.memory: [Mys-Tech] marking shard [kibana-int][0] as inactive (inactive_time[30m]) indexing wise, setting size to [500kb]
log4j, [2013-12-17T17:06:37.945] DEBUG: org.elasticsearch.indices.memory: [Mys-Tech] marking shard [kibana-int][1] as inactive (inactive_time[30m]) indexing wise, setting size to [500kb]
log4j, [2013-12-17T17:06:37.945] DEBUG: org.elasticsearch.indices.memory: [Mys-Tech] marking shard [kibana-int][2] as inactive (inactive_time[30m]) indexing wise, setting size to [500kb]
log4j, [2013-12-17T17:06:37.945] DEBUG: org.elasticsearch.indices.memory: [Mys-Tech] marking shard [kibana-int][3] as inactive (inactive_time[30m]) indexing wise, setting size to [500kb]
log4j, [2013-12-17T17:06:37.945] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [kibana-int][0] updating index_buffer_size from [17.1mb] to (inactive) [500kb]
log4j, [2013-12-17T17:06:37.948] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [kibana-int][1] updating index_buffer_size from [17.1mb] to (inactive) [500kb]
log4j, [2013-12-17T17:06:37.950] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [kibana-int][2] updating index_buffer_size from [17.1mb] to (inactive) [500kb]
log4j, [2013-12-17T17:06:37.951] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [kibana-int][3] updating index_buffer_size from [17.1mb] to (inactive) [500kb]
log4j, [2013-12-17T17:06:37.953] DEBUG: org.elasticsearch.indices.memory: [Mys-Tech] recalculating shard indexing buffer (reason=active/inactive[true] created/deleted[false]), total is [171.1mb] with [6] active shards, each shard set to indexing=[28.5mb], translog=[64kb]
log4j, [2013-12-17T17:06:37.953] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [kibana-int][4] updating index_buffer_size from [17.1mb] to [28.5mb]
log4j, [2013-12-17T17:06:37.953] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [logstash-2013.12.17][0] updating index_buffer_size from [17.1mb] to [28.5mb]
log4j, [2013-12-17T17:06:37.953] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [logstash-2013.12.17][1] updating index_buffer_size from [17.1mb] to [28.5mb]
log4j, [2013-12-17T17:06:37.954] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [logstash-2013.12.17][2] updating index_buffer_size from [17.1mb] to [28.5mb]
log4j, [2013-12-17T17:06:37.954] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [logstash-2013.12.17][3] updating index_buffer_size from [17.1mb] to [28.5mb]
log4j, [2013-12-17T17:06:37.954] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [logstash-2013.12.17][4] updating index_buffer_size from [17.1mb] to [28.5mb]
log4j, [2013-12-17T17:36:07.963] DEBUG: org.elasticsearch.indices.memory: [Mys-Tech] marking shard [kibana-int][4] as inactive (inactive_time[30m]) indexing wise, setting size to [500kb]
log4j, [2013-12-17T17:36:07.963] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [kibana-int][4] updating index_buffer_size from [28.5mb] to (inactive) [500kb]
log4j, [2013-12-17T17:36:07.972] DEBUG: org.elasticsearch.indices.memory: [Mys-Tech] recalculating shard indexing buffer (reason=active/inactive[true] created/deleted[false]), total is [171.1mb] with [5] active shards, each shard set to indexing=[34.2mb], translog=[64kb]
log4j, [2013-12-17T17:36:07.972] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [logstash-2013.12.17][0] updating index_buffer_size from [28.5mb] to [34.2mb]
log4j, [2013-12-17T17:36:07.972] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [logstash-2013.12.17][1] updating index_buffer_size from [28.5mb] to [34.2mb]
log4j, [2013-12-17T17:36:07.972] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [logstash-2013.12.17][2] updating index_buffer_size from [28.5mb] to [34.2mb]
log4j, [2013-12-17T17:36:07.972] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [logstash-2013.12.17][3] updating index_buffer_size from [28.5mb] to [34.2mb]
log4j, [2013-12-17T17:36:07.972] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [logstash-2013.12.17][4] updating index_buffer_size from [28.5mb] to [34.2mb]
log4j, [2013-12-17T17:39:07.973] DEBUG: org.elasticsearch.indices.memory: [Mys-Tech] marking shard [logstash-2013.12.17][0] as inactive (inactive_time[30m]) indexing wise, setting size to [500kb]
log4j, [2013-12-17T17:39:07.973] DEBUG: org.elasticsearch.indices.memory: [Mys-Tech] marking shard [logstash-2013.12.17][1] as inactive (inactive_time[30m]) indexing wise, setting size to [500kb]
log4j, [2013-12-17T17:39:07.973] DEBUG: org.elasticsearch.indices.memory: [Mys-Tech] marking shard [logstash-2013.12.17][2] as inactive (inactive_time[30m]) indexing wise, setting size to [500kb]
log4j, [2013-12-17T17:39:07.974] DEBUG: org.elasticsearch.indices.memory: [Mys-Tech] marking shard [logstash-2013.12.17][3] as inactive (inactive_time[30m]) indexing wise, setting size to [500kb]
log4j, [2013-12-17T17:39:07.974] DEBUG: org.elasticsearch.indices.memory: [Mys-Tech] marking shard [logstash-2013.12.17][4] as inactive (inactive_time[30m]) indexing wise, setting size to [500kb]
log4j, [2013-12-17T17:39:07.974] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [logstash-2013.12.17][0] updating index_buffer_size from [34.2mb] to (inactive) [500kb]
log4j, [2013-12-17T17:39:07.991] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [logstash-2013.12.17][1] updating index_buffer_size from [34.2mb] to (inactive) [500kb]
log4j, [2013-12-17T17:39:08.001] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [logstash-2013.12.17][2] updating index_buffer_size from [34.2mb] to (inactive) [500kb]
log4j, [2013-12-17T17:39:08.008] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [logstash-2013.12.17][3] updating index_buffer_size from [34.2mb] to (inactive) [500kb]
log4j, [2013-12-17T17:39:08.016] DEBUG: org.elasticsearch.index.engine.robin: [Mys-Tech] [logstash-2013.12.17][4] updating index_buffer_size from [34.2mb] to (inactive) [500kb]
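As an aside on the `- 184` in the ruby filter: syslog PRI encodes facility and severity as PRI = facility * 8 + severity, and 184 = 23 * 8, so subtracting 184 extracts the severity only when the facility is local7 (23). A more general decode, as a sketch, would use `divmod`:

```ruby
# Syslog PRI (RFC 3164/5424) packs facility and severity: PRI = facility * 8 + severity.
# For the <188> messages in the logs above:
pri = 188
facility, severity = pri.divmod(8)
# facility == 23 (local7), severity == 4 (warning),
# which matches the %FW-4-TCP_OoO_SEG tag in the captured message.
```

Storing both fields instead of hard-coding the facility offset would keep the filter correct if the equipment ever logs under a different facility.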

Activity

Philippe Weber March 25, 2015 at 5:53 AM

Current status and planning for improvements to exception handling in the pipeline are in https://github.com/elastic/logstash/issues/2477
This issue could be closed as a duplicate.

jerome loyet December 21, 2013 at 11:03 PM

Same problem here: logstash-shipper stops processing events but does not crash.

And I'm not using the ruby plugin.

Casper December 21, 2013 at 5:34 PM

Yep, it's the ruby plugin – disabling it fixes the problem for me too.

Maxim Visnevskis December 20, 2013 at 7:06 PM

Yeah, something is really wrong with the ruby plugin; after I disabled it, Logstash has been working without any issues (2 days and counting).

Casper December 19, 2013 at 4:43 PM
Edited

Seeing the same problem with 1.2.2, and it's not just the udp input; the tcp listener is affected too. Logstash is still listening on the port, just not processing new events.

I'm also using the ruby plugin (I upgraded because I couldn't get it working with Logstash 1.1.12).

I get this error:

{:timestamp=>"2013-12-20T08:12:32.251000+0000", :message=>"A plugin had an
unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::Stdin
type=>\"stdin-type\">\n Error: Unknown error - Bad file descriptor",
:level=>:error}
{:timestamp=>"2013-12-20T08:12:33.261000+0000", :message=>"A plugin had an
unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::Stdin
type=>\"stdin-type\">\n Error: closed stream", :level=>:error}
{:timestamp=>"2013-12-20T08:12:34.265000+0000", :message=>"A plugin had an
unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::Stdin
type=>\"stdin-type\">\n Error: closed stream", :level=>:error}
{:timestamp=>"2013-12-20T08:12:35.268000+0000", :message=>"A plugin had an
unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::Stdin
type=>\"stdin-type\">\n Error: closed stream", :level=>:error}
{:timestamp=>"2013-12-20T08:12:36.272000+0000", :message=>"A plugin had an
unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::Stdin
type=>\"stdin-type\">\n Error: closed stream", :level=>:error}

Details

Created December 18, 2013 at 9:03 AM
Updated March 25, 2015 at 5:53 AM