filterworker exception in multiline.rb stalls pipeline

Description

Exception looks like:

Exception in filterworker {"exception"=>#<RuntimeError: can't add a new key into hash during iteration>, "backtrace"=>[
  "org/jruby/RubyHash.java:985:in `[]='",
  "file:/C:/logstash/logstash-1.4.0.dev-patched2-flatjar.jar!/logstash/filters/multiline.rb:179:in `filter'",
  "(eval):73:in `initialize'",
  "org/jruby/RubyProc.java:271:in `call'",
  "file:/C:/logstash/logstash-1.4.0.dev-patched2-flatjar.jar!/logstash/pipeline.rb:255:in `filter'",
  "file:/C:/logstash/logstash-1.4.0.dev-patched2-flatjar.jar!/logstash/pipeline.rb:196:in `filterworker'",
  "file:/C:/logstash/logstash-1.4.0.dev-patched2-flatjar.jar!/logstash/pipeline.rb:136:in `start_filters'"], :level=>:error}

The pipeline seems to have stalled: checking my interfaces, I still see traffic on the receive side but not the send side. This happened after running for more than 24 hours, and I've seen this exception a couple of times before. I'm not sure what "during iteration" means in this context; it sounds like an append to @pending happened while the hash was being iterated over. Could this be related to re-enabling the flush function, i.e. a collision between a flush and an append?
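
For illustration only (this is not the actual multiline.rb code), a minimal Ruby sketch that reproduces the same RuntimeError: adding a new key to a Hash while it is being iterated, which is what a flush walking the pending hash concurrently with a filter append would amount to. The hash and key names below are stand-ins.

  # Minimal sketch (not Logstash code): mutating a Hash while iterating it
  # raises "can't add a new key into hash during iteration" in (J)Ruby.
  pending = { "host-a" => ["first line of event"] }

  pending.each do |stream, lines|
    # Simulates a filter worker adding a new stream key while a flush
    # pass is still walking the same hash.
    pending["host-b"] = ["another line"]   # => RuntimeError
  end

If that is what is happening, guarding the pending hash with a mutex or iterating over a snapshot of its keys in the flush path would presumably avoid the collision, but I haven't verified that against the filter source.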

config: http://pastebin.com/MGET4F80

Assignee

Logstash Developers

Reporter

John Arnold

Labels

Affects versions
