The Logstash filter worker stops processing messages even when there are messages available.
Logstash is getting input from a RabbitMQ queue using SSL, filtering the content, and outputting back to another RabbitMQ queue, again with SSL.
There are no errors visible in the output, even with the -vv command-line switch.
Once the pipeline has hung, it's not possible to shut it down with Ctrl-C; the Logstash process must be killed to stop it.
What's the next best step to identify the cause of the problem?
I ran with only a stdout output configured for about an hour and a half, but it then hung, again on a postfix message (all the hangs have been on postfix messages; I'm not sure if this is significant).
I'm checking queue sizes with the default 'rabbit.rb' script that ships as a Graphite example. It just runs 'rabbitmqctl list_queues' periodically and sends the results to Graphite.
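For reference, that monitoring loop amounts to parsing `rabbitmqctl list_queues` output and pushing each queue depth to Graphite's plaintext port. A minimal Python sketch of the same idea (the Graphite host/port, metric prefix, and 60-second interval are assumptions, not taken from the original script):

```python
import socket
import subprocess
import time

GRAPHITE_HOST = "127.0.0.1"  # assumption: local carbon-cache
GRAPHITE_PORT = 2003         # Graphite's plaintext protocol port


def parse_list_queues(output):
    """Parse 'rabbitmqctl list_queues' output into {queue_name: depth}."""
    queues = {}
    for line in output.splitlines():
        parts = line.split()
        # Skip header/footer lines like 'Listing queues ...' and '...done.'
        if len(parts) == 2 and parts[1].isdigit():
            queues[parts[0]] = int(parts[1])
    return queues


def send_to_graphite(queues):
    """Send each queue depth as one Graphite plaintext metric line."""
    now = int(time.time())
    lines = ["rabbitmq.queues.%s %d %d" % (name, depth, now)
             for name, depth in queues.items()]
    with socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT)) as sock:
        sock.sendall(("\n".join(lines) + "\n").encode())


def monitor(interval=60):
    """Poll rabbitmqctl forever, shipping queue depths to Graphite."""
    while True:
        out = subprocess.run(["rabbitmqctl", "list_queues"],
                             capture_output=True, text=True).stdout
        send_to_graphite(parse_list_queues(out))
        time.sleep(interval)
```

A flat-lining input-queue consumer rate alongside a growing queue depth is exactly the signature this graphing shows when the filter worker stalls.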
This problem has only occurred since upgrading to 1.2+.
I think I've narrowed this down to a problem with grok + my config. If I remove the postfix section of the config and let the messages pass without processing, I don't get any hanging.
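For context, the postfix section being bypassed is a grok stanza of this general shape (illustrative only, not my actual config; `SYSLOGBASE` and `GREEDYDATA` are standard core grok patterns):

```
filter {
  # Illustrative sketch only -- the real config uses custom postfix patterns.
  if [type] == "postfix" {
    grok {
      match => [ "message", "%{SYSLOGBASE} %{GREEDYDATA:postfix_message}" ]
    }
  }
}
```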
Any more ideas?
OK, I've now run this for 24 hours without the postfix part of the config with no issues at all. Is there anything in there which looks like it could cause an issue, or is the problem with grok or one of the other filters?
After some discussion in #logstash, we identified that the grok filter is what gets stuck. It looks like the filter watchdog was accidentally disabled in 1.2.0, which is why Logstash hangs instead of crashing.
For clarity, the "hang" is due to catastrophic backtracking in the Ruby regexp engine: certain pattern/input combinations give exponential execution time. You can fix this by modifying your pattern to avoid this behavior, but the fix can sometimes be tricky.
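The classic shape of the problem is a nested quantifier such as `(a+)+` anchored against input that almost matches. Python's `re` engine backtracks the same way as Ruby's here, so the blow-up is easy to demonstrate (a sketch of the failure mode, not Logstash code):

```python
import re
import time


def time_match(pattern, text):
    """Return seconds taken for one match attempt."""
    start = time.perf_counter()
    re.match(pattern, text)
    return time.perf_counter() - start


# Nested quantifier: (a+)+ can split the run of 'a's into exponentially
# many groupings, and the trailing 'b' forces every grouping to be tried
# before the match can fail.
bad = r"(a+)+$"
# Rewritten without the nested quantifier: only one way to match.
good = r"a+$"

for n in (16, 18, 20):
    text = "a" * n + "b"
    print("n=%d  bad=%.4fs  good=%.6fs"
          % (n, time_match(bad, text), time_match(good, text)))
# Each extra 'a' roughly doubles the time for the bad pattern, while the
# rewritten pattern stays effectively instant on the same input.
```

In a grok context the same thing happens when a custom pattern with nested or ambiguous quantifiers is fed a log line it almost, but not quite, matches, which is consistent with the hangs always landing on postfix messages.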
Thanks for the pointer. I've found my misbehaving pattern and modified it, and all seems stable.