Using the same config as in https://groups.google.com/forum/#!topic/logstash-users/qpvNK6Bz1eM
except I tried the elasticsearch output instead
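The exact config is in the linked thread; a minimal sketch of the kind of pipeline under test, with placeholder port, certificate paths, and host (all assumptions, not the original values):

```
input {
  lumberjack {
    port => 5043                                  # placeholder port
    ssl_certificate => "/etc/pki/logstash.crt"    # placeholder paths
    ssl_key => "/etc/pki/logstash.key"
  }
}
output {
  elasticsearch {
    host => "localhost"                           # placeholder host
  }
}
```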
I generated 1791 log lines that were shipped via lumberjack to my logstash instance. While it was still processing, I shut down logstash:
Stdout from logstash:
In total, 2069 events were inserted into ES, of which 1769 were unique. So I lost 22 events:
which is close to the size of the LS internal queue.
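The numbers above imply both duplication and loss at the same time; spelled out:

```python
# Reconciling the counts from the lumberjack test above.
generated = 1791   # log lines shipped via lumberjack
inserted = 2069    # documents that ended up in Elasticsearch
unique = 1769      # distinct events among those documents

duplicates = inserted - unique   # events indexed more than once
lost = generated - unique        # events that never reached ES

print(duplicates)  # → 300
print(lost)        # → 22
```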
Is this expected? When I sent SIGTERM instead, I lost 68 events.
I also tried redis as an input, which likewise loses messages on shutdown:
3859 events were stored in redis:
Restarting LS while it was working through the list resulted in 3759 events stored in ES.
A lot of improvements have since been made to the elasticsearch output with regard to flushing on shutdown.
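For reference, the elasticsearch output exposes flush tuning via the `flush_size` and `idle_flush_time` options; a sketch with placeholder values (the host and the numbers here are assumptions, not recommendations):

```
output {
  elasticsearch {
    host => "localhost"     # placeholder host
    flush_size => 500       # flush the buffer once it holds this many events
    idle_flush_time => 1    # ...or after this many seconds of inactivity
  }
}
```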
Logstash has also added the following entry to its roadmap:
https://github.com/elasticsearch/logstash/issues/2605
Closing this ticket as a duplicate of that roadmap entry.