Details
- Type: Bug/Feature
- Status: New
- Resolution: Unresolved
- Affects Version/s: 1.1.13
- Fix Version/s: None
- Labels: None
Description
If a system referenced by an input or output is unavailable (say, ElasticSearch or Redis is down), Logstash starts producing enormous amounts of log entries very quickly.
For example, with the Redis input, each failed attempt produces three entries: "Failed to get event from redis" / "Input thread exception" / "Restarting input due to exception".
The problem is that the log file grows to gigabytes within a short time and eventually fills the whole disk (this happened to me twice overnight; by morning the disk was already full).
What would be really useful here is exponential backoff for inputs/outputs, and/or replacing the repeated "unable to connect" entries with state-transition messages such as
"Input/Output X started to fail" / "Input/Output X is working normally".
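The two suggestions above could be combined in one place: a small reconnect helper that doubles its retry delay on each consecutive failure and logs only on state transitions (first failure, and recovery), rather than on every attempt. The sketch below is purely illustrative, not Logstash internals; the class name, parameters, and log messages are hypothetical.

```ruby
# Hypothetical sketch, not actual Logstash code: exponential backoff
# combined with state-transition logging for a flaky input/output.
class BackoffReconnector
  attr_reader :messages

  def initialize(base: 1, max: 60)
    @base = base          # initial retry delay, seconds
    @max = max            # cap on the retry delay
    @failures = 0         # consecutive failure count
    @healthy = true       # current connection state
    @messages = []        # stands in for the real logger here
  end

  # Delay before the next retry: doubles per consecutive failure, capped at @max.
  def next_delay
    [@base * (2**@failures), @max].min
  end

  # Call when a connection attempt fails; logs only on the healthy->failing edge.
  def record_failure
    log("Input/Output started to fail") if @healthy
    @healthy = false
    @failures += 1
  end

  # Call when a connection attempt succeeds; logs only on the failing->healthy edge.
  def record_success
    log("Input/Output is working normally") unless @healthy
    @healthy = true
    @failures = 0
  end

  private

  def log(msg)
    @messages << msg
  end
end
```

With this scheme, a night-long Redis outage would produce two log lines (one at the start, one on recovery) and progressively longer sleeps between reconnect attempts, instead of gigabytes of identical entries.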