LOGSTASH-1192

Excessive amount of logging in case of unavailable external service

    Details

    • Type: Bug/Feature
    • Status: New
    • Resolution: Unresolved
    • Affects Version/s: 1.1.13
    • Fix Version/s: None
    • Labels:
      None

      Description

      If a system referenced by an input or output is unavailable (say, Elasticsearch or Redis is down), Logstash quickly starts producing enormous amounts of log entries.

      E.g. in the case of the Redis input, each failure produces three entries: "Failed to get event from redis", "Input thread exception", and "Restarting input due to exception".

      The problem is that the log file grows to gigabytes very quickly and eventually fills the whole disk (this happened to me twice overnight; by morning the disk was already full).

      What would be really useful here is exponential backoff for inputs/outputs, and/or replacing the repeated "unable to connect" entries with state-transition messages:

      "Input/Output X started to fail" / "Input/Output X is working normally".
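      A minimal Ruby sketch of what the suggested behavior could look like: capped exponential backoff between reconnect attempts, plus a logger that only emits on state transitions instead of on every failed attempt. `backoff_delay` and `RetryLogger` are hypothetical names for illustration, not actual Logstash APIs.

      ```ruby
      # Delay before the Nth retry: doubles each attempt, capped at `max` seconds.
      def backoff_delay(attempt, base: 1, max: 64)
        [base * (2**attempt), max].min
      end

      # Logs once when a component starts failing and once when it recovers,
      # instead of logging every individual failed attempt.
      class RetryLogger
        attr_reader :messages

        def initialize(name)
          @name = name
          @failing = false
          @messages = []
        end

        def on_failure
          return if @failing  # already reported; stay quiet
          @failing = true
          @messages << "Input/Output #{@name} started to fail"
        end

        def on_success
          return unless @failing  # nothing to recover from
          @failing = false
          @messages << "Input/Output #{@name} is working normally"
        end
      end

      log = RetryLogger.new("redis")
      5.times { log.on_failure }  # emits a single "started to fail" message
      log.on_success              # emits a single "working normally" message
      ```

      With this approach, a night of downtime produces two log lines and a handful of widely spaced reconnect attempts, rather than gigabytes of repeated errors.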


              People

              • Assignee:
                logstash-dev Logstash Developers (Inactive)
              • Reporter:
                dottedmag Mikhail Gusarov
              • Votes:
                0
              • Watchers:
                1
