Details

    • Type: Bug/Feature
    • Status: Resolved
    • Resolution: Duplicate
    • Affects versions: 1.4.0
    • Fix versions: None
    • Labels:
      None

      Description

      See config file below. Running on 1.4.1

      Reference thread: https://groups.google.com/d/msg/logstash-users/bLX9IE0ak-o/h_kiHuFCgNUJ

      a) I have 2 local redis instances listening on different ports, redisA and redisB

      b) InputA "file" reads from a dumb log file (i.e. each line just has a new number on it)

      c) Events from InputA end up in output redisA (in a list)

      d) InputB reads events from the list from RedisA

      e) Events from inputB go to another list on RedisB

      This works fine under normal conditions. The issue is with inputB pulling from redisA when redisB (the final destination) is down.

      When I kill RedisB (the final output), logstash keeps consuming from the file (as expected) for a little while and sends the events to redisA. However, while redisB is down, the list length in redisA (being read by inputB) eventually ends up at zero, even as hundreds of lines are added to the file. According to the docs, the internal sized queues hold 20 events, so at some point I would expect the list length in redisA to start reporting a size > 0 as logstash's queues fill up and block (i.e. stop consuming from redisA). However, this is not the case.

      When redisB is down and I add thousands of additional lines to the log file and save, the .sincedb does NOT increment (hence why redisA reports zero size), meaning that inputA is blocked... which is odd to me, because inputA leads to the redisA output, which is up.

      Again, inputB reads from redisA, which is a decoupling point. I would understand inputB being blocked reading from redisA (because its ultimate target, redisB, is down); however, I thought inputs were on entirely separate threads. So why would the output fed by inputB affect the thread of inputA (reading from the file)?

      In SUMMARY: I don't follow why the thread that runs inputA (reading from the file, writing to redisA, which is up) would block when outputB is unavailable. If each input -> output path and its queues are indeed on entirely separate threads, they should not affect one another.
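      To make the question concrete, here is a minimal Ruby sketch. This is NOT Logstash's actual code, just a model of the architecture I suspect: all inputs push into one shared SizedQueue that a single output worker drains. When that output blocks, every input stalls, which matches the behavior I'm seeing. (Queue size shrunk from the documented 20 to 3 for brevity.)

      ```ruby
      require 'thread'

      # Assumed model: ONE sized queue shared by all inputs and outputs.
      queue = SizedQueue.new(3)

      # Two "inputs" on separate threads, both pushing into the SAME queue.
      inputs = 2.times.map do |i|
        Thread.new { 10.times { queue.push("event from input#{i}") } }
      end

      # One "output" worker that simulates redisB going down: it drains a
      # couple of events and then hangs, like a blocked redis output would.
      output = Thread.new do
        2.times { queue.pop }
        sleep # blocked forever: redisB is "down"
      end

      sleep 0.5 # give the pipeline time to fill the shared queue

      # Both input threads are now stuck in push: running inputs on separate
      # threads does not help, because the queue they feed is shared.
      stuck = inputs.count(&:alive?)
      puts "blocked inputs: #{stuck}" # prints "blocked inputs: 2"
      ```

      If this model is right, separate input threads buy nothing once the shared queue is full, and the only real decoupling is separate pipelines.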

      The docs state: "The output worker model is currently a single thread. Outputs will receive events in the order they are defined in the config file." Jordan on the mailing list said that outputs are no longer limited to a single worker (note: I tried multiple workers on the outputs, same results).

      If they are not entirely separate threads, then this becomes a feature request to support that; otherwise I need to run 2 separate logstash agents to achieve the routing I am trying to do.
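      For reference, the two-agent workaround would look something like this. The file names are hypothetical; the point is that each agent gets its own pipeline and therefore its own internal queues:

      ```shell
      # Hypothetical split of the config below into two files, one per agent:
      #   shipper.conf: file input -> redisA output (the "queue_locally" path)
      #   indexer.conf: redisA input -> redisB output (the "queue_remotely" path)
      bin/logstash agent -f shipper.conf &
      bin/logstash agent -f indexer.conf &
      ```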

      (Also note: I can never cleanly shut down logstash; I always have to kill -9 it. I just see:)

      Sending shutdown signal to input thread {:thread=>#<Thread:0x49bd1798 run>, :level=>:info, :file=>"logstash/pipeline.rb", :line=>"236"}
      caller requested sincedb write () {:level=>:debug, :file=>"filewatch/tail.rb", :line=>"185"}
      ^CInterrupt received. Shutting down the pipeline. {:level=>:warn, :file=>"logstash/agent.rb", :line=>"119"}

      CONFIG
      ----------------

      input {
        file {
          charset => "US-ASCII"
          path => "/path/to/test.log"
          sincedb_path => "./.sincedb"
          start_position => "end"
          tags => [ "queue_locally" ]
          add_field => [ "log_type", "test" ]
        }
        redis {
          host => ["127.0.0.1"]
          port => 6379
          # auth
          key => "local_logs_queue"
          data_type => "list"
          tags => [ "queue_remotely" ]
        }
      }

      filter {
        if "queue_remotely" in [tags] {
          mutate { remove_tag => ["queue_locally"] }
        }
        if [log_type] == "test" and "filtered" not in [tags] {
          mutate {
            add_tag => ["filtered"]
            add_field => { "some_field" => "yo!" }
          }
        } else {
          mutate { remove_tag => ["filtered"] }
        }
      }

      output {
        if "queue_locally" in [tags] {
          redis {
            host => ["127.0.0.1"]
            # auth
            port => 6379
            key => "local_logs_queue"
            data_type => "list"
          }
        }
        if "queue_remotely" in [tags] {
          redis {
            host => ["127.0.0.1"]
            # auth
            port => 6378
            key => "final_logs"
            data_type => "list"
          }
        }
      }
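      For anyone reproducing this, the intermediate and final lists can be watched with redis-cli against the two instances (key names and ports taken from the config above):

      ```shell
      redis-cli -p 6379 LLEN local_logs_queue  # intermediate list on redisA
      redis-cli -p 6378 LLEN final_logs        # final list on redisB (down in the test)
      ```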

              People

              • Assignee:
                Philippe Weber (wiibaa)
              • Reporter:
                bitsofinfo
              • Votes:
                0
              • Watchers:
                3

                Dates

                • Created:
                  Updated:
                  Resolved: