Do not update sincedb if the data is not accepted by Elasticsearch

Description

I am using Elasticsearch 1.1.1 with Logstash 1.4.0, and I am currently in the process of parsing 1.2B documents across 18 indices.
At times, for various reasons (probably a bug in LS), I see errors like this:

{:timestamp=>"2014-05-13T18:32:06.403000+0000", :message=>"Failed to flush outgoing items", :outgoing_count=>2616, :exception=>#<RuntimeError: Non-OK response code from Elasticsearch: 404>, :backtrace=>["/elasticsearch/logstash/logstash-1.4.0/lib/logstash/outputs/elasticsearch/protocol.rb:127:in `bulk_ftw'", "/elasticsearch/logstash/logstash-1.4.0/lib/logstash/outputs/elasticsearch/protocol.rb:80:in `bulk'", "/elasticsearch/logstash/logstash-1.4.0/lib/logstash/outputs/elasticsearch.rb:331:in `flush'", "/elasticsearch/logstash/logstash-1.4.0/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1339:in `each'", "/elasticsearch/logstash/logstash-1.4.0/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216:in `buffer_flush'", "/elasticsearch/logstash/logstash-1.4.0/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:193:in `buffer_flush'", "/../logstash/logstash-1.4.0/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:112:in `buffer_initia

The problem vanishes after I restart the Logstash process.

Anyway, the major problem is that even though the write failed, Logstash still updates sincedb and does not retry the log lines. This is a serious pain, since one has to constantly monitor for these errors and restart the process whenever they appear. So far I have lost over 100M documents and have no idea how to get them back!
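One recovery sketch I am considering (untested at this scale, and only viable if duplicate events are acceptable or can be deduplicated downstream): since the file input tracks read offsets in sincedb independently of whether the elasticsearch output succeeded, pointing the input at a fresh sincedb and re-reading from the beginning should re-ingest the affected files. The paths and host below are placeholders, not my actual setup:

    input {
      file {
        path => "/var/log/app/*.log"            # placeholder: the affected log files
        start_position => "beginning"           # re-read each file from offset 0
        sincedb_path => "/tmp/sincedb_recovery" # fresh sincedb, so old offsets are ignored
      }
    }
    output {
      elasticsearch {
        host => "localhost"                     # placeholder ES host
        protocol => "http"
      }
    }

This re-reads everything, not just the lost lines, so it is a blunt instrument; the real fix is for Logstash to advance sincedb only after Elasticsearch acknowledges the bulk request.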

Environment

None

Status

Assignee

Logstash Developers

Reporter

NishchayS

Affects versions

1.4.0

Priority