Logstash no longer inserts events into the new daily index

Description

Hi,

I'm using logstash 1.1.9 with elasticsearch 0.20.5.

For two weeks logstash had been creating one index per day with no problem, but for the last 3 or 4 days, after logstash creates the new index at midnight, it no longer inserts events into ES.

To test, I configured logstash to create an index per hour, and it works!

Restarting logstash solves the issue, but I can't figure out why I have this problem... any ideas?

Here is the output section of my logstash.conf:

elasticsearch {
  host      => "127.0.0.1"
  cluster   => "logcluster"
  node_name => "logstash-client"
  index     => "logstash-%{+YYYY.MM.dd}"
}
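
For reference, a sketch of how that output section could be extended with a stdout output to check whether events are still being emitted by logstash when the inserts stop (illustration only, not my running config):

output {
  elasticsearch {
    host      => "127.0.0.1"
    cluster   => "logcluster"
    node_name => "logstash-client"
    index     => "logstash-%{+YYYY.MM.dd}"
  }
  # prints every event to stdout, so a stalled elasticsearch output is easy to spot
  stdout { debug => true }
}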

As a consequence, Kibana shows a "Parse Failure [No mapping found for [@timestamp] in order to sort on]" error, because the index is empty...
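
A check directly against ES confirms the symptom (the index name below is just an example for one affected day):

  # the freshly created daily index holds no documents...
  curl -XGET 'http://127.0.0.1:9200/logstash-2013.05.10/_count?pretty'
  # ...and therefore has no @timestamp mapping yet, which is what Kibana complains about
  curl -XGET 'http://127.0.0.1:9200/logstash-2013.05.10/_mapping?pretty'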

Thank you for the help,
Archa

EDIT:
I let logstash create an index per hour today, and sadly the same problem occurred 7 hours later... restarting logstash fixed it again...

Activity

Archa May 15, 2013 at 12:33 PM

Hi, today is a beautiful day where everything is OK: logstash created its daily index and is still processing logs!

It seems the problem was the memory allocated to ES: it had 1 GB, and now it has 2 GB.
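
For reference, the heap was raised via the standard ES environment variable before starting elasticsearch (example value, assuming the stock 0.20 startup scripts, which read it in bin/elasticsearch.in.sh):

  # sets both the minimum and maximum JVM heap for elasticsearch to 2 GB
  export ES_HEAP_SIZE=2g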

Thank you very much Philippe for the help.

Archa May 14, 2013 at 1:19 PM

Heyhey, thank you, I spent a lot of time getting a mapping like that with Varnish!

There is nothing in the elasticsearch logs.

"you are on localhost, but who knows..."
And you are right! Thank you for the suggestion: I checked and indeed the two run with a DIFFERENT Java version... (logstash on Java 7 and ES on Java 6).
I switched ES to Java 7.
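
(For reference, a quick way to check which JVM each process actually runs on; the path below is only illustrative:)

  # shows the java binary each running JVM was started with
  ps -eo pid,args | grep -i '[j]ava'
  # prints the version of a given JVM
  /usr/lib/jvm/java-7-openjdk-amd64/bin/java -version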

Now wait and see 🙂

Thank you again !

Philippe Weber May 14, 2013 at 12:15 PM

From this thread,
http://elasticsearch-users.115913.n3.nabble.com/Error-in-logs-Message-not-fully-read-response-for-2298885-handler-future-org-elasticsearch-indices-rt-td4031134.html

it is recommended to validate that the ES version AND the Java version are the same on both ends. You are on localhost, but who knows...

Philippe Weber May 14, 2013 at 12:08 PM

That's a nice default mapping 🙂
No other clues about the error in the elasticsearch logs?

Archa May 14, 2013 at 11:31 AM

OK, thank you, here is my mapping configuration (it is dynamic):
rsyslog:
  properties:
    '@fields':
      dynamic: "true"
      properties:
        agent:
          type: "string"
        backend:
          type: "string"
        bytes:
          type: "string"
        cached:
          type: "string"
        clientip:
          type: "string"
        httpversion:
          type: "string"
        nb_hit:
          type: "string"
        pid:
          type: "string"
        program:
          type: "string"
        referrer:
          type: "string"
        request:
          type: "string"
        response:
          type: "string"
        verb:
          type: "string"
    '@message':
      type: "string"
    '@source':
      type: "string"
    '@source_host':
      type: "string"
    '@source_path':
      type: "string"
    '@tags':
      type: "string"
    '@timestamp':
      type: "date"
      format: "dateOptionalTime"
    '@type':
      type: "string"

Details

Created May 10, 2013 at 8:38 AM
Updated May 15, 2013 at 12:33 PM