date filter doesn't create a proper mapping on elasticsearch

Description

I'm using an elasticsearch cluster with the following configuration:

output {
  elasticsearch_http {
    flush_size => 1
    host => "<myelasticsearch cluster vip>"
    port => 9200
  }
}
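Side note: flush_size => 1 makes elasticsearch_http send one bulk request per event, which is why every failing document shows up as its own error message later on. As a quick sanity check that the endpoint is reachable (the VIP placeholder stands in for the real host):

curl -XGET 'http://<myelasticsearch cluster vip>:9200/'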

In addition, I have a field from my inputs that I use as the timestamp. This field arrives in several different formats, so I've added multiple format patterns to the date filter per the documentation:

date {
  match => [
    "EventTime",
    "yyyy/MM/dd HH:mm:ss",
    "yyyy/MM/dd HH:mm:ss,SSS",
    "yyyy-MM-dd HH:mm:ss",
    "yyyy-MM-dd HH:mm:ss,SSS"
  ]
  add_tag => [ "date" ]
}
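For illustration, these are the shapes of values the four patterns are meant to cover (the ",377" variant is taken from the error below; the other examples are hypothetical):

2013/06/18 00:00:33       ->  "yyyy/MM/dd HH:mm:ss"
2013/06/18 00:00:33,377   ->  "yyyy/MM/dd HH:mm:ss,SSS"
2013-06-18 00:00:33       ->  "yyyy-MM-dd HH:mm:ss"
2013-06-18 00:00:33,377   ->  "yyyy-MM-dd HH:mm:ss,SSS"

Worth keeping in mind: the date filter only uses these patterns to parse EventTime into @timestamp inside Logstash; it does not control how Elasticsearch maps the EventTime field itself.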

I'm getting tons of these messages:

{:timestamp=>"2013-06-18T10:01:12.313000-0400", :message=>"Error writing to elasticsearch", :response=>#<FTW::Response:0x6c4d7266 @headers=FTW::HTTP::Headers <{"content-type"=>"application/json; charset=UTF-8", "content-length"=>"436"}>, @body=<FTW::Connection(@3906) @destinations=["<myelasticsearch cluster vip>"] @connected=true @remote_address="<IP ADDRESS>" @secure=false >, @status=400, @logger=#<Cabin::Channel:0x52e3b1 @subscriber_lock=#<Mutex:0x7b9f5ad1>, @metrics=#<Cabin::Metrics:0x1b038ebf @channel=#<Cabin::Channel:0x52e3b1 ...>, @metrics={}, @metrics_lock=#<Mutex:0x690ab74>>, @data={}, @subscribers={}, @level=:info>, @reason="Bad Request", @version=1.1>, :response_body=>"{\"error\":\"RemoteTransportException[[<ES NODE>][inet[/<IPADDRESS>:9300]][index]]; nested: MapperParsingException[failed to parse [@fields.EventTime]]; nested: MapperParsingException[failed to parse date field [2013/06/18 00:00:33,377], tried both date format [yyyy/MM/dd HH:mm:ss||yyyy/MM/dd], and timestamp number]; nested: IllegalArgumentException[Invalid format: \\\"2013/06/18 00:00:33,377\\\" is malformed at \\\",377\\\"]; \",\"status\":400}", :level=>:error}
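The exception itself points at the mismatch: Elasticsearch mapped @fields.EventTime as a date that only knows [yyyy/MM/dd HH:mm:ss||yyyy/MM/dd] (presumably auto-detected from the first, millisecond-free value it indexed), so any value carrying the ,SSS milliseconds fails on the Elasticsearch side regardless of what the Logstash date filter matched. One possible workaround, sketched under the assumption that mutate's tags/remove options behave as documented in 1.1.x: once the date filter has copied the value into @timestamp, drop the raw string so Elasticsearch never has to fit it into its own mapping.

filter {
  date {
    match => [
      "EventTime",
      "yyyy/MM/dd HH:mm:ss",
      "yyyy/MM/dd HH:mm:ss,SSS",
      "yyyy-MM-dd HH:mm:ss",
      "yyyy-MM-dd HH:mm:ss,SSS"
    ]
    add_tag => [ "date" ]
  }
  # sketch: only runs on events the date filter tagged, i.e. where
  # parsing succeeded and @timestamp already holds the value
  mutate {
    tags   => [ "date" ]
    remove => [ "EventTime" ]
  }
}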

I've checked the mapping on my ES for this field, and it shows as follows:

1 2 3 4 "EventTime" : { "type" : "date", "format" : "yyyy/MM/dd HH:mm:ss||yyyy/MM/dd" },

I've stopped the logstash daemon and deleted all the indices, to make sure it's not because the current index was already defined, but I got the same errors. I'm not sure if I'm missing something or if this is just a bug...
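After deleting the indices (and with a template like the sketch above in place), the mapping a fresh index actually ends up with can be verified directly; the index name here is hypothetical:

curl -XGET 'http://<myelasticsearch cluster vip>:9200/logstash-2013.06.18/_mapping?pretty=true'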

Rudy

Environment

None

Assignee

Logstash Developers

Reporter

Rudy Attias

Affects versions

1.1.13
