fields.timestamp is UTC-5 (logs 5 hours behind)

nitin February 17, 2014 at 12:22 AM
I found the problem.
The JVM works out the default timezone as follows:
1) It looks at the environment variable TZ. This is not set on our Linux boxes.
2) It looks at the file /etc/sysconfig/clock and tries to find the "ZONE" entry. However, on these hosts the ZONE entry does not have double quotes around the actual value, and the JVM code is unable to recognize the entry.
3) If the ZONE entry is not found, the JVM compares the contents of /etc/localtime with the contents of every file in /usr/share/zoneinfo recursively. When the contents match, it returns the path and filename relative to /usr/share/zoneinfo.
I don't have a TZ variable, so the JVM went on to /etc/sysconfig/clock, and it was set to "America/New_York". Changing that to UTC fixed the problem.
It still doesn't explain why the index was created in the future.
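The fix above can be verified from inside the JVM itself: print what timezone it resolved, and optionally force UTC regardless of host config (a minimal sketch; the class name is my own, and forcing UTC in code is an alternative to fixing /etc/sysconfig/clock or passing -Duser.timezone=UTC):

```java
import java.util.TimeZone;

public class TzCheck {
    public static void main(String[] args) {
        // Whatever the JVM resolved from TZ, /etc/sysconfig/clock, or /etc/localtime
        System.out.println("default: " + TimeZone.getDefault().getID());

        // Force UTC for this JVM regardless of host configuration
        TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
        System.out.println("forced: " + TimeZone.getDefault().getID());
    }
}
```

Running this on an affected host before the fix would show the unexpected default; after setDefault, the second line always reports UTC.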
Hi,
I have the same config on about 40 servers and 5 of them are exhibiting this problem. All my servers are in UTC and all log stamps are in UTC, but when logstash reads these logs it converts @fields.timestamp to UTC-5, while @timestamp stays in UTC. I have tried removing the date filter and the behavior remains the same, so I suspect it's not the date filter.
What's even stranger is that while I had the logstash-agent stopped I could still see logs flowing in Kibana, which makes me think the logs are 5 hours behind.
Even though the logs are behind, it creates a new index 5 hours in the future.
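The 5-hour shift is consistent with how a zone-less timestamp is parsed: when the string carries no explicit offset, the parser falls back to the JVM's default timezone. A minimal Java sketch with a hypothetical log timestamp (America/New_York is EST, UTC-5, on this date):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class ZoneShift {
    public static void main(String[] args) throws Exception {
        String stamp = "2014-02-17 00:00:00"; // zone-less, as in the logs
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

        // Parse the same wall-clock string under two different default zones
        fmt.setTimeZone(TimeZone.getTimeZone("America/New_York"));
        Date asEastern = fmt.parse(stamp);

        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        Date asUtc = fmt.parse(stamp);

        // Identical strings, but the epoch values differ by the zone offset
        long diffHours = (asEastern.getTime() - asUtc.getTime()) / 3600000L;
        System.out.println("difference in hours: " + diffHours);
    }
}
```

So a host whose JVM defaults to America/New_York interprets the same log line 5 hours apart from a host defaulting to UTC, which matches the shifted @fields.timestamp and the index created 5 hours ahead.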
filter {
  grok {
    type => "blabla"
    pattern => "%{TIMESTAMP_ISO8601:timestamp} %{GREEDYDATA:message}"
  }
  date {
    match => ["timestamp", "ISO8601", "UNIX", "UNIX_MS"]
  }
}
output {
  redis {
    host => ["192.168.128.189","192.168.128.146"]
    data_type => "list"
    key => "logstash"
    shuffle_hosts => false
  }
}
Anyone else had this problem?