Issues
- LOGSTASH-2230: Logstash drops events after processing multiple events in the same millisecond (Resolved; assignee: Jordan Sissel)
- LOGSTASH-1823: when working with ruby filter it stops passing data to elastic search (Resolved; assignee: Logstash Developers)
- LOGSTASH-1814: File input plugin not reading all files (Resolved; assignee: Ananda Prakash Verma)
- LOGSTASH-1805: Interactive Config Builder (assignee: Ananda Prakash Verma)
- LOGSTASH-1782: installing logstash-forwarder on solaris with sunos (assignee: Logstash Developers)
- LOGSTASH-1736: Debian packages unreachable (403) (Resolved; assignee: Logstash Developers)
- LOGSTASH-1725: [openbsd] Exception in thread "LogStash::Runner" org.jruby.exceptions.RaiseException: (NotImplementedError) stat.st_dev unsupported or native support failed to load (Resolved; assignee: Logstash Developers)
- LOGSTASH-1721: InvalidByteSequenceError: "\xCF" on ASCII-8BIT when using compress_spooler (Resolved; assignee: Jordan Sissel)
- LOGSTASH-1720: Logstash s3 input not recognized -- Couldn't find any input plugin named 's3' (Resolved; assignee: Jordan Sissel)
- LOGSTASH-1719: Crash in racob-x64.dll on Windows (assignee: Logstash Developers)
- LOGSTASH-1718: no such file to load -- file:/opt/logstash/logstash.jar!//lib/aws/core/configuration (Resolved; assignee: Jordan Sissel)
- LOGSTASH-1717: Logstash randomly stops to proceed UDP syslog messages (assignee: Logstash Developers)
- LOGSTASH-1716: xmpp input/output broken (assignee: Logstash Developers)
- LOGSTASH-1712: Custom install displays message Invalid resource type anchor at /etc/puppet/modules/logstash/manifests/init.pp:106 (assignee: Jason Kendall)
- LOGSTASH-1711: Error in new ElasticSearch index template support (Resolved; assignee: Aaron Mildenstein)
- LOGSTASH-1709: OpenTSDB multiple tags (assignee: Logstash Developers)
- LOGSTASH-1708: Using geoip and prune generates "superclass mismatch for class Error" on startup (Resolved; assignee: Logstash Developers)
- LOGSTASH-1706: Calling php script from exec output plugin hangs Logstash (assignee: Logstash Developers)
- LOGSTASH-1704: Create database input plugin (assignee: Logstash Developers)
- LOGSTASH-1703: GeoIP filter does not save GeoJSON as advertised in the changelog (Resolved; assignee: Aaron Mildenstein)
- LOGSTASH-1702: logstash-forwarder TLS handshake problem (assignee: Logstash Developers)
- LOGSTASH-1701: new grok patterns for atlassian stash-access.log (Resolved; assignee: Zachary Buckholz)
- LOGSTASH-1700: Exception when using AWS components with flatjar (Resolved; assignee: Jordan Sissel)
- LOGSTASH-1699: Failed to flush outgoing items while processing Cisco ASA syslogs (Resolved; assignee: Logstash Developers)
- LOGSTASH-1302: Failed to flush outgoing items (Resolved; assignee: Jason Kendall)
- LOGSTASH-1242: Sincedb not updating correctly (Resolved; assignee: João Duarte)
- LOGSTASH-1200: [openbsd] Exception in thread "LogStash::Runner" org.jruby.exceptions.RaiseException: (NotImplementedError) stat.st_dev unsupported or native support failed to load (assignee: Logstash Developers)
Logstash drops events after processing multiple events in the same millisecond
Activity
Suyog Rao February 7, 2015 at 12:05 AM
Hi, this has been fixed in https://github.com/elasticsearch/logstash/issues/1938
Please try it and, if it is not solved, open an issue on the Logstash GitHub.
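(Editorial note for readers on newer releases: a commonly suggested alternative is to join the lines at the input with a multiline codec instead of stacking several multiline filters. This is only a hypothetical sketch, reusing the TCP input and the "line starts with a non-digit" heuristic from the configuration further below; it is not necessarily the exact fix referenced above.)
input {
  tcp {
    type => "jboss"
    port => 3350
    # lines that do not start with a digit (no leading timestamp) are
    # appended to the previous event, so stack traces stay in one message
    codec => multiline {
      pattern => "^\D"
      what => "previous"
    }
  }
}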
Former user May 28, 2014 at 3:28 PM
I just tried inverting the filter order, and also running with just one filter, but the same thing happens.
if [type] == "jboss" {
  grok {
    patterns_dir => "/opt/logstash/patterns"
    match => [message, "%{JBOSS1}"]
  }
  multiline {
    pattern => "^\D"
    what => "previous"
  }
  date {
    match => [ "timestamp", "YY-MM-dd HH:mm:ss,SSS" ]
  }
}
I am starting to read the code, but I am really new to it. I cannot promise anything, but if I find something, I'll let you know.
Thanks!
Former user May 28, 2014 at 3:17 PM
Unfortunately, the same thing happened even though I changed the multiline filters to:
if [type] == "jboss" {
  grok {
    patterns_dir => "/opt/logstash/patterns"
    match => [message, "%{JBOSS1}"]
  }
  date {
    match => [ "timestamp", "YY-MM-dd HH:mm:ss,SSS" ]
  }
  multiline {
    pattern => "^\D|^$|^\n"
    what => "previous"
  }
}
However, the issue is indeed related to the multiline filter. I tried without the filter and it works OK; the only drawback is that I then have to merge all the lines of the Java stack trace into a single message myself.
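(Editorial note: a single multiline filter anchored on the leading timestamp is another common way to fold stack-trace lines into the preceding event. This is only a sketch; the TIMESTAMP_ISO8601-based pattern is an assumption and is not taken from this report.)
filter {
  multiline {
    # any line that does NOT start with a timestamp is appended to the previous event
    pattern => "^%{TIMESTAMP_ISO8601}"
    negate => true
    what => "previous"
  }
}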
Former user May 28, 2014 at 1:12 PM
Excellent! I'll try to merge the three conditions into a single multiline filter and let you know.
Thanks!
Jordan Sissel May 28, 2014 at 5:08 AM
I suspect this is mostly due to using so many multiline filters, and perhaps to them interacting in a way that you do not expect. I haven't tried to reproduce your problem myself, but most reports suggesting that Logstash drops messages with multiple events and/or multiline turn out to be bad interactions between multiple independent multiline filters.
Description
Dear Sir/Madam,
I have really found Logstash very useful, but I recently stumbled into a situation that worries me.
Apparently, when parsing logs (JBoss logs in particular), after parsing events in the same millisecond, Logstash loses track of the current line and drops events. These are the details:
1) Configuration snippet
input {
  ...
  tcp {
    type => "jboss"
    port => 3350
    add_field => [ "server", "server1", "jboss_instance", "jboss1" ]
  }
  ...
}
filter {
  if [type] == "jboss" {
    grok {
      patterns_dir => "/opt/logstash/patterns"
      match => [message, "%{JBOSS1}"]
    }
    date {
      match => [ "timestamp", "YY-MM-dd HH:mm:ss,SSS" ]
    }
    multiline {
      # type => "somefiletype"
      pattern => "^\D"
      what => "previous"
    }
    multiline {
      pattern => "^$"
      what => "previous"
    }
    multiline {
      pattern => "^\n"
      what => "previous"
    }
  }
}
(Here the multiline filters are used to join the Java stack trace and other continuation lines. And since I send the logs asynchronously, I use the time in the event log rather than the time at which the event is received; most of the time this works just fine.)
I just match the date and loglevel: JBOSS1 %{DATESTAMP:timestamp} %{LOGLEVEL:loglevel}
I send the logs using netcat to a specific port on the local box where everything resides (actually I rsync the log files from remote machines to a local machine and netcat the difference in the lines; but we can safely assume that everything lives on the same Linux box, because the rsync part works fine).
2) This is the input I send:
2014-05-27 09:15:00,031 INFO [mi.package.core.batch.jobs.Tarea] (DefaultQuartzScheduler_Worker-2) **Se ejecuta la tarea: Tarea**
2014-05-27 09:15:00,031 INFO [mi.package.core.batch.jobs.Fachada] (DefaultQuartzScheduler_Worker-2) Ejecutando la versión 2.7.0
2014-05-27 09:15:00,031 INFO [mi.package.core.batch.jobs.Fachada] (DefaultQuartzScheduler_Worker-2) levantando properties
2014-05-27 09:15:00,031 INFO [mi.package.core.batch.jobs.Fachada] (DefaultQuartzScheduler_Worker-2)
2014-05-27 09:15:00,031 ERROR [org.quartz.core.JobRunShell] (DefaultQuartzScheduler_Worker-2) Job GrupoTarea.Tarea threw an unhandled Exception:
java.lang.NullPointerException
at java.util.Hashtable.put(Hashtable.java:394)
at java.util.Properties.setProperty(Properties.java:143)
...
2014-05-27 09:15:00,034 ERROR [org.quartz.core.ErrorLogger] (DefaultQuartzScheduler_Worker-2) Job (GrupoTarea.Tarea threw an exception.
org.quartz.SchedulerException: Job threw an unhandled exception. [See nested exception: java.lang.NullPointerException]
...
2014-05-27 09:15:09,094 INFO [mi.package.gw.pool.Manager] (Thread-57)
3) This is what gets into Elasticsearch:
2014-05-27 09:15:00,031 INFO [mi.package.core.batch.jobs.Fachada] (DefaultQuartzScheduler_Worker-2) Ejecutando la versi\xF3n 2.7.0 2014-05-27T09:15:00.031-03:00
2014-05-27 09:15:00,031 INFO [mi.package.core.batch.jobs.Fachada] (DefaultQuartzScheduler_Worker-2) 2014-05-27T09:15:00.031-03:00
2014-05-27 09:15:00,031 INFO [mi.package.core.batch.jobs.Tarea] (DefaultQuartzScheduler_Worker-2) ***Se ejecuta la tarea: Tarea... 2014-05-27T09:15:00.031-03:00
2014-05-27 09:15:00,031 INFO [mi.package.core.batch.jobs.Fachada] (DefaultQuartzScheduler_Worker-2) levantando properties
The first thing you might notice is that the order of the lines gets messed up. That is not so bad. However, the events at 2014-05-27 09:15:00,034 and at 2014-05-27 09:15:09,094 never get to Elasticsearch.
The version of Logstash I am using is 1.3.2 (flatjar). I've seen there are many changes in version 1.4.1, so I'd rather not upgrade unless I am sure this issue is addressed in 1.4.1.
Thank you!
Best regards,
Marcelo