Insufficient performance


We are using LogStash in our Cocaine cloud project as the logging service, which provides indexed log transport from cloud nodes to Elasticsearch. Everything was perfect until we noticed significant data loss (~20%) under fairly moderate load (~1000-1500 RPS). In our production environment we have about 5-20k events per second, so we are in trouble. Can you give us any advice on how to tune LogStash to increase its performance?

We are using it the following way:

LogStash 1.2.2.
JSON event => udp input (json_lines codec) => elasticsearch output.
Average event size is approximately 120 bytes.
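For reference, the pipeline above corresponds to a configuration roughly like this (a minimal sketch; the port number and host are placeholders, not our actual values):

```
input {
  udp {
    # Placeholder port; events arrive as newline-delimited JSON.
    port  => 5959
    codec => json_lines
  }
}

output {
  elasticsearch {
    # Placeholder host.
    host => "localhost"
  }
}
```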

I've also done some benchmarks with a blocking TCP socket/input and a null output, which gave me about 3k RPS. Finally, I tried packing events with msgpack, and it made no appreciable difference.
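The load generator I used for these benchmarks was roughly equivalent to the following Python sketch (the event fields, host, and port are illustrative placeholders, not our real payload):

```python
import json
import socket
import time


def make_event(seq):
    """Build one newline-terminated JSON event of roughly 120 bytes,
    resembling the production payload shape (fields are hypothetical)."""
    return json.dumps({
        "message": "benchmark event number %d" % seq,
        "level": "INFO",
        "source": "cocaine-benchmark",
        "timestamp": time.time(),
    }) + "\n"


def flood_udp(host, port, count):
    """Fire `count` json-lines events at a Logstash udp input as fast
    as possible; UDP is fire-and-forget, so drops show up as missing
    documents in Elasticsearch rather than as send errors."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for i in range(count):
            sock.sendto(make_event(i).encode("utf-8"), (host, port))
    finally:
        sock.close()
```

Comparing the number of events sent against the number indexed is how we measured the ~20% loss.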



Jordan Sissel


Evgeny Safronov
