Fixed
Details
Assignee: Jordan Sissel
Reporter: Evgeny Safronov
Fix versions:
Affects versions:
Created January 9, 2014 at 12:00 PM
Updated March 24, 2014 at 4:53 AM
Resolved March 24, 2014 at 4:53 AM
We are using Logstash in our Cocaine cloud project as a logging service, which provides indexed log transport from cloud nodes to Elasticsearch. Everything was fine until we noticed significant data loss (~20%) at fairly moderate load (~1000-1500 RPS). In our production environment we have about 5-20k events per second, so we are in trouble. Can you give us any advice on how to tune Logstash to increase its performance?
We are using it in the following way:
Logstash 1.2.2.
JSON event => udp input (json_lines codec) => elasticsearch output.
Average event size is approximately 120 bytes.
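For reference, the pipeline above corresponds roughly to a Logstash configuration like the following sketch (the port number is an assumption, not taken from our setup; the rest mirrors the description above):

```
input {
  udp {
    port  => 5000        # assumed port; not stated in the report
    codec => json_lines  # one JSON event per line, as described above
  }
}

output {
  elasticsearch { }
}
```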
I've also run some benchmarks with a blocking TCP socket input and a null output, which gave me about 3k RPS. Finally, I tried packing the events with msgpack, which made no appreciable difference.
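To separate kernel-level UDP drops from Logstash itself, a minimal sketch like this one compares sent vs. received datagrams on the loopback interface (the batch size and payload shape are assumptions chosen to roughly match the ~120-byte events above):

```python
import json
import socket

HOST = "127.0.0.1"
N_EVENTS = 500  # assumed batch size for illustration

# Receiver: bind to an OS-assigned port on loopback.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind((HOST, 0))
rx.settimeout(0.2)
port = rx.getsockname()[1]

# Sender: one small JSON event per datagram, roughly 120 bytes each.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
event = json.dumps({"message": "x" * 80, "level": "info"}).encode()
for _ in range(N_EVENTS):
    tx.sendto(event, (HOST, port))

# Drain whatever actually arrived; anything missing was dropped in-kernel,
# before any consumer (such as Logstash) ever saw it.
received = 0
try:
    while received < N_EVENTS:
        rx.recvfrom(65536)
        received += 1
except socket.timeout:
    pass

print(f"sent={N_EVENTS} received={received} "
      f"loss={(N_EVENTS - received) / N_EVENTS:.1%}")
```

If this shows loss even without Logstash in the loop, the socket receive buffer is overflowing before the consumer can drain it, which points at buffer sizing rather than pipeline throughput.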