I've been reading the source code of the Redis output plugin and I have a few questions/remarks:
the codec is not taken into account (the output is hardcoded to JSON)
why not use pipelining instead of passing items as parameters of Redis commands?
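To illustrate the difference I mean, here is a rough sketch using a fake client that just records commands instead of talking to a server (redis-rb's real `pipelined` queues commands and flushes them in one round trip):

```ruby
# Fake Redis client that records the commands it would send,
# standing in for redis-rb so no server is needed for this sketch.
class FakeRedis
  attr_reader :commands

  def initialize
    @commands = []
  end

  def rpush(key, *values)
    @commands << ["RPUSH", key, *values]
  end

  # redis-rb-style pipelining: queued commands are sent in one
  # network round trip; here we simply record them.
  def pipelined
    yield self
  end
end

events = ["event1", "event2", "event3"]

# Current approach: one RPUSH carrying all payloads as arguments.
batched = FakeRedis.new
batched.rpush("logstash", *events)

# Pipelined alternative: one RPUSH per event, sent together.
pipelined = FakeRedis.new
pipelined.pipelined do |r|
  events.each { |e| r.rpush("logstash", e) }
end

p batched.commands    # one command with three payloads
p pipelined.commands  # three commands, one round trip
```

Both reach the server in a single round trip; pipelining just keeps each event a separate command, which matters once payloads are binary (e.g. msgpack) or the server command needs them one at a time.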
I am developing a log CEP (complex event processing) system using Redis and I wanted to reuse Logstash.
Given the huge number of logs/transactions per second, I want to:
inherit from the Redis plugin (or copy it)
implement the codec support (I want to use msgpack)
be able to use a custom Redis command (previously added to Redis)
This way, Logstash just encodes the parsed event to msgpack and sends it to Redis, and some simple processing is done directly in Redis.
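A minimal sketch of what the subclassed output could look like. Everything here is hypothetical: `MyRedisOutput` and the `LOGS.INGEST` server-side command are invented names, and the codec is a stand-in (a real one would call `MessagePack.pack`) so the sketch runs without the msgpack gem or a Redis server:

```ruby
# Stand-in for a msgpack codec: on_event registers a callback that
# receives the encoded payload, mirroring Logstash's codec contract.
class StubMsgpackCodec
  def on_event(&block)
    @on_event = block
  end

  def encode(event)
    # A real codec would produce MessagePack.pack(event); we just
    # tag the payload so the flow is visible.
    @on_event.call("msgpack:#{event[:message]}")
  end
end

# Hypothetical subclass of the Redis output's logic.
class MyRedisOutput
  def initialize(redis, codec, key)
    @redis = redis
    @codec = codec
    @key = key
    # When the codec finishes encoding, push the payload to Redis
    # via a custom server-side command (hypothetical "LOGS.INGEST").
    @codec.on_event do |payload|
      @redis.call("LOGS.INGEST", @key, payload)
    end
  end

  def receive(event)
    @codec.encode(event)
  end
end

# Fake Redis client recording calls, standing in for redis-rb.
class RecordingRedis
  attr_reader :calls

  def initialize
    @calls = []
  end

  def call(*args)
    @calls << args
  end
end

redis = RecordingRedis.new
out = MyRedisOutput.new(redis, StubMsgpackCodec.new, "logstash")
out.receive({ message: "hello" })
p redis.calls  # => [["LOGS.INGEST", "logstash", "msgpack:hello"]]
```

The point is that the output itself stays codec-agnostic: swapping msgpack for anything else only changes the codec, not the Redis-facing code.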
What do you think about it?
I just ran into the same problem (the Redis output not obeying the codec parameter) while trying to use Redis to feed Elasticsearch via this Redis river plugin, based on the codec created for LOGSTASH-1445.
I'm able to hack the output a bit to make it work by calling @codec.encode. However, the Redis key to publish to is sprintf'd from the event, and inside the @codec.on_event block I no longer have the event, so the hack omits sprintf'ing the key. With that the output works, although I've also turned off batching, as that's another code path that would need fixing.
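To illustrate the issue (with stand-in classes, not the actual Logstash ones): the on_event callback only receives the encoded payload, so by the time the block runs, the event needed to interpolate the key is gone. Having the codec yield the event alongside the payload is one way out:

```ruby
# Stand-in codec whose on_event callback yields both the event and
# the encoded payload, so the caller can still build a per-event key.
class PassThroughCodec
  def on_event(&block)
    @on_event = block
  end

  def encode(event)
    # Yield the event too, instead of only the encoded payload.
    @on_event.call(event, "encoded:#{event[:message]}")
  end
end

published = []
codec = PassThroughCodec.new

# With the event available in the block, the key can be interpolated
# (a stand-in for event.sprintf("logstash-%{type}")).
codec.on_event do |event, payload|
  key = "logstash-#{event[:type]}"
  published << [key, payload]
end

codec.encode({ type: "apache", message: "GET /" })
p published  # => [["logstash-apache", "encoded:GET /"]]
```

If on_event only yields the payload, as in the current code path, the key has to be computed before encoding and smuggled into the block some other way, which is exactly why my hack drops the sprintf'd key.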