Sometimes we just don't care if a message gets lost. What matters more in those cases is to keep pushing out the latest information as it comes in.
Not all of our Logstash use is as a reliable log-delivery client. It has become the easiest way to grab, parse, and move the latest news from point A to point B when we can't predict what architecture we'll be working with in the finished product.
Could we define, output by output, whether a given output should be retried at all, or set a maximum number of retries before a message is thrown away? Or, for minor blockages, set a size for an internal buffer of messages awaiting retry, dropping the oldest when it fills up? That way we could use a single Logstash client/config to handle both the stuff that needs guaranteed delivery for long-term storage and the stuff that should be delivered quickly or not at all.
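To make the request concrete, here is a rough sketch of what such a config might look like. The `retryable`, `max_retries`, `retry_buffer_size`, and `overflow_policy` options below are invented purely for illustration; they are not real Logstash settings today:

```
output {
  # Guaranteed delivery for long-term storage:
  # retry indefinitely, never drop a message.
  elasticsearch {
    hosts => ["es.example.com:9200"]   # example host, not a real endpoint
    retryable => true                  # hypothetical: retry failed sends forever
  }

  # Fire-and-forget delivery of the latest news:
  # a few retries, a bounded buffer, and drop the oldest on overflow.
  http {
    url => "http://dashboard.example.com/ingest"  # example URL
    retryable => true
    max_retries => 3                   # hypothetical: give up after 3 attempts
    retry_buffer_size => 1000          # hypothetical: hold at most 1000 pending messages
    overflow_policy => "drop_oldest"   # hypothetical: toss the oldest when the buffer fills
  }
}
```

The point is that both behaviors live in one pipeline: the first output blocks and retries until delivery succeeds, while the second trades reliability for freshness.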