Beats Connection Closed by Logstash

Published by Torry Crass

It was recently one of those days when odd network “chop” led me to dig through various systems in the environment to track down possible culprits. While checking logs, I noticed a lot of Windows Application Event log errors about different “beats” agents (used to ship information to the Logstash system).

Specifically, errors like the ones below were appearing for each beats agent at least once a minute.

2018-11-21T23:49:10.917-0500    ERROR   pipeline/output.go:92   Failed to publish events: write tcp CLIENT_IP_ADDRESS:51577->LOGSTASH_IP_ADDRESS:5044: wsasend: An existing connection was forcibly closed by the remote host.
2018-11-21T23:49:09.913-0500    ERROR   logstash/async.go:235   Failed to publish events caused by: write tcp CLIENT_IP_ADDRESS:51577->LOGSTASH_IP_ADDRESS:5044: wsasend: An existing connection was forcibly closed by the remote host.

This probably wasn’t the cause of the network problems I’d been seeing, but I certainly wanted to know why these connections were being “forcibly closed”.

So I went about checking service status and network settings and looking at traffic, none of which turned up anything conclusive. After a few searches I stumbled on information about a Logstash server setting called client_inactivity_timeout (see references below as needed).

This seemed like a great place to start. First thing to note: this setting lives on your Logstash server, not on the client side, so don’t look for it there (this wasn’t made clear in the posts I found). You’ll probably find the configuration in roughly the following location:

/etc/logstash/conf.d.available/0006_input_beats.conf

Open this file and you’ll see something like the following:

input {
  beats {
    port => "5044"
    tags => [ "beat" ]
  }
}

Pretty simple, right? Now we can update the configuration file to add a longer timeout period for the connection, as shown below. Use your favorite text editor and make the changes you need.

input {
  beats {
    port => "5044"
    tags => [ "beat" ]
    client_inactivity_timeout => "1200"
  }
}

Note the “1200” value for the added option, which is in seconds. If your system doesn’t communicate very often, idle connections exceeding the timeout are the likely cause of the initial errors, and to resolve them you may need a significant increase over the default (60 seconds in the documentation linked below).
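Before restarting anything, it’s worth validating the edited pipeline configuration. Logstash ships a config-test mode that parses the config files and exits without starting the pipeline. The binary and settings paths below are typical package-install defaults, not universal; adjust them for your setup (Security Onion in particular lays things out differently):

```shell
# Parse the Logstash configuration and exit without starting the pipeline.
# Paths assume a standard package install -- adjust for your environment.
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
```

If the config has a syntax error (a missing brace, a misspelled option), this reports it immediately, which is much nicer than discovering it after a restart.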

Now you’ll want to restart the Logstash service. The exact command varies depending on whether you’re running a standalone instance, a full ELK Stack, or something like Security Onion. Best to do a quick search on how to restart it for your setup if you don’t already know.
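For a typical package install on a modern Linux distribution, the restart is a one-liner; older init systems use the service wrapper instead. (Security Onion manages Logstash through its own scripts, so the command there will differ.)

```shell
# systemd-based installs (most modern distros):
sudo systemctl restart logstash

# older SysV-init installs:
sudo service logstash restart
```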

This change took my errors from one per minute to nonexistent. Hopefully it can do the same for you.


REFERENCES:

https://www.elastic.co/guide/en/logstash/2.4/plugins-inputs-beats.html#plugins-inputs-beats-client_inactivity_timeout

https://discuss.elastic.co/t/filebeat6-2-4-error-logstash-async-go-235-failed-to-publish-events-caused-by-write-tcp-192-168-1-2-19616-192-168-1-3-write-connection-reset-by-peer/147503/7

https://discuss.elastic.co/t/solved-tcp-rst-reset-causing-filebeat-logstash-connection-reset/82343/3


