Feature/bug - no timeout specification #58
Comments
Just noticed something of interest. Even though the "workers" setting defaults to 1, the exceptions report workers like "worker7" or "worker6". I tried forcing "workers => 1" to no avail, so I'm wondering whether that parameter is being respected. It might also be the cause of the timeouts, via concurrency problems? Whenever I try to stop Logstash I get a pile of errors, some mentioning "concurrency".
I am seeing the same errors. Any luck with this @PauloAugusto-Asos?
@knepperjm ^ I was collecting web access logs, and we often have several identical access-log lines for the same timestamp. The lines shipped to InfluxDB would of course overwrite one another, and I couldn't find any mechanism in Logstash to resolve those conflicts. I also investigated how to aggregate data into (e.g.) 1-minute buckets but couldn't figure that out either, so I eventually gave up and started shipping the data to Elasticsearch.
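For context on the overwrites described above: InfluxDB treats points with the same measurement, tag set, and timestamp as a single point, so identical log lines written at the same time silently replace each other. One possible workaround (an untested sketch, not something from this thread) is to add a distinguishing tag from a Logstash `ruby` filter before the output stage; the `seq` field name here is hypothetical:

```
filter {
  ruby {
    # Attach a random uniquifier so otherwise-identical points
    # get distinct tag sets and no longer collide in InfluxDB.
    code => "event.set('seq', rand(1_000_000))"
  }
}
```

The trade-off is higher series cardinality in InfluxDB, which may matter at scale.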
Your report is for the influxdb output v4.0.0, which is a bit different from master (v5.0.0?) -- can you upgrade the plugin and try again?
Rather than making the timeout configurable (you cannot predict the duration of a network partition), I would suggest retrying instead of dropping the data. I'll leave the design up to y'all; I don't know anything about InfluxDB, so it's hard to advise.
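The retry-instead-of-drop idea above can be sketched in Ruby. This is an illustrative sketch only, not the plugin's actual code: `write_points` and the option names are hypothetical stand-ins for whatever the output's write path looks like.

```ruby
# Sketch: retry a failed write with exponential backoff instead of
# dropping the batch. `client` is any object responding to
# `write_points` (hypothetical method name for illustration).
def write_with_retry(client, points, max_retries: 5, initial_delay: 1)
  attempt = 0
  begin
    client.write_points(points)
  rescue StandardError
    attempt += 1
    # A negative max_retries means "retry forever".
    raise if max_retries >= 0 && attempt > max_retries
    # Backoff doubles on each attempt: initial_delay, 2x, 4x, ...
    sleep(initial_delay * (2**(attempt - 1)))
    retry
  end
end
```

With `max_retries: -1` the loop retries indefinitely, which trades possible back-pressure on the pipeline for never losing a batch.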
@PauloAugusto-Asos I was able to get the data I wanted into influxdb with the following configuration. Essentially, I just created the data points map like this:
My PR #55 should be in version 5.0 of the plugin (merged 28th April) and provides `initial_delay` and `max_retries` options (the backoff between retries increases exponentially up to `max_retries`, with `-1` meaning infinite retries). The plugin documentation on the Logstash site does not seem to reflect this yet for some reason. Basically, under the hood I switched it to using the Ruby client for InfluxDB rather than talking HTTP directly.
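Based on the option names described in that comment, a v5.0 configuration using the retry behaviour might look like the following. This is a hedged sketch: the option names come from the comment above, but the values (and the host/db settings) are illustrative.

```
output {
  influxdb {
    host => "xxx.yyy.zzz"
    db => "IIS"
    max_retries => 5      # -1 would retry forever
    initial_delay => 1    # seconds before the first retry; backoff grows exponentially
  }
}
```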
Feature Request / Bug Report?
The configuration options for this plugin do not allow us to specify the timeout, and I'm getting timeout warnings.
Problem:
As far as I can tell from the documentation (or the lack of it), there's no way to specify the timeout for shipping data to the InfluxDB database.
I am getting timeout warnings without knowing where they come from, or by how much the timeout is being exceeded.
My LogStash config is:
output
{
    influxdb
    {
        host => "xxx.yyy.zzz"
        port => 8086
        db => "IIS"
        retention_policy => "MyDefault"
        measurement => "access_logs"
        coerce_values => {
            'timetaken' => 'integer'
        }
        data_points => {
            "timetaken" => "%{timetaken}"
            "host" => "%{host}"
            "server" => "%{server}"
            "blabla" => "%{blabla}"
        }
        # use_event_fields_for_data_points => false
        send_as_tags => [
            "host",
            "server",
            "blabla"
        ]
        #flush_size => 100
        idle_flush_time => 29
        #time_precision => "s"
    }
}
Proposal:
output {
    influxdb {
        timeout => 30
    }
}