logstash 7.16.* recently fails to start when using this plugin! #70
Looks like a configuration issue. Could you please provide your pipeline config? Regards
@cherweg Did you try the reproduction steps? Notice that the Dockerfile encapsulates the default configs and the only addition is installing this plugin. If you want, you can `docker exec -it` into the container and view the default configs.
I STRONGLY advise using `docker build --no-cache` (a plain `docker build` will not rebuild the layer that installs the plugin).
I'll reproduce and analyze it, maybe this evening. Regards
Elastic support shared this: elastic/logstash#13777 (comment)
My take is that ALL versions of Logstash that use plugins (not just this one) are broken. Does elastic.co even test? Here's a reply from Elastic support:
I have the same issue; I believe this started with the new logstash-mixin-aws release. I have a working AMI running logstash-mixin-aws 5.0.0 and logstash-input-s3-sns-sqs 2.1.3, but if I run `logstash-plugin update` or `install`, it immediately breaks with the same error shown above.
We have exactly the same issue. Are new patches expected?
Am I the only one who thinks Elastic is unclear on what semantic versioning means? Also, I wonder what other dependencies are not locked down in Logstash core. They will lock down the Sinatra dependency, but what other landmines are there?
The Logstash plugin manager uses Bundler to modify the Logstash installation itself in order to build and install a dependency graph that satisfies all of the transitive dependencies of itself and its plugins. This is a fundamentally grey area, as Bundler cannot know at Logstash release time what the dependency graph for plugins will look like at a later date. We pin things somewhat optimistically, and review changes as part of the release process before locking in what ships with each Logstash release. Clearly, this can have problems when third parties release artifacts of things we depend on after we release. I have filed an issue on Logstash core to chase down being less optimistic about unrelated dependencies when installing plugins (elastic/logstash#13789).

As to why this particular issue happened: by installing this (or any other) plugin, your Logstash installation picked up a minor bump of Sinatra from the 2.x version that was tested (such as 2.1.0 with Logstash 7.16.2) to 2.2.0, which was released yesterday. Unfortunately, while Sinatra does commit to semantic versioning, and this change was semantically a minor change for Sinatra, it surfaced a bug that fundamentally broke it when running in the JRuby runtime. We have reported the bug upstream to both Sinatra (sinatra/sinatra#1749) and JRuby (jruby/jruby#7102), and reduced the optimism with which we pin Sinatra until it is resolved.

Additionally, the following patch can be applied to an installation that has been modified to include the offending Sinatra 2.2.0, to include a proposed fix for upstream Sinatra (sinatra/sinatra#1750):

```diff
diff --git a/vendor/bundle/jruby/2.5.0/gems/sinatra-2.2.0/lib/sinatra/base.rb b/vendor/bundle/jruby/2.5.0/gems/sinatra-2.2.0/lib/sinatra/base.rb
index 6ced0a6..986fe45 100644
--- a/vendor/bundle/jruby/2.5.0/gems/sinatra-2.2.0/lib/sinatra/base.rb
+++ b/vendor/bundle/jruby/2.5.0/gems/sinatra-2.2.0/lib/sinatra/base.rb
@@ -1533,10 +1533,11 @@ module Sinatra
     # Create a new instance of the class fronted by its middleware
     # pipeline. The object is guaranteed to respond to #call but may not be
     # an instance of the class new was called on.
-    def new(*args, **kwargs, &bk)
-      instance = new!(*args, **kwargs, &bk)
+    def new(*args, &bk)
+      instance = new!(*args, &bk)
       Wrapper.new(build(instance).to_app, instance)
     end
+    ruby2_keywords :new if respond_to?(:ruby2_keywords, true)
     # Creates a Rack::Builder instance with all the middleware set up and
     # the given +app+ as end point.
```
Thank you @yaauie for this comprehensive analysis and resolution initiative. I still have an unsettled concern that may be shared by other Elastic Logstash customers who rely on "immutable infrastructure" and want to reduce risk and disruption. In my case, this bug surfaced quickly because our CI/CD system runs many times a day, and the Docker build of our Logstash image happens regardless of the nature of the change that triggered it. This frequent, simplistic image building is not strictly necessary when the change is, for example, adding a client Logstash test case, and I can certainly eliminate the image build in that scenario. However, other scenarios do demand an image rebuild, and I think they fall into two categories: builds that only change something small like a Logstash pipeline config file, and builds that add a new plugin or make some other large change. I essentially want to reuse a dependency graph I previously qualified, when I assert it is safe to do so. Ideally, a build directive would be provided to help control the dependency graph analysis. Could this be considered?
Hey everyone,
It works only if you have a fresh version of Logstash without installed dependencies, but it breaks if Sinatra 2.2.0 is already installed.
But as you can see, Elastic fixed it by pinning Sinatra too. To "repair" your dependency on Sinatra, you could do something like:
Regards
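The actual repair snippet was omitted from the comment above. One hedged possibility, assuming a standard Logstash layout where the Gemfile lives at the installation root (the path below is an assumption, not from the thread), is to pin Sinatra below the broken release before re-running the plugin installer:

```ruby
# Hypothetical addition to the Logstash Gemfile (exact path varies by
# install, e.g. /usr/share/logstash/Gemfile): keep Bundler from ever
# resolving to the Sinatra 2.2.0 release that breaks under JRuby.
gem "sinatra", "< 2.2.0"
```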
I absolutely agree. As a plugin maintainer, you also shouldn't need to pin dependencies of things that aren't actually dependencies of your plugin, and the Logstash plugin manager shouldn't optimistically update unrelated dependencies by default. It would be best if our community could rely on the Logstash team to ship artifacts whose dependency graph has been extensively tested, and for that graph not to be modified unnecessarily by the plugin manager. We have filed elastic/logstash#13789 to chase that down, and have a PR in flight to make the behaviour configurable. I have also personally validated a fix from upstream Sinatra, with a patch available here to "repair" Sinatra 2.2.0 in place after it has been installed: sinatra/sinatra#1750 (comment)

@richard-mauri it is perhaps worth taking your concerns over to elastic/logstash#13789, where we discuss how optimistic we are willing to be about unrelated dependencies when the plugin manager invokes Bundler to install plugins by name; alternately, feel free to rope me in on a separate issue on Logstash core. I agree that this isn't currently an experience that is easy for a user to reason about, and that is a bug.

If immutable infrastructure is your goal, Logstash also supports offline plugin packs, which (a) contain only the explicitly named plugins and their dependencies and (b) fail to install if those dependencies don't mesh with the dependency graph of what is already installed. That way you can validate the plugin's behaviour with a specific set of versioned dependencies, distribute that set, and not rely on dependency resolution at container launch time.

As to the separately listed concerns about the base image being changed out: we certainly hear you, and we didn't make the decision lightly. With CentOS changing positions from being a stable downstream build of RHEL to effectively its "experimental upstream", alongside the RHEL/CentOS end-of-life policies, we needed to ensure that we can continue to produce our artifacts with CVEs in the underlying OS addressed in a timely manner for the duration of our 7.x maintenance window. If there are things we can do to make that transition less painful for you, please let us know on Logstash core.
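The offline plugin pack workflow mentioned above can be sketched roughly as follows. The output path and plugin name are illustrative; `prepare-offline-pack` is the documented Logstash plugin-manager subcommand, but verify the flags against the documentation for your Logstash version:

```
# On a build/qualification host, after validating the plugin set,
# bundle the named plugin and its dependencies into a pack:
bin/logstash-plugin prepare-offline-pack \
  --output /tmp/s3-sns-sqs-pack.zip \
  logstash-input-s3-sns-sqs

# On the immutable production image, install from the pack instead of
# resolving dependencies over the network at build or launch time:
bin/logstash-plugin install file:///tmp/s3-sns-sqs-pack.zip
```

Because the pack's install fails rather than silently re-resolving when its contents conflict with what is already installed, this gives the "previously qualified dependency graph" behaviour asked about above.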
Please run these steps (works on macOS with Docker Desktop):
Note: I also tried to specify a locked-down version with `--version 2.1.3`, but it doesn't help.
You will see: