bump openapi version to add support for k8s media types #1341
Conversation
/lgtm
This is weird, we need more than 2 GB of memory to run our tests now?
```ts
pre: async (requestContext: RequestContext) => {
  requestContext.setHeaderParam('Content-type', 'application/json-patch+json');
  return requestContext;
},
post: async (responseContext: ResponseContext) => responseContext,
```
Hey, one question: is it required to define both `pre` and `post`, even if you don't do anything other than return the arguments, like in your example with `post`?
I can try removing them right now and let you know, but I had them in to satisfy the TypeScript typings and to show what was possible, not necessarily because they are needed.
I don't see any default implementations of the `pre`/`post` methods, so they are most likely required for now. We could easily add a wrapper that defaults them, or even a wrapper that builds middleware/configuration from a desired MIME media type, since that's most of what we're after here.
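A minimal sketch of what such a wrapper could look like, assuming the `RequestContext`/`ResponseContext` types used in the diff above are exported by the client; the helper name `middlewareForMediaType` is hypothetical:

```ts
import { RequestContext, ResponseContext } from '@kubernetes/client-node';

// Hypothetical helper: build a middleware object for a desired MIME media type,
// defaulting `post` to a pass-through so callers only have to care about the
// Content-type header.
function middlewareForMediaType(mediaType: string) {
  return {
    pre: async (requestContext: RequestContext) => {
      requestContext.setHeaderParam('Content-type', mediaType);
      return requestContext;
    },
    post: async (responseContext: ResponseContext) => responseContext,
  };
}

// Usage: pin a request to JSON Merge Patch instead of the default JSON Patch.
// How this object is attached to a request (via the generated configuration's
// middleware list) is an assumption and depends on the generated client.
const mergePatchMiddleware = middlewareForMediaType('application/merge-patch+json');
```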
Update: you can skip the middleware altogether if you want to use the first supported content type, which is `application/json-patch+json`; for any other type you'll have to add it. This is just a workaround for now, and the ergonomics will be greatly improved with config merging in the future :)
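For reference, a minimal sketch of that no-middleware path: with the default `application/json-patch+json` content type, the body is just a plain JSON Patch (RFC 6902) document. The commented-out call is illustrative only; the method and parameter names are assumptions, not taken from this PR.

```ts
// A JSON Patch (RFC 6902) document, sent as-is when the client falls back to
// its first supported content type, application/json-patch+json.
const scalePatch = [
  { op: 'replace', path: '/spec/replicas', value: 3 },
];

// Hypothetical call shape (object-style parameters as in the 1.0 client):
// await appsApi.patchNamespacedDeployment({
//   name: 'my-app',
//   namespace: 'default',
//   body: scalePatch,
// });
```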
Thank you for explaining. I was just asking because I was curious :)
This is fine, especially as a workaround! 👍
I was able to repro this locally using the ubuntu 22.04 Docker image and running the container with a 2 GB memory limit. I'm still testing with Node's memory limited so we can get this merged.
Testing this out for now: actions/runner-images#70 (comment)
This had many different knock-on effects, which is what makes this PR so massive.

**Modules**: The first stage was to rewrite all the imports to use .js endings everywhere. Then we needed to replace and bump a bunch of dependencies to their ESM versions.

**Bundling**: Following that, we tried to use rollup to bundle everything back into CommonJS so that we could use it in pkg, since pkg does not support ESM. This was unsuccessful, and after trying several different single-application solutions, we decided we would have to write our own.

**Single Executable Binary**: We now use a custom Rust binary which bundles a zipped version of Node.js, the native extensions, and the bundled code. We bundle the code for tree-shaking and size reduction, but also because we otherwise had issues ensuring that native modules are loaded correctly and resolving the imports correctly cross-platform. On first run, the binary extracts those files to the filesystem and runs node from there. Checksum files are bundled together with the archives and stored on disk so we can tell whether a re-extract is needed. This also gives us a huge performance boost in some cases, especially when lots of filesystem reads are required, since pkg was patching and proxying those methods and making things slow.

**Kubernetes client**: We also had to update the Kubernetes library to 1.0.0-rc3, with a patch from a fork that contains the changes from kubernetes-client/javascript#1341. That is because the library was using request-promise behind the scenes, which started producing unhandled rejection warnings and errors about req not being defined in some cases. That library update required us to change almost every k8s call, since the interface now uses objects instead of positional arguments, which makes things much nicer to read and use. We also had to introduce a workaround for using custom certificates together with proxies, something that was previously globally monkey-patched by global-agent.

**request-promise**: We no longer depend on request-promise and have fully removed the dependency.

Fixes #3841
Fixes #4898

Co-authored-by: Steffen Neubauer <[email protected]>
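As a rough illustration of the positional-to-object change mentioned above (a sketch only; treat the exact method and parameter names as assumptions based on the generated 1.0 client):

```ts
import { CoreV1Api, KubeConfig } from '@kubernetes/client-node';

async function main() {
  const kc = new KubeConfig();
  kc.loadFromDefault();
  const coreApi = kc.makeApiClient(CoreV1Api);

  // Pre-1.0 style: positional arguments.
  // const pod = await coreApi.readNamespacedPod('my-pod', 'default');

  // 1.0.0-rc style: a single request object, easier to read and to extend.
  const pod = await coreApi.readNamespacedPod({ name: 'my-pod', namespace: 'default' });
  console.log(pod.metadata?.name);
}

main().catch(console.error);
```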
@davidgamero I needed to increase the memory limit to 8 GB in this PR as well in order to run the tests: #1388
@mstruebing I just merged your changes from main and it looks like it works now, but I suspect one of the packages just had an issue that's somehow consuming extra CI memory.
Hey @davidgamero, some small comments, but in general I'm happy with this PR :)
@mstruebing I'll let you merge this when ready.
I was able to make some progress isolating the GC memory issue; maybe we should open a separate issue to track it. The majority of the growing heap usage seems to be coming from SourceMapConsumer, and I found a potentially relevant link that has encouraging similarities. By changing nyc.instrument to false in package.json on line 51, the memory never exceeded 600 MB, while re-enabling that line eventually balloons up to 12 GB, which was the most I was willing to test. I will continue investigating upgrading our indirect source-map dependency so we can hopefully get this resolved.
/lgtm Thank you very much for this effort 🙏
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: brendandburns, davidgamero, mstruebing, OliverMKing. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Regenerate after bumping the openapi-generator version to include OpenAPITools/openapi-generator#16386
ref #754
This should unblock the routes that are currently blocked by missing media types, and I've included an example for patch showing how it can be done (sketched below).
The next priority would ideally be a fully custom object serializer, or simply adding the ability to merge header maps in the `options` parameter. For now, this provides a way to get things working, even if it's a bit verbose.
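Based on the diff snippet reviewed earlier in this thread, the verbose workaround amounts to a middleware like the one below. This is a sketch only: how the middleware object is attached to a request through the generated configuration/options is an assumption and depends on the generated client.

```ts
import { RequestContext, ResponseContext } from '@kubernetes/client-node';

// Middleware that pins the request media type for routes that need something
// other than the generator's default choice. This is the verbosity the
// description refers to: every non-default media type needs its own pre/post
// pair until config merging lands.
const jsonPatchHeaderMiddleware = {
  pre: async (requestContext: RequestContext) => {
    requestContext.setHeaderParam('Content-type', 'application/json-patch+json');
    return requestContext;
  },
  post: async (responseContext: ResponseContext) => responseContext,
};
```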