Test all product bundles and NuGet packages for freshness and flag inconsistencies to catch problems before they surface as unpatched components.
We have a very sophisticated build system that intends to compose a consistent and patched product. The goal is that developers can make fixes and trust that their fixes will make it into the product. This is especially important in servicing where we need to fix critical vulnerabilities and ensure those are fixed in the final product.
We have no tests that ensure this is the case, and we have multiple systems that allow components to consume pre-built binaries (NuGet, the .NET SDK, etc.). Today we have some manual processes and prototype tests that help us scan the bits we ship, find out-of-date binaries, and ensure we update, build, and flow more repositories. This process is extremely expensive - not in complexity, but in time - due to fragility in systems, manual processes, iterations, etc.
We need to add tests to the product to ensure that we are composing the live-built binaries. These tests are also important for the VMR, which drastically changes how we build the product. These tests will point out places where the product ships stale bits.
The gist of the test methodology is to scan all the things we intend to ship - diving into containers to identify files. We can identify components with information like file name, target framework, assembly identity, and file version. We can also identify references to components from packages and deps files.
We can assert consistency for a set of these things, while using others as selection keys. For example: given a file name and target framework, the assembly identity (including public key and version) should be the same, and the file version should be the same.
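As a rough sketch of this selection-key approach (the field names and sample inventory here are illustrative, not the actual scanner's data model), the test groups scanned file entries by the key fields and flags any asserted field whose value differs within a group:

```python
from collections import defaultdict

def find_inconsistencies(inventory, key_fields, asserted_fields):
    """Group scanned file entries by a selection key and report any
    asserted field whose values differ within a group."""
    groups = defaultdict(list)
    for entry in inventory:
        groups[tuple(entry[f] for f in key_fields)].append(entry)
    problems = []
    for key, entries in groups.items():
        for field in asserted_fields:
            values = {e[field] for e in entries}
            if len(values) > 1:
                problems.append((key, field, sorted(values)))
    return problems

# Hypothetical inventory: the same assembly appears in two shipping
# containers with mismatched file versions - a stale-bits signal.
inventory = [
    {"name": "System.Text.Json.dll", "tfm": "net8.0",
     "assembly_version": "8.0.0.0", "file_version": "8.0.123.45678"},
    {"name": "System.Text.Json.dll", "tfm": "net8.0",
     "assembly_version": "8.0.0.0", "file_version": "8.0.100.11111"},
]
print(find_inconsistencies(inventory, ["name", "tfm"],
                           ["assembly_version", "file_version"]))
```

Here (name, tfm) is the selection key, and assembly version and file version are the asserted fields; the example reports one inconsistency, on file_version.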
We can't assert that file hashes are the same, because there will be building per vertical. There are also IL vs. crossgen'ed bits that might appear. If we wanted, we could also have a means for calculating all of these and including them as comparison keys. Such a comparison would be less about consistency, though, and more about reducing redundant work (like building, signing, or crossgening a binary more than once when it could be shared).
We can assert consistency with externally provided data - which can help in cases where we need to feed external patch information into the testing. For example: Newtonsoft.Json has a patch that isn't part of our builds, but we'd like to assert that every part of our product is updated to that patch.
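One way to express that external patch data is as a minimum-version floor per file name (a sketch; the file names and versions below are placeholders, not real patch data):

```python
# Hypothetical external patch data: the minimum acceptable file version
# for components patched outside our build.
patch_floor = {"Newtonsoft.Json.dll": (13, 0, 3)}

def parse_version(version):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def stale_files(inventory, floor):
    """Return every scanned entry whose file version falls below the
    externally supplied patch floor for that file name."""
    stale = []
    for entry in inventory:
        minimum = floor.get(entry["name"])
        if minimum is not None and parse_version(entry["file_version"]) < minimum:
            stale.append(entry)
    return stale

inventory = [
    {"name": "Newtonsoft.Json.dll", "file_version": "13.0.1"},
    {"name": "Newtonsoft.Json.dll", "file_version": "13.0.3"},
]
print(stale_files(inventory, patch_floor))
```

Any entry below the floor is reported, regardless of whether the copies of the file are consistent with each other - which is the difference between this check and the pure consistency assertions above.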
We'll need a system for baselining because I expect that at first we'll have many problems of this variety in our build.
We can start by baselining everything, then driving down the baselines through fixes:
a) Don't redistribute binaries you don't need to.
b) Move redistribution to a later point in the build that can include the live bits.
c) Don't use a package when you can reference the component from the framework.
d) Multi-target to avoid a package dependency.
e) (eventually) Rely on NuGet's supplied-by-framework support to drop mentions of packages from deps files and avoid accidental redistribution.
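The baselining mechanism itself can be simple (a sketch, with made-up problem identifiers): report only failures not in the baseline, and also surface baseline entries that no longer reproduce, so the baseline can be driven down as the fixes above land:

```python
def evaluate(problems, baseline):
    """Compare current consistency problems against a baseline of known
    issues.  Returns the new failures that should fail the test run, and
    the stale baseline entries that no longer reproduce and can be
    removed from the baseline file."""
    problems = set(problems)
    new_failures = sorted(problems - baseline)
    stale_baseline = sorted(baseline - problems)
    return new_failures, stale_baseline

# Hypothetical problem identifiers, not real repository data.
baseline = {"repoA redistributes System.Text.Json.dll 8.0.100"}
current = {"repoB ships stale Newtonsoft.Json.dll 13.0.1"}
print(evaluate(current, baseline))
```

Failing the run when the baseline contains fixed entries keeps the baseline honest and ratchets it toward empty.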
We'd like to develop these inside the product validation suite being produced for the VMR and share them with others as much as possible to drive validation upstream.