There is no built-in feature in Travis to merge all the artifacts and deploy to GitHub just once, instead of having each job edit the release. An external storage service, such as S3, is required to merge all of them into a single directory and then deploy all at once. See https://docs.travis-ci.com/user/build-stages/#Data-persistence-between-stages-and-jobs. Luckily, #489 includes using Docker Hub, and we can reuse it as a replacement for S3. This avoids having to handle credentials for a fourth service. The scheme is as follows:
Each job in stage 1 (see #489) builds its image after tests are successfully executed. The difference is that the images from #489 are based on the base images explained above, while these are based on scratch. I.e., each contains only ghdl-*-stretch-mcode.tgz.
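A scratch-based wrapper image as described above might be produced like this (a sketch only; the Dockerfile name, image tag, and build/push invocations are assumptions, not from #489):

```shell
#!/usr/bin/env sh
# Hypothetical sketch: after tests pass, a stage-1 job wraps its tarball in a
# scratch-based image so Docker Hub can serve as the artifact storage.
cat > Dockerfile.pkg <<'EOF'
FROM scratch
COPY ghdl-*-stretch-mcode.tgz /
EOF
# The actual build/push would need Docker Hub credentials, e.g.:
#   docker build -t ghdl/ghdl:stretch-mcode -f Dockerfile.pkg .
#   docker push ghdl/ghdl:stretch-mcode
```

Because the image is `FROM scratch`, pulling it costs only the size of the tarball itself.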
Travis stage 3, named Pack artifacts
This stage mimics the nested loops of stage 0 (see #489). However, instead of building anything, it pulls the images created in the modified stage 1 and merges all the tarballs into a single directory. Then, a single deploy can be triggered from this stage.
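The pull-and-merge loop described above might be sketched as follows (the distro/backend lists, image tag scheme, and in-image paths are assumptions):

```shell
#!/usr/bin/env sh
# Hypothetical sketch of the "Pack artifacts" stage: re-run the stage-0 style
# nested loops, but instead of building, collect each job's tarball from its
# scratch-based image into one directory for a single GitHub deploy.
set -e
mkdir -p dist
command -v docker >/dev/null || { echo "docker not available; skipping"; exit 0; }
for os in stretch fedora; do          # assumed distro list
  for backend in mcode llvm; do      # assumed backend list
    tag="ghdl/ghdl:${os}-${backend}" # assumed tag scheme from #489
    cid=$(docker create "$tag")      # create (pulls if needed) without running
    docker cp "$cid:/ghdl-${os}-${backend}.tgz" dist/
    docker rm "$cid" >/dev/null
  done
done
# 'dist/' now holds every tarball; a single 'deploy:' entry in this stage
# can push the whole directory to the GitHub release.
```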
This stage is especially useful for the following reasons:
As of now, tarballs are generated 'raw', i.e., there is no build info, license, copying... This stage allows the tarballs to be enhanced with the missing information/files prior to pushing them to GitHub. @tgingold 2017-02-14
I think it is better to handle this here, and not in the build/test stage. This is because the tarballs in stage 1 (and, therefore, ghdl/ghdl images) are generated before the tests are run.
Additional meta-information can be added here:
The versions of the dependencies which were used to build the tarball: make, gcc, gnat, clang...
The date the tarball/image was created.
The size of ghdl binaries, ghdl libraries and images.
Assuming that AppVeyor builds do not take much longer than all the Linux builds plus the macOS build, AppVeyor artifacts can be retrieved here and added to the release deploy. The same applies to PDFs generated on RTD.
...
The info gathered here can be used to enhance the downloads section of the documentation.
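The meta-information step above (dependency versions, date, sizes) could be sketched like this; the staging directory name, file name, and tool list are assumptions:

```shell
#!/usr/bin/env sh
# Hypothetical sketch: write a BUILD.txt with meta-information into the
# staging dir before repacking the tarballs for the GitHub release.
set -e
mkdir -p stage
{
  echo "date: $(date -u +%Y-%m-%dT%H:%M:%SZ)"   # when the tarball was packed
  echo "host: $(uname -sm)"                      # build platform
  # Dependency versions would be queried here, e.g.:
  #   echo "gcc: $(gcc --version | head -n1)"
  #   echo "gnat: $(gnatmake --version | head -n1)"
  echo "size: $(du -sk stage | cut -f1) KiB"     # size of staged artifacts
} > stage/BUILD.txt
```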
Related to the nightly builds mentioned in USE_CASES.md, images will always contain tarballs corresponding to the latest successful build. It is kind of stupid to require Docker just to download a tarball, even if a 3-4 line shell script can make it straightforward. Once again, play-with-docker can be used to execute the script, but this requires a Docker ID. Still, acquiring the tgz is just a hopefully useful side effect, not the main feature.
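The "3-4 line shell script" mentioned above might look like this (the image tag and the tarball's in-image path are assumptions; `docker cp` needs an explicit path since it does not expand globs):

```shell
#!/usr/bin/env sh
# Hypothetical helper: fetch the tarball out of the nightly image without
# running a container. Skips silently if docker is not installed.
if command -v docker >/dev/null; then
  cid=$(docker create ghdl/ghdl:stretch-mcode)   # assumed tag
  docker cp "$cid:/ghdl-stretch-mcode.tgz" .     # assumed path inside image
  docker rm "$cid" >/dev/null
fi
```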
From ghdl/ghdl#477
Packing, integration with Appveyor and RTD