Conda pack support #213
base: master
Conversation
Force-pushed from 9d34cbb to 08b7bdc
This will probably need to wait for #206 to land before this change.
scripts/build-pack.sh
### This script is, in principle, similar to download-env.sh
### Notable differences:
### - download-env does not download everything, but depends on system variables
### - download-env uses user's site-packages
This is not true. download-env uses a self-contained environment.
I will double check it, but I believe I was able to use a local package as a dependency.
This should not be possible.
I think the PYTHONNOUSERSITE=1 you added in eb927b8 fixes it.
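For reference, the effect of that flag can be checked quickly; this is a generic sketch, not part of the LXBE scripts:

```shell
# When PYTHONNOUSERSITE is set, Python disables the user's
# site-packages directory (~/.local/lib/...) entirely, which keeps
# a conda environment self-contained.
export PYTHONNOUSERSITE=1
python3 -c 'import site; print(site.ENABLE_USER_SITE)'   # prints False
```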
cat $ENV_TEMPLATE > $ENV_FILE
echo " - conda=$CONDA_VERSION" >> $ENV_FILE
echo " - python=$PYTHON_VERSION" >> $ENV_FILE
echo " - openocd=$OPENOCD_VERSION" >> $ENV_FILE
I think it would be better to extract the versions from the environment.yml file and remove the settings.sh file?
I think it makes sense; parsing it will be trivial and will allow us to drop this quasi-auto-generation of the yml. It's a shame Conda does not allow variable substitution. Even python-pip does.
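A sketch of what that extraction could look like; the `get_version` helper and the demo file are hypothetical, and assume pins of the form `- name=version` in environment.yml:

```shell
# Hypothetical helper: extract a pinned version from an environment.yml,
# assuming entries of the form "  - python=3.7" under dependencies:.
get_version() {
  # $1 = package name, $2 = path to the yml file
  sed -n "s/^[[:space:]]*-[[:space:]]*$1=\(.*\)$/\1/p" "$2"
}

# Demo with a stand-in file (not the real environment.yml)
cat > /tmp/env-demo.yml <<'EOF'
dependencies:
  - conda=4.7.10
  - python=3.7
EOF
get_version python /tmp/env-demo.yml   # prints 3.7
```

This would remove the need for the version variables in settings.sh, at the cost of a small amount of ad-hoc parsing.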
@mithro So we gave some thought to it and this calls for more discussion.
- download-env.sh installs packages on per-need basis
- build-pack.sh installs all the packages at once
- the versions are the same in both scenarios
- some packages are downloaded via pip, some are via conda.
Right now cmake is the only pip-installed package that has a version specified. It turns out that Conda has it available in an older, but still suitable, version.
This means we could have multiple requirements.txt files in scripts/dependencies. There could be requirements.txt for common stuff, requirements-zephyr.txt, requirements-tinyfpga_b2.txt etc.
This way we could:
a) have a clear distinction of what requires which packages
b) install via pip install -r in both download-env and build-pack.
For Conda we can do it similarly, and have multiple requirement files for different dependencies, and then use them all to create a pack.
There are, allegedly, some problems with using files as a source for conda install, but I'm sure it's manageable. The format is also not really specified, but I think that pip files work fine, just like yml files.
Parsing files is something we'd prefer to avoid, for example to be able to easily switch the format/backend in the future.
Summing up, the flow would be:

    download-env.sh:
        if zephyr then ensure_deps_for(zephyr)
        if sth_else then ensure_deps_for(sth_else)

    build-pack.sh:
        ensure_all_deps()

    ensure_deps_for(x):
        // in future: support docker/local installations
        conda install -f conda_deps_for_$x
        pip install -r pip_deps_for_$x

    ensure_all_deps():
        for x in deps:
            ensure_deps_for(x)
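A runnable shell sketch of that flow might look like the following; the scripts/dependencies layout and the requirements file names are assumptions for illustration, not existing LXBE paths:

```shell
# Sketch of the proposed flow. The file names under scripts/dependencies
# (conda-requirements-<x>.txt, requirements-<x>.txt) are hypothetical.
DEPS_DIR="scripts/dependencies"

ensure_deps_for() {
  # In the future this could also support docker/local installations.
  local x="$1"
  if [ -f "$DEPS_DIR/conda-requirements-$x.txt" ]; then
    conda install -y --file "$DEPS_DIR/conda-requirements-$x.txt"
  fi
  if [ -f "$DEPS_DIR/requirements-$x.txt" ]; then
    pip install -r "$DEPS_DIR/requirements-$x.txt"
  fi
}

ensure_all_deps() {
  local f name
  for f in "$DEPS_DIR"/requirements-*.txt; do
    [ -e "$f" ] || continue
    name="$(basename "$f" .txt)"
    ensure_deps_for "${name#requirements-}"
  done
}
```

download-env.sh would then call `ensure_deps_for zephyr` only when needed, while build-pack.sh would call `ensure_all_deps`.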
What's your opinion on that?
I like the idea of having:
requirements-<board>.txt
requirements-<firmware>.txt
requirements-<cpu>.txt
I'm unclear if it should be separate firmware / cpu files or if it should be requirements-<firmware>-<cpu>?
Another option is that we could use dummy conda packages with the requirements as dependencies?
I believe it will be enough to have requirements-<something>.txt, with only one dimension. If more is needed, we can always add it.
# flavors.
# Some conda packages are not there yet (namely x86_64 and lm32 with musl),
# so we comment these out.
for ARCH in or1k riscv32 lm32 x86_64; do
I don't think x86_64 really makes sense? Or?
Not per se; it was actually an emulation of CPU=none. Is there any reasonable usage of this setting at the moment?
Actually, CPU=none is a good thing for us to support, so let's make sure it works.
How can it be used in a meaningful way? I mean, how do we define "works"?
For a compiler, test that it is able to compile something?
Do you mean an x86 compiler? I don't really understand the way you'd like to use it. CPU=none ./scripts/build-linux.sh could make sense in a way, but it has nothing to do with LiteX or the compilers we usually use; it's just a generic kernel compilation within a conda env. The same applies to Zephyr or micropython.
To effectively include this option in the general flow, I need to understand how it can be useful.
fi
echo " - binutils-$ARCH-elf=$BINUTILS_VERSION" >> $ENV_FILE
for FLAVOR in elf-nostdc elf-newlib linux-musl; do
if [[ "$ARCH" == "x86_64" || ( "$ARCH" == "lm32"
A case statement here is probably clearer.
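For reference, a case-based version might look like this; the skip conditions are inferred from the comment above (no conda packages yet for x86_64, or for lm32 with musl) and the truncated if, and the helper name is made up:

```shell
# Hypothetical helper replacing the compound if: decide whether a given
# arch/flavor pair has no conda package yet and should be skipped.
skip_flavor() {
  case "$1/$2" in
    x86_64/*)        return 0 ;;  # no x86_64 packages yet
    lm32/linux-musl) return 0 ;;  # lm32 with musl not packaged yet
    *)               return 1 ;;
  esac
}
```

Inside the loop this would read as `skip_flavor "$ARCH" "$FLAVOR" && continue`.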
# conda env create does not seem to take PYTHONNOUSERSITE=1 very seriously.
# We're not installing conda-pack from conda because it requires conda-forge
# source.
pip install conda-pack \
Can you use a requirements.txt file?
please see #213 (comment)
- gperf
- pyyaml
- ipython
- pyserial
nextpnr is only needed for ice40 / ecp5 platforms?
template-environment.yml is used to create a one-pack-that-has-all. So the idea here was to gather all the deps in one place. Is this in line with what you wanted to have?
dependencies:
- fxload
- icefunprog
- iceprog
icefunprog and iceprog are only needed for certain boards?
see above
- fxload
- icefunprog
- iceprog
- ninja
What needs ninja?
Both gperf and ninja are needed for the Zephyr SDK (and the west tool).
- icefunprog
- iceprog
- ninja
- gperf
What needs gperf?
Both gperf and ninja are needed for the Zephyr SDK (and the west tool).
Can you add some comments?
Force-pushed from 50c54ef to 08b7bdc
Please rebase onto master.
source $ENV_PATH/bin/activate
conda-unpack

# Some packages we use (like openocd or iceprog) rely on host GCC and binutils
Can we fix them so they don't rely on host gcc / binutils?
If I understand correctly, the reason for these dependencies is here: https://github.com/litex-hub/litex-conda-packages/blob/master/lib/ftdi/meta.yaml#L31-L33
I can try removing these dependencies unless you see an obvious reason not to.
These functions are exactly the same in both download-env.sh and enter-env.sh. They will also be used in the conda-pack-related scripts.
This script downloads all LiteX-BuildEnv dependencies and creates a tar.gz file. The file can then be used as a fully ready environment for LXBE.
This script is responsible for deploying and activating the conda-pack environment. It requires the usual environment variables (CPU, PLATFORM, etc.) to be set. It can be sourced multiple times.
Force-pushed from b6022d9 to a8acfac
@mateusz-holenko -- Could you work with @PiotrZierhoffer to get something like this pull request merged?
This PR depends on #212.
It adds two scripts, build-pack.sh and enter-pack.sh, that allow creating and using the package.
build-pack.sh can be started on a clean lxbe repo, building a package in ./build/lxbe-env.tar.gz. It weighs about 1.3 GB.
The pack contains a Conda environment, which is subsequently used by enter-pack.
enter-pack.sh must be sourced and requires the usual variables to be set (e.g. export TARGET=base FIRMWARE=zephyr PLATFORM=arty CPU_VARIANT=full CPU=vexriscv). It unpacks the provided tar.gz (or uses the already unpacked directory) and activates the environment. This is an important change compared to the way enter-env.sh operates (that is, using the default env without activation).
After the activation, the usual tools can be used to build/run/flash things.