Cannot install flax because these package versions have conflicting dependencies. #65

Open
pr1ntr opened this issue Jun 14, 2022 · 15 comments

pr1ntr commented Jun 14, 2022

ERROR: Cannot install flax because these package versions have conflicting dependencies.

The conflict is caused by:
    optax 0.1.2 depends on jaxlib>=0.1.37
    optax 0.1.1 depends on jaxlib>=0.1.37
    optax 0.1.0 depends on jaxlib>=0.1.37
    optax 0.0.91 depends on jaxlib>=0.1.37
    optax 0.0.9 depends on jaxlib>=0.1.37
    optax 0.0.8 depends on jaxlib>=0.1.37
    optax 0.0.6 depends on jaxlib>=0.1.37
    optax 0.0.5 depends on jaxlib>=0.1.37
    optax 0.0.3 depends on jaxlib>=0.1.37
    optax 0.0.2 depends on jaxlib>=0.1.37
    optax 0.0.1 depends on jaxlib>=0.1.37

Win 11
Python 3.10.5
PIP 22.0.4

Phildo commented Jun 14, 2022

Same:

ERROR: Cannot install -r requirements.txt (line 4), flax and transformers because these package versions have conflicting dependencies.

The conflict is caused by:
    optax 0.1.2 depends on jaxlib>=0.1.37
    optax 0.1.1 depends on jaxlib>=0.1.37
    flax 0.5.0 depends on typing-extensions>=4.1.1
    huggingface-hub 0.1.0 depends on typing-extensions
    jax 0.3.0 depends on typing_extensions
    optax 0.1.0 depends on typing-extensions~=3.10.0
    flax 0.5.0 depends on typing-extensions>=4.1.1
    huggingface-hub 0.1.0 depends on typing-extensions
    jax 0.3.0 depends on typing_extensions
    optax 0.0.91 depends on typing-extensions~=3.10.0
    optax 0.0.9 depends on jaxlib>=0.1.37
    optax 0.0.8 depends on jaxlib>=0.1.37
    optax 0.0.6 depends on jaxlib>=0.1.37
    optax 0.0.5 depends on jaxlib>=0.1.37
    optax 0.0.3 depends on jaxlib>=0.1.37
    optax 0.0.2 depends on jaxlib>=0.1.37
    optax 0.0.1 depends on jaxlib>=0.1.37

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

Win 10
Python 3.9.13
PIP 22.1.2
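
One way to act on pip's first suggestion, on a platform where jaxlib wheels actually exist (Linux or WSL2; at the time of these reports jaxlib published no native Windows wheels), is to pre-install a recent optax and typing-extensions so the resolver never backtracks into the optax releases that pin typing-extensions~=3.10.0. A minimal sketch, not taken from the thread:

    # sketch: pin the packages that trigger backtracking before installing flax
    pip install "optax>=0.1.2" "typing-extensions>=4.1.1"
    pip install flax transformers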

dmwyatt commented Jun 15, 2022

I managed to get it installed in WSL2 instead of straight on Windows.

dmwyatt commented Jun 15, 2022

Had to get cuda toolkit, cudnn, troubleshooting issues and stuff...but it's working great in WSL2.

Using the mega model version takes ~30 seconds per image with a GeForce GTX 1080, 32 GB RAM, Ryzen 5900X.

Phildo commented Jun 15, 2022

> Had to get cuda toolkit, cudnn, troubleshooting issues and stuff...but it's working great in WSL2.

Installed WSL2, the cuda toolkit, and cudnn. Getting a bunch of errors booting up mega, the first being:

--> Starting DALL-E Server. This might take up to two minutes.
Traceback (most recent call last):

File "/home/phildo/.local/lib/python3.8/site-packages/dalle_mini/model/utils.py", line 23, in from_pretrained
File "/home/phildo/.local/lib/python3.8/site-packages/wandb/apis/public.py", line 3885, in download
File "/usr/lib/python3.8/multiprocessing/pool.py", line 364, in map
File "/usr/lib/python3.8/multiprocessing/pool.py", line 771, in get
File "/usr/lib/python3.8/multiprocessing/pool.py", line 125, in worker
File "/usr/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
File "/home/phildo/.local/lib/python3.8/site-packages/wandb/apis/public.py", line 3979, in _download_file
File "/home/phildo/.local/lib/python3.8/site-packages/wandb/apis/public.py", line 3355, in download
File "/home/phildo/.local/lib/python3.8/site-packages/wandb/sdk/wandb_artifacts.py", line 912, in load_file
File "/usr/lib/python3.8/contextlib.py", line 120, in __exit__
File "/home/phildo/.local/lib/python3.8/site-packages/wandb/sdk/interface/artifacts.py", line 948, in helper
File "/usr/lib/python3.8/contextlib.py", line 120, in __exit__
File "/home/phildo/.local/lib/python3.8/site-packages/wandb/util.py", line 1430, in fsync_open
OSError: [Errno 5] Input/output error

During handling of the above exception, another exception occurred:

(then a bunch more errors)

EDIT: nvm- problem was due to full hard drive 🤦
Leaving previous comment for others in similar situation.

However- I still can't get it to work on WSL2. Now I'm getting

--> Starting DALL-E Server. This might take up to two minutes.
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)

followed by an error Some of the weights of DalleBart were initialized in float16 precision from the model checkpoint at /tmp/tmpmx4i52le:, followed by a huge list of strings

Some of the weights of DalleBart were initialized in float16 precision from the model checkpoint at /tmp/tmpmx4i52le:
[('lm_head', 'kernel'), ('model', 'decoder', 'embed_positions', 'embedding'), ('model', 'decoder', 'embed_tokens', 'embedding'), ('model', 'decoder', 'final_ln', 'bias'), ('model', 'decoder', 'layernorm_embedding', 'bias'), ('model', 'decoder', 'layernorm_embedding', 'scale'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_0', 'k_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_0', 'out_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_0', 'q_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_0', 'v_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_1', 'k_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_1', 'out_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_1', 'q_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_1', 'v_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'GLU_0', 'Dense_0', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'GLU_0', 'Dense_1', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'GLU_0', 'Dense_2', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'GLU_0', 'LayerNorm_0', 'bias'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'GLU_0', 'LayerNorm_1', 'bias'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'LayerNorm_0', 'bias'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'LayerNorm_1', 'bias'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'LayerNorm_1', 'scale'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'LayerNorm_2', 'bias'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'LayerNorm_3', 'bias'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'LayerNorm_3', 'scale'), ('model', 'encoder', 'embed_positions', 'embedding'), ('model', 'encoder', 'embed_tokens', 'embedding'), ('model', 'encoder', 'final_ln', 'bias'), ('model', 'encoder', 'layernorm_embedding', 'bias'), ('model', 'encoder', 'layernorm_embedding', 'scale'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'FlaxBartAttention_0', 'k_proj', 'kernel'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'FlaxBartAttention_0', 'out_proj', 'kernel'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'FlaxBartAttention_0', 'q_proj', 'kernel'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'FlaxBartAttention_0', 'v_proj', 'kernel'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'GLU_0', 'Dense_0', 'kernel'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'GLU_0', 'Dense_1', 'kernel'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'GLU_0', 'Dense_2', 'kernel'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'GLU_0', 'LayerNorm_0', 'bias'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'GLU_0', 'LayerNorm_1', 'bias'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'LayerNorm_0', 'bias'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'LayerNorm_1', 'bias'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'LayerNorm_1', 'scale')]
You should probably UPCAST the model weights to float32 if this was not intended. See [`~FlaxPreTrainedModel.to_fp32`] for further information on how to do this.
/home/phildo/jax/jax/_src/ops/scatter.py:87: FutureWarning: scatter inputs have incompatible types: cannot safely cast value from dtype=float16 to dtype=float32. In future JAX releases this will result in an error.
  warnings.warn("scatter inputs have incompatible types: cannot safely cast "

Then it appears to hang, with no further output.

I'm able to load up the front end, but get the error "Error querying DALL-E service. Check your backend server logs." when trying to prompt anything.

I even followed this PR's instructions for installing it on WSL2 ( 0e6cfb2 ), including building jax from source (which, warning: takes like an hour!).

Any advice @dmwyatt? I'm on Win 10 with a GTX 1080 Ti (with a fresh default installation of WSL2).
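
A quick way to narrow down the WARNING:absl:No GPU/TPU found message above (a hedged check, not from this thread) is to ask jax which backend it actually loaded; a CPU-only jaxlib prints cpu here even when the CUDA toolkit itself is installed correctly:

    # prints e.g. "gpu [GpuDevice(id=0)]" on a working install, "cpu [CpuDevice(id=0)]" otherwise
    python -c "import jax; print(jax.default_backend(), jax.devices())"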

SantinoPetrovic commented Jun 24, 2022

Having the same issue:

ERROR: Cannot install -r requirements.txt (line 4), flax and transformers because these package versions have conflicting dependencies.
The conflict is caused by:
    optax 0.1.2 depends on jaxlib>=0.1.37
    optax 0.1.1 depends on jaxlib>=0.1.37
    flax 0.5.0 depends on typing-extensions>=4.1.1
    huggingface-hub 0.1.0 depends on typing-extensions
    jax 0.3.0 depends on typing_extensions
    optax 0.1.0 depends on typing-extensions~=3.10.0
    flax 0.5.0 depends on typing-extensions>=4.1.1
    huggingface-hub 0.1.0 depends on typing-extensions
    jax 0.3.0 depends on typing_extensions
    optax 0.0.91 depends on typing-extensions~=3.10.0
    optax 0.0.9 depends on jaxlib>=0.1.37
    optax 0.0.8 depends on jaxlib>=0.1.37
    optax 0.0.6 depends on jaxlib>=0.1.37
    optax 0.0.5 depends on jaxlib>=0.1.37
    optax 0.0.3 depends on jaxlib>=0.1.37
    optax 0.0.2 depends on jaxlib>=0.1.37
    optax 0.0.1 depends on jaxlib>=0.1.37

Talkashie commented Jul 12, 2022

Is there any solution to this? I'm also getting stuck here.

ERROR: Cannot install -r requirements.txt (line 4), flax and transformers because these package versions have conflicting dependencies.

The conflict is caused by:
    optax 0.1.2 depends on jaxlib>=0.1.37
    optax 0.1.1 depends on jaxlib>=0.1.37
    flax 0.5.0 depends on typing-extensions>=4.1.1
    huggingface-hub 0.1.0 depends on typing-extensions
    jax 0.3.0 depends on typing_extensions
    optax 0.1.0 depends on typing-extensions~=3.10.0
    flax 0.5.0 depends on typing-extensions>=4.1.1
    huggingface-hub 0.1.0 depends on typing-extensions
    jax 0.3.0 depends on typing_extensions
    optax 0.0.91 depends on typing-extensions~=3.10.0
    optax 0.0.9 depends on jaxlib>=0.1.37
    optax 0.0.8 depends on jaxlib>=0.1.37
    optax 0.0.6 depends on jaxlib>=0.1.37
    optax 0.0.5 depends on jaxlib>=0.1.37
    optax 0.0.3 depends on jaxlib>=0.1.37
    optax 0.0.2 depends on jaxlib>=0.1.37
    optax 0.0.1 depends on jaxlib>=0.1.37

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

EDIT: I seem to have fixed most of the issues by installing dependencies separately and creating a new environment.
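
A plausible reading of that fix (a sketch; Talkashie doesn't spell out the exact order) is a fresh virtual environment with the jax stack resolved before the packages that depend on it:

    # sketch: fresh environment, then install in dependency order
    python -m venv dalle-env
    source dalle-env/bin/activate          # on Windows: dalle-env\Scripts\activate
    pip install --upgrade pip
    pip install jax jaxlib                 # resolve jax/jaxlib first
    pip install optax flax transformers    # then the packages pinned against them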

karyeet commented Jul 21, 2022

I recommend just installing on WSL2 if you wish to use Windows. The install will complete without issue.
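
For reference, the minimal WSL2 route looks roughly like this (a sketch assuming the default Ubuntu distribution; not a verbatim recipe from this thread):

    # from an elevated PowerShell on Windows 10/11:
    wsl --install -d Ubuntu
    # then, inside the Ubuntu shell:
    sudo apt update && sudo apt install -y python3-pip
    pip3 install -r requirements.txt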

@gadams999

@karyeet I've got it working in WSL2 too, but realized I don't have access to all GPU RAM. Do you know what the system memory and GPU memory requirements are for Mega_full?

karyeet commented Aug 2, 2022

@gadams999

> Had to get cuda toolkit, cudnn, troubleshooting issues and stuff...but it's working great in WSL2.
>
> Using the mega model version takes ~30 seconds per image with a GeForce GTX 1080, 32 GB RAM, Ryzen 5900X.

As dmwyatt points out, make sure you have the cuda toolkit and cudnn installed.
Also make sure you have jax[cuda] installed.

I could only run mini on a 3060 6GB; this post suggests you need at least 16GB of VRAM for mega_full.

Likely 8GB is needed for mega.
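
To confirm each piece karyeet lists is actually in place inside WSL2, a few quick checks (a sketch; exact output varies by driver and CUDA version):

    nvidia-smi                   # GPU and driver visible from inside WSL2
    nvcc --version               # CUDA toolkit on PATH
    pip list | grep -i jax       # jax and a CUDA-enabled jaxlib should both appear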

@sameermahajan

Any resolution for this? I cannot install tensorflowjs on my Windows machine due to this.

mnai01 commented Apr 28, 2023

> Any resolution for this? I cannot install tensorflowjs on my Windows machine due to this.

Same exact problem when running pip install tensorflowjs

reidpat commented Jul 10, 2023

I am sadly also having this error when trying to install tensorflowjs on my windows machine.

@saharmor (Owner)

@reidpat have you tried the solutions mentioned in this thread? If so, please paste the error you are getting

@yhann0827

I'm also having the same error. Anyone got any ideas on how to solve this?

@AndreAlbu

    pip install --upgrade "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
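
For context, not stated explicitly in the thread but consistent with the reports above: the -f flag points pip at Google's jaxlib wheel index, and those CUDA-enabled jaxlib wheels are published for Linux only, which is why this command works under WSL2 while the same install on native Windows ends in ResolutionImpossible.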
