
Where should I put sd_turbo safetensor file in? #16

Open · QuantumLiu opened this issue Jan 7, 2025 · 5 comments

@QuantumLiu

I downloaded sd_turbo.safetensors manually because of network issues, so where should I put the file?

python inference_invsr.py -i ../test_images/inputs/longines/ -o ../test_images/results/longines --sd_path weights/stabilityai/sd_turbo.safetensors
@imesu2378

I had the same problem due to working offline.

Set the option to --sd_path weights/, and place the sd-turbo files downloaded manually from https://huggingface.co/stabilityai/sd-turbo under the following paths:

weights/noise_predictor_sd_turbo_v5.pth
weights/models--stabilityai--sd-turbo/snapshots/b261bac6fd2cf515557d5d0707481eafa0485ec2/model_index.json
weights/models--stabilityai--sd-turbo/snapshots/b261bac6fd2cf515557d5d0707481eafa0485ec2/unet/...
weights/models--stabilityai--sd-turbo/snapshots/b261bac6fd2cf515557d5d0707481eafa0485ec2/text_encoder/...
weights/models--stabilityai--sd-turbo/snapshots/b261bac6fd2cf515557d5d0707481eafa0485ec2/vae/...
weights/models--stabilityai--sd-turbo/snapshots/b261bac6fd2cf515557d5d0707481eafa0485ec2/scheduler/...
weights/models--stabilityai--sd-turbo/snapshots/b261bac6fd2cf515557d5d0707481eafa0485ec2/tokenizer/...
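
If it is easier, the same layout can also be produced automatically. A short Python sketch (assuming huggingface_hub is installed and the Hub, or a mirror, is reachable from at least one machine): snapshot_download with cache_dir="weights" writes exactly the models--stabilityai--sd-turbo/snapshots/<revision>/... structure listed above. Otherwise the files can simply be arranged by hand as shown.

from huggingface_hub import snapshot_download

# Downloads the pinned revision into the standard HF cache layout under weights/,
# i.e. weights/models--stabilityai--sd-turbo/snapshots/b261bac.../{unet,vae,...}
local_snapshot = snapshot_download(
    repo_id="stabilityai/sd-turbo",
    revision="b261bac6fd2cf515557d5d0707481eafa0485ec2",
    cache_dir="weights",
)
print(local_snapshot)  # absolute path to the snapshot folder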

@zsyOAOA
Owner

zsyOAOA commented Jan 13, 2025

python inference_invsr.py -i ../test_images/inputs/longines/ -o ../test_images/results/longines --sd_path weights/models--stabilityai--sd-turbo/snapshots/b261bac6fd2cf515557d5d0707481eafa0485ec2

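
In case it helps to check the folder before running InvSR, here is a hedged sketch using plain diffusers (not InvSR-specific) that loads the snapshot directory with local_files_only=True, so it fails immediately if a sub-folder or file is missing instead of trying to contact huggingface.co.

from diffusers import DiffusionPipeline

# Point at the local snapshot folder; local_files_only=True forbids any Hub access.
pipe = DiffusionPipeline.from_pretrained(
    "weights/models--stabilityai--sd-turbo/snapshots/b261bac6fd2cf515557d5d0707481eafa0485ec2",
    local_files_only=True,
)
print(type(pipe).__name__)  # prints the pipeline class if everything is in place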

@Maple-geekZhu

I've encountered the same issue. Due to network problems, I cannot access Hugging Face directly on the server, so I manually downloaded the relevant weight files from the sd_turbo repository. The model I have downloaded has the following structure:

  • sd-turbo
    -- schedule
    -- text_encoder
    -- tokenizer
    -- unet
    -- vae
    -- model_index.json
    -- sd_turbo.safetensors

However, even after specifying the corresponding weight paths, I still get an error. I used the following .sh file:
CUDA_VISIBLE_DEVICES=2,3 \
python inference_invsr.py \
    -i /data/zrt/dataset/render-samples \
    -o ./result1 \
    --bs 1 \
    --chopping_bs 8 \
    --num_steps 1 \
    --cfg_path ./configs/sample-sd-turbo.yaml \
    --sd_path /data/zrt/checkpoints/sd-turbo \
    --started_ckpt_path /data/zrt/checkpoints/noise_predictor_sd_turbo_v5.pth \
    --chopping_size 32

But I still encounter the following error:

 Setting timesteps for inference: [200]
Couldn't connect to the Hub: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/stabilityai/sd-turbo (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x741f68280520>: Failed to establish a new connection: [Errno 111] Connection refused'))"), '(Request ID: cd3412f8-5f48-4c15-a136-7e028efd3619)').
Will try to load from local cache.
Traceback (most recent call last):
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/urllib3/connection.py", line 198, in _new_conn
    sock = connection.create_connection(
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen
    response = self._make_request(
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 488, in _make_request
    raise new_e
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 464, in _make_request
    self._validate_conn(conn)
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1093, in _validate_conn
    conn.connect()
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/urllib3/connection.py", line 704, in connect
    self.sock = sock = self._new_conn()
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/urllib3/connection.py", line 213, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x741f68280520>: Failed to establish a new connection: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/requests/adapters.py", line 667, in send
    resp = conn.urlopen(
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 841, in urlopen
    retries = retries.increment(
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/urllib3/util/retry.py", line 519, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/stabilityai/sd-turbo (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x741f68280520>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/data/zrt/InvSR/src/diffusers/pipelines/pipeline_utils.py", line 1291, in download
    info = model_info(pretrained_model_name, token=token, revision=revision)
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2532, in model_info
    r = get_session().get(path, headers=headers, timeout=timeout, params=params)
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
    return self.request("GET", url, **kwargs)
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 93, in send
    return super().send(request, *args, **kwargs)
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/requests/adapters.py", line 700, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/stabilityai/sd-turbo (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x741f68280520>: Failed to establish a new connection: [Errno 111] Connection refused'))"), '(Request ID: cd3412f8-5f48-4c15-a136-7e028efd3619)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/data/zrt/InvSR/inference_invsr.py", line 115, in <module>
    main()
  File "/data/zrt/InvSR/inference_invsr.py", line 110, in main
    sampler = InvSamplerSR(configs)
  File "/data/zrt/InvSR/sampler_invsr.py", line 43, in __init__
    self.build_model()
  File "/data/zrt/InvSR/sampler_invsr.py", line 60, in build_model
    base_pipe = util_common.get_obj_from_str(self.configs.sd_pipe.target).from_pretrained(**params)
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/data/zrt/InvSR/src/diffusers/pipelines/pipeline_utils.py", line 699, in from_pretrained
    cached_folder = cls.download(
  File "/data/zrt/anaconda3/envs/invsr/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/data/zrt/InvSR/src/diffusers/pipelines/pipeline_utils.py", line 1536, in download
    raise EnvironmentError(
OSError: Cannot load model stabilityai/sd-turbo: model is not cached locally and an error occurred while trying to fetch metadata from the Hub. Please check out the root cause in the stacktrace above.

Could it be due to the folder name?
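
One generic huggingface_hub detail that may be relevant here (a hedged aside, not something InvSR does itself): the traceback shows the loader falling back to the repo id stabilityai/sd-turbo and searching the HF cache for it. If the files are placed in the cache layout described earlier in this thread, setting HF_HUB_OFFLINE=1 before anything from huggingface_hub/diffusers is imported (or exporting it in the .sh file) skips the Hub request entirely and resolves from that cache. For example:

import os
os.environ["HF_HUB_OFFLINE"] = "1"  # must be set before huggingface_hub / diffusers are imported

from diffusers import DiffusionPipeline

# With offline mode on, this resolves stabilityai/sd-turbo purely from the local cache
# under weights/ (and raises immediately if it is not cached, instead of retrying the network).
pipe = DiffusionPipeline.from_pretrained("stabilityai/sd-turbo", cache_dir="weights")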

@Smilezbc

Smilezbc commented Mar 7, 2025

I had the same problem due to working offline. In addition to @imesu2378's setup above (--sd_path weights/ with the files under the same cache layout), add an item "revision: b261bac6fd2cf515557d5d0707481eafa0485ec2" under sd_pipe.params, like this:

[screenshot of the modified config file]
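
Spelled out as YAML (a sketch; only sd_pipe.params is taken from the comment above, and everything else in the config file stays as it already is):

sd_pipe:
  params:
    revision: b261bac6fd2cf515557d5d0707481eafa0485ec2   # the only line added; existing target/params entries are unchanged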

@Maple-geekZhu


@Smilezbc Thanks for your reply. I used another solution: in ../site-packages/huggingface_hub/constants.py, replace _HF_DEFAULT_ENDPOINT = "https://huggingface.co" with _HF_DEFAULT_ENDPOINT = "https://hf-mirror.com".
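
For anyone landing here later, the same redirect can usually be achieved without editing the installed package: huggingface_hub reads the endpoint from the HF_ENDPOINT environment variable when it is imported, so setting it before launch (in the shell, or in Python before any imports) points all Hub traffic at the mirror. A minimal sketch, assuming the hf-mirror.com mirror from the previous comment:

import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"  # must be set before huggingface_hub is imported

# Anything imported afterwards (huggingface_hub, diffusers, the InvSR scripts) will use the mirror.
from huggingface_hub import snapshot_download
snapshot_download(repo_id="stabilityai/sd-turbo", cache_dir="weights")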
