
[bug]: Converting checkpoint to diffusers fails #6514

Open
raldone01 opened this issue Jun 14, 2024 · 4 comments
Labels
bug Something isn't working

Comments


raldone01 commented Jun 14, 2024

Operating system

Linux (docker)

GPU vendor

Nvidia (CUDA)

Version number

a3cb5da

Browser

Firefox

What happened

When I try to convert one of my models to diffusers in the model manager, it fails because of a naming mismatch in the convert_cache. The conversion itself appears to succeed:

❯ ls .convert_cache/
<folder> invokeai_models_sd-1_main_cyberrealistic_v5_fp32.safetensors

However, the model manager looks for /invokeai/models/.convert_cache/8b1c8220-9abd-481f-9827-be64ec67461a, which does not exist.

I patched invokeai/app/api/routers/model_manager.py with the following:

 cache_path = loader.convert_cache.cache_path(key)
+logger.info(f"CachePath: {cache_path}")
 assert cache_path.exists()

invoke_ai-1  | [2024-06-14 15:11:22,532]::[InvokeAI]::INFO --> CachePath: /invokeai/models/.convert_cache/8b1c8220-9abd-481f-9827-be64ec67461a
invoke_ai-1  | [2024-06-14 15:11:22,533]::[uvicorn.access]::INFO --> 172.24.13.33:15583 - "PUT /api/v2/models/convert/8b1c8220-9abd-481f-9827-be64ec67461a HTTP/1.1" 500
invoke_ai-1  | [2024-06-14 15:11:22,533]::[uvicorn.error]::ERROR --> Exception in ASGI application
invoke_ai-1  | 
invoke_ai-1  | Traceback (most recent call last):
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 412, in run_asgi
invoke_ai-1  |     result = await app(  # type: ignore[func-returns-value]
invoke_ai-1  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
invoke_ai-1  |     return await self.app(scope, receive, send)
invoke_ai-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
invoke_ai-1  |     await super().__call__(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
invoke_ai-1  |     await self.middleware_stack(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
invoke_ai-1  |     raise exc
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
invoke_ai-1  |     await self.app(scope, receive, _send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 24, in __call__
invoke_ai-1  |     await responder(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 44, in __call__
invoke_ai-1  |     await self.app(scope, receive, self.send_with_gzip)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/middleware/cors.py", line 93, in __call__
invoke_ai-1  |     await self.simple_response(scope, receive, send, request_headers=headers)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/middleware/cors.py", line 148, in simple_response
invoke_ai-1  |     await self.app(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/fastapi_events/middleware.py", line 43, in __call__
invoke_ai-1  |     await self.app(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
invoke_ai-1  |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
invoke_ai-1  |     raise exc
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
invoke_ai-1  |     await app(scope, receive, sender)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
invoke_ai-1  |     await self.middleware_stack(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
invoke_ai-1  |     await route.handle(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
invoke_ai-1  |     await self.app(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
invoke_ai-1  |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
invoke_ai-1  |     raise exc
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
invoke_ai-1  |     await app(scope, receive, sender)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/routing.py", line 72, in app
invoke_ai-1  |     response = await func(request)
invoke_ai-1  |                ^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
invoke_ai-1  |     raw_response = await run_endpoint_function(
invoke_ai-1  |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
invoke_ai-1  |     return await dependant.call(**values)
invoke_ai-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/invokeai/invokeai/app/api/routers/model_manager.py", line 634, in convert_model
invoke_ai-1  |     assert cache_path.exists()
invoke_ai-1  | AssertionError
invoke_ai-1  | [2024-06-14 15:11:22,601]::[uvicorn.access]::INFO --> 172.24.13.33:15584 - "GET /api/v2/models/ HTTP/1.1" 200

Note: Converting with older releases works fine.
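The mismatch can be sketched as follows. Both helper functions and the path-flattening rule are assumptions inferred from the paths in the report above, not InvokeAI's actual implementation: the converter appears to write its output under a name derived from the source file path, while the API route looks it up under the model's UUID key.

```python
from pathlib import Path

CONVERT_CACHE = Path("/invokeai/models/.convert_cache")

def writer_cache_path(source: Path) -> Path:
    # Hypothetical: name derived from the full source path, with slashes
    # flattened to underscores (matching the folder seen in the `ls` output).
    flattened = str(source).lstrip("/").replace("/", "_")
    return CONVERT_CACHE / flattened

def reader_cache_path(model_key: str) -> Path:
    # Hypothetical: name derived from the model's UUID key, which is what
    # the route's `assert cache_path.exists()` checks.
    return CONVERT_CACHE / model_key

src = Path("/invokeai/models/sd-1/main/cyberrealistic_v5_fp32.safetensors")
key = "8b1c8220-9abd-481f-9827-be64ec67461a"

# The two schemes produce different paths, so the lookup fails even though
# the conversion output exists on disk.
print(writer_cache_path(src))  # ...invokeai_models_sd-1_main_cyberrealistic_v5_fp32.safetensors
print(reader_cache_path(key))  # ...8b1c8220-9abd-481f-9827-be64ec67461a
```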

@raldone01 raldone01 added the bug Something isn't working label Jun 14, 2024
@raldone01 raldone01 changed the title [bug]: Converting safetensors to diffusers fails [bug]: Converting checkpoint to diffusers fails Jun 14, 2024
@Sourdface

Can confirm. The real path that gets generated after conversion is based on the full path to the original checkpoint and does not contain the UUID(?) that the conversion script seems to be looking for.

A workaround is to run the conversion, let it fail with the error above, manually move the converted model folder to the normal location where Invoke places models of that type, tell InvokeAI to scan for models within its root, and finally install the model from its new location. After that, the model appears to work like any other diffusers model.
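The manual part of this workaround can be sketched as a small helper. All paths and the function name are illustrative, assuming the stranded output sits in .convert_cache; the scan/install steps still happen in the UI afterwards:

```python
import shutil
from pathlib import Path

def rescue_converted_model(converted: Path, dest_dir: Path) -> Path:
    """Move a stranded conversion output into the normal models tree.

    `converted` is the folder left behind in .convert_cache; `dest_dir` is
    where Invoke normally places diffusers models of that type. Returns the
    new location, ready for the UI's scan-folder / install step.
    """
    dest_dir.mkdir(parents=True, exist_ok=True)
    target = dest_dir / converted.name
    shutil.move(str(converted), str(target))
    return target
```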

@raldone01
Author

I just downgrade InvokeAI to convert models to diffusers. I only have to fix the schema version in the invoke.yml afterwards. I have had no issues with DB migrations so far.

@FormerTwitterEmployee

I am also getting this error since the most recent update:

[2024-07-31 05:07:54,641]::[uvicorn.error]::ERROR --> Exception in ASGI application

Traceback (most recent call last):
  File "E:\InvokeAI\.venv\lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 412, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "E:\InvokeAI\.venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
  File "E:\InvokeAI\.venv\lib\site-packages\fastapi\applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
    raise exc
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\middleware\cors.py", line 93, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\middleware\cors.py", line 148, in simple_response
    await self.app(scope, receive, send)
  File "E:\InvokeAI\.venv\lib\site-packages\fastapi_events\middleware.py", line 43, in __call__
    await self.app(scope, receive, send)
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\middleware\exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "E:\InvokeAI\.venv\lib\site-packages\starlette\routing.py", line 72, in app
    response = await func(request)
  File "E:\InvokeAI\.venv\lib\site-packages\fastapi\routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "E:\InvokeAI\.venv\lib\site-packages\fastapi\routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "E:\InvokeAI\.venv\lib\site-packages\invokeai\app\api\routers\model_manager.py", line 760, in convert_model
    assert cache_path.exists()
AssertionError

@FormerTwitterEmployee

> Can confirm. The real path that gets generated after conversion is based on the full path to the original checkpoint and does not contain the UUID(?) that the conversion script seems to be looking for.
>
> A workaround is to convert, let it fail with the above error, then manually move the converted checkpoint folder to the normal location where Invoke places models of that type, then tell InvokeAI to scan for models within its root, and finally install the model from its new location. The model appears to work like diffusers models normally work after that point.

I do not see any converted checkpoint folder in my models folder, just the original ckpt file, as if nothing had changed.
