
Built Container - Fails after mounting GFPGAN, during "Loading Python libraries..." #540

Closed
sc0ttwad3 opened this issue Jul 7, 2023 · 10 comments
Labels
awaiting-response (Waiting for the issuer to respond) · bug (Something isn't working) · Stale

Comments

@sc0ttwad3

Has this issue been opened before?

  • [x] It is not in the FAQ, I checked.
  • [x] It is not in the issues, I searched.

Describe the bug

After the container is created successfully, it fails just after mounting GFPGANv1.4.pth, during "Loading Python libraries...":

Container webui-docker-invoke-1  Created
Attaching to webui-docker-invoke-1
webui-docker-invoke-1  | Mounted ldm
webui-docker-invoke-1  | Mounted .cache
webui-docker-invoke-1  | Mounted RealESRGAN
webui-docker-invoke-1  | Mounted Codeformer
webui-docker-invoke-1  | Mounted GFPGAN
webui-docker-invoke-1  | Mounted GFPGANv1.4.pth
webui-docker-invoke-1  | Loading Python libraries...
webui-docker-invoke-1  |
webui-docker-invoke-1  | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮
webui-docker-invoke-1  | │ /opt/conda/bin/invokeai-configure:5 in <module>
webui-docker-invoke-1  | │ ❱   5 from ldm.invoke.config.invokeai_configure import main
webui-docker-invoke-1  | │ /InvokeAI/ldm/invoke/config/invokeai_configure.py:40 in <module>
webui-docker-invoke-1  | │ ❱  40 from ..args import PRECISION_CHOICES, Args
webui-docker-invoke-1  | │ /InvokeAI/ldm/invoke/args.py:100 in <module>
webui-docker-invoke-1  | │ ❱ 100 from ldm.invoke.conditioning import split_weighted_subprompts
webui-docker-invoke-1  | │ /InvokeAI/ldm/invoke/conditioning.py:18 in <module>
webui-docker-invoke-1  | │ ❱  18 from .generator.diffusers_pipeline import StableDiffusionGeneratorPipe…
webui-docker-invoke-1  | │ /InvokeAI/ldm/invoke/generator/__init__.py:4 in <module>
webui-docker-invoke-1  | │ ❱   4 from .base import Generator
webui-docker-invoke-1  | │ /InvokeAI/ldm/invoke/generator/base.py:21 in <module>
webui-docker-invoke-1  | │ ❱  21 from pytorch_lightning import seed_everything
webui-docker-invoke-1  | │ /opt/conda/lib/python3.10/site-packages/pytorch_lightning/__init__.py:34 in <module>
webui-docker-invoke-1  | │ ❱  34 from pytorch_lightning.callbacks import Callback  # noqa: E402
webui-docker-invoke-1  | │ /opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/__init__.py:25 in <module>
webui-docker-invoke-1  | │ ❱  25 from pytorch_lightning.callbacks.progress import ProgressBarBase, RichP…
webui-docker-invoke-1  | │ /opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/__init__.py:22 in <module>
webui-docker-invoke-1  | │ ❱  22 from pytorch_lightning.callbacks.progress.rich_progress import RichProg…
webui-docker-invoke-1  | │ /opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/rich_progress.py:20 in <module>
webui-docker-invoke-1  | │ ❱  20 from torchmetrics.utilities.imports import _compare_version
webui-docker-invoke-1  | ╰──────────────────────────────────────────────────────────────────────────────╯
webui-docker-invoke-1  | ImportError: cannot import name '_compare_version' from
webui-docker-invoke-1  | 'torchmetrics.utilities.imports'
webui-docker-invoke-1  | (/opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/imports.py)
webui-docker-invoke-1 exited with code 1
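
The failing import is pytorch_lightning's rich progress bar reaching for torchmetrics' pre-1.0 helper '_compare_version', which newer torchmetrics releases no longer ship. To confirm which versions ended up in the image, something like the following should work (a sketch, assuming the compose service is named invoke and bash/python are available in the container):

docker compose run --rm --entrypoint bash invoke
# then, inside the container:
python -c "import torchmetrics, pytorch_lightning as pl; print(torchmetrics.__version__, pl.__version__)"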

Which UI

InvokeAI

Hardware / Software

  • OS: Ubuntu-22.04 - WSL2 on Windows 11 Insider
  • OS version: Windows 11 Insider Build 25393.1
  • WSL version (if applicable): WSL 2 - Ubuntu-22.04
  • Docker version:
    Client:
      Cloud integration: v1.0.33
      Version: 24.0.2
      API version: 1.43
      Go version: go1.20.4
      Git commit: cb74dfc
      Built: Thu May 25 21:53:15 2023
      OS/Arch: windows/amd64
      Context: default
    Server: Docker Desktop 4.20.1 (110738)
      Engine:
        Version: 24.0.2-38-g8e70a1b23e
        API version: 1.43 (minimum version 1.12)
        Go version: go1.20.4
        Git commit: 8e70a1b23e965d86ec8c2feb77605196ae124630
        Built: Fri Jun 2 15:58:50 2023
        OS/Arch: linux/amd64
        Experimental: false
      containerd:
        Version: 1.6.21
        GitCommit: 3dce8eb055cbb6872793272b4f20ed16117344f8
      runc:
        Version: 1.1.7
        GitCommit: v1.1.7-0-g860f061
      docker-init:
        Version: 0.19.0
        GitCommit: de40ad0
  • Docker Compose version: Docker Compose version v2.18.1
  • Python version: 3.10.6
  • Repo version: commit 103e114 (HEAD -> master, origin/master, origin/HEAD)
  • RAM: 32 GB
  • GPU/VRAM: Compute 8.6 CUDA device: [NVIDIA GeForce RTX 3070 Ti GPU] 16GB (480.632 billion interactions per second, 9612.635 single-precision GFLOP/s at 20 flops per interaction)

Steps to Reproduce

docker compose --profile invoke up --build

Additional context

@sc0ttwad3 sc0ttwad3 added the bug Something isn't working label Jul 7, 2023
@peacheniya

Add this to services/invoke/Dockerfile after line 35:

RUN --mount=type=cache,target=/root/.cache/pip \
  pip uninstall -y torchmetrics && \
  pip install torchmetrics==0.11.4
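
For context: torchmetrics 1.0 removed the private '_compare_version' helper that this pytorch_lightning release still imports, which is why pinning torchmetrics back to a 0.x release fixes the crash. If you prefer a range constraint over an exact pin, an equivalent sketch (assuming any pre-1.0 torchmetrics works for this image) would be:

RUN --mount=type=cache,target=/root/.cache/pip \
  pip install "torchmetrics>=0.11,<1.0"

Either way, rebuild the image afterwards (docker compose --profile invoke up --build) so the new layer is actually applied.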

@sc0ttwad3
Author

Thanks! I ended up using a local installation instead of Docker, and everything is working. But I will return to Docker, update the Dockerfile, and try again.

@burn4science

Add this to services/invoke/Dockerfile after line 35:

RUN --mount=type=cache,target=/root/.cache/pip \
  pip uninstall -y torchmetrics && \
  pip install torchmetrics==0.11.4

Perfect, that was the solution! The only difference for me: I had to include the code snippet in services/AUTOMATIC1111/Dockerfile after line 35.

THANKS A LOT!

@AbdBarho
Owner

Is this still an issue, even after the latest update? It seems to me like a dependency conflict with an extension.

@AbdBarho AbdBarho added the awaiting-response Waiting for the issuer to respond label Jul 30, 2023
@prostolyubo

It is still an issue. The solution below helps (maybe create a PR?):

Add this to services/invoke/Dockerfile after line 35:

RUN --mount=type=cache,target=/root/.cache/pip \
  pip uninstall -y torchmetrics && \
  pip install torchmetrics==0.11.4

@AbdBarho
Owner

AbdBarho commented Aug 6, 2023

@prostolyubo does this still work if you put it in the startup.sh script?

@prostolyubo

@prostolyubo does this still work if you put it in the startup.sh script?

I'm not sure what you're referring to; I don't see such a script anywhere. I simply modified the Dockerfile and rebuilt the image.
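
For reference, if the image did ship a startup script that ran before InvokeAI, the same pin could go there instead of into the Dockerfile. A rough sketch, assuming a hypothetical startup.sh used as the container entrypoint (the script name and layout are assumptions, not something in the current repo):

#!/bin/bash
set -e
# Hypothetical startup.sh: re-pin torchmetrics at container start instead of at build time.
# Slower on every start, but avoids rebuilding the image.
pip install --no-cache-dir "torchmetrics==0.11.4"
# Hand control back to whatever command the container was asked to run.
exec "$@"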

@sc0ttwad3
Author

@burn4science Exactly what I needed.

@github-actions

This issue is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.

@github-actions github-actions bot added the Stale label Aug 21, 2023
@github-actions

This issue was closed because it has been stalled for 7 days with no activity.
