
[Pipeline] Enhancements to AutoPipeline #10006

Closed
wants to merge 7 commits

Conversation

@suzukimain (Contributor) commented Nov 24, 2024

What does this PR do?

This PR adds the ability to automatically search for models on Civitai and Hugging Face in AutoPipeline and enables loading single-file checkpoints.

From #9986

Example:

pip install git+https://github.com/suzukimain/diffusers.git@Update_AutoPipeline

from diffusers import AutoPipelineForText2Image

# Search Civitai
pipe = AutoPipelineForText2Image.from_civitai("any").to("cuda")
image = pipe("cat").images[0]
image.save("cat.png")

# Search Hugging Face
pipe = AutoPipelineForText2Image.from_huggingface("any").to("cuda")
image = pipe("cat").images[0]
image.save("cat.png")

Before submitting

Who can review?

@yiyixuxu @asomoza @bghira

@bghira (Contributor) commented Nov 24, 2024

cc @vladmandic check this out 😹 oh god @suzukimain you genius. thank you.

@vladmandic (Contributor) commented Nov 26, 2024

cc @vladmandic check this out 😹 oh god @suzukimain you genius. thank you.

i have something similar, but this is pretty clean!

needs a bit of cleanup to actually use params - e.g., instead of

model_path = f"/root/.cache/Civitai/{repo_id}/{version_id}/{file_name}"

use the actual cache_dir param to construct the path.
also, download resume-on-error is almost a necessity given the size of the models and how unreliable civitai can be at times.
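
For illustration, a minimal sketch of that suggestion in Python (the helper name and fallback location are hypothetical, not code from this PR):

import os

# Hypothetical helper: build the Civitai cache path from a caller-supplied
# cache_dir instead of hard-coding /root/.cache/Civitai.
def civitai_model_path(repo_id, version_id, file_name, cache_dir=None):
    cache_dir = cache_dir or os.path.expanduser("~/.cache/Civitai")
    return os.path.join(cache_dir, str(repo_id), str(version_id), file_name)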

@yiyixuxu (Collaborator)

hi @suzukimain

If you make this tool a separate repo, we are happy to add it to our docs and help promote it!

@suzukimain (Contributor, Author)

hi @suzukimain

If you make this tool a separate repo, we are happy to add it to our docs and help promote it!

Originally, this PR incorporated the repository below into AutoPipeline, so that is possible.
However, since it was created as a hobby project, I may need to apply the changes, reorganize the entire repository, and change the license.

github: https://github.com/suzukimain/auto_diffusers

pypi: https://pypi.org/project/auto-diffusers/

@bghira (Contributor) commented Nov 26, 2024

assuming huggingface doesn't want to support a competitor even if it benefits the user or this project. bummer

@suzukimain (Contributor, Author) commented Nov 26, 2024

assuming huggingface doesn't want to support a competitor even if it benefits the user or this project. bummer

Sorry, what do you mean by “competitor”?

@bghira (Contributor) commented Nov 26, 2024

CivitAI as a model hosting provider

@suzukimain (Contributor, Author)

Oh, I see

@yiyixuxu (Collaborator)

@suzukimain feel free to dress it up and write a nice introduction about it!
cc @asomoza here too! maybe we can use it to make a diffusers node :)

@asomoza (Member) commented Nov 26, 2024

@suzukimain this is cool and I agree that it would work a lot better as an external tool with your library. Do you have plans for something similar with loras?

@suzukimain (Contributor, Author)

@suzukimain this is cool and I agree that it would work a lot better as an external tool with your library. Do you have plans for something similar with loras?

The functions search_huggingface and search_civitai, which search the respective hubs, are designed to support multiple tasks, so you can handle different tasks simply by changing the model_type argument.
However, baseModel and types are not yet supported by the Hugging Face API, so they could not be implemented there.

By the way, baseModels should be available via hf_api.model_info, but I get an error.

from huggingface_hub import hf_api

info = hf_api.model_info(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    expand=["baseModels"]
)

print(info.baseModels)
# AttributeError: 'ModelInfo' object has no attribute 'baseModels'
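
A possible workaround sketch until such an expand property is available, assuming the repo's model card declares base_model in its README metadata (many repos do not, so the result may be None):

from huggingface_hub import model_info

info = model_info("stable-diffusion-v1-5/stable-diffusion-v1-5")
# card_data mirrors the repo's README metadata; base_model is an optional field there.
base = getattr(info.card_data, "base_model", None) if info.card_data else None
print(base)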

# Set up parameters and headers for the CivitAI API request
params = {
    "query": search_word,
    "types": model_type,
    "sort": "Most Downloaded",
    "limit": 20,
}
if base_model is not None:
    params["baseModel"] = base_model

from diffusers.pipelines.auto_pipeline import (
    search_huggingface,
    search_civitai,
)

# Search for a LoRA on Civitai
lora = search_civitai(
    "Keyword_to_search_Lora",
    model_type="LORA",
    base_model="SD 1.5",
    download=True,
)
# Load the LoRA into the pipeline.
pipeline.load_lora_weights(lora)

# Search for a Textual Inversion on Civitai
textual_inversion = search_civitai(
    "EasyNegative",
    model_type="TextualInversion",
    base_model="SD 1.5",
    download=True,
)
# Load the Textual Inversion into the pipeline.
pipeline.load_textual_inversion(textual_inversion, token="EasyNegative")

@yiyixuxu (Collaborator)

cc @stevhliu, where should this go if we want to add a doc page about it (as an external library)?

@suzukimain (Contributor, Author)

I have made it available for use as an external tool.
The main changes are as follows:

  • Changed the license to Apache-2.0, the same as Diffusers.
  • Switched the downloader to huggingface_hub.http_get to support resume-on-error (see the sketch after this list).
  • The cache directory is now resolved with os.path.expanduser.
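
For reference, a rough sketch of resume-on-error built on huggingface_hub's internal http_get helper (the wrapper function and retry count are hypothetical, not the actual auto_diffusers code):

import os
from huggingface_hub.file_download import http_get

def download_with_resume(url, dest, retries=3):
    # Reopen the partial file in append mode and pass its size as resume_size,
    # so an interrupted Civitai download continues instead of restarting.
    for attempt in range(retries):
        resume_size = os.path.getsize(dest) if os.path.exists(dest) else 0
        try:
            with open(dest, "ab") as f:
                http_get(url, f, resume_size=resume_size)
            return dest
        except Exception:
            if attempt == retries - 1:
                raise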

I kept the usage as close as possible to this pull request:

pip install --quiet auto_diffusers

from auto_diffusers import EasyPipelineForText2Image

# Search Hugging Face
pipe = EasyPipelineForText2Image.from_huggingface("any").to("cuda")
image = pipe("cat").images[0]
image.save("cat.png")

# Search Civitai
pipe = EasyPipelineForText2Image.from_civitai("any").to("cuda")
image = pipe("cat").images[0]
image.save("cat.png")

@yiyixuxu, @stevhliu, @bghira

@stevhliu (Member) commented Dec 2, 2024

I think you can add your project here and then link to the docs you're making in #9986

@suzukimain (Contributor, Author)

I think you can add your project here and then link to the docs you're making in #9986

Based on this advice, I have made some revisions here. Is it okay to apply these changes?
I apologize for the inconvenience.

@suzukimain (Contributor, Author)

I think you can add your project here and then link to the docs you're making in #9986

Based on this advice, I have made some revisions here. Is it okay to apply these changes? I apologize for the inconvenience.

I intend to apply the changes in this way.

@stevhliu (Member) commented Dec 3, 2024

Yeah, that's fine with me, feel free to apply the changes in #9986 :)

@suzukimain (Contributor, Author)

hi @stevhliu, what should I do with this PR?

@stevhliu (Member)

I think this can be closed since your project is here https://github.com/huggingface/diffusers/tree/main/examples/model_search now :)

@suzukimain (Contributor, Author)

hello @stevhliu
Understood. Also, if possible, I would appreciate it if you could let me know how this matter will turn out.

I think you can add your project here and then link to the docs you're making in #9986

@suzukimain (Contributor, Author)

I have added this functionality to auto_diffusers and am closing this, as #9986 and #10358 have been merged. Thank you!
