How to use anime model with API #38
I tried using it, however it gave this error:

By the way, I also want to be able to upscale the anime pictures I generate (using the API). When upscaling on the nolibox website, there's an option for normal upscaling and anime upscaling, and I'd like to use the anime upscaling through the API. Thank you! (And sorry for spamming your inbox!!)
Haha, never mind! Let me answer your questions one by one.
In order to re-create the same behavior, here's an example:

```json
{
    "text": "anime girl, high quality",
    "negative_prompt": "<easynegative>",
    "seed": 1001,
    "guidance_scale": 7.5,
    "sampler": "k_euler",
    "is_anime": true,
    "custom_embeddings": {
        "<easynegative>": x.tolist()
    }
}
```

Here, `x.tolist()` is a placeholder for the actual embedding values (the contents of the `easynegative.json` file).
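Note that `x.tolist()` is not literal JSON: `x` is the embedding array shipped in `easynegative.json`, converted to plain nested lists so it can be serialized. A minimal sketch of building the request body in Python, assuming the embedding is a 2-D float array (the shape below is illustrative, not the real easynegative dimensions):

```python
import json

import numpy as np

# Stand-in embedding; in practice this comes from easynegative.json.
# The shape here is illustrative only.
x = np.zeros((8, 768), dtype=np.float32)

payload = {
    "text": "anime girl, high quality",
    "negative_prompt": "<easynegative>",
    "seed": 1001,
    "guidance_scale": 7.5,
    "sampler": "k_euler",
    "is_anime": True,
    # .tolist() converts the array to nested Python lists,
    # which json.dumps can serialize.
    "custom_embeddings": {"<easynegative>": x.tolist()},
}

body = json.dumps(payload)  # the JSON body sent to the API
```

The endpoint itself is whatever the Colab / API server exposes; only the body shape is shown here.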
Hey! I haven't tried it yet, but I'm not quite sure what I should do with the easynegative.json file. Should I download it and put it in the Colab somehow? Rename it to x, or how does it know what x is? I'm sorry if this is a dumb question, and thank you very much :)
Oh, I made it work by copying the entire contents of the json file and replacing "x.tolist()" with that. I guess this was what you meant? Well, it works and I'm happy. Thank you very much. (The request body is pretty long, haha, but it works great!)
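For reference, that copy-paste step can also be done programmatically: load easynegative.json and substitute its contents where `x.tolist()` sat. A sketch, assuming the file is just the JSON-encoded embedding (the stand-in file written below exists only to keep the snippet self-contained):

```python
import json
import os
import tempfile

# Hypothetical stand-in for the real easynegative.json download,
# written here only so the snippet runs on its own.
path = os.path.join(tempfile.gettempdir(), "easynegative.json")
with open(path, "w") as f:
    json.dump([[0.0, 0.1], [0.2, 0.3]], f)

# Load the exported embedding ...
with open(path) as f:
    embedding = json.load(f)

# ... and drop it into the request body in place of x.tolist().
payload = {
    "text": "anime girl, high quality",
    "negative_prompt": "<easynegative>",
    "is_anime": True,
    "custom_embeddings": {"<easynegative>": embedding},
}
```

This avoids pasting a very long array into the request body by hand.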
Haha yes, this is the correct way to do it!
Hello!
I'm glad you got the Colab to work again with sd.base. However, I noticed the results when using, for example:
are worse than using the same prompt on
https://creator.nolipix.com/guest
but with the Anime model. So I guess it's not the same model? How can I use the Anime model with the API? Maybe this is too much for the Colab again and we will run out of RAM, or do you think it's possible? Thank you!