This repository has been archived by the owner on Dec 29, 2024. It is now read-only.

Commit

Merge pull request #69 from banodoco/green-head
Green head
piyushK52 authored Dec 31, 2023
2 parents d0181f8 + 3c01023 commit 301b62e
Showing 3 changed files with 12 additions and 38 deletions.
42 changes: 7 additions & 35 deletions readme.md
```diff
@@ -1,41 +1,13 @@
-# Welcome to Banodoco
+# Welcome to Banodoco v. 0.5
 
-Banodoco is a simple but powerful open-source animation tool built on top of Stable Diffusion, Google FILM, and various other online machine-learning models.
+Banodoco v. 0.5 is intended to act as a demo of one approach to building a creative tool on top of open-source AI models - empowering users to create videos via creative interpolation, using [Steerable Motion](https://github.com/banodoco/steerable-motion) to fill in the gaps between generated keyframes:
 
-## 1) Test It Out
+<img src="sample_assets/sample_images/main_example.gif">
 
-You can test out a preview version of Banodoco <a href="https://banodoco-0-2.streamlit.app/" target="_blank">here</a>.
+## Access the web app or contribute to the project
 
-While the buttons and queries won't work and some things won't display properly, it should give you a good idea of what to expect.
+The web app is currently in beta. If you want to access it, please reach out to POM in our [Discord](https://discord.com/invite/8Wx9dFu5tP) with some examples of things you've made before. If you'd like to contribute, please also get in touch with examples of previous projects.
 
-## 2) Download The Repo
+## Coming soon - local inference
 
-If you're comfortable with Git, you can pull this repo as normal. If you're not and don't want to figure that out, you can click "Code" in the top right, then click "Download ZIP" to download all the files.
-
-## 3) Open Terminal
-
-Open your terminal and navigate to the folder where you downloaded the repo.
-
-To do this quickly, you can type `cd`, drag the folder into the terminal, and press Enter.
-
-## 4) Install Dependencies
-
-To install the dependencies, you can run the following command in your open terminal window:
-
-`pip install -r requirements.txt`
-
-If you're a developer, you'll probably want to install these in a virtual environment.
-
-## 5) Run The App
-
-To run the app, enter the following command in your terminal window:
-
-`streamlit run app.py --runner.fastReruns false`
-
-This should open a new tab in your browser with the app running. If it doesn't, you can copy and paste the link that is printed in your terminal window.
-
-> Note: if you encounter issues, I'd suggest pasting the error messages you get in the terminal into ChatGPT and following its suggestions. If this doesn't work, message us in Discord!
-
-## 6) Follow The Setup Guide
-
-Once you have the app running, you can follow the setup guide inside the app to get started!
+We're working on a local inference version of Banodoco, which will be available soon. This will allow you to run the app locally on your own machine, without needing to pay for cloud compute.
```
Binary file added sample_assets/sample_images/main_example.gif
8 changes: 5 additions & 3 deletions ui_components/widgets/animation_style_element.py
```diff
@@ -65,7 +65,7 @@ def animation_style_element(shot_uuid):
     footer1, _ = st.columns([2, 1])
     with footer1:
         interpolation_style = 'ease-in-out'
-        motion_scale = st_memory.slider("Motion scale:", min_value=0.0, max_value=2.0, value=1.0, step=0.1, key="motion_scale")
+        motion_scale = st_memory.slider("Motion scale:", min_value=0.0, max_value=2.0, value=1.0, step=0.01, key="motion_scale")
 
     st.markdown("***")
     if st.button("Reset to default settings", key="reset_animation_style"):
```
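For context, `st_memory` looks like a project helper that wraps Streamlit widgets so their values persist across script reruns; the change above merely tightens the slider's step from 0.1 to 0.01 for finer motion-scale control. A minimal sketch of what such a wrapper might do (hypothetical `remembered_slider`, not the repository's actual implementation):

```python
import streamlit as st

def remembered_slider(label, min_value, max_value, value, step, key):
    # Seed session state once; on later reruns the slider reads its current
    # value from st.session_state[key], so user input survives reruns.
    if key not in st.session_state:
        st.session_state[key] = value
    return st.slider(label, min_value=min_value, max_value=max_value,
                     step=step, key=key)
```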
```diff
@@ -288,6 +288,7 @@ def plot_weights(weights_list, frame_numbers_list, frame_names):
     relative_ipadapter_strength = st_memory.slider("How much would you like to influence the style?", min_value=0.0, max_value=5.0, value=1.1, step=0.1, key="ip_adapter_strength")
     relative_ipadapter_influence = st_memory.slider("For how long would you like to influence the style?", min_value=0.0, max_value=5.0, value=1.1, step=0.1, key="ip_adapter_influence")
     soft_scaled_cn_weights_multipler = st_memory.slider("How much would you like to scale the CN weights?", min_value=0.0, max_value=10.0, value=0.85, step=0.1, key="soft_scaled_cn_weights_multiple_video")
+    append_to_prompt = st_memory.text_input("What would you like to append to the prompts?", key="append_to_prompt")
 
     normalise_speed = True
```

```diff
@@ -338,8 +339,9 @@ def plot_weights(weights_list, frame_numbers_list, frame_names):
     if timing.primary_image and timing.primary_image.location:
         b = timing.primary_image.inference_params
         prompt = b['prompt'] if b else ""
+        prompt += append_to_prompt  # Appending the text to each prompt
         frame_prompt = f"{idx * linear_frame_distribution_value}_" + prompt
-        positive_prompt += ":" + frame_prompt if positive_prompt else frame_prompt
+        positive_prompt += ":" + frame_prompt if positive_prompt else frame_prompt
     else:
         st.error("Please generate primary images")
         time.sleep(0.7)
```
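To make the prompt assembly above concrete: each keyframe contributes a `<frame_number>_<prompt>` entry, and the entries are joined with `:`. A standalone sketch with hypothetical prompts (`linear_frame_distribution_value` assumed to be 16, matching the defaults further down):

```python
linear_frame_distribution_value = 16           # frames between keyframes (assumed)
append_to_prompt = ", cinematic lighting"      # hypothetical user input
keyframe_prompts = ["a quiet forest", "a forest at dusk"]  # hypothetical prompts

positive_prompt = ""
for idx, prompt in enumerate(keyframe_prompts):
    prompt += append_to_prompt
    frame_prompt = f"{idx * linear_frame_distribution_value}_" + prompt
    # Prepend ":" as a separator on every entry after the first.
    positive_prompt += ":" + frame_prompt if positive_prompt else frame_prompt

print(positive_prompt)
# -> 0_a quiet forest, cinematic lighting:16_a forest at dusk, cinematic lighting
```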
```diff
@@ -540,7 +542,7 @@ def update_interpolation_settings(values=None, timing_list=None):
     }
 
     for idx in range(0, len(timing_list)):
-        default_values[f'dynamic_frame_distribution_values_{idx}'] = (idx ) * 16
+        default_values[f'dynamic_frame_distribution_values_{idx}'] = (idx) * 16
         default_values[f'dynamic_key_frame_influence_values_{idx}'] = 1.0
         default_values[f'dynamic_cn_strength_values_{idx}'] = (0.0,0.7)
```
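As an illustration of the defaults this loop seeds (assuming a three-item `timing_list`): keyframes are spaced 16 frames apart, each with full influence and a `(low, high)` ControlNet strength range:

```python
timing_list = [object(), object(), object()]  # stand-ins for three timings

default_values = {}
for idx in range(0, len(timing_list)):
    default_values[f'dynamic_frame_distribution_values_{idx}'] = idx * 16
    default_values[f'dynamic_key_frame_influence_values_{idx}'] = 1.0
    default_values[f'dynamic_cn_strength_values_{idx}'] = (0.0, 0.7)

# Keyframes land at frames 0, 16, and 32:
assert default_values['dynamic_frame_distribution_values_2'] == 32
```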

