mamorett/TengraiRefiner

██████ ██████ ██   ██  ████  ██████   ██   ██████ ██████ ██████ ██████ ██████ ██   ██ ██████ ██████ 
  ██   ██     ███  ██ ██     ██  ██  ████    ██   ██  ██ ██     ██       ██   ███  ██ ██     ██  ██ 
  ██   ████   ██ █ ██ ██ ███ ██████ ██  ██   ██   ██████ ████   ████     ██   ██ █ ██ ████   ██████ 
  ██   ██     ██  ███ ██  ██ ██ ██  ██████   ██   ██ ██  ██     ██       ██   ██  ███ ██     ██ ██  
  ██   ██████ ██   ██  ████  ██  ██ ██  ██ ██████ ██  ██ ██████ ██     ██████ ██   ██ ██████ ██  ██ 


Built with: Python, tqdm


Overview
TengraiRefiner is a Python script for batch-processing images with FLUX models, with optional acceleration via Alimama Turbo or ByteDance Hyper LoRA adapters. It is intended primarily as a refiner for images produced by Tengrai AI (www.tengrai.ai), but it can obviously be used to enhance any image with FLUX.1-dev.

Features

  • Support for both single image and batch processing
  • Compatible with FLUX.1-dev and FLUX.1-Redux-dev models
  • Memory-efficient processing with automatic CPU offloading
  • Quantization support for optimal performance
  • Progress tracking with detailed step information
  • Configurable acceleration options (Alimama Turbo or ByteDance Hyper)

Prerequisites

Before running the script, ensure you have Python 3.x installed and the following dependencies:

torch>=2.6.0
diffusers==0.32.2
transformers>=4.35.0
safetensors>=0.4.0
python-dotenv>=1.0.0
Pillow>=10.0.0
tqdm>=4.66.0
huggingface-hub>=0.19.0
optimum-quanto
multiformats
xformers>=0.0.25

Installation

  1. Clone this repository or download the script
  2. Install the required dependencies:
    pip install -r requirements.txt

Usage

The script can be run from the command line with various options:

python script.py <path> [options]

Arguments

  • path: Required. Path to input file or directory containing PNG files to process

Options

  • -a, --acceleration: Choose the acceleration LoRA ('alimama' or 'hyper'; default: 'alimama')
  • -p, --prompt: Set a custom prompt (default: 'Very detailed, masterpiece quality')
  • -r, --redux: Use redux instead of img2img
  • -o, --output_dir: Specify output directory (mutually exclusive with --subdir)
  • -s, --subdir: Save output in a subdirectory of the input path (mutually exclusive with --output_dir)
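The options above suggest an argparse layout along these lines. This is a sketch, not the script's actual code: option names and defaults are copied from the list above, while `build_parser` is an illustrative name.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # CLI sketch matching the Arguments and Options sections above.
    parser = argparse.ArgumentParser(
        description="Batch-refine PNG images with FLUX")
    parser.add_argument("path",
                        help="Input file or directory of PNG files")
    parser.add_argument("-a", "--acceleration",
                        choices=["alimama", "hyper"], default="alimama",
                        help="Acceleration LoRA to load")
    parser.add_argument("-p", "--prompt",
                        default="Very detailed, masterpiece quality",
                        help="Custom prompt")
    parser.add_argument("-r", "--redux", action="store_true",
                        help="Use redux instead of img2img")
    # -o and -s are mutually exclusive, so they share a group.
    group = parser.add_mutually_exclusive_group()
    group.add_argument("-o", "--output_dir",
                       help="Output directory")
    group.add_argument("-s", "--subdir", action="store_true",
                       help="Save output in a subdirectory of the input path")
    return parser

args = build_parser().parse_args(["images", "-a", "hyper", "-r"])
```

Passing both `-o` and `-s` would make argparse exit with a usage error, which matches the "mutually exclusive" note above.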

Examples

  1. Process a single image with default settings:

    python script.py path/to/image.png
  2. Process a directory of images with Hyper acceleration:

    python script.py path/to/directory -a hyper
  3. Process images with a custom prompt and specific output directory:

    python script.py path/to/directory -p "high quality, detailed" -o output/folder
  4. Use redux processing with Alimama acceleration:

    python script.py path/to/directory -r -a alimama

Processing Details

  • Images are processed one at a time with progress tracking
  • Default processing uses 25 inference steps (10 steps with acceleration)
  • Strength parameter is set to 0.20 for img2img and 1.0 for redux
  • Already processed images are skipped to avoid duplication
  • Output maintains original image dimensions
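The step and strength rules above condense to a small helper. This is a sketch with values taken from the bullets above; `sampling_params` is an illustrative name, not a function from the script.

```python
def sampling_params(acceleration: bool, redux: bool) -> tuple[int, float]:
    """Return (num_inference_steps, strength) per the processing rules above."""
    steps = 10 if acceleration else 25   # turbo/hyper LoRAs cut steps from 25 to 10
    strength = 1.0 if redux else 0.20    # redux runs at full strength, img2img at 0.20
    return steps, strength
```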

Memory Optimization

The script includes several optimizations:

  • Memory-efficient attention for SD-based models
  • BFloat16 precision
  • Automatic CPU offloading
  • Transformer and text encoder quantization
  • Model freezing for reduced memory usage

Error Handling

  • Skips already processed images
  • Provides error messages for failed processing attempts
  • Validates input paths and arguments
  • Continues processing remaining images if one fails
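The skip-and-continue behavior above can be sketched roughly as follows. This is illustrative only: `process_all` and the `refine` callable are hypothetical names standing in for the script's actual pipeline call.

```python
from pathlib import Path

def process_all(input_dir: str, output_dir: str, refine) -> list[str]:
    """Refine every PNG in input_dir, skipping already-processed files
    and continuing past failures. `refine(src, dst)` is a placeholder
    for the actual FLUX pipeline call."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    done = []
    for src in sorted(Path(input_dir).glob("*.png")):
        dst = out / src.name          # original file name is preserved
        if dst.exists():              # skip already-processed images
            continue
        try:
            refine(src, dst)
        except Exception as exc:      # report the error, keep going
            print(f"Failed on {src.name}: {exc}")
            continue
        done.append(src.name)
    return done
```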

Notes

  • Requires CUDA-capable GPU for optimal performance
  • Progress bars show both overall progress and per-image steps
  • Environment variables can be configured via .env file
  • Original file names are preserved in output
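As an illustration, a .env file might hold the Hugging Face token that huggingface-hub uses when downloading gated FLUX weights. The variable name below is an assumption, not confirmed by the script; check the source for the names it actually reads.

```
# Hypothetical example — the variable name depends on what the script reads
HF_TOKEN=your-huggingface-access-token
```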

Project Roadmap

  • Task 1: Support Refiner mode and Redux mode
  • Task 2: Implement memory optimization for Redux

Contributing

Contributing Guidelines
  1. Fork the Repository: Start by forking the project repository to your GitHub account.
  2. Clone Locally: Clone the forked repository to your local machine using a git client.
    git clone https://github.com/mamorett/TengraiRefiner
  3. Create a New Branch: Always work on a new branch, giving it a descriptive name.
    git checkout -b new-feature-x
  4. Make Your Changes: Develop and test your changes locally.
  5. Commit Your Changes: Commit with a clear message describing your updates.
    git commit -m 'Implemented new feature x.'
  6. Push to GitHub: Push the changes to your forked repository.
    git push origin new-feature-x
  7. Submit a Pull Request: Create a PR against the original project repository. Clearly describe the changes and their motivations.
  8. Review: Once your PR is reviewed and approved, it will be merged into the main branch. Congratulations on your contribution!
License

This project is licensed under the GNU GPLv3. For more details, refer to the LICENSE file.

