This project is a Python-based video creation pipeline that processes media files from directories and automatically generates videos by applying a sequence of tasks and effects. It automates video production by combining audio files, images, and other visual elements into complete videos, with support for slideshows, text overlays, audio visualizations, and more.
The project aims to be highly configurable: a YAML-based configuration file defines the sequence of tasks and the settings for each video, making the pipeline versatile for a wide range of video production needs.
- VideoFlowPy
- PyVideoPipeline
- MediaMagic
- ClipForge
- VidCraft
- AutoVidPy
- VidTransformer
- ClipSynth
- PythonVideoFactory
- VidMakerAutomation
- Configurable Pipeline: Set up your video production workflow via a YAML configuration file.
- Automated Task Execution: Perform tasks in sequence, such as creating slideshows, adding text overlays, merging clips, and more.
- Modular Converters: Each task (such as adding text or images, or exporting a video) is implemented as a converter, making the pipeline easy to extend or modify.
- Multi-Format Support: Supports a variety of image, audio, and video formats.
- Rich Logging: Uses Rich and Icecream for informative console output and logging, making debugging easier.
Clone the repository and install the dependencies:
```shell
git clone https://github.com/yourusername/VideoFlowPy.git
cd VideoFlowPy
pip install -r requirements.txt
```
Create a configuration file for your video pipeline in YAML format, or modify the provided `config.yaml` example.
Run the main script to process the directories and generate videos:

```shell
python main.py
```
The main configuration lives in the `config.yaml` file. You can specify the directories, the tasks, and the sequence of converters to apply to each video project. Here is a basic overview of the configuration:
- `tasks`: A list of tasks, each containing a set of converters applied in sequence.
- `converters`: Defines the converters (e.g., `AudioReaderConverter`, `SlideshowCreatorConverter`, etc.) and their settings for each task.
Refer to the example `config.yaml` for more detailed usage.
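In skeleton form, a task entry follows this shape (an abridged sketch; the full example at the end of this README shows every option):

```yaml
tasks:
  - name: "My Video"            # one task produces one output video
    converters:                 # applied in order, top to bottom
      - type: "AudioReaderConverter"
        config:
          start_time: 0
      - type: "VideoExportConverter"
        config:
          export:
            output_path: "output.mp4"
```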
- `AudioReaderConverter`: Reads audio from the directory.
- `SlideshowCreatorConverter`: Creates a slideshow from a set of images.
- `TextOverlayConverter`: Adds text with configurable position, color, and font settings.
- `ImageOverlayConverter`: Adds static or animated images on top of the video.
- `SplitConverter`: Splits the video into parts for parallel processing.
- `AudioVisualizationConverter`: Adds audio visualizations to the video.
- `JoinConverter`: Joins multiple video parts together.
- `VideoExportConverter`: Exports the final video with configurable quality and output settings.
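The converter pattern above can be sketched as a registry that maps each `type` string from the YAML to a class, then applies the configured converters in order. This is an illustrative sketch, not the project's actual API; the class and function names here are stand-ins.

```python
# Illustrative sketch of a converter-dispatch loop. Real converters would
# transform a MoviePy clip; this stand-in records what was applied so the
# sequencing is observable.

class TextOverlayConverter:
    """Stand-in converter: a real one would render text onto the clip."""
    def __init__(self, config=None):
        self.config = config or {}

    def apply(self, clip):
        # Append a marker instead of drawing, to keep the sketch runnable.
        return clip + [("text", self.config.get("text"))]

# Registry mapping the YAML "type" field to a converter class.
CONVERTER_REGISTRY = {
    "TextOverlayConverter": TextOverlayConverter,
}

def run_task(task):
    """Instantiate and apply each converter spec in sequence."""
    clip = []  # placeholder for the evolving video clip
    for spec in task["converters"]:
        converter = CONVERTER_REGISTRY[spec["type"]](spec.get("config"))
        clip = converter.apply(clip)
    return clip

task = {"converters": [
    {"type": "TextOverlayConverter", "config": {"text": "Hello"}},
]}
print(run_task(task))  # → [('text', 'Hello')]
```

A registry like this keeps the pipeline open for extension: adding a new effect means writing one class and one registry entry, with no change to the task loop.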
- MoviePy: For video processing.
- PyYAML: To handle the YAML configuration.
- Rich and Icecream: For enhanced console logging and debugging.
Install these by running:
```shell
pip install -r requirements.txt
```
Here is an example configuration that you can use as a starting point. This file (`config.yaml`) contains the tasks and settings for creating a full-length video and a short clip.
```yaml
tasks:
  - name: "Create Full Length Video"
    converters:
      - type: "AudioReaderConverter"
        config:
          start_time: 0
          end_time: null  # End of the audio
      - type: "SlideshowCreatorConverter"
        config:
          slideshow:
            height: 720  # Height the images are unified to
          transition:
            fade_in: 1.0   # Fade-in duration in seconds
            fade_out: 1.0  # Fade-out duration in seconds
      - type: "TextOverlayConverter"
        config:
          text: "Welcome to Our Video"
          position:
            x: "50%"  # Percentage: horizontal centering
            y: "90%"  # Percentage: near the bottom
          font:
            name: "Arial"
            size: 24
            color: "white"
          contour:
            color: "black"
            size: 2
          transition:
            fade_in: 0.5
            fade_out: 0.5
      - type: "ImageOverlayConverter"
        config:
          image:
            path: "overlay.png"
          position:
            x: "10pt"  # Pixels (e.g., '10pt') or percentage (e.g., '10%')
            y: "20pt"  # Pixels (e.g., '20pt') or percentage (e.g., '20%')
          timing:
            start_time: 5  # Seconds from the start
            end_time: 10   # Seconds from the start
      - type: "SplitConverter"
        config:
          parts: 3
      - type: "AudioVisualizationConverter"
        config:
          visualization:
            bar_count: 30
            height: 150
            palette: "COLORMAP_MAGMA"  # OpenCV colormap naming convention
      - type: "JoinConverter"
      - type: "AudioReaderConverter"
        config:
          start_time: 0
          end_time: null  # End of the audio; searches for an mp3 in the directory
      - type: "VideoExportConverter"
        config:
          export:
            output_path: "output.mp4"
            fps: 24
            codec: "libx264"
            quality_preset: "medium"
  - name: "Create YouTube Short"
    converters:
      - type: "AudioReaderConverter"
        config:
          start_time: 30
          end_time: 60  # Duration of 30 seconds
      - type: "SlideshowCreatorConverter"
        config:
          slideshow:
            height: 1080  # Height for the YouTube Short format
          transition:
            fade_in: 0.5
            fade_out: 0.5
      - type: "TextOverlayConverter"
        config:
          text: "Enjoy this Short Clip!"
          position:
            x: "50%"  # Percentage: horizontal centering
            y: "50%"  # Percentage: vertical centering
          font:
            name: "Arial"
            size: 32
            color: "yellow"
          contour:
            color: "black"
            size: 2
          transition:
            fade_in: 0.5
            fade_out: 0.5
      - type: "VideoExportConverter"
        config:
          export:
            output_path: "short_output.mp4"
            fps: 30
            codec: "libx264"
            quality_preset: "high"
```
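The overlay positions in the example accept either pixels ("10pt") or percentages ("50%"). A small helper along these lines (illustrative only; `resolve_position` is not the project's actual function) resolves such a value against a frame dimension:

```python
def resolve_position(value, frame_size):
    """Convert a position like '10pt' or '50%' into an absolute pixel offset.

    frame_size is the relevant frame dimension (width for x, height for y).
    """
    value = str(value).strip()
    if value.endswith("%"):
        # Percentage of the frame dimension, rounded to whole pixels.
        return round(frame_size * float(value[:-1]) / 100)
    if value.endswith("pt"):
        # Absolute pixel offset.
        return int(float(value[:-2]))
    # Bare numbers are treated as pixels.
    return int(float(value))

print(resolve_position("50%", 1280))   # → 640
print(resolve_position("10pt", 1280))  # → 10
```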