DIVIDEND - Dynamic Integration for Video Improvement and Digital Eradication of Non-Desired Data
Read the description below for an overview of its functionality.
- Automated Watermark Detection
- High-Resolution Output
- Temporal Consistency
- Plug-and-Play Integration
- Scalable and Robust
DIVIDEND/
│
├── datasets/                 # Training and testing datasets
│   ├── train/
│   └── test/
│
├── models/                   # Model architectures
│   ├── unet_attention.py     # U-Net with attention mechanism
│   ├── discriminator.py      # GAN-based discriminator
│   └── temporal_network.py   # Temporal consistency model (3D CNN)
│
├── scripts/                  # Training and utility scripts
│   ├── train.py              # Training script
│   ├── test.py               # Testing script
│   └── utils.py              # Utility functions (data loading, pre-processing)
│
├── checkpoints/              # Saved model weights
├── results/                  # Output video frames
├── requirements.txt          # Python dependencies
└── README.md                 # Project documentation
Clone the repository and install the Python dependencies:
git clone https://github.com/venusarathy/dividend.git
cd dividend
pip install -r requirements.txt
- Place your watermarked and non-watermarked video frames in the datasets/ directory.
- Ensure the dataset is properly structured into train/ and test/ directories (one way to load such frame pairs is sketched below).
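To get oriented, the sketch below shows one way paired frames could be loaded with PyTorch. The watermarked/ and clean/ subfolder names, and the assumption of matching filenames, are illustrative only and not part of this repository; the project's actual data-loading utilities live in scripts/utils.py.

```python
# Minimal sketch of a paired-frame dataset (assumed layout; adjust to your structure).
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedFrameDataset(Dataset):
    """Loads (watermarked, clean) frame pairs with matching filenames."""

    def __init__(self, root="datasets/train", size=256):
        self.wm_dir = os.path.join(root, "watermarked")  # assumed subfolder name
        self.clean_dir = os.path.join(root, "clean")     # assumed subfolder name
        self.names = sorted(os.listdir(self.wm_dir))
        self.tf = transforms.Compose([
            transforms.Resize((size, size)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        wm = self.tf(Image.open(os.path.join(self.wm_dir, name)).convert("RGB"))
        clean = self.tf(Image.open(os.path.join(self.clean_dir, name)).convert("RGB"))
        return wm, clean
```

A torch.utils.data.DataLoader wrapped around this dataset then yields (watermarked, clean) batches for training.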
Train the model:
python scripts/train.py

Test the trained model:
python scripts/test.py
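Output frames end up in results/ (see the project structure above). If you want to turn those frames back into a playable video, a short OpenCV snippet such as the following is enough; the *.png pattern, output filename, and 30 fps frame rate are assumptions to adjust for your data, and opencv-python must be installed.

```python
# Reassemble restored frames from results/ into a video (illustrative OpenCV sketch).
import glob
import cv2

frames = sorted(glob.glob("results/*.png"))  # adjust the pattern to your frame format
first = cv2.imread(frames[0])
height, width = first.shape[:2]

writer = cv2.VideoWriter(
    "restored.mp4",                          # assumed output filename
    cv2.VideoWriter_fourcc(*"mp4v"),
    30.0,                                    # assumed frame rate; match your source video
    (width, height),
)
for path in frames:
    writer.write(cv2.imread(path))
writer.release()
```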
- U-Net with Attention (unet_attention.py): detects and removes watermarks, using an attention mechanism to focus on watermark regions (see the attention-gate sketch below).
- Temporal Consistency Network (temporal_network.py): ensures smooth transitions between video frames using 3D convolutions (see the temporal-block sketch below).
- Discriminator (discriminator.py): used in the GAN architecture to enhance the realism of generated frames during adversarial training (see the discriminator sketch below).
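To make the attention idea concrete, here is a minimal sketch of an additive attention gate of the kind typically placed on U-Net skip connections. It is a generic illustration under assumed layer sizes, not the actual contents of unet_attention.py.

```python
# Generic additive attention gate for a U-Net skip connection (illustrative).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Weights encoder skip features by a mask computed from the decoder signal."""

    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)  # project skip features
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)    # project gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)          # collapse to a 1-channel mask
        self.act = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, skip, gate):
        # skip: encoder features (B, skip_ch, H, W); gate: decoder features at the same H, W.
        attn = self.sigmoid(self.psi(self.act(self.theta(skip) + self.phi(gate))))
        return skip * attn  # emphasize watermark regions, attenuate the rest

# Example: gate a 64-channel skip with a 128-channel decoder signal (assumed sizes).
skip = torch.randn(1, 64, 128, 128)
gate = torch.randn(1, 128, 128, 128)
out = AttentionGate(64, 128, 32)(skip, gate)  # -> (1, 64, 128, 128)
```

In an attention U-Net, the decoder then concatenates the gated skip with its upsampled features as usual.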
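The temporal consistency idea can be sketched the same way: a residual block of 3D convolutions that mixes information across a short window of frames. Again, channel counts and window length are assumptions, not the code in temporal_network.py.

```python
# Generic 3D-convolutional temporal smoothing block (illustrative).
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """Mixes information across neighbouring frames to reduce flicker."""

    def __init__(self, channels=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, hidden, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(hidden, channels, kernel_size=(3, 3, 3), padding=1),
        )

    def forward(self, clip):
        # clip: (B, C, T, H, W), a short window of consecutive frames.
        return clip + self.net(clip)  # residual connection keeps per-frame detail

clip = torch.randn(1, 3, 5, 128, 128)  # batch of one 5-frame window (assumed size)
smoothed = TemporalBlock()(clip)       # -> (1, 3, 5, 128, 128)
```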
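For the adversarial part, a common choice is a PatchGAN-style discriminator that classifies local patches rather than the whole frame. The sketch below shows that pattern under assumed channel sizes and should not be read as the exact network in discriminator.py.

```python
# PatchGAN-style discriminator sketch (illustrative, not the repository's exact network).
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Outputs a grid of real/fake scores, one per image patch."""

    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1),
            nn.BatchNorm2d(base * 4),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 4, 1, 4, stride=1, padding=1),  # per-patch logits
        )

    def forward(self, x):
        return self.net(x)

frame = torch.randn(1, 3, 256, 256)
scores = PatchDiscriminator()(frame)  # -> (1, 1, 31, 31) grid of patch logits
```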
- Contributions are welcome! Please submit a pull request or open an issue for any bugs or feature requests.
- See you on the pull request tab :)
This project is licensed under the MIT License. See the LICENSE file for details.
Venu Sarathy
GitHub Profile
Special thanks to all contributors and supporters.