
Running Inference without GT #26

Closed
Saartjes opened this issue May 21, 2021 · 2 comments

Comments

@Saartjes

Your work looks amazing! I want to try running this on a dataset of my own to improve the quality of a video. Since this video is blurry to begin with, I do not have a non-blurry ground truth available. Is there any way to run the model without one?

@csbhr
Owner

csbhr commented May 21, 2021

You can copy the blurry video and use it as a placeholder for the GT. Then you can run the inference code.

In addition, if you want the whole video deblurred, you can duplicate the border frames so that this code also processes the frames at the borders.

For example:

blurry frames [1, 2, 3, 4, 5] -> deblurred frames [3]

copy the border frames:
blurry frames [1, 1, 1, 2, 3, 4, 5, 5, 5] -> deblurred frames [1, 2, 3, 4, 5]
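
A minimal sketch of this workflow, assuming a 5-frame temporal window and a `blur`/`gt` folder layout (the folder names, frame naming, and `prepare_inference_dir` helper are assumptions for illustration, not the repository's actual layout): copy the blurry frames as GT placeholders and replicate the first/last frames so the border frames also get deblurred.

```python
import shutil
from pathlib import Path

def prepare_inference_dir(blurry_dir, out_dir, pad=2):
    """Copy blurry frames as GT placeholders and replicate border frames.

    `pad` copies of the first/last frame are prepended/appended
    (pad=2 for a 5-frame window, since only the center frame is deblurred).
    """
    blurry_dir, out_dir = Path(blurry_dir), Path(out_dir)
    frames = sorted(blurry_dir.glob("*.png"))

    blur_out = out_dir / "blur"   # assumed input folder name
    gt_out = out_dir / "gt"       # placeholder GT folder
    blur_out.mkdir(parents=True, exist_ok=True)
    gt_out.mkdir(parents=True, exist_ok=True)

    # Replicate-pad the sequence: [1,2,3,4,5] -> [1,1,1,2,3,4,5,5,5] for pad=2
    padded = [frames[0]] * pad + frames + [frames[-1]] * pad

    for i, src in enumerate(padded):
        name = f"{i:08d}.png"
        shutil.copy(src, blur_out / name)  # blurry input frame
        shutil.copy(src, gt_out / name)    # same frame reused as GT placeholder

prepare_inference_dir("./my_video/blurry", "./my_video/for_inference", pad=2)
```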

@Etienne66

This is the same as issue #12, which I fixed in my fork.

As for the example, I usually do it a bit differently, to keep motion in the frames even if it is in the reverse direction.
blurry frames [3, 2, 1, 2, 3, 4, 5, 4, 3] -> deblurred frames [1, 2, 3, 4, 5]

I'm going to try to figure out how to do this automatically in inference.py but I haven't gotten around to it yet.
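
A minimal sketch of this reflection-style padding on frame indices (the helper below is a hypothetical illustration of how it could be automated, not the actual inference.py code):

```python
def reflect_pad_indices(num_frames, pad=2):
    """Build a reflection-padded index list so every frame gets a full window.

    For num_frames=5, pad=2 this returns [2, 1, 0, 1, 2, 3, 4, 3, 2] (0-based),
    i.e. blurry frames [3, 2, 1, 2, 3, 4, 5, 4, 3] in 1-based numbering.
    """
    idx = list(range(num_frames))
    return idx[pad:0:-1] + idx + idx[-2:-2 - pad:-1]

print(reflect_pad_indices(5, pad=2))  # -> [2, 1, 0, 1, 2, 3, 4, 3, 2]
```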
