Your work looks amazing! I want to try to run this on a dataset of my own to improve the quality of a video. As this video is blurry on its own, I do not have a non-blurry ground truth available. Is there any way to run the model without one?
This is the same as issue #12, which I fixed in my fork.
As for the example: I usually do it a bit differently, to keep motion in the frames even if it is in the reverse direction.
blurry frames [3, 2, 1, 2, 3, 4, 5, 4, 3] -> deblurred frames [1, 2, 3, 4, 5]
I'm going to try to figure out how to do this automatically in inference.py but I haven't gotten around to it yet.
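Here is a minimal sketch of how that mirrored frame ordering could be generated automatically. This is not from the repo's inference.py; the function name `mirror_pad_indices` is hypothetical, and the pad size of 2 is an assumption about the model's temporal window.

```python
def mirror_pad_indices(frame_indices, pad=2):
    """Reflect-pad a list of frame indices at both ends, e.g.
    [1, 2, 3, 4, 5] with pad=2 -> [3, 2, 1, 2, 3, 4, 5, 4, 3]."""
    frame_indices = list(frame_indices)
    # Mirror the first `pad` frames after the start (excluding the first frame itself).
    head = list(reversed(frame_indices[1:pad + 1]))
    # Mirror the last `pad` frames before the end (excluding the last frame itself).
    tail = list(reversed(frame_indices[-pad - 1:-1]))
    return head + frame_indices + tail

if __name__ == "__main__":
    print(mirror_pad_indices([1, 2, 3, 4, 5]))  # [3, 2, 1, 2, 3, 4, 5, 4, 3]
```

The idea is that the padded list gives every output frame a full temporal window of inputs, while the reflection keeps real (if reversed) motion at the sequence boundaries instead of repeating a static frame.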