Failure to track on zoomed-out videos #2029
Unanswered
milicicn212
asked this question in Help!
Replies: 1 comment 1 reply
-
Hi @milicicn212,

Sorry for the delay, I think I was the one who was responsible for support when you posted your question 😅

Can you give us some more information about your setup and what you've tried so far? What does the model configuration look like? Can you share some screenshots of the zoomed-out data? Are you including the zoomed-in videos in the same project as the zoomed-out ones? (You probably shouldn't -- I can foresee some problems if they're mixed.)

Messing with some hyperparams will probably be the move here, but let's start by figuring out where you're at :)

Cheers,
Talmo
-
Hi all, I've been using SLEAP to track swim bouts in individual larval fish, and it's generally been doing very well on videos containing a single bout, which are pretty zoomed in. However, when I've tried to predict on larger videos - still containing one fish, but zoomed out to show more of the tank so I can capture multiple bouts in sequence - SLEAP is completely unable to track the fish beyond a certain point. Any points that are visible are highly inaccurate, and the hidden points are scattered randomly, not approximating the "average" skeleton at all.
I was wondering if anyone has any information on why this is happening (maybe something to do with the receptive fields?) and what might be done to circumvent it. I was thinking just adding a few videos at a larger zoom into the dataset might fix it, or maybe messing with some hyperparameters. Thanks so much for any help you can provide, and let me know if you need more information!
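On the receptive-field hunch: a fixed receptive field (in pixels) covers a much smaller fraction of the frame's context when the fish shrinks in a zoomed-out view, so it's worth estimating how large that field actually is. Below is a minimal sketch of the standard receptive-field arithmetic for a stack of conv/pool layers; the specific layer stack is a made-up example (generic 3x3 convs with 2x2 max-pooling), not SLEAP's actual backbone, so substitute your own model's layers.

```python
# Rough receptive-field calculator for a convolutional encoder.
# Each layer is a (kernel_size, stride) pair, applied in order.
# NOTE: the example layer stack below is an illustrative assumption,
# not the architecture SLEAP actually uses.

def receptive_field(layers):
    """Return the receptive field (in input pixels) of the final layer."""
    rf, jump = 1, 1  # jump = cumulative stride seen so far
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the field by (k-1) input strides
        jump *= s
    return rf

# Example: four blocks of [conv3x3, conv3x3, maxpool2x2]
blocks = [(3, 1), (3, 1), (2, 2)] * 4
print(receptive_field(blocks))  # -> 76
```

If the fish spans, say, 200 px in the zoomed-in videos but only 40 px after zooming out, a ~76 px field goes from covering part of the animal to covering the animal plus a lot of background, which changes what the features look like; that's consistent with both adding zoomed-out training frames and adjusting scale-related hyperparameters being plausible fixes.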