Clarification on clip_length Parameter for Reproducing EK100 Action Recognition Results #15
Dear @Geneam, I know this is a bit of an unrelated question, but I was curious whether you ran into a problem building the decord module during installation of the environment. In my case, I got an error related to AVBSFContext from ffmpeg_common.h:187:5. If you encountered this problem, how did you solve it? Looking forward to your response.
Sorry for the late reply. I remember installing …
Hi @Geneam sorry for the late reply.
The clip length is always 16 when we measure the fine-tuned classification accuracy. A shorter clip length (e.g. 4) will likely perform worse, but a 7% drop sounds like a bit much. Could you try 16 instead?
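For context, `clip_length` controls how many frames are sampled per clip at evaluation time. Below is a minimal, illustrative sketch of uniform temporal sampling; the function name and the exact sampling scheme are assumptions for illustration, not the repository's actual code:

```python
def sample_frame_indices(num_frames: int, clip_length: int = 16) -> list[int]:
    """Uniformly sample `clip_length` frame indices from a video of `num_frames` frames.

    Illustrative only: the real codebase may use a different sampling scheme.
    """
    # Evenly spaced indices across the video. With clip_length=4 the model
    # sees far less temporal context than with clip_length=16, which is why
    # accuracy can drop noticeably at shorter clip lengths.
    return [int(i * num_frames / clip_length) for i in range(clip_length)]

# Example: a 64-frame video sampled at clip_length=4 vs. clip_length=16
print(sample_frame_indices(64, 4))   # [0, 16, 32, 48]
print(sample_frame_indices(64, 16))  # [0, 4, 8, ..., 60]
```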
Might be relevant to #15 (comment). Can you try ffmpeg 4? There is also a new PR to support ffmpeg>5.0. I haven't had time to test it yet, but it might be worth trying.
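One common way to try ffmpeg 4, sketched below for a conda environment. The version pin and build steps are illustrative assumptions (following decord's standard from-source build); adjust them to your setup:

```shell
# Pin ffmpeg to a 4.x build before compiling decord (version is an example).
conda install -c conda-forge "ffmpeg=4.2"

# Rebuild decord from source so it links against the pinned ffmpeg.
git clone --recursive https://github.com/dmlc/decord
cd decord && mkdir build && cd build
cmake .. -DUSE_CUDA=0 -DCMAKE_BUILD_TYPE=Release
make -j"$(nproc)"
cd ../python && pip install .
```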
Dear Author,

I would like to ask: when reproducing the action recognition results for EK100, should the parameter `clip_length` be set to 16? I used the evaluation command from model.md, and the results I obtained were about 7% lower than reported. In that command, the `clip_length` parameter seems to be set to 4.

Could you please clarify the value of `clip_length` used in the reported results? Additionally, does this parameter significantly impact the model's performance? Looking forward to your response.