add LFMMI loss #1725
Conversation
wenet/transformer/asr_model.py
Outdated
@@ -89,6 +94,9 @@ def forward(
        text: (Batch, Length)
        text_lengths: (Batch,)
        """
        if self.lfmmi_dir != '':
Why load it in forward? I think we should load it in construction.
Then there is no need to use hasattr() when loading the resource.
Why load it in forward? I think we should load it in construction.
Because I need to decorate it with torch.jit.ignore, and I don't want to decorate the whole forward function with torch.jit.ignore.
We have now moved load_mmi_resource into __init__ and removed the hasattr() check.
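A minimal sketch of the pattern discussed above (class and method names here are hypothetical, not wenet's actual code): the loader is decorated with torch.jit.ignore(drop=True) and called from __init__, so torch.jit.script() drops the non-scriptable k2-dependent helper instead of failing on it, and forward() needs no hasattr() check.

```python
import torch


class AsrModelSketch(torch.nn.Module):
    """Hypothetical sketch: load LF-MMI resources in __init__, not forward."""

    def __init__(self, lfmmi_dir: str = ''):
        super().__init__()
        self.lfmmi_dir = lfmmi_dir
        if lfmmi_dir != '':
            # Runs normally in eager mode; dropped by torch.jit.script().
            self.load_lfmmi_resource()

    @torch.jit.ignore(drop=True)
    def load_lfmmi_resource(self):
        # Real code would build k2 graphs from files under self.lfmmi_dir;
        # this placeholder just records that loading happened.
        self.lfmmi_loaded = True
```

With drop=True, scripting replaces the decorated method with a stub that raises if called, which is safe here because the scripted (runtime) model never reaches the LF-MMI path.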
    @torch.jit.ignore(drop=True)
    def _calc_lfmmi_loss(self, encoder_out, encoder_mask, text):
        ctc_probs = self.ctc.log_softmax(encoder_out)
        supervision_segments = torch.stack(
Why should we move it to the CPU?
k2 requires supervision_segments to be on the CPU.
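A sketch of what this looks like in practice (the helper name and mask shape are assumptions for illustration, not wenet's exact code): k2 expects supervision_segments as an int32 CPU tensor with one row per utterance, [utterance_index, start_frame, num_frames], even when the encoder output stays on the GPU.

```python
import torch


def make_supervision_segments(encoder_mask: torch.Tensor) -> torch.Tensor:
    """Build k2-style supervision segments from an encoder frame mask.

    encoder_mask: (batch, 1, time) boolean mask of valid frames.
    Returns an int32 CPU tensor of shape (batch, 3).
    """
    num_frames = encoder_mask.squeeze(1).sum(dim=1)  # valid frames per utt
    batch = num_frames.size(0)
    segments = torch.stack(
        [
            torch.arange(batch, dtype=torch.int64),  # utterance index
            torch.zeros(batch, dtype=torch.int64),   # start frame
            num_frames,                              # number of frames
        ],
        dim=1,
    ).to(torch.int32)
    return segments.cpu()  # k2 requires this tensor on the CPU
```

Only this small metadata tensor moves to the CPU; the log-probabilities themselves remain on the GPU for the k2 loss computation.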
@aluminumbox Can LF-MMI currently be used in runtime inference?
add LFMMI loss, add torch.jit.ignore for lfmmi function