Trained for 1000 epochs without fine-tuning; validation results: precision 0.05, recall 0.02, hmean 0.03, AP 0. What could be the problem? #98
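For context (not part of the original thread): "hmean" here is presumably the harmonic mean (F-measure) of precision and recall, in which case the reported numbers are at least internally consistent. A quick check under that assumption:

```python
# Harmonic mean (F1) of precision and recall -- assumed to be what the
# evaluation script reports as "hmean".
precision, recall = 0.05, 0.02
hmean = 2 * precision * recall / (precision + recall)
print(round(hmean, 2))  # 0.03, matching the reported value
```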
Comments
Training end-to-end (e2e) for 1000 epochs without fine-tuning may be a bit difficult.
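A minimal sketch of the alternative being suggested, i.e. fine-tuning from a pretrained checkpoint rather than training end-to-end from scratch. This is not the repo's actual training code: the model is a stand-in `nn.Module` and the checkpoint filename is a placeholder.

```python
# Sketch: load pretrained weights (e.g. from SynthText pretraining) and
# fine-tune, instead of training end-to-end from random initialization.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())  # stand-in network

ckpt = torch.load("synthtext_pretrain.pth", map_location="cpu")   # hypothetical path
state = ckpt.get("state_dict", ckpt)                              # handle either checkpoint layout
model.load_state_dict(state, strict=False)                        # skip mismatched keys

# Fine-tune with a smaller learning rate than from-scratch training.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
```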
Roughly how many epochs does it take to get decent results without fine-tuning? I saw in the README that 1000 epochs without fine-tuning still gave reasonable results.
The model I trained recently for 1000 epochs without fine-tuning performed very poorly.
My results were about the same. When will the model pretrained on SynthText 800k be released? Also, I'd like to ask: are end-to-end algorithms like this unsuitable for Chinese OCR?
I ran a TensorFlow reimplementation of FOTS and the results were also unsatisfactory; the detection branch never trained well.