
Implement a finetuning feature #73

Merged
borauyar merged 14 commits into main from finetuning on May 1, 2024
Conversation

@borauyar (Member) commented on May 1, 2024
The user can decide to use a portion of the test dataset to fine-tune a model trained on the training dataset.
The user can set `--finetuning_samples` to the number of test samples to be used for fine-tuning.
A 5-fold cross-validation scheme is run on these samples to test different learning rates and model-parameter freezing strategies (freezing the encoders, the supervisors, or neither) and find the best setup. A final model is then fine-tuned on the finetuning samples with that setup, and the fine-tuned model is evaluated on the remaining test samples. A minimal sketch of this flow is shown below.
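The following is a minimal sketch of the scheme described above, not the implementation merged in this PR: the model class, helper functions, and hyperparameter grid are hypothetical placeholders. Only the overall flow follows the description (split the test set according to `--finetuning_samples`, run 5-fold CV over learning rates and freezing strategies, fine-tune a final model on the fine-tuning samples, evaluate on the held-out remainder).

```python
# Hedged sketch of the fine-tuning scheme; names below are illustrative only.
import copy
import itertools

import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import KFold


class ToyModel(nn.Module):
    """Stand-in for a pretrained model with an encoder and a supervisor head."""
    def __init__(self, in_dim=20, hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.supervisor = nn.Linear(hidden, 1)

    def forward(self, x):
        return self.supervisor(self.encoder(x))


def apply_freezing(model, strategy):
    """Freeze the encoder, the supervisor head, or neither."""
    for p in model.parameters():
        p.requires_grad = True
    if strategy == "freeze_encoder":
        for p in model.encoder.parameters():
            p.requires_grad = False
    elif strategy == "freeze_supervisor":
        for p in model.supervisor.parameters():
            p.requires_grad = False


def finetune(model, x, y, lr, epochs=50):
    """Fine-tune only the parameters left trainable by the freezing strategy."""
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model


def evaluate(model, x, y):
    with torch.no_grad():
        return nn.functional.mse_loss(model(x), y).item()


# Pretend this model was already trained on the training dataset.
pretrained = ToyModel()

# Split the test set: the first `finetuning_samples` rows are used for
# fine-tuning (mirroring --finetuning_samples), the rest for final evaluation.
x_test, y_test = torch.randn(200, 20), torch.randn(200, 1)
finetuning_samples = 50
x_ft, y_ft = x_test[:finetuning_samples], y_test[:finetuning_samples]
x_hold, y_hold = x_test[finetuning_samples:], y_test[finetuning_samples:]

# 5-fold CV over learning rates x freezing strategies on the fine-tuning subset.
grid = list(itertools.product([1e-2, 1e-3, 1e-4],
                              ["freeze_encoder", "freeze_supervisor", "none"]))
best_cfg, best_score = None, np.inf
for lr, strategy in grid:
    fold_scores = []
    for tr_idx, va_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(x_ft):
        model = copy.deepcopy(pretrained)
        apply_freezing(model, strategy)
        finetune(model, x_ft[tr_idx], y_ft[tr_idx], lr)
        fold_scores.append(evaluate(model, x_ft[va_idx], y_ft[va_idx]))
    if np.mean(fold_scores) < best_score:
        best_cfg, best_score = (lr, strategy), np.mean(fold_scores)

# Fine-tune a final model on all fine-tuning samples with the best setup,
# then evaluate it on the remaining test samples.
lr, strategy = best_cfg
final = copy.deepcopy(pretrained)
apply_freezing(final, strategy)
finetune(final, x_ft, y_ft, lr)
print("best config:", best_cfg, "held-out MSE:", evaluate(final, x_hold, y_hold))
```

Deep-copying the pretrained model for every fold keeps the CV folds independent, so the selected learning rate and freezing strategy reflect fine-tuning from the same starting point that the final model uses.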

borauyar merged commit 33f28f5 into main on May 1, 2024
borauyar deleted the finetuning branch on December 31, 2024