chore: remove fine tuning support #355
Conversation
Old chats might still be calling these functions with fine_tune_id. Wouldn't it be better to stay backward compatible: check if fine_tune_id is set, print a warning saying the argument is deprecated, and add deployment_id as a separate argument?
Agree. We should probably not modify the |
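A minimal sketch of the deprecation pattern suggested above. Only the names fine_tune_id and deployment_id come from this PR; the generate() function, its signature, and the mapping between the two arguments are illustrative assumptions, not the actual vision-agent API:

```python
import warnings
from typing import Optional


def generate(prompt: str, deployment_id: Optional[str] = None,
             fine_tune_id: Optional[str] = None) -> str:
    # Backward compatibility: accept the old keyword, emit a deprecation
    # warning, and map it onto the new argument instead of dropping it.
    if fine_tune_id is not None:
        warnings.warn(
            "fine_tune_id is deprecated; use deployment_id instead",
            DeprecationWarning,
            stacklevel=2,
        )
        if deployment_id is None:
            deployment_id = fine_tune_id
    ...  # rest of the call would use deployment_id only
```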
Can you add a test to make sure that fine_tune_id is still accessible?
Check that fine_tune_id is still accessible in all the tests that used to call the fine-tune logic, not only one.
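A rough pytest sketch of the kind of check being asked for, assuming the hypothetical generate() signature from the earlier comment; the function name and warning class are assumptions, not confirmed vision-agent behavior:

```python
import pytest

# generate() here refers to the hypothetical deprecation sketch above.


def test_fine_tune_id_still_accepted():
    # Passing the old keyword should still work and only emit a
    # DeprecationWarning, not a TypeError.
    with pytest.warns(DeprecationWarning):
        generate("count the cars in the image", fine_tune_id="ft-123")
```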
I don't understand these changes very well. We are removing the code that processes fine_tune_id but not removing the fine_tune_id parameter from the function signatures, which means the code will still receive the parameter and raise a deprecation warning, but never actually use it for anything. I don't think that makes sense.
Why are we concerned with keeping this change backwards compatible? We can bump a major version if we want to, and people using vision-agent can decide whether or not to upgrade.
Hi @Dayof @CamiloInx |
LGTM
We are discontinuing support for fine-tuning, as va.landing.ai no longer offers this feature. However, we will continue to support custom object detection model training on LandingLens (app.landing.ai).
Warning
This is a breaking change for old chats that used fine-tuned models.