
Request for Guide/Material on DPO Alignment for Phi3.5-Vision Model #238

Closed
ladanisavan opened this issue Dec 18, 2024 · 1 comment

@ladanisavan

Hi,

Thank you for your incredible work and for sharing it!

I was wondering if there is any guide or material available on how to perform DPO alignment on the Phi 3.5-Vision model?
Any resources or pointers would be greatly appreciated.

Thanks!

@leestott
Contributor

leestott commented Jan 8, 2025

Hi, we haven't created anything specific yet, but have a look at https://github.com/NiuTrans/Vision-LLM-Alignment. It's a great sample to follow, and its tutorials offer step-by-step guidance and practical tips.
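For orientation while working through those tutorials, the heart of DPO is a simple preference loss over log-probabilities from the policy being trained and a frozen reference model. Below is a minimal, framework-agnostic sketch of that loss for a single preference pair; the function name and scalar per-pair form are illustrative, not taken from the linked repo or from any Phi-3.5-Vision code.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the total log-probability of the chosen or
    rejected response under the trained policy or the frozen
    reference model. Lower loss means the policy prefers the chosen
    response (relative to the reference) more than the rejected one.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(logits)), written as log1p(exp(-logits)) for clarity
    return math.log1p(math.exp(-logits))

# Example: the policy already favors the chosen response relative to
# the reference, so the loss falls below log(2) (the value at logits=0).
loss = dpo_loss(-10.0, -12.0, -11.0, -11.0, beta=0.1)
```

In a real vision-language setup the log-probabilities would be summed over response tokens conditioned on both the image and the text prompt, and `beta` is one of the key parameters worth sweeping during the experimentation suggested below.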

Experiment and Iterate: DPO alignment can be complex, so don't hesitate to experiment with different approaches and parameters. Keep track of your experiments and iterate based on the results to find the most effective alignment strategy.

The field of machine learning is constantly evolving. Stay updated with the latest research papers, blog posts, and community updates so you can incorporate new techniques and improvements into your alignment process, and please share your results.

@leestott leestott closed this as completed Jan 8, 2025