Hi,
Thank you for your incredible work and for sharing it!
I was wondering if there is any guide or material available on how to perform DPO alignment on the Phi 3.5-Vision model?
Any resources or pointers would be greatly appreciated.
Thanks!
Hi, we haven't created anything specific yet, but have a look at https://github.com/NiuTrans/Vision-LLM-Alignment. It is a great example to follow, and its tutorials offer step-by-step guidance and practical tips.
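For a concrete starting point, here is a minimal sketch of what a DPO run on Phi-3.5-Vision could look like with Hugging Face trl's DPOTrainer. This is not an official recipe: the dataset, column names, and hyperparameters are placeholders, trl's argument names have changed across releases (e.g. processing_class vs. tokenizer), and multimodal support in DPOTrainer depends on the trl version you use, so treat it as a rough outline and adapt it to the Vision-LLM-Alignment scripts.

```python
# Minimal sketch (assumptions: recent transformers + trl with DPOConfig;
# multimodal support in DPOTrainer varies by trl release, so the toy
# dataset below is text-only; image-conditioned pairs need extra setup).
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoProcessor
from trl import DPOConfig, DPOTrainer

model_id = "microsoft/Phi-3.5-vision-instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype="auto",
    _attn_implementation="eager",  # use "flash_attention_2" if flash-attn is installed
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Placeholder preference pairs: prompt plus a preferred ("chosen") and a
# dispreferred ("rejected") response. Replace with your own data.
train_dataset = Dataset.from_dict({
    "prompt":   ["Describe the image in one sentence."],
    "chosen":   ["A detailed, accurate one-sentence description."],
    "rejected": ["A vague or hallucinated description."],
})

training_args = DPOConfig(
    output_dir="phi35v-dpo",        # hypothetical output directory
    beta=0.1,                       # DPO temperature / implicit-reward scale
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,                 # None lets trl create a frozen reference copy
    args=training_args,
    train_dataset=train_dataset,
    processing_class=processor.tokenizer,  # older trl versions use tokenizer= instead
)
trainer.train()
```

In practice you would also want LoRA/QLoRA to keep memory manageable and real image-conditioned preference pairs; the Vision-LLM-Alignment repo linked above is a better reference for that multimodal data plumbing.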
Experiment and Iterate: DPO alignment can be complex, so don't hesitate to experiment with different approaches and parameters. Keep track of your experiments and iterate based on the results to find the most effective alignment strategy.
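To make the main knob concrete: beta in DPO scales the implicit reward and controls how far the policy may drift from the frozen reference model. Below is a self-contained PyTorch sketch of the standard DPO objective with made-up log-probabilities; it is only meant to illustrate what the trainer is optimizing, not to reproduce any particular repo's implementation.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over a batch of (chosen, rejected) preference pairs.

    Each *_logps tensor holds the summed token log-probabilities of a
    response under the policy or the frozen reference model.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected implicit rewards;
    # a larger beta penalizes drifting away from the reference more strongly.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy batch of two preference pairs with fabricated log-probabilities.
loss = dpo_loss(
    policy_chosen_logps=torch.tensor([-12.0, -9.5]),
    policy_rejected_logps=torch.tensor([-14.0, -10.0]),
    ref_chosen_logps=torch.tensor([-12.5, -9.8]),
    ref_rejected_logps=torch.tensor([-13.5, -10.1]),
    beta=0.1,
)
print(loss.item())  # ~0.66 for these numbers
```

Sweeping beta (and the learning rate) is usually the first experiment worth running, since it directly trades off preference fit against staying close to the reference model.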
The field of machine learning is constantly evolving. Stay updated with the latest research papers, blog posts, and updates from the community so you can incorporate new techniques and improvements into your alignment process, and please share your results!