March Binary Update #565
Conversation
@martindevans It's working OK on macOS.
Resolves #531
Shall we add a new release when merging this PR? Some new features are included in this llama.cpp version.
I don't want to do a release with this PR, since there will hopefully be some LLaVA bits merged in after this. Definitely shortly after, though.
Used this branch to run my local projects on both the desktop (Windows, CUDA 12) and laptop (Windows, AVX2) and found no issues. |
Thanks for testing that @m0nsky |
Updated binaries to llama.cpp 3ab8b3a92ede46df88bc5a2dfca3777de4a2b2b6. Build: https://github.com/SciSharp/LLamaSharp/actions/runs/8118890586. Does not yet include Vulkan binaries.
Testing required: