llama.cpp 2950 (New formula) #172186
Conversation
Adds a formula for compiling llama.cpp from a GitHub release. We expose both the CLI and the server through this formula. The formula is compliant with the strict brew audit.
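For orientation, here is a minimal sketch of what a formula along these lines could look like. It is not the PR's actual code: the release tag, sha256, build system, and installed binary names are all placeholders.

```ruby
# Hypothetical sketch only; the tag, sha256, and binary names are placeholders.
class LlamaCpp < Formula
  desc "LLM inference in C/C++"
  homepage "https://github.com/ggerganov/llama.cpp"
  url "https://github.com/ggerganov/llama.cpp/archive/refs/tags/b2950.tar.gz"
  sha256 "0000000000000000000000000000000000000000000000000000000000000000"
  license "MIT"

  depends_on "cmake" => :build

  def install
    system "cmake", "-S", ".", "-B", "build", *std_cmake_args
    system "cmake", "--build", "build"
    # Expose both the CLI and the server, as described above.
    bin.install "build/bin/main" => "llama"
    bin.install "build/bin/server" => "llama-server"
  end
end
```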
Thanks for contributing to Homebrew! 🎉 It looks like you're having trouble with a CI failure. See our contribution guide for help. You may be most interested in the section on dealing with CI failures. You can find the CI logs in the Checks tab of your pull request.
Relates to #169894
Ah, that's my bad! Happy to close this in favor of the other one (although I haven't seen much activity on that PR). What's the best way forward in this case?
Updated the PR with the following things:
The test ran perfectly for me offline; I'm looking into why it fails on the CI.
llama.cpp 2960 (New Formula) Fix test. Adds a more comprehensive test which loads a model and then checks whether the generation happened at all.
llama.cpp 2960 (New Formula) Add llama.cpp to autobump. Adding the update to the autobump file, as I forgot to add it earlier.
llama.cpp 2960 (New Formula) Add curl dependency.
llama.cpp 2960 (New Formula) Fix the order of statements.
llama.cpp 2960 (New Formula) Fix the failing test.
llama.cpp 2960 (New Formula) Fix the failing test. x2
llama.cpp 2960 (New Formula) Fix the failing test. x3
llama.cpp 2960 (New Formula) Fix style issues.
llama.cpp 2960 (New Formula) Fix order.
llama.cpp 2960 (New Formula) All tests pass locally. All style checks and the actual test cases pass locally now.
llama.cpp 2960 (New Formula) Restrict the macOS version. All style checks and the actual test cases pass locally now.
llama.cpp 2960 (New Formula) Fix Xcode issues.
llama.cpp 2960 (New Formula) Fix style issues.
35ca770 to dd9ad47
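The "Fix test" commit above describes a more comprehensive test that loads a model and checks that generation actually happened. As a rough illustration only (the model URL, binary name, prompt, and flags below are assumptions, not taken from this PR), such a test block might look like:

```ruby
# Illustrative sketch; the model URL, binary name, and flags are assumptions.
test do
  model = testpath/"tiny.gguf"
  # Fetch a very small GGUF model so the test stays fast on CI.
  system "curl", "-L", "-o", model, "https://example.com/models/tiny.gguf"

  # Run a short generation and check that the binary actually produced output
  # for the prompt instead of exiting silently.
  output = shell_output("#{bin}/llama -m #{model} -p 'Hello' -n 8 2>&1")
  assert_match "Hello", output
end
```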
Hi @SMillerDev, all tests pass now. Sorry it took ages; I eventually found out that the runners on GitHub CI don't work well with Metal. I think the PR is good to merge if you don't have any other questions! 🚀
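If Metal is what trips up the hosted runners, one common workaround in a test like the sketch above (an assumption about how this PR solved it, not something read from its diff) is to force CPU-only inference by disabling GPU offload:

```ruby
# Assumption: -ngl 0 (--n-gpu-layers 0) keeps every layer on the CPU, so the test
# does not depend on Metal being usable on the GitHub-hosted runner.
output = shell_output("#{bin}/llama -m #{model} -p 'Hello' -n 8 -ngl 0 2>&1")
```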
Co-authored-by: Sean Molenaar <[email protected]>
Brilliant! Thanks for the suggestions. I've applied them.
Co-authored-by: Sean Molenaar <[email protected]>
Co-authored-by: Sean Molenaar <[email protected]>
Thanks for the suggestions again! 🤗 I've re-requested a review.
depends_on xcode: ["15.0", :build]
depends_on arch: :arm64
depends_on macos: :ventura
Where are these requirements documented?
depends_on xcode: ["15.0", :build]
depends_on arch: :arm64
depends_on macos: :ventura
depends_on :macos
depends_on :macos
This doesn't seem correct.
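One plausible reading of this note (an assumption on my part, since the reviewer doesn't elaborate): the formula already declares a versioned macOS requirement, so the bare depends_on :macos looks redundant and the simplest fix would be to drop it, keeping only:

```ruby
# Assumed resolution: a single macOS requirement instead of both a versioned
# requirement and a bare one.
depends_on arch: :arm64
depends_on macos: :ventura
```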
Checklist:
- Have you built your formula locally with HOMEBREW_NO_INSTALL_FROM_API=1 brew install --build-from-source <formula>, where <formula> is the name of the formula you're submitting?
- Does your test pass with brew test <formula>, where <formula> is the name of the formula you're submitting?
- Does your build pass brew audit --strict <formula> (after doing HOMEBREW_NO_INSTALL_FROM_API=1 brew install --build-from-source <formula>)? If this is a new formula, does it pass brew audit --new <formula>?
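As a concrete example, substituting this PR's formula name for <formula> (assuming the formula file is named llama.cpp), the checks above would be run roughly as follows:

```shell
# Assumes the new formula is named llama.cpp.
HOMEBREW_NO_INSTALL_FROM_API=1 brew install --build-from-source llama.cpp
brew test llama.cpp
brew audit --strict --new llama.cpp
```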