This repository has been archived by the owner on Sep 12, 2024. It is now read-only.

feat: temp workaround for cublas build #42

Merged: 1 commit into main on May 4, 2023

Conversation

@hlhr202 (Member) commented on May 4, 2023

Only dynamic linking is supported.

@hlhr202 hlhr202 linked an issue May 4, 2023 that may be closed by this pull request
@hlhr202 hlhr202 merged commit d573d74 into main May 4, 2023
@hlhr202 hlhr202 deleted the feature/llama-cpp-cuda branch May 13, 2023 07:07
Successfully merging this pull request may close this issue:

[ASK] enable cuda with manual compilation