
fix: use singleton in llama_cpp #1013

Merged
merged 2 commits on Jun 25, 2024

refactor: Add thread lock

5aa8871
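
The commit adds a thread lock around singleton creation in `llama_cpp.py`. As a minimal sketch of that pattern (class name, method name, and constructor arguments here are hypothetical, not taken from the repository), a lock plus a double check prevents two threads from racing to build the model instance at the same time:

```python
import threading


class LlamaSingleton:
    """Hypothetical sketch of a thread-safe singleton holder for a model object."""

    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get_instance(cls):
        # First check without the lock keeps the common path cheap.
        if cls._instance is None:
            with cls._lock:
                # Second check inside the lock: another thread may have
                # created the instance while we were waiting on the lock.
                if cls._instance is None:
                    cls._instance = cls()
        return cls._instance
```

Repeated calls to `get_instance()` then return the same object, so an expensive model load happens at most once per process.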
Codecov / codecov/project succeeded Jun 25, 2024 in 0s

21.83% (+0.14%) compared to 651eb33

Codecov Report

Attention: Patch coverage is 38.09524% with 13 lines in your changes missing coverage. Please review.

Project coverage is 21.83%. Comparing base (651eb33) to head (5aa8871).
Report is 7 commits behind head on main.

Files                            Patch %   Lines
application/llm/llama_cpp.py     38.09%    13 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1013      +/-   ##
==========================================
+ Coverage   21.69%   21.83%   +0.14%     
==========================================
  Files          80       80              
  Lines        3632     3645      +13     
==========================================
+ Hits          788      796       +8     
- Misses       2844     2849       +5     

☔ View full report in Codecov by Sentry.