Actions: jart/llama.cpp

flake8 Lint

109 workflow runs

Fix time complexity of string replacement
flake8 Lint #133: Commit 08a49aa pushed by jart
Branch replace · August 25, 2024 03:31 · 20s

Fix time complexity of string replacement
flake8 Lint #132: Commit b3c41a8 pushed by jart
Branch replace · August 25, 2024 03:29 · 26s

llava : fix occasional undefined behavior crash
flake8 Lint #131: Commit 60e6e2a pushed by jart
Branch clip · August 18, 2024 14:18 · 18s

ggml : make GeLU more accurate on CPU
flake8 Lint #130: Commit bb668b6 pushed by jart
Branch gelu · August 18, 2024 12:55 · 18s

ggml : make GeLU faster and more accurate on CPU
flake8 Lint #129: Commit 8860b7a pushed by jart
Branch gelu · August 5, 2024 18:36 · 17s

ggml : make GeLU faster and more accurate on CPU
flake8 Lint #128: Commit 12e2ebc pushed by jart
Branch gelu · August 5, 2024 16:35 · 18s

Make GeLU faster and more accurate on CPU
flake8 Lint #127: Commit 528ccef pushed by jart
Branch gelu · August 5, 2024 16:17 · 27s

Make GeLU faster and more accurate on CPU
flake8 Lint #126: Commit 4135424 pushed by jart
Branch gelu · August 5, 2024 16:02 · 18s

Fix overflows in elu function
flake8 Lint #125: Commit 67c8926 pushed by jart
Branch elu · August 5, 2024 09:36 · 20s

llamafile : improve moe prompt eval speed on cpu
flake8 Lint #124: Commit 2dd5d1f pushed by jart
Branch moe · June 28, 2024 23:21 · 22s

main: remove special token file descriptor feature (#5)
flake8 Lint #123: Commit e75c5ca pushed by jart
Branch grammar-token · May 25, 2024 07:04 · 22s

Disable new mixmul for text generation
flake8 Lint #121: Commit 0629a79 pushed by jart
Branch moe · May 23, 2024 10:36 · 21s

Add basic bf16 support to ggml-cuda
flake8 Lint #120: Commit ebd5efe pushed by jart
Branch jart16 · May 23, 2024 10:00 · 22s

Add basic bf16 support to ggml-cuda
flake8 Lint #119: Commit caf0dcb pushed by jart
Branch jart16 · May 23, 2024 09:43 · 18s

Update examples/server/server.cpp
flake8 Lint #118: Commit 8be06dc pushed by jart
Branch failhouse · May 22, 2024 08:11 · 20s

Fix CI errors
flake8 Lint #117: Commit 3cb4275 pushed by jart
Branch thread · May 22, 2024 07:57 · 24s

Make atomic operations explicit
flake8 Lint #116: Commit ebbc728 pushed by jart
Branch thread · May 22, 2024 07:38 · 19s

Make sampling not throw exception
flake8 Lint #115: Commit a948952 pushed by jart
Branch failhouse · May 22, 2024 02:24 · 18s

llamafile : improve moe prompt eval speed on cpu
flake8 Lint #114: Commit 2d47404 pushed by jart
Branch moe · May 22, 2024 00:53 · 19s

Make sampling not throw exception
flake8 Lint #113: Commit 6c6d55b pushed by jart
Branch failhouse · May 22, 2024 00:47 · 20s

Update common/sampling.cpp
flake8 Lint #112: Commit aa3094c pushed by jart
Branch failhouse · May 21, 2024 23:20 · 20s

Update examples/perplexity/perplexity.cpp
flake8 Lint #111: Commit 6b17898 pushed by jart
Branch failhouse · May 21, 2024 23:20 · 18s

Update llama.cpp
flake8 Lint #110: Commit cc363da pushed by jart
Branch failhouse · May 21, 2024 23:20 · 19s

Update llama.cpp
flake8 Lint #109: Commit bc9a2e8 pushed by jart
Branch failhouse · May 21, 2024 23:20 · 18s