Experimenting with quantized AVX-512 dot product for llama.cpp

dfyz/llama-avx-512

Ideas

  • Instead of permuting the scales individually, multiply unpermuted scales, then permute the result (doesn't seem to improve performance).
  • Get rid of masked loads so that the compiler can use vperm* directly on memory operands (promising).
  • Somehow use one vpdpbusds instead of two (doesn't seem to be possible).
  • Somehow use the accumulator in vpdpbusds instead of a separate subtraction at the very end (also doesn't seem to be possible; the sketch after this list shows where that subtraction comes from).
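
For context, the sketch below is a minimal, self-contained illustration (not the kernel from this repository) of how a Q4_0-style x Q8_0-style dot product is commonly written with AVX-512 VNNI, assuming the usual llama.cpp convention that the 4-bit weights are stored unsigned with an implicit offset of 8. The chunk size, nibble layout, and function names are made up for the example. It shows why the unsigned operand of vpdpbusds takes the weights, why the +8 offset ends up as a separate subtraction at the very end, and, in the second helper, the "multiply unpermuted scales, then permute the product" rewrite from the first bullet.

```c
// Illustrative sketch only, NOT this repository's kernel. Layout and
// granularity are assumptions: 64 unsigned 4-bit weights packed into 32
// bytes (low nibbles first, then high nibbles), stored with a +8 offset,
// against 64 signed 8-bit activations; dx/dy are the chunk's scales.
#include <immintrin.h>
#include <stdint.h>

static inline float dot_q4_q8_chunk(const uint8_t *x_packed, const int8_t *y,
                                    float dx, float dy) {
    const __m256i packed = _mm256_loadu_si256((const __m256i *)x_packed);
    const __m256i nib    = _mm256_set1_epi8(0x0F);

    // Unpack to 64 unsigned bytes in 0..15 (the +8 offset is NOT removed here).
    const __m256i x_lo = _mm256_and_si256(packed, nib);
    const __m256i x_hi = _mm256_and_si256(_mm256_srli_epi16(packed, 4), nib);
    const __m512i x_u8 = _mm512_inserti64x4(_mm512_castsi256_si512(x_lo), x_hi, 1);

    const __m512i y_s8 = _mm512_loadu_si512((const void *)y);

    // vpdpbusds multiplies unsigned bytes (weights) by signed bytes
    // (activations) and accumulates into 32-bit lanes. The accumulator can
    // only add, so the +8 offset cannot be folded in here -- hence the
    // separate subtraction at the very end.
    const __m512i xy   = _mm512_dpbusds_epi32(_mm512_setzero_si512(), x_u8, y_s8);
    const __m512i ysum = _mm512_dpbusds_epi32(_mm512_setzero_si512(),
                                              _mm512_set1_epi8(1), y_s8);

    const int32_t sum_xy = _mm512_reduce_add_epi32(xy);
    const int32_t sum_y  = _mm512_reduce_add_epi32(ysum);

    // sum((x - 8) * y) == sum(x * y) - 8 * sum(y)
    return dx * dy * (float)(sum_xy - 8 * sum_y);
}

// The rewrite from the first bullet: a permutation is a pure elementwise
// reordering, so permuting the product of two scale vectors equals
// multiplying the two individually permuted vectors -- one vpermps instead
// of two (which, per the note above, did not produce a measurable speedup).
static inline __m512 permuted_scale_product(__m512 dx, __m512 dy, __m512i idx) {
    return _mm512_permutexvar_ps(idx, _mm512_mul_ps(dx, dy));
}
```

The sketch needs AVX-512F plus AVX512_VNNI (something like -mavx2 -mavx512f -mavx512vnni with GCC/Clang); the repository's actual kernel additionally has to handle the per-block scale permutation and the masked loads discussed in the bullets above.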
