Releases: lucidrains/memorizing-transformers-pytorch
0.4.1
0.4.0
prepare to use knn attention in another repository, for the ultimate …
0.3.10
0.3.9a
fix setup.py
0.3.9
address https://github.com/lucidrains/memorizing-transformers-pytorch…
0.3.8
use the new einops unpack! thank you @arogozhnikov 🙏
0.3.7
just give knn attention its own relative positional bias
0.3.6
give knn attention layer one more way to tune out local if need be
0.3.5
allow the network to pay more attention to memory later into training…
0.3.4
turn KNN attention into full cosine sim attention (from the paper que…
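Cosine-sim attention replaces the usual `1/sqrt(d)` scaling with l2-normalized queries and keys multiplied by a fixed (or learned) temperature. A minimal sketch of the general technique, not this repository's exact implementation; the function name and `scale` value are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def cosine_sim_attention(q, k, v, scale=10.0):
    # l2-normalize so dot products become cosine similarities in [-1, 1]
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    # a fixed temperature replaces the usual 1/sqrt(d) scaling
    sim = (q @ k.transpose(-2, -1)) * scale
    attn = sim.softmax(dim=-1)
    return attn @ v

q = torch.randn(1, 4, 32)   # (batch, queries, dim)
k = torch.randn(1, 6, 32)   # (batch, keys, dim)
v = torch.randn(1, 6, 32)
out = cosine_sim_attention(q, k, v)  # (1, 4, 32)
```

Normalizing both sides bounds the pre-softmax logits, which helps keep local and retrieved-memory similarities on a comparable scale.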