
[bug]: Embeddings need to be represented in a prompt in human readable, but still load via key #5804

Closed
psychedelicious opened this issue Feb 26, 2024 · 0 comments
Labels
4.0.0 bug Something isn't working

Comments


psychedelicious commented Feb 26, 2024

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Linux

GPU vendor

Nvidia (CUDA)

GPU model

No response

GPU VRAM

No response

Version number

next

Browser

FF

Python dependencies

No response

What happened

We don't have the concept of embedding triggers - only model keys and names. We need to be able to represent an embedding using a stable "trigger" word (perhaps the mutable "name" of the model), and load the embedding in the backend based on that trigger.

Currently on next, embeddings look nice in the prompt, but don't work. This change was introduced in #5801 (63f7c61).

Prior to that change on next, embeddings showed up as the long model key in the prompt, but they did work.
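A minimal sketch of the desired behavior: the prompt carries a human-readable trigger, and the backend resolves it to a stable model key before loading. The function name, the name-to-key map, and the `<trigger>` syntax here are assumptions for illustration, not InvokeAI's actual implementation.

```python
# Hypothetical sketch: rewrite human-readable embedding triggers in a prompt
# into stable model keys so the backend can load the right embedding.
# The <name> trigger syntax and the name_to_key mapping are assumptions.
import re


def resolve_embedding_triggers(prompt: str, name_to_key: dict[str, str]) -> str:
    """Replace each <name> trigger with its model key; leave unknown triggers as-is."""

    def _sub(match: re.Match) -> str:
        name = match.group(1)
        key = name_to_key.get(name)
        # Unknown triggers pass through unchanged rather than failing the prompt.
        return f"<{key}>" if key else match.group(0)

    return re.sub(r"<([^<>]+)>", _sub, prompt)
```

For example, `resolve_embedding_triggers("a photo of <easynegative>", {"easynegative": "key123"})` yields `"a photo of <key123>"`, so the UI can display the readable name while the backend still receives a stable key.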

What you expected to happen

Embeddings are represented as a human-readable string in the prompt and are loaded correctly.

How to reproduce the problem

No response

Additional context

No response

Discord username

No response
