[WIP] [tests] test encode_prompt() in isolation
#10438
Draft
What does this PR do?

`encode_prompt()` is often used in isolation, both to reduce memory requirements and to precompute prompt embeddings in many trainers. It's important that we test `encode_prompt()` properly. This PR adds a test suite to do that.

Since we cannot reliably map the outputs of `encode_prompt()` onto the keyword arguments of a pipeline call, I propose adding a test class attribute called `prompt_embed_kwargs` to pack the prompt-embedding-related kwargs appropriately.

This PR is a PoC of what the changes might look like at the test level. Once we agree on the common structure, I will propagate it to the rest of the pipelines.
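To make the `prompt_embed_kwargs` idea concrete, here is a minimal sketch of the packing pattern using a toy stand-in pipeline. `FakePipeline` and its methods are illustrative only (real diffusers pipelines return more tensors and take many more arguments); the attribute name `prompt_embed_kwargs` is the one proposed in this PR.

```python
# Toy stand-in for a pipeline; not the real diffusers API.
class FakePipeline:
    def encode_prompt(self, prompt):
        # Dummy embeddings; a real pipeline would run its text encoder here.
        prompt_embeds = [float(len(prompt))] * 8
        negative_prompt_embeds = [0.0] * 8
        return prompt_embeds, negative_prompt_embeds

    def __call__(self, prompt=None, prompt_embeds=None, negative_prompt_embeds=None):
        # If no precomputed embeddings are passed, encode inside the call.
        if prompt_embeds is None:
            prompt_embeds, negative_prompt_embeds = self.encode_prompt(prompt)
        return [p - n for p, n in zip(prompt_embeds, negative_prompt_embeds)]


# Proposed test-class attribute: maps the tuple returned by
# encode_prompt() (in order) onto the pipeline call's keyword arguments.
prompt_embed_kwargs = ("prompt_embeds", "negative_prompt_embeds")

pipe = FakePipeline()
packed = dict(zip(prompt_embed_kwargs, pipe.encode_prompt("a cat")))

# The precomputed-embedding path should match the plain-prompt path.
assert pipe(**packed) == pipe(prompt="a cat")
```

The test then only needs each pipeline's test class to declare `prompt_embed_kwargs` in the right order, instead of hard-coding a per-pipeline mapping.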
The closest test we have currently is this: `diffusers/tests/pipelines/test_pipelines_common.py`, line 2165 at 15d4569.
But, IMO, this should be extended to the other pipelines on a more general basis.
TODOs:
- Modify `encode_prompt()` to work in isolation in the pipelines, if needed.
- Update `SDXLOptionalComponentsTesterMixin`, if needed.