
[docs] Fix missing parameters in docstrings #10419

Open · wants to merge 2 commits into main
Conversation

stevhliu (Member)

Fixes #10416 by adding missing parameters in `FlowMatchEulerDiscreteScheduler` and `DPMSolverMultistepScheduler`.

```diff
@@ -174,6 +174,9 @@ class DPMSolverMultistepScheduler(SchedulerMixin, ConfigMixin):
             Whether to use the uniform-logSNR for step sizes proposed by Lu's DPM-Solver in the noise schedule during
             the sampling process. If `True`, the sigmas and time steps are determined according to a sequence of
             `lambda(t)`.
+        use_flow_sigmas (`bool`, *optional*, defaults to `False`):
+            Whether to use flow sigmas for step sizes in the noise schedule during the sampling process.
+        flow_shift (`float`, *optional*, defaults to 1.0):
```
stevhliu (Member, Author)

Can I get some help filling in what this parameter does? @hlky

hlky (Collaborator)

`use_flow_sigmas` was added with Sana; it adds Flow Match support to the existing schedulers, and `flow_shift` works like `shift` in `FlowMatchEulerDiscreteScheduler`. Schedulers with `use_flow_sigmas` have limited support for other models because some don't support passing `sigmas` or dynamic shifting. As I understand it, these schedulers are used by some integrations with Flux and SD3, but there may be quality issues with the outputs.

```python
def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None):
    ...
    # flow sigmas: sigma = 1 - alpha over the train timesteps, warped by
    # flow_shift (shift > 1 concentrates steps at high noise) and reversed
    alphas = np.linspace(1, 1 / self.config.num_train_timesteps, num_inference_steps + 1)
    sigmas = 1.0 - alphas
    sigmas = np.flip(self.config.flow_shift * sigmas / (1 + (self.config.flow_shift - 1) * sigmas))[:-1].copy()
```
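As a quick illustration of what that warp does (not from the PR; the step count and shift values are made up):

```python
import numpy as np

num_inference_steps = 4
num_train_timesteps = 1000

for flow_shift in (1.0, 3.0):
    alphas = np.linspace(1, 1 / num_train_timesteps, num_inference_steps + 1)
    sigmas = 1.0 - alphas
    shifted = np.flip(flow_shift * sigmas / (1 + (flow_shift - 1) * sigmas))[:-1].copy()
    print(flow_shift, shifted)

# flow_shift=1.0 -> roughly [0.999, 0.749, 0.500, 0.250]
# flow_shift=3.0 -> roughly [1.000, 0.900, 0.750, 0.500]
# larger flow_shift pushes every sigma toward 1, spending more of the schedule at high noise
```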

It's similar in `FlowMatchEulerDiscreteScheduler`: the default schedule is for SD3, and other models pass their own `sigmas`.

```python
# default SD3 schedule in FlowMatchEulerDiscreteScheduler.set_timesteps
timesteps = np.linspace(
    self._sigma_to_t(self.sigma_max), self._sigma_to_t(self.sigma_min), num_inference_steps
)

# examples of sigmas computed by individual pipelines and passed to the scheduler
sigmas = np.linspace(1.0, 0.0, num_inference_steps + 1)[:-1] if sigmas is None else sigmas

sigmas = linear_quadratic_schedule(num_inference_steps, threshold_noise)

sigmas = np.linspace(1.0, 1 / num_inference_steps, num_inference_steps) if sigmas is None else sigmas
```
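Concretely, a pipeline can either rely on the default schedule or hand the scheduler its own. A minimal sketch, assuming `set_timesteps` accepts the `sigmas` argument the snippets above check for:

```python
import numpy as np
from diffusers import FlowMatchEulerDiscreteScheduler

scheduler = FlowMatchEulerDiscreteScheduler()
num_inference_steps = 28

# SD3-style: let the scheduler build its default schedule
scheduler.set_timesteps(num_inference_steps=num_inference_steps)

# Flux-style: compute custom sigmas and pass them in
sigmas = np.linspace(1.0, 1 / num_inference_steps, num_inference_steps)
scheduler.set_timesteps(sigmas=sigmas)
```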

The schedulers with use_flow_sigmas are scheduling_deis_multistep, scheduling_dpmsolver_multistep_inverse, scheduling_dpmsolver_multistep, scheduling_dpmsolver_singlestep, scheduling_sasolver, scheduling_unipc_multistep.
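Enabling it on one of those schedulers would look something like this (a sketch using only the parameters documented in this PR; the `flow_shift` value is illustrative):

```python
from diffusers import DPMSolverMultistepScheduler

# flow-matching sigmas with an SD3/Flux-style shift
scheduler = DPMSolverMultistepScheduler(use_flow_sigmas=True, flow_shift=3.0)
scheduler.set_timesteps(num_inference_steps=28)
print(scheduler.sigmas[:5])
```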

```diff
             The maximum image sequence length.
+        invert_sigmas (`bool`, defaults to False):
+            Whether to invert the sigmas.
+        shift_terminal (`float`, defaults to None):
```
stevhliu (Member, Author)

Also need some help with this parameter please :)

hlky (Collaborator)

`invert_sigmas` was added with Mochi; the model works in the opposite direction than usual, so the option flips the sigmas after they are calculated:

```python
if self.config.invert_sigmas:
    # Mochi-style: reverse the direction and terminate at sigma = 1
    sigmas = 1.0 - sigmas
    timesteps = sigmas * self.config.num_train_timesteps
    sigmas = torch.cat([sigmas, torch.ones(1, device=sigmas.device)])
else:
    # usual direction: terminate at sigma = 0
    sigmas = torch.cat([sigmas, torch.zeros(1, device=sigmas.device)])
```
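To see the effect, a toy example (values invented, not library code):

```python
import torch

sigmas = torch.linspace(1.0, 0.25, 4)  # usual direction: 1.00, 0.75, 0.50, 0.25
inverted = 1.0 - sigmas                # Mochi direction: 0.00, 0.25, 0.50, 0.75
timesteps = inverted * 1000            # with num_train_timesteps=1000 -> 0, 250, 500, 750
```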

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

stevhliu (Member, Author) commented Jan 6, 2025

Sorry I wasn't clear enough, but I was referring to the shift_terminal parameter. Let me know what you think 🙂

hlky (Collaborator) commented Jan 6, 2025

Stretches and shifts the timestep schedule to ensure it terminates at the shift_terminal value. Used by LTXVideo.
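In other words, the schedule's endpoint is rescaled so the final sigma lands exactly on shift_terminal. A minimal sketch of that stretch-and-shift (the helper name and standalone form are illustrative, not necessarily the library's exact code):

```python
import torch

def stretch_shift_to_terminal(sigmas: torch.Tensor, shift_terminal: float) -> torch.Tensor:
    # Rescale (1 - sigma) linearly so the last value maps onto (1 - shift_terminal),
    # making the schedule terminate at shift_terminal instead of its natural endpoint.
    one_minus_z = 1.0 - sigmas
    scale_factor = one_minus_z[-1] / (1.0 - shift_terminal)
    return 1.0 - one_minus_z / scale_factor

sigmas = torch.linspace(1.0, 0.05, 5)
print(stretch_shift_to_terminal(sigmas, 0.1))  # last value is exactly 0.1
```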

stevhliu (Member, Author) commented Jan 6, 2025

Thanks! Let me know if the descriptions are clear enough, otherwise I think we can merge

Successfully merging this pull request may close these issues.

Euler flow matching scheduler is missing documentation for parameters