
Extract embedding of latent space in TemporalFusionTransformer #1735

Open
Mpaperlee opened this issue Dec 20, 2024 · 1 comment
Labels
documentation Improvements or additions to documentation

Comments

@Mpaperlee

[DOC] I am using pytorch_forecasting to create a TemporalFusionTransformer model. After training the model, I want to extract the data embeddings in the latent space and then use them for clustering, but I cannot find a way to do this. How should I proceed?

@Mpaperlee Mpaperlee added the documentation Improvements or additions to documentation label Dec 20, 2024
@benHeid
Collaborator

benHeid commented Dec 24, 2024

If I remember correctly, we do not currently return the output of the encoder, so enabling this would probably require some changes to the code. There are two possibilities: either extract the relevant code into a new function, or add the encoded data as an additional field of the network output.
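Until such a change lands, a common workaround is a PyTorch forward hook, which captures an intermediate module's output without modifying the library. A minimal sketch of the technique, using a plain `nn.LSTM` as a stand-in for the TFT's encoder (with pytorch_forecasting one would register the hook on the model's encoder submodule, e.g. `tft.lstm_encoder`; that attribute name is an assumption and should be checked against the installed version):

```python
import torch
import torch.nn as nn

captured = {}

def save_output(module, inputs, output):
    # For an LSTM, output is (seq_output, (h_n, c_n)); keep the sequence part.
    captured["encoder"] = output[0].detach()

# Stand-in for the TFT encoder; the hook mechanism is identical either way.
encoder = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
handle = encoder.register_forward_hook(save_output)

x = torch.randn(4, 10, 8)        # (batch, time, features)
_ = encoder(x)                   # the forward pass triggers the hook
handle.remove()

embeddings = captured["encoder"]  # (4, 10, 16) latent sequence
# Pool over time before clustering, e.g. take the last time step:
latent = embeddings[:, -1, :]     # (4, 16)
```

The resulting `latent` tensor can then be fed to any clustering routine (e.g. scikit-learn's `KMeans`) after converting to NumPy with `latent.numpy()`.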

@fkiraly @XinyuWuu @julian-fong @pranavvp16 any opinions about the two solutions?
