Releases · masa-su/pixyz
v0.3.3
New features
- Added `set_cache_maxsize`, a setter of `maxsize` for `lru_cache`, and `cache_maxsize`, a getter of `maxsize` for `lru_cache`. Increasing this `maxsize` lets more results of `get_params` (≒ the forward pass of the Distribution class) be cached, which speeds up most VAE implementations! (See the sketch after this list.)
- Added the mixture of experts (MoE).
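A minimal sketch of the new cache controls. The exact import location of `set_cache_maxsize`/`cache_maxsize` is not stated in the release note, so exposing them via `pixyz.utils` is an assumption here:

```python
# Hypothetical sketch: assumes set_cache_maxsize / cache_maxsize live in
# pixyz.utils (the release note does not state where they are exposed).
from pixyz import utils

print(utils.cache_maxsize())  # inspect the current lru_cache maxsize
utils.set_cache_maxsize(16)   # cache more get_params results, trading
                              # memory for speed in VAE training loops
```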
Bug fix
- Fixed `get_params` for `replace_var`.
- Fixed the `maxsize` of `lru_cache` for `get_params`.
- Fixed a major bug in PoE.
v0.3.2
New features
- Added a description of the tutorial to the readme. #167
- Added save and load methods to the `Model` class. #161 (See the sketch after this list.)
- Added an option to replace the graphical model with a deterministic one. #169
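A minimal sketch of the new persistence methods, assuming `Model.save` and `Model.load` take a file path (the release note does not give their signatures); the toy distribution is only for illustration:

```python
import torch
from torch import optim
from pixyz.distributions import Normal
from pixyz.models import Model

# a toy model: maximize the log-likelihood of a fixed Gaussian p(x)
p = Normal(loc=torch.tensor(0.), scale=torch.tensor(1.),
           var=["x"], features_shape=[4], name="p")
model = Model(loss=-p.log_prob().mean(), distributions=[p],
              optimizer=optim.Adam, optimizer_params={"lr": 1e-3})

model.save("model_checkpoint.pt")  # hypothetical: assumes a file-path argument
model.load("model_checkpoint.pt")  # restore the saved parameters
```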
Bug fix
- Fixed the Bernoulli distribution so that likelihood evaluation on continuous values (such as luminance) also works in the new version of PyTorch.
- Fixed a bug where the option `feature_dims`, which determines which tensor dimensions are summed over as a joint log-likelihood, was not applied. #163
- Resolved an issue where options set in `DistGraph.set_option` were overwritten by options in `Distribution.sample` and thus had no effect. #162
- Avoided build failures with Read the Docs. #170
v0.3.1
v0.3.0
New features
- The new field `distribution.graph` represents the graphical model of the distribution. #119
- Supported drawing a graphical model with `networkx.draw_networkx(distribution.graph.visible_graph())` (see the sketch after this list). #122
- Added tutorial notebooks. #148
- Enabled memoization of pixyz's calculation graph to speed it up. #149
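A minimal sketch of drawing the new graph field, using the `networkx` call quoted above; the toy joint distribution p(x|z)p(z) is an assumption for illustration:

```python
import matplotlib.pyplot as plt
import networkx
import torch
from pixyz.distributions import Normal

# toy graphical model: p(x, z) = p(x|z) p(z)
p_z = Normal(loc=torch.tensor(0.), scale=torch.tensor(1.),
             var=["z"], features_shape=[2], name="p_{z}")
p_x = Normal(loc="z", scale=torch.tensor(1.),
             var=["x"], cond_var=["z"], name="p_{x}")
joint = p_x * p_z

# draw the visible part of the graphical model with networkx
networkx.draw_networkx(joint.graph.visible_graph())
plt.show()
```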
Changed API
- Renamed DataDistribution to EmpiricalDistribution. #146
- `timestep_var`, which gives the time index when evaluating a step loss or calling SliceStep, is no longer specified by default (`t` is still the default in the display). #142
- Corrected the error in the time index in the display. #142
- Reverted the TransformedDistribution arguments to their previous (v0.1.4) format (inference in TransformedDistribution internally uses the result of the sample/forward called immediately before). #139
- Removed the input_var argument of the Loss API (see the sketch after this list). #150
- Added the ConstantVar class, which sets the value of a variable before calling loss.eval. #150
- Unified the argument order of var and cond_var in the Distribution API (var first). #147
- Changed the output directory of the example notebooks to make them easier to browse (and sklearn is now installed automatically with `!pip`). #141
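A minimal sketch of evaluating a loss without the removed input_var argument; the inputs a loss needs are now determined from the loss itself and passed as a dictionary at evaluation time (the toy distribution is an assumption for illustration):

```python
import torch
from pixyz.distributions import Normal
from pixyz.losses import LogProb

# toy distribution p(x); its input variables are inferred from the loss
p = Normal(loc=torch.tensor(0.), scale=torch.tensor(1.),
           var=["x"], features_shape=[4], name="p")

loss = -LogProb(p).mean()                    # no input_var argument anymore
print(loss.eval({"x": torch.randn(8, 4)}))   # inputs are given as a dict at eval time
```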
Bug fix
- Eliminated errors in PyTorch 1.6 caused by flows' non-contiguous tensors. #140
- Fixed a bug related to the documentation. #138
- Fixed PoE so that it no longer raises an error when only one distribution is specified. #144
- Fixed a bug when specifying the same random variable in the arguments of __init__ in DistributionBase. #143
- Replaced `forward` calls with `__call__` calls to take advantage of torch.nn.Module hook options. #136
v0.2.1
Updates
- #133 Enabled the use of time-specific step losses without extending the slice function
- #127, #130 Changed the output directory of the example notebooks to make them easier to browse (and sklearn is now installed automatically with `!pip`)
- #126 Reverted the TransformedDistribution arguments to their previous format (memoization is used to get the log-likelihood)
Bug fix
- #134 Eliminated errors in the latest version of PyTorch caused by flows' non-contiguous tensors
- #133 Fixed a bug related to timestep_var in IterativeLoss
- #132 Fixed a bug when specifying the same random variable in the arguments of __init__ in DistributionBase
- #124 Fixed PoE so that it no longer raises an error when only one distribution is specified
v0.2.0
Legend: 🛠︎ interface change · 🐛 bug fix · 🆕 new feature
Distribution API
- 🛠︎ Changed the argument order of __init__ in the exponential distribution families and made the distribution parameters explicit. #90
- 🛠︎ Removed the sampling option from DistributionBase.set_dist (relaxed distribution families still have a sampling option). #108
- 🛠︎ TransformedDistribution is now a probability distribution over two variables, the input and output of the flow, and the return_all option of the former sample method was removed. #115
- 🛠︎ Renamed the return_all option of InverseTransformedDistribution.sample to return_hidden. #115
- 🛠︎ Added a return_all option to TransformedDistribution.sample (and InverseTransformedDistribution.sample) that also returns random variables not involved in the transform. #115
- 🛠︎ Renamed the TransformedDistribution.__init__ argument `var` to `flow_output_var`.
- 🆕 Added the Distribution.has_reparam property (see the sketch after this list). #93
- 🆕 Added a return_all option to MixtureModel.sample. #115
- 🐛 Fixed a bug where parameters of a basic Distribution were unintentionally overwritten on torch.load. #113
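A minimal sketch of the new has_reparam property; the toy distributions are assumptions for illustration, and the printed values reflect the usual convention that only reparameterized distributions report True:

```python
import torch
from pixyz.distributions import Normal, Bernoulli

p = Normal(loc=torch.tensor(0.), scale=torch.tensor(1.),
           var=["z"], features_shape=[4])
q = Bernoulli(probs=torch.tensor(0.5), var=["x"], features_shape=[4])

# has_reparam reports whether sampling is reparameterized, i.e. whether
# gradients can flow through samples of this distribution
print(p.has_reparam)  # expected True for Normal
print(q.has_reparam)  # expected False for Bernoulli
```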
Loss API
- 🛠︎ Removed StochasticReconstructionLoss (this loss now needs to be configured explicitly). #103
- 🛠︎ Changed the base class of Loss to torch.nn.Module. #100
- 🛠︎ Renamed Loss.train to Loss.loss_train, and Loss.test to Loss.loss_test. #100
- 🛠︎ Renamed Loss._get_eval to Loss.forward. #95
- 🛠︎ Added an Entropy method whose options switch entropy estimation between AnalyticalEntropy and Monte Carlo (the Entropy class is deprecated; see the sketch after this list). #89
- 🛠︎ Added a CrossEntropy method that can be switched in the same way (the CrossEntropy class is deprecated). #89
- 🛠︎ Added a KullbackLeibler method that can be switched in the same way (the KullbackLeibler class is deprecated). #89
- 🛠︎ Deprecated the ELBO class and replaced it with an ELBO method that returns a Loss instance. #89
- 🆕 Added the Loss.detach method. #93
- 🆕 Added MinLoss and MaxLoss. #95
- 🆕 Added the alternative loss `REINFORCE` to derive a policy gradient. #93
- 🆕 Loss now supports DataParallel. #100
- 🆕 Separated some features of the Loss class into the Divergence class. #95
- 🆕 Added a return_all option to Loss.eval (when `return_dict=True`, you can also choose whether unrelated random variables are returned). #115
- 🐛 Fixed a bug in IterativeLoss where, at each step, the past value was conditioned on as if it were the future value. #115
- 🐛 Placed the parameter tensor of ValueLoss on the nn.Module device. #100
- 🐛 Fixed incorrect argument checking in WassersteinDistance and MMDLoss. #103
- 🐛 Fixed a bug in variable checking during Loss initialization. #107
- 🐛 Fixed a bug in IterativeLoss. #107
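A minimal sketch of the new Entropy method; the `analytical` keyword used here to switch between the two estimators is an assumption, since the release note only says the choice is made "in options":

```python
import torch
from pixyz.distributions import Normal
from pixyz.losses import Entropy

p = Normal(loc=torch.tensor(0.), scale=torch.tensor(1.),
           var=["z"], features_shape=[4])

# hypothetical keyword: switch the estimator via an option instead of
# choosing between two separate Loss classes
h_analytical = Entropy(p, analytical=True)    # closed-form AnalyticalEntropy
h_monte_carlo = Entropy(p, analytical=False)  # Monte Carlo estimate

print(h_analytical.eval())
print(h_monte_carlo.eval())
```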
Other
- 🆕 Added the utils.lru_cache_for_sample_dict decorator, which enables memoization for a function that takes a dictionary of random variables and their realized values (see the sketch after this list). #109
- 🆕 Renamed examples/vae_model to vae_with_vae_class.
- 🆕 Added some exception messages. #103
- 🐛 Fixed the Jacobian calculation in flow/Preprocess. #107
- 🐛 Fixed a bug where some browsers did not show the readme formulas.
- 🐛 Alternate text is now displayed when a readme formula cannot be rendered. #117
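A minimal sketch of the new decorator; the `maxsize` keyword (mirroring functools.lru_cache) and the method-style usage are assumptions, since the release note only describes the decorator's purpose:

```python
import torch
from pixyz.utils import lru_cache_for_sample_dict

class Encoder:
    # hypothetical usage: memoize an expensive function of a sample dict,
    # assuming the decorator accepts a maxsize like functools.lru_cache
    @lru_cache_for_sample_dict(maxsize=2)
    def expensive_stats(self, sample_dict):
        # e.g. a costly computation over the realized values
        return {k: v.mean() for k, v in sample_dict.items()}

enc = Encoder()
batch = {"x": torch.randn(8, 4)}
enc.expensive_stats(batch)  # computed
enc.expensive_stats(batch)  # served from the cache on the repeated call
```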