
v0.0.3

Pre-release
@FrancescoConti released this 27 Apr 13:13
· 50 commits to master since this release
34d7eef
  • Automatically derive the DFQ equalization dict; moreover, also consider the activation alpha in DFQ equalization. This is useful when alpha is an "algorithmic" clipping derived, e.g., from a ReLU6, as opposed to a clipping imposed only post-calibration, as in the case of a ReLU (see the equalization sketch after this list).
  • Also consider ConstantPad2d in bias removal.
  • Add convenience net.qd_stage and net.id_stage functions: a more user-friendly way to switch between the FQ -> QD -> ID internal representations of the network (usage sketch after this list).
  • Support BatchNorm1d in QD and ID stages (via PACT_*BatchNorm2d). The PACT_*BatchNorm2d classes have not (yet) been renamed to reflect the fact that they are now effectively PACT_*BatchNormNd's.
  • Fix shape of reshape_before/after for PACT_Linear layers
  • Add a dropout_to_identity transform and a related flag in quantize_pact: useful when deploying networks that include Dropout layers, which serve no purpose in inference and cause issues in the DeployGraph. Note that Dropout removal has to be done in the FullPrecision stage, i.e., before creating the deploy graph; this commit adds a flag to quantize_pact to do exactly that (it should probably not be active when retraining). See the example after this list.
  • Bump version to 0.0.3
  • Fix mnist_test to make it easier to debug
  • Work around a regression in mnist_test for equalize_dfq
  • Other minor fixes
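
For context on the DFQ equalization item above, here is a minimal sketch of what cross-layer (DFQ-style) equalization does to a Conv-activation-Conv chain, in plain PyTorch rather than NEMO's internal API. All names in this snippet (`equalize_pair`, `new_alpha`) are illustrative; the comment about rescaling alpha reflects why the activation alpha matters when the clipping is "algorithmic" (e.g., ReLU6):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def equalize_pair(conv1: nn.Conv2d, conv2: nn.Conv2d, alpha=None):
    """DFQ-style cross-layer equalization of two consecutive convolutions.

    Output channel c of conv1 is divided by s[c] and the matching input
    channel of conv2 is multiplied by s[c]; for a positive-homogeneous
    activation like ReLU the composed function is unchanged.
    """
    # per-output-channel weight range of conv1: shape (out1,)
    r1 = conv1.weight.abs().flatten(1).amax(dim=1)
    # per-input-channel weight range of conv2: shape (in2,) == (out1,)
    r2 = conv2.weight.abs().transpose(0, 1).flatten(1).amax(dim=1)
    # the usual DFQ choice s = sqrt(r1 / r2) balances the two ranges
    s = (r1 / r2.clamp_min(1e-12)).sqrt().clamp_min(1e-12)
    conv1.weight /= s.view(-1, 1, 1, 1)
    if conv1.bias is not None:
        conv1.bias /= s
    conv2.weight *= s.view(1, -1, 1, 1)
    # If the activation in between clips at alpha (e.g. ReLU6 -> alpha = 6),
    # the clipping threshold must be rescaled per channel for equivalence:
    # clip(x / s, 0, alpha / s) * s == clip(x, 0, alpha).
    new_alpha = alpha / s if alpha is not None else None
    return s, new_alpha

# Toy usage: alpha = 6.0 models the "algorithmic" clipping of a ReLU6.
pair = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU6(), nn.Conv2d(8, 16, 3))
s, new_alpha = equalize_pair(pair[0], pair[2], alpha=6.0)
```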
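A minimal usage sketch of the new stage-switching helpers, assuming the usual NEMO flow (quantize_pact producing a FakeQuantized network, which is then lowered stage by stage); exact signatures may differ, and the eps_in value here is illustrative:

```python
import torch
import torchvision
import nemo

# Start from a full-precision model and convert it to FakeQuantized (FQ).
model = torchvision.models.mobilenet_v2()
model = nemo.transform.quantize_pact(
    model, dummy_input=torch.randn(1, 3, 224, 224)
)

# ... calibrate / fine-tune the FQ network here ...

# New convenience functions: lower FQ -> QD -> ID without invoking the
# individual transforms by hand. eps_in is the input quantum (illustrative).
model.qd_stage(eps_in=1.0 / 255)   # QuantizedDeployable
model.id_stage()                   # IntegerDeployable
```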
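And a sketch of the new Dropout-removal flag, applied while the network is still in the FullPrecision stage as the changelog entry requires; the keyword name follows the transform named above and is an assumption, not a confirmed signature:

```python
# Replace Dropout with Identity at quantization time, while the network is
# still FullPrecision, so the DeployGraph never sees Dropout nodes.
# (Probably best left off when the FQ network will be retrained.)
model = nemo.transform.quantize_pact(
    model,
    dummy_input=torch.randn(1, 3, 224, 224),
    dropout_to_identity=True,  # assumed kwarg name, per the changelog entry
)
```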