This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

Does ProxylessNas implementation really support optimizing inference latency? #3113

Open
tigert1998 opened this issue Nov 22, 2020 · 1 comment
Labels: help wanted (Encourage external contributors to contribute), NAS, new feature, user raised

Comments

@tigert1998 (Contributor) commented:

Environment:

  • NNI version: v1.9 (including current master)
  • NNI mode (local|remote|pai):
  • Client OS:
  • Server OS (for remote mode only):
  • Python version:
  • PyTorch/TensorFlow version:
  • Is conda/virtualenv/venv used?:
  • Is running in Docker?:

Log message:

  • nnimanager.log:
  • dispatcher.log:
  • nnictl stdout and stderr:

What issue did you meet, and what's expected?:

The most important feature of ProxylessNAS is that it can balance deployment latency and accuracy with a simple regularization term. But this feature is clearly missing in NNI: I only found `loss = criterion(outputs, labels)` and `loss.backward()`, where the criterion is just a cross-entropy loss, applicable only to image classification.

Would the NNI team consider adding this feature? If not, would you mind if I implemented it and opened a pull request?

How to reproduce it?:

Additional information:
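For context, the latency-regularized objective described in the ProxylessNAS paper could be sketched roughly as below. This is NOT the actual NNI implementation; `arch_probs` and `op_latencies` are illustrative assumptions (in practice the probabilities would come from each mutable layer's architecture parameters, and the latencies from a lookup table measured on the target device):

```python
# Hypothetical sketch of a ProxylessNAS-style latency-aware loss.
import torch
import torch.nn.functional as F

def latency_aware_loss(outputs, labels, arch_probs, op_latencies, lam=0.1):
    """Cross-entropy plus an expected-latency penalty.

    arch_probs:   (num_layers, num_ops) softmax over candidate ops per layer
    op_latencies: (num_layers, num_ops) measured latency of each candidate op
    lam:          regularization strength trading accuracy for latency
    """
    ce = F.cross_entropy(outputs, labels)
    # Expected latency of the sampled network. It is differentiable with
    # respect to the architecture probabilities, so it can be minimized
    # jointly with the cross-entropy term.
    expected_latency = (arch_probs * op_latencies).sum()
    return ce + lam * expected_latency
```

Since the latency table entries are non-negative, the penalty only adds to the cross-entropy, and tuning `lam` controls the accuracy/latency trade-off.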

@QuanluZhang (Contributor) commented:

@tigert1998 thanks for reporting this issue. Yes, NNI does not support this feature yet. It would be great if you could contribute it :)

@scarlett2018 added the user raised, new feature, and help wanted labels Dec 5, 2020
@kvartet kvartet added this to the Backlog milestone Jun 10, 2021

4 participants