nvidia dali support #791
Comments
The DALI iterator does not support resetting while the epoch is not finished. I suppose I shall provide a warning for that.
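A minimal sketch of what such a warning could look like, assuming a hypothetical wrapper around DALI's `DALIGenericIterator` (the wrapper class and its `_epoch_finished` flag are illustrative, not Lightning's actual implementation):

```python
import warnings

from nvidia.dali.plugin.pytorch import DALIGenericIterator


class WarnOnEarlyResetIterator(DALIGenericIterator):
    """Hypothetical wrapper that warns if reset() is called mid-epoch."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._epoch_finished = False

    def __next__(self):
        try:
            batch = super().__next__()
        except StopIteration:
            # The underlying pipeline has been fully consumed.
            self._epoch_finished = True
            raise
        self._epoch_finished = False
        return batch

    def reset(self):
        if not self._epoch_finished:
            warnings.warn(
                "The DALI iterator does not support resetting while the epoch "
                "is not finished; the reset takes effect only after the epoch ends."
            )
        super().reset()
```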
@smallzzy this is sick. Super excited by this feature.
@smallzzy we have cleared the API and now it should be stable... Mind resuming your addition?
Any news about DALI support?
@brunoalano interested in implementing DALI?
@Borda Do you have an outline of what should be done to support it? I'm available to do it with minimal guidance (I started using PyTorch Lightning and DALI these days).
We had an almost-finished PR some time ago, so I guess you can resume from that point... #789
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Is anyone working on this currently?
I tried converting PyTorch Lightning's MNIST example to use DALI, and it seems to work out of the box without requiring modifications to PyTorch Lightning's internals. See below. https://gist.github.com/irustandi/3d180ed3ec9d7ff4e73d3fdbd67df3ca
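For reference, a minimal sketch of the pattern the gist follows: return the DALI iterator directly from `train_dataloader` and unpack its batch format in `training_step`. The `make_mnist_pipeline` helper and the iterator's `size` argument are placeholders here, not the exact code from the gist:

```python
import pytorch_lightning as pl
import torch
from torch import nn
from torch.nn import functional as F
from nvidia.dali.plugin.pytorch import DALIClassificationIterator


class LitMNIST(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.layer(x.view(x.size(0), -1))

    def training_step(self, batch, batch_idx):
        # DALI iterators yield a list with one dict per pipeline,
        # containing "data" and "label" tensors already on the GPU.
        x = batch[0]["data"]
        y = batch[0]["label"].squeeze(-1).long()
        return F.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    def train_dataloader(self):
        pipe = make_mnist_pipeline(batch_size=64, device_id=0)  # hypothetical helper
        pipe.build()
        # The DALI iterator is returned directly instead of a torch DataLoader.
        return DALIClassificationIterator(pipe, size=60000)
```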
@irustandi cool, mind adding it as an example and sending a PR?
@Borda sure, will create the PR.
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, PyTorch Lightning Team!
I want to know whether DALI works well with DDP and AMP, and whether DALI's pipeline performs well in a distributed setting.
I want to know if Lightning's seeding also covers DALI.
Good question, I believe so... @awaelchli?
Lightning takes care of seeding the PyTorch-related, built-in objects: everything in the torch library, numpy, CUDA, and the dataloader workers. DALI and other libraries have their own seeding logic, so users should consult their documentation regarding usage.
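As a sketch of how a DALI pipeline can be seeded alongside Lightning's own seeding (using DALI's `pipeline_def` API; the pipeline body and paths are placeholders):

```python
import pytorch_lightning as pl
from nvidia.dali import pipeline_def
import nvidia.dali.fn as fn


@pipeline_def
def my_pipeline(file_root):
    # Placeholder pipeline: read and decode images from a directory.
    jpegs, labels = fn.readers.file(file_root=file_root, random_shuffle=True)
    images = fn.decoders.image(jpegs, device="mixed")
    return images, labels


seed = pl.seed_everything(42)  # seeds torch, numpy, python random, dataloader workers
# DALI is seeded separately through its own `seed` argument.
pipe = my_pipeline(
    file_root="/data/train",  # placeholder path
    batch_size=64,
    num_threads=4,
    device_id=0,
    seed=seed,
)
pipe.build()
```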
🚀 Feature
Re-open the case from #513, i.e. support the NVIDIA DALI iterator as a possible data loader.
Pitch
I have submitted a WIP pull request against master: #789
I would like to know if it is OK for the tests to depend on nvidia-dali in addition to apex.
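One way to keep such a test optional is to skip it when the package is missing, for example (a sketch, not necessarily how the test suite handles it):

```python
import pytest

try:
    from nvidia.dali.plugin.pytorch import DALIGenericIterator  # noqa: F401
    _DALI_AVAILABLE = True
except ImportError:
    _DALI_AVAILABLE = False


@pytest.mark.skipif(not _DALI_AVAILABLE, reason="test requires nvidia-dali")
def test_model_with_dali_iterator():
    ...  # build a DALI pipeline and run a short Trainer.fit() here
```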