Possible memory leak when reloading a model after disposing of it. #8459
Comments
Hi @shmishra99,
Hi @shmishra99, I'm confused. Are you investigating this issue, or have you assigned it to yourself for some other reason?
Hi @stevexbritton, apologies for the late response. I was testing the link you shared. For me, both runs show the same result in Test 1, without the array size increasing. I'm not sure about Test 2; it reports the same number of tensors but with increased tensor values. Test 1 output and Test 2 output screenshots were attached. Can you please confirm whether you are seeing the same output as I am, and how this could be a case of a memory leak? Please let me know if I'm missing anything. Thank you!
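(For reference, one way to make the comparison between the two runs concrete is to log the tensor count and byte usage reported by tf.memory() at the same point in each run. A minimal sketch; the helper name is purely illustrative and not from the repro page:)

```js
// Illustrative helper: log tensor count and byte usage at a labelled point
// so that successive runs can be compared for growth.
function logTfMemory(label) {
  const m = tf.memory();
  console.log(`${label}: numTensors=${m.numTensors}, numBytes=${m.numBytes}`);
}
```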
Any further updates?
Hi @shmishra99, would you please respond to this, even if it's just to say you can't look at it at the moment. I don't think ignoring it is acceptable once you've assigned it to yourself.
Hi @stevexbritton, thank you for reaching out, and I apologize for the delay in responding. I've tested the code snippet you provided, and I've noticed that the array size is increasing with each run, even after disposing of the model and tensors. Your code flow seems correct to me, but I am not sure why this is happening. I will discuss this issue internally next week and provide an update. Console snapshots after each run (Snapshot 1, Snapshot 2, Snapshot 3) were attached as screenshots. Thank you!
System information
Describe the current behavior
After loading and using a LayersModel, I call model.dispose() and tf.disposeVariables() to release the TensorFlow.js memory. However, if I then reload the model to use it again, memory is leaked: at least 16k of Array data. This happens on each pass around the loop.
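A minimal sketch of the load / use / dispose / reload loop described above, assuming a page with TensorFlow.js available as the global `tf`; the model URL and input shape are placeholders rather than the ones used on the linked repro page:

```js
// Hypothetical repro sketch: load a LayersModel, run one prediction,
// dispose everything, then repeat and watch tf.memory() for growth.
async function runOnce(modelUrl) {
  const model = await tf.loadLayersModel(modelUrl);

  // Run a single prediction inside tf.tidy() so intermediate tensors
  // created during predict() are released automatically.
  tf.tidy(() => {
    const input = tf.zeros([1, 224, 224, 3]); // placeholder input shape
    const output = model.predict(input);      // assumes a single-output model
    output.dataSync();                        // force execution before the tidy scope ends
  });

  // Release the model's weights and any remaining variables.
  model.dispose();
  tf.disposeVariables();
}

async function reloadLoop(modelUrl, iterations) {
  for (let i = 0; i < iterations; i++) {
    await runOnce(modelUrl);
    // numTensors and numBytes should return to the same baseline on every
    // pass; growth across iterations is what this issue reports as a leak.
    console.log(`iteration ${i}:`, tf.memory());
  }
}
```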
Describe the expected behavior
I would not expect a memory leak and would expect it to behave the same as if the model was just reused.
Standalone code to reproduce the issue
The url "https://vykingsneakerkitnative.s3.eu-central-1.amazonaws.com/SteveTest/tmp/tf-leak-test.html" demonstrates the problem.
Steps to demonstrate:
Steps to demonstrate model reuse with minimal memory growth:
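For illustration, a hedged sketch of such a reuse loop (load once, predict repeatedly), which is the behaviour the reload case would be expected to match; the model URL and input shape are placeholders rather than those used on the linked page:

```js
// Reuse variant: load the model once and run predictions in a loop.
// tf.memory().numTensors should stay flat across iterations.
async function reuseLoop(modelUrl, iterations) {
  const model = await tf.loadLayersModel(modelUrl);
  for (let i = 0; i < iterations; i++) {
    tf.tidy(() => {
      const input = tf.zeros([1, 224, 224, 3]); // placeholder shape
      model.predict(input).dataSync();          // assumes a single-output model
    });
    console.log(`reuse iteration ${i}: numTensors=${tf.memory().numTensors}`);
  }
  model.dispose();
  tf.disposeVariables();
}
```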
Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.