Memory Leak in harvesters/core.py #454

Closed

ntr5090Vitro opened this issue May 3, 2024 · 4 comments

@ntr5090Vitro

Describe the Issue
We are using Harvesters inside a Docker container with two Teledyne line-scan cameras to take images. We have noticed that after about half an hour the container has grown to 32+ GB of memory and is eventually killed due to lack of system memory. I then ran tracemalloc to find where exactly the memory leak was coming from; it appears to originate from line 2589 of harvesters/core.py. See the screenshots below.
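
For context, this is roughly how such a top-10 listing can be produced with the standard-library tracemalloc module; the frame depth and the snapshot placement here are illustrative assumptions, not the exact script that was used:

import tracemalloc

tracemalloc.start(25)  # record up to 25 stack frames per allocation

# ... run the acquisition loop for a while ...

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics('lineno')[:10]:
    # prints lines like "harvesters/core.py:2589: size=..., count=..."
    print(stat)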

Sample Code
I can show a piece of code that demonstrates the reported phenomenon:

  • [x] Yes
  • [ ] No

If yes, please provide a sample code:
Pseudocode plus code fragments are all I can present at the moment.

We initialize everything like so:

from harvesters.core import Harvester

self.harvester = Harvester()
self.harvester.add_file('/opt/cvb/drivers/genicam/libGevTL.cti')
self.harvester.update()
self.cam_list = []
for i in range(len(self.harvester.device_info_list)):
    # camera_serial_num is the serial number of the i-th camera (looked up elsewhere)
    self.cam_list.append(self.harvester.create({'serial_number': camera_serial_num}))

We then get a start signal from the manufacturing line (this is how we start and stop the buffer-grabbing process).

Then we launch two threads, each running a function that gathers images:

def image_func(self, camera_index):
    acquirer = self.cam_list[camera_index]
    acquirer.start()
    while True:
        # fetch() blocks until a buffer is delivered; the context manager
        # re-queues the buffer when the with-block exits
        with acquirer.fetch(timeout=30) as buffer:
            # gather the image data from the buffer and save it
            ...
        # we eventually hit our break condition (a flag that signals acquisition should stop)
        if break_condition:
            acquirer.stop()
            break

We then wait for the threads to finish with a .join().

The threads are created and the function above is run every time we need to take pictures with the cameras; they are then joined back to the main process, which runs in an infinite loop (by design).
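
A rough sketch of that orchestration, assuming Python's standard threading module (the method name run_acquisition and the start-signal handling are simplified placeholders, not the real application code):

import threading

def run_acquisition(self):
    # one acquisition thread per camera, each running image_func from above
    threads = [
        threading.Thread(target=self.image_func, args=(i,))
        for i in range(len(self.cam_list))
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait for both acquisition threads to finish

# The main process runs forever (by design) and calls run_acquisition()
# once per start signal from the manufacturing line.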

Expected Behavior
No memory leak; memory usage should remain stable while the container runs.

Screenshots
tracemalloc output of top 10 allocations (screenshot)

Allocation site in harvesters/core.py (screenshot)

Configuration

  • OS: Ubuntu 20.04 LTS
  • Python: 3.7+
  • Harvester: 1.4.2
  • GenTL Producer: CVB -> libGevTL.cti
  • Camera: 2x Teledyne DALSA 4k Line Scan Cameras

Reproducibility

This phenomenon can be stably reproduced:

  • [x] Yes
  • [ ] No

The container exhausts its memory approximately every 30 minutes.

Actions You Have Taken
- Looked through current and past GitHub issues for similar problems and fixes
- Opened this GitHub issue to ask for advice on how to proceed


@sunavlis
Member

sunavlis commented May 6, 2024

Hi @ntr5090Vitro

Thanks for reporting the issue. I can confirm that the Harvesters 1.4.2 release contains a bug resulting in a memory leak. A fix has already been implemented and merged into the main branch of the repository (#402).

The next release, which includes the fix for this issue, will be coming soon; I will post an update here. In the meantime, you could use Harvesters 1.4.0 or check out and use the current main branch of Harvesters.

@ntr5090Vitro
Author

Thank you so much for your response. I will look out for your announcement about the new version release.

@sunavlis
Member

Version 1.4.3 is now released and available on PyPI.

I would expect that this issue is solved, so I will close it now. If you observe further issues or memory leaks, please feel free to reopen it or create a new one.

Thanks for the report and the detailed issue description!

@erik-mansson

Thank you both for reporting and solving this regression! For my application with notable buffers, acquiring frames at 1 kHz for 1 minute, then closing the acquirer, creating a new one, and acquiring again (in a loop), version 1.4.2 was leaking 46 MB per minute, while with 1.4.3 it is only 0.49 MB per minute (tested over approximately 300 iterations, about 5 hours). The remaining rate (measured simply by watching the Python process's memory usage grow in Windows' Task Manager) is probably small enough to ignore for now.

If @sunavlis sees this comment (despite the issue being closed), I just want to add that you forgot to increase the version number string in __init__.py. No big deal, but maybe add a reminder in some kind of checklist for making releases.
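
For anyone who wants to track the residual growth programmatically instead of watching Task Manager, a minimal sketch using the third-party psutil package could look like the following; the iteration count and the acquisition-cycle placeholder are assumptions, not code from this thread:

import os
import time
import psutil

proc = psutil.Process(os.getpid())
start = time.monotonic()
for i in range(300):  # roughly the number of iterations mentioned above
    # ... run one acquire/close/recreate cycle here ...
    rss_mb = proc.memory_info().rss / 1e6
    minutes = (time.monotonic() - start) / 60.0
    print(f"iteration {i}: {rss_mb:.1f} MB resident after {minutes:.1f} min")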
