We want to automatically segment nodules as described in #3. To train a machine learning algorithm, the model expects a fixed input and output size. This is challenging: on the one hand, the training data, CT scans rescaled to mm voxels, varies greatly in shape between scans, so smaller scans have to be padded appropriately. On the other hand, training data with shape 512x512x512 can blow up the model's memory consumption if we, e.g., try to implement a 3D U-Net. Currently, a fixed input size is implemented here.
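A minimal sketch of the padding idea (not the repository's implementation; `pad_to_shape` and the example shapes are assumptions for illustration). It also makes the memory concern concrete: a single 512x512x512 float32 volume is 512**3 * 4 bytes = 512 MiB before any activations.

```python
import numpy as np

def pad_to_shape(volume, target_shape=(512, 512, 512)):
    """Zero-pad a 3D volume symmetrically up to target_shape."""
    pad_widths = []
    for dim, target in zip(volume.shape, target_shape):
        total = max(target - dim, 0)
        pad_widths.append((total // 2, total - total // 2))
    return np.pad(volume, pad_widths, mode='constant', constant_values=0)

scan = np.zeros((330, 280, 280), dtype=np.float32)  # e.g. one rescaled scan
padded = pad_to_shape(scan)
assert padded.shape == (512, 512, 512)
```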
Expected Behavior
Find an appropriate shape for the training data such that all LIDC scans can be imported after being rescaled to mm voxels. Show that you can still train a classifier on top of that shape which is not too demanding in terms of GPU memory and training time (preferably a convolutional neural network, since CNN-based algorithms are state-of-the-art for pattern detection in many areas).
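A sketch of what such a memory-light classifier could look like (this is not the project's model; the input shape and layer sizes are assumptions). Aggressive spatial downsampling plus global pooling keeps activation memory modest:

```python
from keras.models import Model
from keras.layers import (Input, Conv3D, MaxPooling3D,
                          GlobalAveragePooling3D, Dense)

# assumed fixed shape after rescaling/padding, channels last
inputs = Input(shape=(128, 128, 128, 1))
x = Conv3D(8, 3, padding='same', activation='relu')(inputs)
x = MaxPooling3D(pool_size=4)(x)   # 128 -> 32 per spatial axis
x = Conv3D(16, 3, padding='same', activation='relu')(x)
x = MaxPooling3D(pool_size=4)(x)   # 32 -> 8 per spatial axis
x = GlobalAveragePooling3D()(x)    # collapse remaining spatial dims
outputs = Dense(1, activation='sigmoid')(x)  # e.g. nodule vs. no nodule

model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')
```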
Possible Implementation
You could try to use get_max_scaled_dimensions, which iterates over all rescaled LIDC images and returns their maximum dimensions.
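Roughly, such a helper computes the element-wise maximum of all scan shapes, i.e. the smallest box every scan fits into. A hedged sketch of the idea (`scan_paths` and `load_rescaled_scan` are hypothetical stand-ins, not the repository's actual API):

```python
import numpy as np

def get_max_scaled_dimensions(scan_paths, load_rescaled_scan):
    """Return the element-wise maximum shape over all rescaled scans."""
    max_dims = np.zeros(3, dtype=int)
    for path in scan_paths:
        volume = load_rescaled_scan(path)  # assumed to return a 3D ndarray
        max_dims = np.maximum(max_dims, volume.shape)
    return tuple(int(d) for d in max_dims)
```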
Furthermore, you could try the cropping layers provided by Keras.
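Keras does ship 3D cropping and padding layers; a minimal usage sketch (the crop/pad amounts are arbitrary examples):

```python
from keras.layers import Cropping3D, ZeroPadding3D

# trim 16 voxels from both ends of every spatial axis
crop = Cropping3D(cropping=((16, 16), (16, 16), (16, 16)))
# the inverse operation, useful for padding smaller inputs inside the model
pad = ZeroPadding3D(padding=((16, 16), (16, 16), (16, 16)))
```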
reubano changed the title from "Segment nodules: find appropriate training data shape for training" to "Segment nodules: find appropriate training data shape" on Nov 1, 2017.