docker_Help.py
#!/usr/bin/env python
import os


def docker_help(ImgName):
    # Note: the trailing double backslashes render as literal "\" line
    # continuations in the printed help; a single "\" before a newline
    # inside a triple-quoted string would silently join the lines.
    query = """=============================%s helper=============================
--------------------------
Performing Segmentation
  + With GPU:
    docker run --gpus all -v (your work directory):/data \\
        %s \\
        segment.py \\
        -in /data/(specified NIfTI file) \\
        -model nhp-model-04-epoch
        (optional arguments)
  + Without GPU:
    docker run -v (your work directory):/data \\
        %s \\
        segment.py \\
        -in /data/(specified NIfTI file) \\
        -model nhp-model-04-epoch
        (optional arguments)
  + For optional arguments, see the help for segment.py:
    docker run %s segment.py
  + To use your customized model:
    mount your model directory and specify the model by adding:
        -v (Path of model for testing):/Models
        -model /Models/(your model)
--------------------------
Training & Updating Models
  + With GPU:
    docker run --gpus all \\
        -v (Path of T1w images for training):/TrainT1w \\
        -v (Path of T1w masks for training):/TrainMsk \\
        -v (Path of trained model and log):/Results \\
        %s \\
        train_unet.py \\
        -trt1w /TrainT1w \\
        -trmsk /TrainMsk \\
        -out /Results
        (optional arguments)
  + Without GPU:
    docker run \\
        -v (Path of T1w images for training):/TrainT1w \\
        -v (Path of T1w masks for training):/TrainMsk \\
        -v (Path of trained model and log):/Results \\
        %s \\
        train_unet.py \\
        -trt1w /TrainT1w \\
        -trmsk /TrainMsk \\
        -out /Results
        (optional arguments)
  + For optional arguments, see the help for train_unet.py:
    docker run %s train_unet.py
--------------------------
Testing Models
  + With GPU:
    docker run --gpus all \\
        -v (Path of T1w images for testing):/TestT1w \\
        -v (Path of T1w masks for testing):/TestMsk \\
        -v (Path of model for testing):/Models \\
        -v (Path of log):/Results \\
        %s \\
        test_unet.py \\
        -tet1w /TestT1w \\
        -temsk /TestMsk \\
        -model /Models/(your model for testing) \\
        -out /Results
        (optional arguments)
  + Without GPU:
    docker run \\
        -v (Path of T1w images for testing):/TestT1w \\
        -v (Path of T1w masks for testing):/TestMsk \\
        -v (Path of model for testing):/Models \\
        -v (Path of log):/Results \\
        %s \\
        test_unet.py \\
        -tet1w /TestT1w \\
        -temsk /TestMsk \\
        -model /Models/(your model for testing) \\
        -out /Results
        (optional arguments)
  + For optional arguments, see the help for test_unet.py:
    docker run %s test_unet.py
--------------------------
Listing Models in Container
    docker run %s ls
--------------------------
Tips:
1. Make sure that the input head image is correctly oriented.
2. The models included in the Docker image were trained on bias-corrected
   data, so running denoising and bias correction (for example, the ANTs
   commands DenoiseImage and N4BiasFieldCorrection) before skull stripping
   is helpful.
3. If the current model fails, customize it using your own data.
   See 'Training & Updating Models' above.
--------------------------
NOTE: To use the --gpus option, you need to install nvidia-container-toolkit.
""" % (ImgName, ImgName, ImgName, ImgName, ImgName, ImgName,
       ImgName, ImgName, ImgName, ImgName, ImgName)
    print(query)


if __name__ == '__main__':
    ImgName = os.getenv("DIMGNAME")
    docker_help(ImgName)
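As an aside, the eleven repeated positional `ImgName` arguments could be avoided with mapping-based `%` formatting, where every `%(img)s` placeholder draws from the same dictionary key. A minimal sketch (the image tag `my/image:latest` is a placeholder, not part of the original script):

```python
# Mapping-based %-formatting substitutes one value at every %(img)s
# placeholder, so the image name is supplied exactly once.
template = ("docker run %(img)s segment.py\n"
            "docker run %(img)s train_unet.py")
print(template % {"img": "my/image:latest"})
# prints:
# docker run my/image:latest segment.py
# docker run my/image:latest train_unet.py
```

This removes the risk of the tuple length drifting out of sync with the number of `%s` placeholders when the help text is edited.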