This repository has been archived by the owner on Dec 1, 2021. It is now read-only.

Commit 7867a33

Merge branch 'master' into rename_QTZ_linear_mid_tread_half

Joeper214 authored Mar 11, 2020
2 parents c051937 + a0078e1
Showing 33 changed files with 98 additions and 86 deletions.
4 changes: 2 additions & 2 deletions blueoil/configs/core/classification/darknet_cifar10.py
@@ -69,9 +69,9 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
step_per_epoch = 50000 // 200
NETWORK.LEARNING_RATE_KWARGS = {
"values": [0.01, 0.001, 0.0001, 0.00001],
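The same two-line substitution repeats across most files in this commit: only the module path moves under `tf.compat.v1`, while the optimizer class and learning-rate schedule keep their call signatures. As a rough, illustrative sketch of how these config fields are typically combined in a TF1-style training loop (the `boundaries` values below are hypothetical, since this hunk truncates `LEARNING_RATE_KWARGS`, and blueoil's actual trainer is not part of this diff):

```python
# Illustrative sketch only - not blueoil's trainer.
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # TF1-style graph mode

global_step = tf.compat.v1.train.get_or_create_global_step()
step_per_epoch = 50000 // 200

# NETWORK.LEARNING_RATE_FUNC(global_step, **NETWORK.LEARNING_RATE_KWARGS)
learning_rate = tf.compat.v1.train.piecewise_constant(
    global_step,
    boundaries=[step_per_epoch * 50, step_per_epoch * 75, step_per_epoch * 90],  # hypothetical
    values=[0.01, 0.001, 0.0001, 0.00001],
)

# NETWORK.OPTIMIZER_CLASS(learning_rate, **NETWORK.OPTIMIZER_KWARGS)
optimizer = tf.compat.v1.train.MomentumOptimizer(learning_rate, momentum=0.9)
```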
@@ -72,9 +72,9 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
step_per_epoch = 50000 // 200
NETWORK.LEARNING_RATE_KWARGS = {
"values": [0.01, 0.001, 0.0001, 0.00001],
@@ -76,9 +76,9 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.polynomial_decay
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.polynomial_decay
# TODO(wakiska): It is same as original yolov2 paper (batch size = 128).
NETWORK.LEARNING_RATE_KWARGS = {"learning_rate": 1e-1, "decay_steps": 1600000, "power": 4.0, "end_learning_rate": 0.0}
NETWORK.IMAGE_SIZE = IMAGE_SIZE
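Here the schedule is `polynomial_decay` rather than `piecewise_constant`; its keyword arguments are unchanged by the commit. A minimal sketch of the configured schedule, using the kwargs shown above (illustrative only; the global-step source is an assumption):

```python
# Illustrative sketch: the polynomial decay schedule configured above.
# lr(step) = (0.1 - 0.0) * (1 - step / 1600000) ** 4.0 + 0.0
import tensorflow as tf

global_step = tf.compat.v1.train.get_or_create_global_step()
learning_rate = tf.compat.v1.train.polynomial_decay(
    learning_rate=1e-1,
    global_step=global_step,
    decay_steps=1600000,
    end_learning_rate=0.0,
    power=4.0,
)
```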
@@ -76,9 +76,9 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
NETWORK.LEARNING_RATE_KWARGS = {
"values": [0.1, 0.01, 0.001, 0.0001],
"boundaries": [40000, 60000, 80000],
4 changes: 2 additions & 2 deletions blueoil/configs/core/classification/lmnet_cifar10.py
@@ -69,9 +69,9 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
step_per_epoch = 50000 // BATCH_SIZE
NETWORK.LEARNING_RATE_KWARGS = {
"values": [0.01, 0.1, 0.01, 0.001, 0.0001],
4 changes: 2 additions & 2 deletions blueoil/configs/core/classification/lmnet_cifar100.py
@@ -69,9 +69,9 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
step_per_epoch = 50000 // 200
NETWORK.LEARNING_RATE_KWARGS = {
"values": [0.01, 0.001, 0.0001, 0.00001],
4 changes: 2 additions & 2 deletions blueoil/configs/core/classification/lmnet_openimagesv4.py
@@ -68,9 +68,9 @@
# IS_DEBUG = True

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
step_per_epoch = 50000 // 200
NETWORK.LEARNING_RATE_KWARGS = {
"values": [0.01, 0.001, 0.0001, 0.00001],
4 changes: 2 additions & 2 deletions blueoil/configs/core/classification/lmnet_quantize_cifar10.py
@@ -73,9 +73,9 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
step_per_epoch = 50000 // BATCH_SIZE
NETWORK.LEARNING_RATE_KWARGS = {
"values": [0.01, 0.1, 0.01, 0.001, 0.0001],
@@ -72,9 +72,9 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
step_per_epoch = 50000 // 200
NETWORK.LEARNING_RATE_KWARGS = {
"values": [0.01, 0.001, 0.0001, 0.00001],
4 changes: 2 additions & 2 deletions blueoil/configs/core/classification/lmnet_v1_cifar10.py
@@ -68,9 +68,9 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
step_per_epoch = 50000 // BATCH_SIZE
NETWORK.LEARNING_RATE_KWARGS = {
"values": [0.01, 0.001, 0.0001, 0.00001],
@@ -72,9 +72,9 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
step_per_epoch = 50000 // BATCH_SIZE
NETWORK.LEARNING_RATE_KWARGS = {
"values": [0.01, 0.001, 0.0001, 0.00001],
@@ -86,7 +86,7 @@
'optimizer_class': hp.choice(
'optimizer_class', [
{
-'optimizer': tf.train.MomentumOptimizer,
+'optimizer': tf.compat.v1.train.MomentumOptimizer,
'momentum': 0.9,
},
]
@@ -95,7 +95,7 @@
'learning_rate_func': hp.choice(
'learning_rate_func', [
{
-'scheduler': tf.train.piecewise_constant,
+'scheduler': tf.compat.v1.train.piecewise_constant,
'scheduler_factor': hp.uniform('scheduler_factor', 0.05, 0.5),
'scheduler_steps': [25000, 50000, 75000],
},
4 changes: 2 additions & 2 deletions blueoil/configs/core/classification/mobilenet_v2_cifar10.py
@@ -69,9 +69,9 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
step_per_epoch = 50000 // BATCH_SIZE
NETWORK.LEARNING_RATE_KWARGS = {
"values": [0.01, 0.1, 0.01, 0.001, 0.0001],
4 changes: 2 additions & 2 deletions blueoil/configs/core/classification/resnet_cifar10.py
@@ -70,9 +70,9 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
NETWORK.LEARNING_RATE_KWARGS = {
"values": [0.1, 0.01, 0.001, 0.0001],
"boundaries": [40000, 60000, 80000],
@@ -85,9 +85,9 @@
step_per_epoch = 149813 // BATCH_SIZE

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.AdamOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.AdamOptimizer
NETWORK.OPTIMIZER_KWARGS = {}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
NETWORK.LEARNING_RATE_KWARGS = {
"values": [1e-4, 1e-3, 1e-4, 1e-5],
"boundaries": [5000, step_per_epoch * 5, step_per_epoch * 10],
4 changes: 2 additions & 2 deletions blueoil/configs/core/object_detection/lm_fyolo_bdd100k.py
@@ -80,9 +80,9 @@
])

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
# In the origianl yolov2 Paper, with a starting learning rate of 10−3, dividing it by 10 at 60 and 90 epochs.
# Train data num per epoch is 16551
step_per_epoch = 16551 // BATCH_SIZE
@@ -99,9 +99,9 @@
])

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
# In the origianl yolov2 Paper, with a starting learning rate of 10−3, dividing it by 10 at 60 and 90 epochs.
# Train data num per epoch is 16551
step_per_epoch = 16551 // BATCH_SIZE
@@ -187,9 +187,9 @@
])

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
# In the yolov2 paper, with a starting learning rate of 10−3, dividing it by 10 at 60 and 90 epochs.
# Train data num per epoch is 16551
# In first 5000 steps, use small learning rate for warmup.
@@ -80,7 +80,7 @@


NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.AdamOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.AdamOptimizer
NETWORK.OPTIMIZER_KWARGS = {"learning_rate": 0.001}
NETWORK.IMAGE_SIZE = IMAGE_SIZE
NETWORK.BATCH_SIZE = BATCH_SIZE
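Several of the remaining configs use Adam with a fixed learning rate passed directly through `OPTIMIZER_KWARGS`, so there is no `LEARNING_RATE_FUNC` line to migrate. A one-line illustrative equivalent of what the config above describes:

```python
# Illustrative sketch: Adam with the fixed learning rate from OPTIMIZER_KWARGS.
import tensorflow as tf

optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=0.001)
```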
@@ -68,7 +68,7 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.AdamOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.AdamOptimizer
NETWORK.OPTIMIZER_KWARGS = {"learning_rate": 0.001}
NETWORK.IMAGE_SIZE = IMAGE_SIZE
NETWORK.BATCH_SIZE = BATCH_SIZE
@@ -72,7 +72,7 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.AdamOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.AdamOptimizer
NETWORK.OPTIMIZER_KWARGS = {"learning_rate": 0.001}
NETWORK.IMAGE_SIZE = IMAGE_SIZE
NETWORK.BATCH_SIZE = BATCH_SIZE
2 changes: 1 addition & 1 deletion blueoil/configs/core/segmentation/segnet_camvid.py
@@ -66,7 +66,7 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.AdamOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.AdamOptimizer
NETWORK.OPTIMIZER_KWARGS = {"learning_rate": 0.001}
NETWORK.IMAGE_SIZE = IMAGE_SIZE
NETWORK.BATCH_SIZE = BATCH_SIZE
@@ -66,7 +66,7 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.AdamOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.AdamOptimizer
NETWORK.OPTIMIZER_KWARGS = {"learning_rate": 0.001}
NETWORK.IMAGE_SIZE = IMAGE_SIZE
NETWORK.BATCH_SIZE = BATCH_SIZE
@@ -70,7 +70,7 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.AdamOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.AdamOptimizer
NETWORK.OPTIMIZER_KWARGS = {"learning_rate": 0.001}
NETWORK.IMAGE_SIZE = IMAGE_SIZE
NETWORK.BATCH_SIZE = BATCH_SIZE
@@ -85,15 +85,15 @@
'optimizer_class': hp.choice(
'optimizer_class', [
{
-'optimizer': tf.train.AdamOptimizer,
+'optimizer': tf.compat.v1.train.AdamOptimizer,
},
]
),
'learning_rate': hp.uniform('learning_rate', 0, 0.01),
'learning_rate_func': hp.choice(
'learning_rate_func', [
{
-'scheduler': tf.train.piecewise_constant,
+'scheduler': tf.compat.v1.train.piecewise_constant,
'scheduler_factor': 1.0,
'scheduler_steps': [25000, 50000, 75000],
},
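The two tuner configs touched by this commit (this hunk and the earlier one using `hp.choice` for the Momentum optimizer) describe a hyperopt search space rather than fixed values. A rough sketch of how such a space could be sampled with hyperopt (illustrative only; the objective function and blueoil's actual tuner integration are assumptions, not part of this diff):

```python
# Illustrative sketch: sampling a search space like the one configured above
# with hyperopt. The objective below is a stand-in, not blueoil's trainer.
import tensorflow as tf
from hyperopt import hp, fmin, tpe

space = {
    'optimizer_class': hp.choice('optimizer_class', [
        {'optimizer': tf.compat.v1.train.AdamOptimizer},
    ]),
    'learning_rate': hp.uniform('learning_rate', 0, 0.01),
}

def objective(params):
    # Placeholder objective: a real run would build the model with
    # params['optimizer_class']['optimizer'] and params['learning_rate'],
    # train it, and return the validation loss.
    return params['learning_rate']

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=10)
```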
@@ -72,7 +72,7 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.AdamOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.AdamOptimizer
NETWORK.OPTIMIZER_KWARGS = {"learning_rate": 0.001}
NETWORK.IMAGE_SIZE = IMAGE_SIZE
NETWORK.BATCH_SIZE = BATCH_SIZE
2 changes: 1 addition & 1 deletion blueoil/configs/example/classification.py
@@ -80,7 +80,7 @@
POST_PROCESSOR = None

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.AdamOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.AdamOptimizer
NETWORK.OPTIMIZER_KWARGS = {"learning_rate": 0.001}
NETWORK.IMAGE_SIZE = IMAGE_SIZE
NETWORK.BATCH_SIZE = BATCH_SIZE
4 changes: 2 additions & 2 deletions blueoil/configs/example/object_detection.py
@@ -88,9 +88,9 @@
])

NETWORK = EasyDict()
-NETWORK.OPTIMIZER_CLASS = tf.train.MomentumOptimizer
+NETWORK.OPTIMIZER_CLASS = tf.compat.v1.train.MomentumOptimizer
NETWORK.OPTIMIZER_KWARGS = {"momentum": 0.9}
-NETWORK.LEARNING_RATE_FUNC = tf.train.piecewise_constant
+NETWORK.LEARNING_RATE_FUNC = tf.compat.v1.train.piecewise_constant
_epoch_steps = 16551 // BATCH_SIZE
NETWORK.LEARNING_RATE_KWARGS = {
"values": [1e-6, 1e-4, 1e-5, 1e-6, 1e-7],
1 change: 0 additions & 1 deletion docker-compose.yml
@@ -23,4 +23,3 @@ services:
- CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES:-0}
- DATA_DIR=/home/blueoil/dataset
- OUTPUT_DIR=/home/blueoil/saved
-    - PYTHONPATH=/home/blueoil:/home/blueoil/lmnet:/home/blueoil/dlk/python/dlk
15 changes: 8 additions & 7 deletions docs/tutorial/image_cls.md
@@ -51,11 +51,12 @@ The CIFAR-10 dataset consists of 60,000 32x32 color images split into 10 classe
Generate your model configuration file interactively by running the `blueoil init` command.

$ docker run --rm -it \
-v $(pwd)/cifar:/home/blueoil/cifar \
-v $(pwd)/config:/home/blueoil/config \
blueoil_$(id -un):{TAG} \
-blueoil init -o config/my_config.yml
+blueoil init -o config/cifar10_test.py

-The `{TAG}` value must be set to a value like `v0.15.0-15-gf493ec9` that can be obtained with the `docker images` command.
+The `{TAG}` value must be set to a value like `v0.20.0-11-gf1e07c8` that can be obtained with the `docker images` command.
This value depends on your environment.

Below is an example configuration.
@@ -67,18 +68,18 @@ Below is an example configuration.
choose network: LmnetV1Quantize
choose dataset format: Caltech101
training dataset path: /home/blueoil/cifar/train/
-set validataion dataset? (if answer no, the dataset will be separated for training and validation by 9:1 ratio.) yes
+set validation dataset? (if answer no, the dataset will be separated for training and validation by 9:1 ratio.): yes
validataion dataset path: /home/blueoil/cifar/test/
batch size (integer): 64
image size (integer x integer): 32x32
how many epochs do you run training (integer): 100
select optimizer: Momentum
initial learning rate: 0.001
choose learning rate schedule ({epochs} is the number of training epochs you entered before): '3-step-decay' -> learning rate decrease by 1/10 on {epochs}/3 and {epochs}*2/3 and {epochs}-1
-enable data augmentation: Yes
+enable data augmentation? (Y/n): Yes
Please choose augmentors: done (5 selections)
-> select Brightness, Color, FlipLeftRight, Hue, SSDRandomCrop
-apply quantization at the first layer? no
+apply quantization at the first layer? (Y/n): no
```

- Model name: (Any)
@@ -98,7 +99,7 @@ Below is an example configuration.
- Augmentors: (Random)
- Quantization on the first layer: No

-If configuration finishes, the configuration file is generated in the `my_config.yml` under config directory.
+If configuration finishes, the configuration file is generated in the `cifar10_test.py` under config directory.

## Train a neural network

@@ -110,7 +111,7 @@ Train your model by running `blueoil train` with model configuration.
-v $(pwd)/config:/home/blueoil/config \
-v $(pwd)/saved:/home/blueoil/saved \
blueoil_$(id -un):{TAG} \
-blueoil train -c config/my_config.yml
+blueoil train -c config/cifar10_test.py

Just like init, set the value of `{TAG}` to the value obtained by `docker images`.
Change the value of `CUDA_VISIBLE_DEVICES` according to your environment.