From 420dce2889b21f8c9e721ba009311424cddef9b5 Mon Sep 17 00:00:00 2001
From: suttang
Date: Tue, 5 Nov 2019 16:57:31 +0900
Subject: [PATCH 1/2] fix documentation

---
 README.md                  | 17 +++++++++++++----
 docs/tutorial/image_cls.md | 20 ++++++++++++--------
 docs/tutorial/image_det.md | 20 +++++++++++++-------
 docs/tutorial/image_seg.md | 20 +++++++++++++-------
 docs/usage/convert.md      | 17 +++++++++--------
 docs/usage/init.md         |  6 +++---
 docs/usage/train.md        | 16 ++++++++--------
 7 files changed, 71 insertions(+), 45 deletions(-)

diff --git a/README.md b/README.md
index d30346c88..17163d5ea 100644
--- a/README.md
+++ b/README.md
@@ -66,11 +66,20 @@ We can test each operation of drore_run.sh by using a shell script.
 - `expect` >= version 5.45

 ```
-$ ./blueoil_test.sh
+$ make test
+```

-Usage: ./blueoil_test.sh
+You can test a specific task.

-Arguments:
-  YML_CONFIG_FILE    config file path for this test (optional)
 ```
+$ CUDA_VISIBLE_DEVICES={YOUR_GPU_ID} make test-classification
+$ CUDA_VISIBLE_DEVICES={YOUR_GPU_ID} make test-object-detection
+$ CUDA_VISIBLE_DEVICES={YOUR_GPU_ID} make est-semantic-segmentation
+```
+
+You can also test the modules used in Blueoil.
+```
+$ make test-lmnet
+$ make test-dlk
+```

diff --git a/docs/tutorial/image_cls.md b/docs/tutorial/image_cls.md
index f5f5b5281..93409e602 100644
--- a/docs/tutorial/image_cls.md
+++ b/docs/tutorial/image_cls.md
@@ -48,9 +48,9 @@ The CIFAR-10 dataset consists of 60,000 32x32 color images split into 10 classe

 ## Generate a configuration file

-Generate your model configuration file interactively by running the `blueoil init` command.
+Generate your model configuration file interactively by running the `python blueoil/cmd/main.py init` command.

-    $ ./blueoil.sh init
+    $ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py init

 Below is an example configuration.

@@ -79,19 +79,23 @@ Below is an example configuration.
 - Image size: 32x32
 - Number of epoch: (Any number)

-If configuration finishes, the configuration file is generated in the `{Model name}.yml` under `./config` directory.
+When configuration finishes, the configuration file `{Model name}.yml` is generated under the current directory.
+
+When you want to create the config YAML with a specific filename or directory, you can use the `-o` option.
+
+    $ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py init -o ./configs/my_config.yml

 ## Train a neural network

-Train your model by running `blueoil train` with model configuration.
+Train your model by running `python blueoil/cmd/main.py train` with the model configuration.

-    $ ./blueoil.sh train config/{Model name}.yml
+    $ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py train -c {PATH_TO_CONFIG.yml}

 When training has started, the training log and checkpoints are generated under `./saved/{Model name}_{TIMESTAMP}`.

 Training runs on the TensorFlow backend, so you can use TensorBoard to visualize your training process.

-    $ ./blueoil.sh tensorboard saved/{Model name}_{TIMESTAMP} {Port}
+    $ tensorboard --logdir=saved/{Model name}_{TIMESTAMP} --port {Port}

 - Loss / Cross Entropy, Loss, Weight Decay

@@ -105,9 +109,9 @@ Training runs on the TensorFlow backend, so you can use TensorBoard

 Convert trained model to executable binary files for x86, ARM, and FPGA. Currently, conversion for FPGA only supports Intel Cyclone® V SoC FPGA.

-    $ ./blueoil.sh convert config/[Model name].yml saved/{Mode name}_{TIMESTAMP}
+    $ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py convert -e {Model name}

-`Blueoil convert` automatically executes some conversion processes.
+`python blueoil/cmd/main.py convert` automatically executes some conversion processes.
 - Converts Tensorflow checkpoint to protocol buffer graph.
 - Optimizes graph.
 - Generates source code for executable binary.

diff --git a/docs/tutorial/image_det.md b/docs/tutorial/image_det.md
index bcfe97df5..36f5bb5b3 100644
--- a/docs/tutorial/image_det.md
+++ b/docs/tutorial/image_det.md
@@ -27,9 +27,9 @@ This dataset consists of 2866 Human Face images and 5170 annotation boxes.

 ## Generate a configuration file

-Generate your model configuration file interactively by running `blueoil.sh init`.
+Generate your model configuration file interactively by running the `python blueoil/cmd/main.py init` command.

-    $ ./blueoil.sh init
+    $ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py init

 Below is an example of initialization.

@@ -52,17 +52,23 @@ Please choose augmentors: done (5 selections)
 apply quantization at the first layer? no
 ```

+When configuration finishes, the configuration file `{Model name}.yml` is generated under the current directory.
+
+When you want to create the config YAML with a specific filename or directory, you can use the `-o` option.
+
+    $ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py init -o ./configs/my_config.yml
+
 ## Train a network model

-Train your model by running `blueoil.sh train` with a model configuration.
+Train your model by running `python blueoil/cmd/main.py train` with a model configuration.

-    $ ./blueoil.sh train config/{Model name}.yml
+    $ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py train -c {PATH_TO_CONFIG.yml}

 When training has started, the training log and checkpoints are generated under `./saved/{Model name}_{TIMESTAMP}`.

 Training runs on the TensorFlow backend, so you can use TensorBoard to visualize your training process.

-    $ ./blueoil.sh tensorboard saved/{Model name}_{TIMESTAMP} {Port}
+    $ tensorboard --logdir=saved/{Model name}_{TIMESTAMP} --port {Port}

 - Metrics / Accuracy

@@ -79,9 +85,9 @@ Training runs on the TensorFlow backend, so you can use TensorBoard to visualize

 Convert trained model to executable binary files for x86, ARM, and FPGA. Currently, conversion for FPGA only supports Intel Cyclone® V SoC FPGA.

-    $ ./blueoil.sh convert config/{Model name}.yml saved/{Mode name}_{TIMESTAMP}
+    $ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py convert -e {Model name}

-`blueoil.sh convert` automatically executes some conversion processes.
+`python blueoil/cmd/main.py convert` automatically executes some conversion processes.
 - Converts Tensorflow checkpoint to protocol buffer graph.
 - Optimizes graph.
 - Generates source code for executable binary.

diff --git a/docs/tutorial/image_seg.md b/docs/tutorial/image_seg.md
index fc6dace0f..301ff09b3 100644
--- a/docs/tutorial/image_seg.md
+++ b/docs/tutorial/image_seg.md
@@ -54,9 +54,9 @@ CamVid dataset consists of 360x480 color images in 12 classes. There are 367 tr

 ## Generate a configuration file

-Generate your model configuration file interactively by running `blueoil.sh init` command.
+Generate your model configuration file interactively by running the `python blueoil/cmd/main.py init` command.

-    $ ./blueoil.sh init
+    $ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py init

 This is an example of the initialization procedure.

@@ -81,17 +81,23 @@ Please choose augmentors: done (5 selections)
 apply quantization at the first layer? no
 ```

+When configuration finishes, the configuration file `{Model name}.yml` is generated under the current directory.
+
+When you want to create the config YAML with a specific filename or directory, you can use the `-o` option.
+
+    $ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py init -o ./configs/my_config.yml
+
 ## Train a network model

-Train your model by running `blueoil.sh train` command with model configuration.
+Train your model by running the `python blueoil/cmd/main.py train` command with the model configuration.

-    $ ./blueoil.sh train config/{Model name}.yml
+    $ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py train -c {PATH_TO_CONFIG.yml}

 When training has started, the training log and checkpoints will be generated under `./saved/{Model name}_{TIMESTAMP}`.

 Training runs on the TensorFlow backend, so you can use TensorBoard to visualize your training progress.

-    $ ./blueoil.sh tensorboard saved/{Model name}_{TIMESTAMP} {Port}
+    $ tensorboard --logdir=saved/{Model name}_{TIMESTAMP} --port {Port}

 - Learning Rate / Loss

@@ -107,9 +113,9 @@ Training runs on the TensorFlow backend, so you can use TensorBoard to visualize

 Convert trained model to executable binary files for x86, ARM, and FPGA. Currently, conversion for FPGA only supports Intel Cyclone® V SoC FPGA.

-    $ ./blueoil.sh convert config/{Model name}.yml saved/{Mode name}_{TIMESTAMP}
+    $ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py convert -e {Model name}

-`blueoil.sh convert` automatically executes some conversion processes.
+`python blueoil/cmd/main.py convert` automatically executes some conversion processes.
 - Convert Tensorflow checkpoint to protocol buffer graph.
 - Optimize graph.
 - Generate source code for executable binary.

diff --git a/docs/usage/convert.md b/docs/usage/convert.md
index 564d9a3c3..28ed32064 100644
--- a/docs/usage/convert.md
+++ b/docs/usage/convert.md
@@ -1,19 +1,20 @@
 # Convert your training result to FPGA ready format

 ```
-$ ./blueoil.sh convert config/test.yml ./saved/test_20180101000000
+$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py convert -e test_20180101000000

 Usage:
-  ./blueoil.sh convert
+  main.py convert [OPTIONS]

 Arguments:
-  YML_CONFIG_FILE         config file path for this training  [required]
-  EXPERIMENT_DIRECTORY    experiment directory path for input  [required]
-                          this is same as {OUTPUT_DIRECTORY}/{EXPERIMENT_ID} in training options.
-  CHECKPOINT_NO           checkpoint number  [optional] (default is latest checkpoint)
-                          if you want to use save.ckpt-1000, you can set CHECKPOINT_NO as 1000.
+  -e, --experiment_id TEXT  ID of this experiment.  [required]
+  -p, --checkpoint TEXT     Checkpoint name. e.g. save.ckpt-10001
+  -t, --template TEXT       Path of output template directory.
+  --image_size              Input image size (height and width). If not provided, it is restored from the saved experiment config. e.g. --image_size 320 320
+  --project_name TEXT       Project name which is generated by convert.
+  --help                    Show this message and exit.
 ```

-`blueoil convert` command converts trained models to executable binary files for x86, ARM Cortex-A9, and FPGA.
+The `python blueoil/cmd/main.py convert` command converts trained models to executable binary files for x86, ARM Cortex-A9, and FPGA.
diff --git a/docs/usage/init.md b/docs/usage/init.md
index 2df87dcc3..8a0f15851 100644
--- a/docs/usage/init.md
+++ b/docs/usage/init.md
@@ -1,10 +1,10 @@
 # Generate a configuration file

-You can generate your configuration file interactively by running `blueoil init`.
+You can generate your configuration file interactively by running `python blueoil/cmd/main.py init`.

-    $ ./blueoil.sh init
+    $ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py init

-`blueoil init` generates a configuration file used to train your new model.
+`python blueoil/cmd/main.py init` generates a configuration file used to train your new model.
 Below is an example.

 ```

diff --git a/docs/usage/train.md b/docs/usage/train.md
index ce2e11c68..4d5e88079 100644
--- a/docs/usage/train.md
+++ b/docs/usage/train.md
@@ -2,22 +2,22 @@

 ```
-$ ./blueoil.sh train config/test.yml
+$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py train -c config/test.yml

 Usage:
-  ./blueoil.sh train
+  main.py train [OPTIONS]

 Arguments:
-  YML_CONFIG_FILE     config file path for this training  [required]
-  OUTPUT_DIRECTORY    output directory path for saving models  [optional] (defalt is ./saved)
-  EXPERIMENT_ID       id of this training  [optional] (default is {CONFIG_NAME}_{TIMESTAMP})
+  -c, --config TEXT         Path of config file.  [required]
+  -e, --experiment_id TEXT  ID of this training.
+  --help                    Show this message and exit.
 ```

-`blueoil train` command runs actual training.
+The `python blueoil/cmd/main.py train` command runs the actual training.

-Before running `blueoil train`, make sure you've already put training/test data in the proper location, as defined in the configuration file.
+Before running `python blueoil/cmd/main.py train`, make sure you've already put the training/test data in the proper location, as defined in the configuration file.

-If you want to stop training, you should press `Ctrl + C` or kill the `blueoil.sh` processes. You can restart training from saved checkpoints by setting `EXPERIMENT_ID` to be the same as an existing id.
+If you want to stop training, press `Ctrl + C` or kill the `blueoil train` process. You can restart training from saved checkpoints by setting `experiment_id` to the same value as an existing id.

 ## Training on GPUs

From d200c15589ddd5d0de8b3a88cff0a25af15a41fd Mon Sep 17 00:00:00 2001
From: Takahiro Suzuki
Date: Wed, 6 Nov 2019 20:31:04 +0900
Subject: [PATCH 2/2] Update README.md

Co-Authored-By: Hideaki Masuda
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 17163d5ea..0e6c104c2 100644
--- a/README.md
+++ b/README.md
@@ -74,7 +74,7 @@ You can test a specific task.
 ```
 $ CUDA_VISIBLE_DEVICES={YOUR_GPU_ID} make test-classification
 $ CUDA_VISIBLE_DEVICES={YOUR_GPU_ID} make test-object-detection
-$ CUDA_VISIBLE_DEVICES={YOUR_GPU_ID} make est-semantic-segmentation
+$ CUDA_VISIBLE_DEVICES={YOUR_GPU_ID} make test-semantic-segmentation
 ```

 You can also test the modules used in Blueoil.
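
For reference (not part of the patches above), the commands these docs introduce chain into one end-to-end workflow. Below is a minimal sketch, assuming you run from the repository root; `./configs/my_config.yml`, `my_experiment`, and port `6006` are placeholder names, and `-e` is assumed to fix the experiment directory name under `./saved/` (without `-e`, the id defaults to `{CONFIG_NAME}_{TIMESTAMP}`, per docs/usage/train.md).

```
# Make the blueoil, lmnet, and dlk modules importable, as every doc example assumes.
export PYTHONPATH=.:lmnet:dlk/python/dlk

# 1. Generate a configuration file interactively; -o chooses the output path (optional).
python blueoil/cmd/main.py init -o ./configs/my_config.yml

# 2. Train with that config; logs and checkpoints go under ./saved/.
python blueoil/cmd/main.py train -c ./configs/my_config.yml -e my_experiment

# 3. Watch the loss/accuracy curves in TensorBoard while training runs.
tensorboard --logdir=saved/my_experiment --port 6006

# 4. Convert the trained model to executable binaries for x86, ARM, and FPGA.
python blueoil/cmd/main.py convert -e my_experiment
```

Every command above appears in the patched docs; only the concrete file, experiment, and port names are invented for illustration.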