This repository has been archived by the owner on Dec 1, 2021. It is now read-only.

Fix documentation for CLI changes #591

Merged
merged 4 commits, Nov 6, 2019
17 changes: 13 additions & 4 deletions README.md
Original file line number Diff line number Diff line change
@@ -66,11 +66,20 @@ We can test each operation of drore_run.sh using a shell script.
- `expect` >= version 5.45

```
$ ./blueoil_test.sh
$ make test
```

Usage: ./blueoil_test.sh <YML_CONFIG_FILE(optional)>
You can test a specific task.

Arguments:
YML_CONFIG_FILE config file path for this test (optional)
```
$ CUDA_VISIBLE_DEVICES={YOUR_GPU_ID} make test-classification
$ CUDA_VISIBLE_DEVICES={YOUR_GPU_ID} make test-object-detection
$ CUDA_VISIBLE_DEVICES={YOUR_GPU_ID} make test-semantic-segmentation
```

You can also test the modules used in Blueoil.

```
$ make test-lmnet
$ make test-dlk
```
20 changes: 12 additions & 8 deletions docs/tutorial/image_cls.md
@@ -48,9 +48,9 @@ The CIFAR-10 dataset consists of 60,000 32x32 color images split into 10 classes.

## Generate a configuration file

Generate your model configuration file interactively by running the `blueoil init` command.
Generate your model configuration file interactively by running the `python blueoil/cmd/main.py init` command.

$ ./blueoil.sh init
$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py init

Below is an example configuration.

@@ -79,19 +79,23 @@ Below is an example configuration.
- Image size: 32x32
- Number of epoch: (Any number)

If configuration finishes, the configuration file is generated in the `{Model name}.yml` under `./config` directory.
When configuration finishes, the configuration file `{Model name}.yml` is generated under the current directory.

If you want to write the config YAML to a specific filename or directory, use the `-o` option.

$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py init -o ./configs/my_config.yml

## Train a neural network

Train your model by running `blueoil train` with model configuration.
Train your model by running `python blueoil/cmd/main.py train` with model configuration.

$ ./blueoil.sh train config/{Model name}.yml
$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py train -c {PATH_TO_CONFIG.yml}

When training has started, the training log and checkpoints are generated under `./saved/{Model name}_{TIMESTAMP}`.

Training runs on the TensorFlow backend, so you can use TensorBoard to visualize your training process.

$ ./blueoil.sh tensorboard saved/{Model name}_{TIMESTAMP} {Port}
$ tensorboard --logdir=saved/{Model name}_{TIMESTAMP} --port {Port}

- Loss / Cross Entropy, Loss, Weight Decay
<img src="../_static/train_loss.png">
@@ -105,9 +109,9 @@ Training runs on the TensorFlow backend, so you can use TensorBoard to visualize your training process.
Convert trained model to executable binary files for x86, ARM, and FPGA.
Currently, conversion for FPGA only supports Intel Cyclone® V SoC FPGA.

$ ./blueoil.sh convert config/[Model name].yml saved/{Mode name}_{TIMESTAMP}
$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py convert -e {Model name}

`Blueoil convert` automatically executes some conversion processes.
`python blueoil/cmd/main.py convert` automatically executes several conversion steps:
- Converts the TensorFlow checkpoint to a protocol buffer graph.
- Optimizes the graph.
- Generates source code for the executable binary.
20 changes: 13 additions & 7 deletions docs/tutorial/image_det.md
@@ -27,9 +27,9 @@ This dataset consists of 2866 Human Face images and 5170 annotation boxes.

## Generate a configuration file

Generate your model configuration file interactively by running `blueoil.sh init`.
Generate your model configuration file interactively by running the `python blueoil/cmd/main.py init` command.

$ ./blueoil.sh init
$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py init

Below is an example of initialization.

@@ -52,17 +52,23 @@ Please choose augmentors: done (5 selections)
apply quantization at the first layer? no
```

When configuration finishes, the configuration file `{Model name}.yml` is generated under the current directory.

If you want to write the config YAML to a specific filename or directory, use the `-o` option.

$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py init -o ./configs/my_config.yml

## Train a network model

Train your model by running `blueoil.sh train` with a model configuration.
Train your model by running `python blueoil/cmd/main.py train` with a model configuration.

$ ./blueoil.sh train config/{Model name}.yml
$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py train -c {PATH_TO_CONFIG.yml}

When training has started, the training log and checkpoints are generated under `./saved/{Model name}_{TIMESTAMP}`.

Training runs on the TensorFlow backend, so you can use TensorBoard to visualize your training process.

$ ./blueoil.sh tensorboard saved/{Model name}_{TIMESTAMP} {Port}
$ tensorboard --logdir=saved/{Model name}_{TIMESTAMP} --port {Port}

- Metrics / Accuracy
<img src="../_static/object_detection_train_metrics.png">
@@ -79,9 +85,9 @@ Training runs on the TensorFlow backend, so you can use TensorBoard to visualize your training process.
Convert trained model to executable binary files for x86, ARM, and FPGA.
Currently, conversion for FPGA only supports Intel Cyclone® V SoC FPGA.

$ ./blueoil.sh convert config/{Model name}.yml saved/{Mode name}_{TIMESTAMP}
$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py convert -e {Model name}

`blueoil.sh convert` automatically executes some conversion processes.
`python blueoil/cmd/main.py convert` automatically executes several conversion steps:
- Converts the TensorFlow checkpoint to a protocol buffer graph.
- Optimizes the graph.
- Generates source code for the executable binary.
20 changes: 13 additions & 7 deletions docs/tutorial/image_seg.md
@@ -54,9 +54,9 @@ The CamVid dataset consists of 360x480 color images in 12 classes. There are 367 training…

## Generate a configuration file

Generate your model configuration file interactively by running `blueoil.sh init` command.
Generate your model configuration file interactively by running the `python blueoil/cmd/main.py init` command.

$ ./blueoil.sh init
$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py init

This is an example of the initialization procedure.

@@ -81,17 +81,23 @@ Please choose augmentors: done (5 selections)
apply quantization at the first layer? no
```

When configuration finishes, the configuration file `{Model name}.yml` is generated under the current directory.

If you want to write the config YAML to a specific filename or directory, use the `-o` option.

$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py init -o ./configs/my_config.yml

## Train a network model

Train your model by running `blueoil.sh train` command with model configuration.
Train your model by running `python blueoil/cmd/main.py train` command with model configuration.

$ ./blueoil.sh train config/{Model name}.yml
$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py train -c {PATH_TO_CONFIG.yml}

When training has started, the training log and checkpoints will be generated under `./saved/{Model name}_{TIMESTAMP}`.

Training runs on the TensorFlow backend, so you can use TensorBoard to visualize your training progress.

$ ./blueoil.sh tensorboard saved/{Model name}_{TIMESTAMP} {Port}
$ tensorboard --logdir=saved/{Model name}_{TIMESTAMP} --port {Port}

- Learning Rate / Loss
<img src="../_static/semantic_segmentation_loss.png">
@@ -107,9 +113,9 @@ Training runs on the TensorFlow backend, so you can use TensorBoard to visualize your training progress.
Convert trained model to executable binary files for x86, ARM, and FPGA.
Currently, conversion for FPGA only supports Intel Cyclone® V SoC FPGA.

$ ./blueoil.sh convert config/{Model name}.yml saved/{Mode name}_{TIMESTAMP}
$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py convert -e {Model name}

`blueoil.sh convert` automatically executes some conversion processes.
`python blueoil/cmd/main.py convert` automatically executes several conversion steps:
- Converts the TensorFlow checkpoint to a protocol buffer graph.
- Optimizes the graph.
- Generates source code for the executable binary.
17 changes: 9 additions & 8 deletions docs/usage/convert.md
@@ -1,19 +1,20 @@
# Convert your training result to FPGA ready format

```
$ ./blueoil.sh convert config/test.yml ./saved/test_20180101000000
$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py convert -e test_20180101000000

Usage:
./blueoil.sh convert <YML_CONFIG_FILE> <EXPERIMENT_DIRECTORY> <CHECKPOINT_NO(optional)>
main.py convert [OPTIONS]

Arguments:
YML_CONFIG_FILE config file path for this training [required]
EXPERIMENT_DIRECTORY experiment directory path for input [required]
this is same as {OUTPUT_DIRECTORY}/{EXPERIMENT_ID} in training options.
CHECKPOINT_NO checkpoint number [optional] (default is latest checkpoint)
if you want to use save.ckpt-1000, you can set CHECKPOINT_NO as 1000.
-e, --experiment_id TEXT ID of this experiment. [required]
-p, --checkpoint TEXT Checkpoint name. e.g. save.ckpt-10001
-t, --template TEXT Path of output template directory.
--image_size <INTEGER INTEGER> input image size height and width. If these are not provided, it restores from the saved experiment config. e.g. --image_size 320 320
--project_name TEXT project name which generated by convert
--help Show this message and exit.
```
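For instance, to convert from a specific checkpoint rather than the latest one, the `-e` and `-p` options shown above can be combined. A minimal sketch (the experiment id below is a hypothetical placeholder; use a directory name that actually exists under `./saved`):

```shell
# Hypothetical experiment id; replace with your own directory name under ./saved/.
EXPERIMENT_ID=test_20180101000000

# Convert using checkpoint save.ckpt-10001 instead of the latest one.
PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py convert \
    -e "$EXPERIMENT_ID" \
    -p save.ckpt-10001
```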

`blueoil convert` command converts trained models to executable binary files for x86, ARM Cortex-A9, and FPGA.
The `python blueoil/cmd/main.py convert` command converts trained models to executable binary files for x86, ARM Cortex-A9, and FPGA.


6 changes: 3 additions & 3 deletions docs/usage/init.md
@@ -1,10 +1,10 @@
# Generate a configuration file

You can generate your configuration file interactively by running `blueoil init`.
You can generate your configuration file interactively by running `python blueoil/cmd/main.py init`.

$ ./blueoil.sh init
$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py init

`blueoil init` generates a configuration file used to train your new model.
`python blueoil/cmd/main.py init` generates a configuration file used to train your new model.

Below is an example.
```
16 changes: 8 additions & 8 deletions docs/usage/train.md
@@ -2,22 +2,22 @@


```
$ ./blueoil.sh train config/test.yml
$ PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py train -c config/test.yml

Usage:
./blueoil.sh train <YML_CONFIG_FILE> <OUTPUT_DIRECTORY(optional)> <EXPERIMENT_ID(optional)>
main.py train [OPTIONS]

Arguments:
YML_CONFIG_FILE config file path for this training [required]
OUTPUT_DIRECTORY output directory path for saving models [optional] (default is ./saved)
EXPERIMENT_ID id of this training [optional] (default is {CONFIG_NAME}_{TIMESTAMP})
-c, --config TEXT Path of config file. [required]
-e, --experiment_id TEXT ID of this training.
--help Show this message and exit.
```

`blueoil train` command runs actual training.
The `python blueoil/cmd/main.py train` command runs the actual training.

Before running `blueoil train`, make sure you've already put training/test data in the proper location, as defined in the configuration file.
Before running `python blueoil/cmd/main.py train`, make sure you've already put training/test data in the proper location, as defined in the configuration file.

If you want to stop training, you should press `Ctrl + C` or kill the `blueoil.sh` processes. You can restart training from saved checkpoints by setting `EXPERIMENT_ID` to be the same as an existing id.
To stop training, press `Ctrl + C` or kill the `blueoil train` process. You can restart training from saved checkpoints by setting `--experiment_id` to the same value as an existing run.
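As a sketch of resuming an interrupted run, pass the same id with `-e` (the id below is hypothetical; use the directory name that the first run created under `./saved`):

```shell
# Initial run; an experiment id is generated automatically.
PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py train -c config/test.yml

# After interrupting with Ctrl + C, resume from the latest checkpoint
# by reusing the same (hypothetical) experiment id.
PYTHONPATH=.:lmnet:dlk/python/dlk python blueoil/cmd/main.py train \
    -c config/test.yml \
    -e test_20180101000000
```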

## Training on GPUs
