Commit

Update Readme. Minor fix on demos.
Update README.md

Improve installation guide
sunggg committed Dec 11, 2021
1 parent 1778045 commit 1ae4f84
Showing 3 changed files with 47 additions and 19 deletions.
42 changes: 35 additions & 7 deletions README.md
@@ -1,20 +1,48 @@
# CAUTION: Currently, this repo is under refactoring. Please check back around Dec 15th.

# Collage
System for automated integration of deep learning backends. Our implementation uses TVM as its code generator.
System for automated integration of deep learning backends.

# Installation
1. Go to `tvm/` and install tvm. Make sure backend libaries of interest are built together. [TVM installation guide](https://tvm.apache.org/docs/install/index.html)
2. Declare following environment variables
Since our implementation uses TVM as the main code generator, install TVM under `tvm/`. See the [TVM installation guide](https://tvm.apache.org/docs/install/index.html) for details.
1. Install dependencies
```
sudo apt-get update
sudo apt-get install -y python3 python3-dev python3-setuptools gcc libtinfo-dev zlib1g-dev build-essential cmake libedit-dev libxml2-dev
```
```
pip3 install --user numpy decorator attrs tornado psutil xgboost cloudpickle pytest
```

2. Create a build directory and move into it
```
mkdir tvm/build && cd tvm/build
```
3. Prepare the `cmake` configuration file. Make sure the backend libraries of interest are built together. We provide the cmake configs used for our GPU/CPU experiments (`config.cmake.gpu`, `config.cmake.cpu`) in `tvm/cmake/`. Users may copy one to their build directory and rename it to `config.cmake`.
```
cp ../cmake/config.cmake.gpu config.cmake
```
4. Run `cmake` and `make`
```
cmake .. && make -j$(nproc)
```
5. Declare the following environment variables
```
export COLLAGE_HOME=/path/to/collage/repo
export COLLAGE_TVM_HOME=${COLLAGE_HOME}/tvm
export PYTHONPATH=${COLLAGE_TVM_HOME}/python:${COLLAGE_HOME}/python:${PYTHONPATH}
```
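
To verify the setup, a quick check like the following can be run (a minimal sketch, not part of the repo; it only confirms that the TVM build under `tvm/` is picked up through `PYTHONPATH` and, for GPU builds, that CUDA support was compiled in).
```
# Minimal sanity check (a sketch): confirms the TVM build under tvm/ is importable
# via PYTHONPATH and reports whether CUDA support was compiled in.
import tvm

print("TVM version:", tvm.__version__)
print("CUDA enabled:", tvm.runtime.enabled("cuda"))
```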

# Demo
1. `cd demo/`
2. `python3 demo.py`
Install the following dependencies for the deep learning models used in the demos.
```
pip3 install --user torch torchvision tqdm onnx onnxruntime
```

We provide two demos (`demo_performance.py`, `demo_customization.py`) under `demo/`.
* `demo_performance.py` shows how Collage optimizes given workloads with the popular backends it provides by default; a sketch of its core flow appears after the tuning note below.
* `demo_customization.py` shows how users can register a new backend with their custom codegen, patterns, and pattern rules; a minimal pattern sketch follows this list.
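
For a flavor of the customization API, the snippet below is adapted from `demo_customization.py` in this commit (a sketch, not a complete program: `Pattern`, `check_dimension`, and `collage_mod` are defined in the demo, and the full `register_new_backend` argument list is not shown in this diff).
```
# Sketch adapted from demo_customization.py: a custom backend is described by
# (pattern, constraint) pairs. is_op/wildcard are TVM Relay dataflow-pattern
# helpers; Pattern is the wrapper class used by the demo, and check_dimension
# is the demo's own constraint predicate (its body is not shown here).
from tvm.relay.dataflow_pattern import is_op, wildcard

patterns = [
    # Offload conv2d only when check_dimension(config) holds.
    tuple([Pattern(is_op("nn.conv2d")(wildcard(), wildcard())), check_dimension]),
    # Offload dense unconditionally (no constraint).
    tuple([Pattern(is_op("nn.dense")(wildcard(), wildcard())), None]),
]
# The list is then passed to collage_mod.register_new_backend(...); see
# demo_customization.py for the full argument list (truncated in this diff).
```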

For the best results, it is highly recommended to create a tuning log with `autotune_tvm_ops.py` before running these demos.
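
For reference, the core flow of `demo_performance.py` looks roughly like the sketch below (adapted from this commit's sources; it assumes `collage_mod`, the Collage object constructed earlier in the demo, and `measure_perf`, the demo's timing helper, neither of which appears in this diff).
```
# Sketch of the demo_performance.py flow, adapted from this commit's sources.
# Assumes collage_mod (the Collage object) and measure_perf() are defined as in
# the demo; only the workload description and the placement call are shown.
import logging

logging.basicConfig(level=logging.ERROR)  # default level: skip optimization messages

workload = {
    "optimizer": "op-level",
    "backends": ["autotvm", "cudnn", "cublas", "tensorrt"],
    "network_name": "dcgan",
    "target": "cuda",
    "batch_size": 1,
}

lib = collage_mod.optimize_backend_placement(**workload)
mean_perf, std_perf = measure_perf(lib, workload)
print(f"# Network: {workload['network_name']}, Collage optimizer: {workload['optimizer']}")
print(f" - Run with Collage (mean, std) = ({mean_perf:.4f}+-{std_perf:.4f})")
```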


# Note
* As Collage uses TVM as its code generator, it cannot support backends that TVM is unable to build. Tested backends are
22 changes: 11 additions & 11 deletions demo/demo_customization.py
@@ -47,16 +47,16 @@
workload = {
"optimizer": "op-level",
"backends": ["autotvm", "cudnn", "cublas", "tensorrt"],
"network_name": "dcgan", #"resnext50_32x4d",
"network_name": "dcgan",
"target": "cuda",
"batch_size": 1,
}

# Default logging level
#logging.basicConfig(level=logging.ERROR)
# Default logging level. Skip messages during optimization
logging.basicConfig(level=logging.ERROR)

# Enable logging to monitor optimization progress e.g., operator matching, profiling...
logging.basicConfig(level=logging.INFO)
#logging.basicConfig(level=logging.INFO)

def measure_perf(lib, workload):
# Create workload
@@ -103,8 +103,8 @@ def check_dimension(config):
return dim1 == 2 and dim2 == 2

patterns = [
tuple([Pattern(is_op("nn.conv2d")(wildcard(), wildcard())), None]),
tuple([Pattern(is_op("nn.dense")(wildcard(), wildcard())), check_dimension])
tuple([Pattern(is_op("nn.conv2d")(wildcard(), wildcard())), check_dimension]),
tuple([Pattern(is_op("nn.dense")(wildcard(), wildcard())), None])
]

collage_mod.register_new_backend(
@@ -141,11 +141,11 @@ def cg_VanillaTVM(net, target, params, **kwargs):

# Run backend placement optimization with two custom backends
workload["backends"] = ["VanillaTVM", "SimpleBackend"]
#lib = collage_mod.optimize_backend_placement(**workload)
#collage_mean_perf, collage_std_perf = measure_perf(lib, workload)
#print(f"# Network: {workload['network_name']}, Collage optimizer: {workload['optimizer']}")
#print(f" - Provided backends: {workload['backends']}")
#print(f" - Run with Collage (mean, std) = ({collage_mean_perf:.4f}+-{collage_std_perf:.4f})")
lib = collage_mod.optimize_backend_placement(**workload)
collage_mean_perf, collage_std_perf = measure_perf(lib, workload)
print(f"# Network: {workload['network_name']}, Collage optimizer: {workload['optimizer']}")
print(f" - Provided backends: {workload['backends']}")
print(f" - Run with Collage (mean, std) = ({collage_mean_perf:.4f}+-{collage_std_perf:.4f})")


# 3. Register new backend with a pattern rule
2 changes: 1 addition & 1 deletion demo/demo_performance.py
@@ -23,7 +23,7 @@
"batch_size": 1,
}

# Default logging level
# Default logging level. Skip messages during optimization
logging.basicConfig(level=logging.ERROR)

# Enable logging to monitor optimization progress e.g., operator matching, profiling...
