[Tutorial] Fix formatting, grammar, dead link (apache#9281)
* tutorial: preprocess.py: Fix leading whitespace

This fixes the indentation of metadata in `preprocess.py` in the TVMC tutorial, removing the leading whitespace in the HTML rendering[^1].

[^1]: https://tvm.apache.org/docs/tutorial/tvmc_command_line_driver.html#preprocess-py

* tutorial: Add missing code block escapes

* tutorial: Grammar fixup

* README.md: Fix link to introduction

Co-authored-by: Martin Kröning <[email protected]>
mkroening and Martin Kröning authored Oct 23, 2021
1 parent 1526ad1 commit bb5e653
Showing 4 changed files with 13 additions and 10 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -36,7 +36,7 @@ TVM is licensed under the [Apache-2.0](LICENSE) license.
Getting Started
---------------
Check out the [TVM Documentation](https://tvm.apache.org/docs/) site for installation instructions, tutorials, examples, and more.
-The [Getting Started with TVM](https://tvm.apache.org/docs/tutorials/get_started/introduction.html) tutorial is a great
+The [Getting Started with TVM](https://tvm.apache.org/docs/tutorial/introduction.html) tutorial is a great
place to start.

Contribute to TVM
5 changes: 4 additions & 1 deletion gallery/tutorial/autotvm_relay_x86.py
@@ -106,7 +106,7 @@
# TVMC has adopted NumPy's ``.npz`` format for both input and output data.
#
# As input for this tutorial, we will use the image of a cat, but you can feel
-# free to substitute image for any of your choosing.
+# free to substitute this image for any of your choosing.
#
# .. image:: https://s3.amazonaws.com/model-server/inputs/kitten.jpg
# :height: 224px
@@ -278,6 +278,7 @@
from tvm.autotvm.tuner import XGBTuner
from tvm import autotvm

+################################################################################
# Set up some basic parameters for the runner. The runner takes compiled code
# that is generated with a specific set of parameters and measures the
# performance of it. ``number`` specifies the number of different
@@ -303,6 +304,7 @@
    enable_cpu_cache_flush=True,
)

+################################################################################
# Create a simple structure for holding tuning options. We use an XGBoost
# algorithim for guiding the search. For a production job, you will want to set
# the number of trials to be larger than the value of 10 used here. For CPU we
@@ -426,6 +428,7 @@
for rank in ranks[0:5]:
    print("class='%s' with probability=%f" % (labels[rank], scores[rank]))

+################################################################################
# Verifying that the predictions are the same:
#
# .. code-block:: bash
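The hunks above add separators around the runner setup and the tuning options. For reference, a minimal sketch of that setup, assuming the standard `tvm.autotvm` API used by this tutorial; the trial count, early-stopping value, and record file name are illustrative rather than taken from this diff:

```python
from tvm import autotvm

# The runner takes compiled code and measures its performance.
runner = autotvm.LocalRunner(
    number=10,                    # number of different configurations tested
    repeat=1,                     # measurements taken per configuration
    timeout=10,                   # seconds before a candidate run is abandoned
    min_repeat_ms=0,              # 0 disables the minimum run time on CPU
    enable_cpu_cache_flush=True,  # flush the CPU cache between runs
)

# Tuning options: an XGBoost-based tuner guides the search. For a production
# job, set "trials" much higher than the 10 used here.
tuning_option = {
    "tuner": "xgb",
    "trials": 10,
    "early_stopping": 100,
    "measure_option": autotvm.measure_option(
        builder=autotvm.LocalBuilder(build_func="default"),
        runner=runner,
    ),
    "tuning_records": "resnet-50-v2-autotuning.json",  # illustrative file name
}
```

Each task extracted from the model is then tuned with an `XGBTuner` driven by these options.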
8 changes: 4 additions & 4 deletions gallery/tutorial/tensor_expr_get_started.py
@@ -133,7 +133,7 @@

################################################################################
# Let's run the function, and compare the output to the same computation in
-# numpy. The compiled TVM function is exposes a concise C API that can be invoked
+# numpy. The compiled TVM function exposes a concise C API that can be invoked
# from any language. We begin by creating a device, which is a device (CPU in this
# example) that TVM can compile the schedule to. In this case the device is an
# LLVM CPU target. We can then initialize the tensors in our device and
@@ -258,8 +258,8 @@ def evaluate_addition(func, target, optimization, log):
print(tvm.lower(s, [A, B, C], simple_mode=True))

################################################################################
-# Comparing the Diferent Schedules
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+# Comparing the Different Schedules
+# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# We can now compare the different schedules

baseline = log[0][1]
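The corrected heading introduces the comparison of the schedules collected in `log`. As a rough sketch of that comparison, assuming `log` holds `(name, mean runtime)` pairs recorded by `evaluate_addition` with the naive baseline as its first entry; the print format is illustrative:

```python
# Relative performance of each schedule against the naive baseline.
baseline = log[0][1]
for name, runtime in log:
    print("%s\truntime: %.6f s\tspeedup: %.2fx" % (name, runtime, baseline / runtime))
```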
@@ -347,7 +347,7 @@ def evaluate_addition(func, target, optimization, log):
fadd = tvm.build(s, [A, B, C], target=tgt_gpu, name="myadd")

################################################################################
-# The compiled TVM function is exposes a concise C API that can be invoked from
+# The compiled TVM function exposes a concise C API that can be invoked from
# any language.
#
# We provide a minimal array API in python to aid quick testing and prototyping.
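Two of the hunks in this file correct the same sentence about the compiled function's C API. For orientation, a minimal sketch of the pattern that sentence describes: build the vector-add schedule, create a device, initialize tensors on it, invoke the compiled function, and check the result against numpy. It assumes the `te` and `tvm.nd` APIs used by this tutorial; the array size and function name are illustrative.

```python
import numpy as np
import tvm
from tvm import te

# Declare the vector-add computation and a default schedule.
n = 1024
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")
s = te.create_schedule(C.op)

# Compile for an LLVM CPU target; "myadd" names the exported entry point.
fadd = tvm.build(s, [A, B, C], target="llvm", name="myadd")

# Create a device, initialize tensors on it, and invoke the compiled function.
dev = tvm.device("llvm", 0)
a = tvm.nd.array(np.random.uniform(size=n).astype(A.dtype), dev)
b = tvm.nd.array(np.random.uniform(size=n).astype(B.dtype), dev)
c = tvm.nd.array(np.zeros(n, dtype=C.dtype), dev)
fadd(a, b, c)

# The output matches the same computation in numpy.
np.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy())
```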
8 changes: 4 additions & 4 deletions gallery/tutorial/tvmc_command_line_driver.py
@@ -174,10 +174,10 @@
# data types. For this reason, most models require some pre and
# post-processing, to ensure the input is valid and to interpret the output.
# TVMC has adopted NumPy's ``.npz`` format for both input and output data. This
-# is a well-supported NumPy format to serialize multiple arrays into a file
+# is a well-supported NumPy format to serialize multiple arrays into a file.
#
# As input for this tutorial, we will use the image of a cat, but you can feel
-# free to substitute image for any of your choosing.
+# free to substitute this image for any of your choosing.
#
# .. image:: https://s3.amazonaws.com/model-server/inputs/kitten.jpg
# :height: 224px
@@ -197,8 +197,8 @@
# requirement for the script.
#
# .. code-block:: python
-#     :caption: preprocess.py
-#     :name: preprocess.py
+# :caption: preprocess.py
+# :name: preprocess.py
#
# #!python ./preprocess.py
# from tvm.contrib.download import download_testdata
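The last hunk adjusts only the indentation of the metadata for the `preprocess.py` listing. For context, a sketch of the kind of preprocessing script that listing contains: it downloads the cat image, resizes it to the 224x224 NCHW input ResNet-50 expects, normalizes it with the standard ImageNet statistics, and writes the `.npz` input file that TVMC consumes. The output file name is illustrative, not prescribed by this diff.

```python
#!python ./preprocess.py
from tvm.contrib.download import download_testdata
from PIL import Image
import numpy as np

img_url = "https://s3.amazonaws.com/model-server/inputs/kitten.jpg"
img_path = download_testdata(img_url, "imagenet_cat.png", module="data")

# Resize to 224x224 and reorder from HWC to the CHW layout the model expects.
resized_image = Image.open(img_path).resize((224, 224))
img_data = np.asarray(resized_image).astype("float32")
img_data = np.transpose(img_data, (2, 0, 1))

# Normalize according to the standard ImageNet mean and standard deviation.
imagenet_mean = np.array([0.485, 0.456, 0.406])
imagenet_stddev = np.array([0.229, 0.224, 0.225])
norm_img_data = np.zeros(img_data.shape).astype("float32")
for i in range(img_data.shape[0]):
    norm_img_data[i, :, :] = (img_data[i, :, :] / 255 - imagenet_mean[i]) / imagenet_stddev[i]

# Add a batch dimension and save in the .npz format TVMC takes as input.
img_data = np.expand_dims(norm_img_data, axis=0)
np.savez("imagenet_cat", data=img_data)
```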
