Commit

[Hybrid Script] Unify the symbol tables to one; support `tvm.container.Array` (apache#2366)
were authored and Wei Chen committed Feb 20, 2019
1 parent 6256aa3 commit 4b14ad4
Showing 6 changed files with 258 additions and 157 deletions.
43 changes: 33 additions & 10 deletions docs/langref/hybrid_script.rst
@@ -52,20 +52,23 @@ The current parse interface looks like:
parser = tvm.hybrid.parse(outer_product, [a, b]) # return the parser of this function
-If we pass these tvm tensors to this function, it returns a op node:
+If we pass these tvm data structures, like ``Tensor``, ``Var``, ``Expr.*Imm``,
+or ``tvm.container.Array``, to this function, it returns an op node:

.. code-block:: python
a = tvm.placeholder((100, ), name='a')
b = tvm.placeholder((99, ), name='b')
c = outer_product(a, b, c) # return the output tensor(s) of the operator
-**Under construction, we are still deciding what kind of node should be returned.**
+You can use any method that can be applied to a TVM ``OpNode``, like ``create_schedule``, although
+so far the functionality of scheduling is as limited as for ``ExternOpNode``. At the least, it can be
+built to an LLVM module.
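
For instance, a minimal sketch of scheduling and building such an op node
(assuming the standard ``tvm.create_schedule``/``tvm.build`` flow, which is not
part of this diff, and the ``outer_product`` example above):

.. code-block:: python

   c = outer_product(a, b, c)          # the op node's output tensor, as above
   sch = tvm.create_schedule(c.op)     # schedule it like any other OpNode
   module = tvm.build(sch, [a, b, c], target='llvm')  # lower to an LLVM module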

Tuning
~~~~~~

-**Under construction, not truly supported yet.**
+**Under construction, not supported yet.**

Following the example above, you can use some TVM-like interfaces to tune the code:

@@ -86,6 +89,21 @@ Here we use ``range`` aka ``serial``, ``unroll``, ``parallel``, and ``vectorize``
these **4** keywords to annotate the corresponding types of for loops.
The usage is roughly the same as Python's standard ``range``.
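
As an illustration, a sketch of how the annotations might be combined (a
hypothetical kernel, not taken from this diff):

.. code-block:: python

   for i in parallel(8):           # lowered to a parallel for loop
       for j in vectorize(4):      # lowered to a vectorized for loop
           for k in unroll(3):     # lowered to an unrolled for loop
               c[i, j, k] = a[i, j, k] + b[i, j, k]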

+Besides all the loop types supported in Halide, ``const_range`` is supported for some specific conditions.
+Sometimes, it is desirable to pass a ``tvm.container.Array`` as an argument, but TVM-HalideIR has no
+support for converting a ``tvm.container.Array`` to an ``Expr``. Thus, a limited feature is supported:
+users can index into containers either with constants or with the variables of loops annotated as ``const_range``.

+.. code-block:: python
+
+   @tvm.hybrid.script
+   def foo(a, b): # b is a tvm.container.Array
+       c = output_tensor(a.shape, a.dtype)
+       for i in const_range(len(a)): # because b is accessed, i must be explicitly annotated as const_range
+           c[i] = a[i] + b[i]
+       return c

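In this sketch (names assumed), the hybrid function can then be instantiated
with a ``tvm.container.Array`` built by ``tvm.convert``:

.. code-block:: python

   a = tvm.placeholder((4, ), 'int32', name='a')
   b = tvm.convert([1, 2, 3, 4])   # a tvm.container.Array
   c = foo(a, b)                   # len(a) and b[i] fold to constants at parse time
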
Variables
~~~~~~~~~

@@ -111,14 +129,14 @@ It regards the first store of a variable as its declaration.
        s += a[i, j] # do something with s
    b[i] = s # you can still use s in this level
a[0] = s # you CANNOT use s here, even though it is allowed in conventional Python
-b = (1, 2) # this has NOT been supported yet!
Attributes
~~~~~~~~~~

-So far, ONLY tensors' ``shape`` attribute is supported! The ``shape`` atrribute is essentailly a
-tuple, so you MUST access it as an array. Also, currently, only constant-indexed access is supported.
+So far, ONLY tensors' ``shape`` and ``dtype`` attributes are supported!
+The ``shape`` attribute is essentially a tuple, so you MUST access it as an array.
+Currently, only constant-indexed access is supported.

.. code-block:: python
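   # A hypothetical illustration of the rules above (names assumed, for
   # illustration only):
   @tvm.hybrid.script
   def copy(a):
       c = output_tensor(a.shape, a.dtype)   # the dtype attribute is now allowed
       for i in range(a.shape[0]):           # shape accessed with a constant index
           c[i] = a[i]
       return c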
@@ -133,8 +151,11 @@ Conditional Statement and Expression

.. code-block:: python
-   if condition:
-       # do something
+   if condition1 and condition2 and condition3:
+       # do something
+   else:
+       # do something else

   # Select
   a = b if condition else c
However, the ``True`` and ``False`` keywords are NOT supported yet.
@@ -153,7 +174,9 @@ Array Allocation
**Under construction, this function will be supported later!**

Use a function call ``allocate(shape, dtype, scope)`` to declare an array buffer,
where ``scope`` is ``global``, ``shared``, or ``local``.
-The basic usage is roughly the same as a normal array.
+The basic usage is roughly the same as a normal ``numpy.array``, and you should access
+a high-dimensional array in ``a[i, j, k]`` fashion instead of ``a[i][j][k]``;
+this also applies to ``tvm.container.Array`` in compiled code.
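
A sketch of what such an allocation could look like (a hypothetical kernel;
``allocate`` semantics as in ``hybrid.intrin.allocate`` below):

.. code-block:: python

   @tvm.hybrid.script
   def shift(a):
       c = output_tensor((10, ), a.dtype)
       buf = allocate((10, ), a.dtype, 'local')   # shape, dtype, scope
       for i in range(10):
           buf[i] = a[i]
       for i in range(10):
           c[i] = buf[i]
       return c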


Thread Bind
@@ -170,5 +193,5 @@ You can also do loop-thread bind by writing code like this:
Keywords
~~~~~~~~
-- For keywords: ``serial``, ``range``, ``unroll``, ``parallel``, ``vectorize``, ``bind``
+- For keywords: ``serial``, ``range``, ``unroll``, ``parallel``, ``vectorize``, ``bind``, ``const_range``
- Math keywords: ``log``, ``exp``, ``sigmoid``, ``tanh``, ``power``, ``popcount``
32 changes: 23 additions & 9 deletions python/tvm/hybrid/calls.py
@@ -12,15 +12,17 @@
#pylint: disable=redefined-builtin

LOOP_INTRIN = {
-    'range'    : For.Serial,
-    'unroll'   : For.Unrolled,
-    'parallel' : For.Parallel,
-    'vectorize': For.Vectorized,
+    'range'       : For.Serial,
+    'unroll'      : For.Unrolled,
+    'parallel'    : For.Parallel,
+    'vectorize'   : For.Vectorized,
+    'const_range' : (For.Unrolled, ),
}
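
Presumably, the tuple wrapper is what lets the parser tell ``const_range``
apart from the plain loop types; a sketch of the check it enables (the actual
parser code is not shown in this diff):

    for_type = LOOP_INTRIN['const_range']
    if isinstance(for_type, tuple):   # const_range: the extent must be a constant
        for_type = for_type[0]        # the loop itself lowers as For.Unrolled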


def _range(annotation, args):
    """Handling TVM loop types"""
-    n = len(args)
+    n = args.__len__()
    if n == 1:
        low, ext = _api.const(0, dtype='int32'), args[0]
    else:
@@ -33,13 +35,13 @@ def _range(annotation, args):
    return iter_var, low, ext, for_type


-range = unroll = vectorize = parallel = _range #pylint: disable=invalid-name
+range = unroll = vectorize = parallel = const_range = _range #pylint: disable=invalid-name


def bind(func_id, args):
    """Handling TVM thread binding"""
    _internal_assert(func_id == "bind", "This function cannot be directly invoked!")
-    _internal_assert(len(args) == 2, "A loop bind should only have 2 arguments!")
+    _internal_assert(args.__len__() == 2, "A loop bind should only have 2 arguments!")
    _internal_assert(isinstance(args[0], str), \
        "A loop bind's first argument should be a string!")
    iter_var = _api.thread_axis(args[0])
@@ -56,7 +58,7 @@ def _math_intrin(func_id, args):


def _min_max(func_id, args):
-    _internal_assert(len(args) == 2, "Max/Min function should have 2 elements")
+    _internal_assert(args.__len__() == 2, "Max/Min function should have 2 elements")
    return getattr(_make, func_id.title())(args[0], args[1])


@@ -66,7 +68,7 @@ def _min_max(func_id, args):
def _allocate_tensor(func_id, args):
    """Handling TVM tensor allocation.
    You may refer to hybrid.intrin.allocate for more details."""
-    n = len(args)
+    n = args.__len__()
    _internal_assert(isinstance(_api.convert(args[0]), Array), \
        "allocate's first argument should be a tuple of shape!")
    shape = args[0]
@@ -89,4 +91,16 @@ def _allocate_tensor(func_id, args):
    scope = 'global' if func_id != 'output_tensor' else 'output'
    return (shape, dtype, scope)


output_tensor = allocate = _allocate_tensor #pylint: disable=invalid-name


+def len(func_id, args):
+    """Interpret the len function"""
+    _internal_assert(args.__len__() == 1, "Only 1 argument is expected!")
+    _internal_assert(func_id == "len", "This function cannot be directly invoked!")
+    try:
+        return _api.convert(args[0].__len__())
+    except: #pylint: disable=bare-except
+        _internal_assert(args[0].shape.__len__() == 1, "Only one-dimension array can get len")
+        return _api.convert(args[0].shape[0])
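
For illustration, a sketch of what this interpretation yields (these helpers
are normally invoked by the parser; calling one directly is an assumption made
here only for the demo):

    import tvm
    from tvm.hybrid import calls

    arr = tvm.convert([1, 2, 3])          # a tvm.container.Array
    calls.len('len', [arr])               # -> IntImm 3, via arr.__len__()

    a = tvm.placeholder((10, ), name='a')
    calls.len('len', [a])                 # -> a.shape[0], the 1-D tensor fallback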
40 changes: 14 additions & 26 deletions python/tvm/hybrid/intrin.py
@@ -2,32 +2,19 @@

import numpy

-class _range(object):
-    """Base class of the loop ranges in hybrid script"""
-    def __init__(self, a, b=None):
-        if b is None:
-            self.low = 0
-            self.ext = a
-        else:
-            self.low = a
-            self.ext = b
+class bind(object): #pylint: disable=invalid-name
+    """GPU bind software emulation runtime."""
+    def __init__(self, _, ext):
+        self.ext = ext

    def __iter__(self):
        i = 0
        while i < self.ext:
-            yield i + self.low
+            yield i
            i += 1
-
-
-class bind(_range): #pylint: disable=invalid-name
-    def __init__(self, tag, ext):
-        super(bind, self).__init__(ext)
-        self.tag = tag
-
-
-unroll = vectorize = parallel = _range #pylint: disable=invalid-name
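
In the emulation path, ``bind`` now simply iterates over the extent and
ignores the thread tag, as in this sketch:

    for tx in bind('threadIdx.x', 4):   # software emulation: yields 0, 1, 2, 3
        print(tx)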


def allocate(shape, dtype='float32', scope='global'): #pylint: disable=unused-argument
    """Allocate a buffer with given shape
@@ -47,7 +34,6 @@ def allocate(shape, dtype='float32', scope='global'): #pylint: disable=unused-argument
"""
return numpy.zeros(shape).astype(dtype)

output_tensor = allocate #pylint: disable=invalid-name

def popcount(x):
"""
@@ -87,17 +73,19 @@ def sigmoid(x):


HYBRID_GLOBALS = {
-    'unroll'       : unroll,
-    'vectorize'    : vectorize,
-    'parallel'     : parallel,
-    'allocate'     : allocate,
-    'output_tensor': output_tensor,
+    'len'          : len,
+    'unroll'       : range,
+    'vectorize'    : range,
+    'parallel'     : range,
+    'const_range'  : range,
+    'bind'         : bind,
+    'allocate'     : allocate,
+    'output_tensor': allocate,
    'sqrt'         : numpy.sqrt,
    'log'          : numpy.log,
    'tanh'         : numpy.tanh,
    'power'        : numpy.power,
    'exp'          : numpy.exp,
    'sigmoid'      : sigmoid,
-    'popcount'     : popcount
+    'popcount'     : popcount,
}
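
These globals back the pure-Python emulation mode: presumably, when a
``@tvm.hybrid.script`` function is called with plain Python/numpy values
rather than TVM tensors, it executes as ordinary Python with these names in
scope. A sketch of that round trip (reusing the hypothetical ``foo`` above):

    import numpy
    a = numpy.random.rand(4).astype('float32')
    b = [1.0, 2.0, 3.0, 4.0]
    c = foo(a, b)   # runs in Python via HYBRID_GLOBALS, no compilation involved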