Figure out the number of elements in precision_config #879

Open
burmako opened this issue Jan 4, 2023 · 3 comments

@burmako (Contributor) commented Jan 4, 2023

Recently we've added a verifier for precision_config to check that it contains either 0 or 2 elements, in accordance with the spec. However, it turns out that some producers create precision configs with 1 element. Let's understand this better and decide whether and how the spec should be updated.

burmako added the Spec label Jan 4, 2023
copybara-service bot pushed commits to google/tsl, tensorflow/mlir-hlo, openxla/xla, and tensorflow/tensorflow that referenced this issue Jan 5, 2023, each with the same message:

Recently we've added a verifier for precision_config to check that it contains either 0 or 2 elements, in accordance with the spec: https://github.com/openxla/stablehlo/blob/main/docs/spec.md.

Turns out that there are some producers that create precision configs with 1 element. I've opened a ticket to better understand this (openxla/stablehlo#879), but in the meantime this CL proposes to relax the verifier.

PiperOrigin-RevId: 499935917
@ghpvnist (Member) commented:
Here are the results of my investigation into the number of elements in precision_config:

  • 0 values mean the default precision is applied to both lhs and rhs.
  • 1 value means the single precision is applied to both lhs and rhs.
  • 2 values mean one precision is applied to lhs and the other to rhs.
  • Anything more is invalid.

The 2-value variant subsumes the 1-value variant, so we could decide to allow only 0 or 2 precision_config values. However, that would mean that when lhs and rhs share the same precision, it has to be specified twice, which is redundant. Allowing the 1-value variant would not hurt the semantics of StableHLO and would keep the attribute readable, with the spec clearly documenting what each number of precision_config values means.
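
For illustration, here is a sketch of how the three allowed counts could look on `stablehlo.dot_general` (the shapes are hypothetical and only the precision_config attribute matters here):

```
// 2 values: one precision for lhs, one for rhs.
%r = "stablehlo.dot_general"(%lhs, %rhs) {
  dot_dimension_numbers = #stablehlo.dot<
    lhs_contracting_dimensions = [1],
    rhs_contracting_dimensions = [0]>,
  precision_config = [#stablehlo.precision<HIGH>, #stablehlo.precision<DEFAULT>]
} : (tensor<2x3xf32>, tensor<3x4xf32>) -> tensor<2x4xf32>

// 0 values: default precision for both operands.
//   precision_config = []
// 1 value (as emitted by some producers): the same precision for both operands.
//   precision_config = [#stablehlo.precision<HIGHEST>]
```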

ghpvnist self-assigned this Feb 10, 2023
@burmako (Contributor, Author) commented Feb 11, 2023

Thank you for figuring out the semantics of 1 precision config!

Given this information, I would propose that we don't allow 1 precision config. It's a tradeoff between complicating the spec on one hand and complicating producer code and StableHLO printouts on the other, and I think it makes sense to prioritize the spec, given that the latter complications are fairly minor.

Given that our opset is so big, I think it's important to be vigilant about spec complexity, because a hundred small additions can easily snowball into a significant increase in overall complexity. Now, if allowing 1 precision config enabled a new feature that could not be expressed otherwise, the argument in its favor would be stronger.

@burmako (Contributor, Author) commented Feb 11, 2023

There's also a question of whether we should allow 0 precision configs, and I would propose that the answer should similarly be "No", because this doesn't add expressive power but does complicate the spec (so far, we've stayed away from having an opinion about default values in the spec).

As far as the StableHLO dialect goes, precision_config is already optional, so if producers want to optimize for convenience, they can skip specifying this attribute altogether:

```
def StableHLO_PrecisionConfigAttr:
    OptionalAttr<
        TypedArrayAttrBase<StableHLO_PrecisionAttr, "Precision Config attribute">>;
```
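
For example, a producer that doesn't care about precision could emit something like the following sketch (hypothetical shapes), with precision_config omitted entirely:

```
%r = "stablehlo.dot_general"(%lhs, %rhs) {
  dot_dimension_numbers = #stablehlo.dot<
    lhs_contracting_dimensions = [1],
    rhs_contracting_dimensions = [0]>
} : (tensor<2x3xf32>, tensor<3x4xf32>) -> tensor<2x4xf32>
```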

ghpvnist added a commit that referenced this issue Jun 9, 2023
We have the following non-quantization-related constraints (excluding
C13, C15-C20) in the spec:

```
(I1) lhs tensor.
(I2) rhs tensor.
(I3) lhs_batching_dimensions 1-dimensional tensor constant of type `si64`.
(I4) rhs_batching_dimensions 1-dimensional tensor constant of type `si64`.
(I5) lhs_contracting_dimensions 1-dimensional tensor constant of type `si64`.
(I6) rhs_contracting_dimensions 1-dimensional tensor constant of type `si64`.
(I7) precision_config variadic number of enums of `DEFAULT`, `HIGH`, and `HIGHEST`.
(C1) size(`lhs_batching_dimensions`) = size(`rhs_batching_dimensions`).
(C2) size(`lhs_contracting_dimensions`) =
size(`rhs_contracting_dimensions`).
(C3) `lhs_batching_dimensions` and `lhs_contracting_dimensions` combined are
unique.
(C4) `rhs_batching_dimensions` and `rhs_contracting_dimensions` combined are
unique.
(C5) 0 <= `lhs_batching_dimensions[i]` < rank(`lhs`) for all `i`
in [0, size(`lhs_batching_dimensions`)).
(C6) 0 <= `lhs_contracting_dimensions[i]` < rank(`lhs`) for all `i`
in [0, size(`lhs_contracting_dimensions`)).
(C7) 0 <= `rhs_batching_dimensions[i]` < rank(`rhs`) for all `i`
in [0, size(`rhs_batching_dimensions`)).
(C8) 0 <= `rhs_contracting_dimensions[i]` < rank(`rhs`) for all `i`
in [0, size(`rhs_contracting_dimensions`)).
(C9) dim(`lhs`, `lhs_batching_dimensions[i]`) =
dim(`rhs`, `rhs_batching_dimensions[i]`) for all `i` in [0,
size(`lhs_batching_dimensions`)).
(C10) dim(`lhs`, `lhs_contracting_dimensions[i]`) =
dim(`rhs`, `rhs_contracting_dimensions[i]`) for all `i` in [0,
size(`lhs_contracting_dimensions`)).
(C11) size(`precision_config`) = 2.
(C12) shape(`result`) = dim(`lhs`, `lhs_batching_dimensions`) +
dim(`lhs`, `lhs_result_dimensions`) + dim(`rhs`, `rhs_result_dimensions`).
(C14) element_type(`lhs`) = element_type(`rhs`).
```

These constraints will be comprehensively covered by the following
tests:

```
I1: a) lhs is not a tensor. (Covered by ODS).
I2: a) rhs is not a tensor. (Covered by ODS).
I3: a) lhs_batching_dimensions is not a 1-dimensional tensor. (Covered by ODS).
    b) element_type(lhs_batching_dimensions) != `si64`. (Covered by ODS).
I4: a) rhs_batching_dimensions is not a 1-dimensional tensor. (Covered by ODS).
    b) element_type(rhs_batching_dimensions) != `si64`. (Covered by ODS).
I5: a) lhs_contracting_dimensions is not a 1-dimensional tensor. (Covered by ODS).
    b) element_type(lhs_contracting_dimensions) != `si64`. (Covered by ODS).
I6: a) rhs_contracting_dimensions is not a 1-dimensional tensor. (Covered by ODS).
    b) element_type(rhs_contracting_dimensions) != `si64`. (Covered by ODS).
I7: a) precision_config does not have a variadic number of enums of `DEFAULT`, `HIGH`, and `HIGHEST`. (Covered by ODS).
C1: a) size(lhs_batching_dimensions) != size(rhs_batching_dimensions).
C2: a) size(lhs_contracting_dimensions) != size(rhs_contracting_dimensions).
C3: a) lhs_batching_dimensions and lhs_contracting_dimensions combined are not unique.
C4: a) rhs_batching_dimensions and rhs_contracting_dimensions combined are not unique.
C5: a) lhs_batching_dimensions[i] < 0 for any i.
    b) lhs_batching_dimensions[i] >= rank(lhs) for any i.
C6: a) lhs_contracting_dimensions[i] < 0 for any i.
    b) lhs_contracting_dimensions[i] >= rank(lhs) for any i.
C7: a) rhs_batching_dimensions[i] < 0 for any i.
    b) rhs_batching_dimensions[i] >= rank(rhs) for any i.
C8: a) rhs_contracting_dimensions[i] < 0 for any i.
    b) rhs_contracting_dimensions[i] >= rank(rhs) for any i.
C9: a) dim(lhs, lhs_batching_dimensions[i]) != dim(rhs, rhs_batching_dimensions[i]) for any i.
C10: a) dim(lhs, lhs_contracting_dimensions[i]) != dim(rhs, rhs_contracting_dimensions[i]) for any i.
C11: a) size(precision_config) != 2.
C12: no negative test needed since it's just inferring the shape.
C14: a) element_type(lhs) != element_type(rhs).
```

If we drop the "Covered by ODS" pieces, this will leave us with the
following test cases:

```
C1a: size(lhs_batching_dimensions) != size(rhs_batching_dimensions).
C2a: size(lhs_contracting_dimensions) != size(rhs_contracting_dimensions).
C3a: lhs_batching_dimensions and lhs_contracting_dimensions combined are not unique.
C4a: rhs_batching_dimensions and rhs_contracting_dimensions combined are not unique.
C5a: lhs_batching_dimensions[i] < 0 for any i.
C5b: lhs_batching_dimensions[i] >= rank(lhs) for any i.
C6a: lhs_contracting_dimensions[i] < 0 for any i.
C6b: lhs_contracting_dimensions[i] >= rank(lhs) for any i.
C7a: rhs_batching_dimensions[i] < 0 for any i.
C7b: rhs_batching_dimensions[i] >= rank(rhs) for any i.
C8a: rhs_contracting_dimensions[i] < 0 for any i.
C8b: rhs_contracting_dimensions[i] >= rank(rhs) for any i.
C9a: dim(lhs, lhs_batching_dimensions[i]) != dim(rhs, rhs_batching_dimensions[i]) for any i.
C10a: dim(lhs, lhs_contracting_dimensions[i]) != dim(rhs, rhs_contracting_dimensions[i]) for any i.
C11a: size(precision_config) != 2.
C14a: element_type(lhs) != element_type(rhs).
```
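
As an illustration, a negative test for C1a could be written in the usual diagnostic-verifier style. This is only a sketch: the shapes are hypothetical and the error message is a placeholder, not the verifier's actual wording.

```
// C1a: size(lhs_batching_dimensions) != size(rhs_batching_dimensions).
func.func @c1a(%lhs: tensor<2x3x4xf32>, %rhs: tensor<4x5xf32>) -> tensor<2x3x5xf32> {
  // expected-error@+1 {{hypothetical: lhs and rhs must have the same number of batching dimensions}}
  %0 = "stablehlo.dot_general"(%lhs, %rhs) {
    dot_dimension_numbers = #stablehlo.dot<
      lhs_batching_dimensions = [0],
      rhs_batching_dimensions = [],
      lhs_contracting_dimensions = [2],
      rhs_contracting_dimensions = [0]>
  } : (tensor<2x3x4xf32>, tensor<4x5xf32>) -> tensor<2x3x5xf32>
  func.return %0 : tensor<2x3x5xf32>
}
```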

Notes:
* (C14) currently does not have a test; #755 considers removing it.
* (C11) currently does not have a test due to #755 and #879.

closes #336
ghpvnist added a commit that referenced this issue Apr 23, 2024
This is part 3 of #1964 to implement the remaining parts of #1314.

One notable change in TypeInference.cpp is (C27), whose verification
differs depending on whether the element type is quantized.

We have the following constraints in the spec (excluding
quantization-related constraints C28-C33):

```
(I1) `lhs` tensor.
(I2) `rhs` tensor.
(I3) `window_strides` 1-dimensional tensor constant of type `si64`.
(I4) `padding` 2-dimensional tensor constant of type `si64`.
(I5) `lhs_dilation` 1-dimensional tensor constant of type `si64`.
(I6) `rhs_dilation` 1-dimensional tensor constant of type `si64`.
(I7) `window_reversal` 1-dimensional tensor constant of type `i1`.
(I8) `input_batch_dimension` constant of type `si64`.
(I9) `input_feature_dimension` constant of type `si64`.
(I10) `input_spatial_dimensions` 1-dimensional tensor constant of type `si64`.
(I11) `kernel_input_feature_dimension` constant of type `si64`.
(I12) `kernel_output_feature_dimension` constant of type `si64`.
(I13) `kernel_spatial_dimensions` 1-dimensional tensor constant of type `si64`.
(I14) `output_batch_dimension` constant of type `si64`.
(I15) `output_feature_dimension` constant of type `si64`.
(I16) `output_spatial_dimensions` 1-dimensional tensor constant of type `si64`.
(I17) `feature_group_count` constant of type `si64`.
(I18) `batch_group_count` constant of type `si64`.
(I19) `precision_config` variadic number of enums of `DEFAULT`, `HIGH`, and `HIGHEST`.
(C1) `N = rank(lhs) = rank(rhs)`.
(C2) `size(window_strides) = N - 2`.
(C3) `0 < window_strides`.
(C4) `shape(padding) = [N - 2, 2]`.
(C5) `size(lhs_dilation) = N - 2`.
(C6) `0 < lhs_dilation`.
(C7) `size(rhs_dilation) = N - 2`.
(C8) `0 < rhs_dilation`.
(C9) `size(window_reversal) = N - 2`.
(C10) `dim(lhs, input_batch_dimension) % batch_group_count = 0`.
(C11) `dim(lhs, input_feature_dimension) % feature_group_count = 0`.
(C12) `size(input_spatial_dimensions) = N - 2`.
(C13) Given `input_dimensions = [input_batch_dimension] +
     input_spatial_dimensions + [input_feature_dimension]`:
* `is_unique(input_dimensions)`.
* `0 <= input_dimensions < N`.
(C14) `dim(rhs, kernel_input_feature_dimension) = dim(lhs, input_feature_dimension) / feature_group_count`.
(C15) `dim(rhs, kernel_output_feature_dimension) % batch_group_count = 0`.
(C16) `dim(rhs, kernel_output_feature_dimension) % feature_group_count = 0`.
(C17) `size(kernel_spatial_dimensions) = N - 2`.
(C18) Given `kernel_dimensions = kernel_spatial_dimensions +
      [kernel_input_feature_dimension] + [kernel_output_feature_dimension]`:
* `is_unique(kernel_dimensions)`.
* `0 <= kernel_dimensions < N`.
(C19) `size(output_spatial_dimensions) = N - 2`.
(C20) Given `output_dimensions = [output_batch_dimension] +
      output_spatial_dimensions + [output_feature_dimension]`:
* `is_unique(output_dimensions)`.
* `0 <= output_dimensions < N`.
(C21) `0 < feature_group_count`.
(C22) `0 < batch_group_count`.
(C23) `feature_group_count = 1 or batch_group_count = 1`.
(C24) `size(precision_config) = 2`.
(C25) `dim(result, result_dim)` is defined as:
* `dim(lhs, input_batch_dimension) / batch_group_count` if `result_dim = output_batch_dimension`.
* `dim(rhs, kernel_output_feature_dimension)` if `result_dim = output_feature_dimension`.
* `num_windows` otherwise, where:
  * `output_spatial_dimensions[spatial_dim] = result_dim`.
  * `lhs_dim = input_spatial_dimensions[spatial_dim]`.
  * `rhs_dim = kernel_spatial_dimensions[spatial_dim]`.
  * `dilated_input_shape[lhs_dim] = dim(lhs, lhs_dim) = 0 ? 0 : (dim(lhs, lhs_dim) - 1) * lhs_dilation[spatial_dim] + 1`.
  * `padded_input_shape[lhs_dim] = padding[spatial_dim, 0] + dilated_input_shape[lhs_dim] + padding[spatial_dim, 1]`.
  * `dilated_window_shape[lhs_dim] = dim(rhs, rhs_dim) = 0 ? 0 : (dim(rhs, rhs_dim) - 1) * rhs_dilation[spatial_dim] + 1`.
  * `is_empty_window[lhs_dim] = padded_input_shape[lhs_dim] = 0 || dilated_window_shape[lhs_dim] > padded_input_shape[lhs_dim]`.
  * `num_windows = is_empty_window[lhs_dim] ? 0 : floor((padded_input_shape[lhs_dim] - dilated_window_shape[lhs_dim]) / window_strides[spatial_dim]) + 1`.
(C26) `rank(result) = N`.
(C27) `element_type(lhs) = element_type(rhs) = element_type(result)`.
```
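
To make the num_windows computation in (C25) concrete, here is a small worked example with hypothetical values for a single spatial dimension:

```
dim(lhs, lhs_dim) = 4, lhs_dilation[spatial_dim] = 2, padding[spatial_dim, :] = [1, 1],
dim(rhs, rhs_dim) = 3, rhs_dilation[spatial_dim] = 1, window_strides[spatial_dim] = 2.

dilated_input_shape[lhs_dim]  = (4 - 1) * 2 + 1 = 7
padded_input_shape[lhs_dim]   = 1 + 7 + 1 = 9
dilated_window_shape[lhs_dim] = (3 - 1) * 1 + 1 = 3
is_empty_window[lhs_dim]      = (9 = 0 || 3 > 9) = false
num_windows                   = floor((9 - 3) / 2) + 1 = 4
```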

These constraints will be comprehensively covered by the following
tests:

```
I1: a) `lhs` is not a tensor. (Covered by ODS).
I2: a) `rhs` is not a tensor. (Covered by ODS).
I3: a) `window_strides` is not a 1-dimensional tensor. (Covered by ODS).
    b) element_type(`window_strides`) != `si64`. (Covered by ODS).
I4: a) `padding` is not a 2-dimensional tensor.
    b) element_type(`padding`) != `si64`. (Covered by ODS).
I5: a) `lhs_dilation` is not a 1-dimensional tensor. (Covered by ODS).
    b) element_type(`lhs_dilation`) != `si64`. (Covered by ODS).
I6: a) `rhs_dilation` is not a 1-dimensional tensor. (Covered by ODS).
    b) element_type(`rhs_dilation`) != `si64`. (Covered by ODS).
I7: a) `window_reversal` is not a 1-dimensional tensor. (Covered by ODS).
    b) element_type(`window_reversal`) != `i1`. (Covered by ODS).
I8: a) element_type(`input_batch_dimension`) != `si64`. (Covered by ODS).
I9: a) element_type(`input_feature_dimension`) != `si64`. (Covered by ODS).
I10: a) `input_spatial_dimensions` is not a 1-dimensional tensor. (Covered by ODS).
     b) element_type(`input_spatial_dimensions`) != `si64`. (Covered by ODS).
I11: a) element_type(`kernel_input_feature_dimension`) != `si64`. (Covered by ODS).
I12: a) element_type(`kernel_output_feature_dimension`) != `si64`. (Covered by ODS).
I13: a) `kernel_spatial_dimensions` is not a 1-dimensional tensor. (Covered by ODS).
     b) element_type(`kernel_spatial_dimensions`) != `si64`. (Covered by ODS).
I14: a) element_type(`output_batch_dimension`) != `si64`. (Covered by ODS).
I15: a) element_type(`output_feature_dimension`) != `si64`. (Covered by ODS).
I16: a) `output_spatial_dimensions` is not a 1-dimensional tensor. (Covered by ODS).
     b) element_type(`output_spatial_dimensions`) != `si64`. (Covered by ODS).
I17: a) element_type(`feature_group_count`) != `si64`. (Covered by ODS).
I18: a) element_type(`batch_group_count`) != `si64`. (Covered by ODS).
I19: a) `precision_config` does not have a variadic number of enums of `DEFAULT`, `HIGH`, and `HIGHEST`. (Covered by ODS).
C1: a) rank(`lhs`) != rank(`rhs`).
C2: a) size(`window_strides`) != N - 2.
C3: a) `window_strides[i]` <= 0 for any i in [0, size(`window_strides`)).
C4: a) dim(`padding`, 0) != N - 2.
    b) dim(`padding`, 1) != 2.
C5: a) size(`lhs_dilation`) != N - 2.
C6: a) `lhs_dilation[i]` <= 0 for any i in [0, size(`lhs_dilation`)).
C7: a) size(`rhs_dilation`) != N - 2.
C8: a) `rhs_dilation[i]` <= 0 for any i in [0, size(`rhs_dilation`)).
C9: a) size(`window_reversal`) != N - 2.
C10: a) `dim(lhs, input_batch_dimension) % batch_group_count != 0`.
C11: a) `dim(lhs, input_feature_dimension) % feature_group_count != 0`.
C12: a) size(`input_spatial_dimensions`) != N - 2.
C13: a) Given `input_dimensions = [input_batch_dimension] +
     input_spatial_dimensions + [input_feature_dimension]`:
     * Any dimensions in `input_dimensions` are not unique.
     b) Given `input_dimensions = [input_batch_dimension] +
     input_spatial_dimensions + [input_feature_dimension]`:
     * For any i in `input_dimensions`, i < 0.
     c) Given `input_dimensions = [input_batch_dimension] +
     input_spatial_dimensions + [input_feature_dimension]`:
     * For any i in `input_dimensions`, i >= N.
C14: a) `dim(rhs, kernel_input_feature_dimension) != dim(lhs, input_feature_dimension) / feature_group_count`.
C15: a) `dim(rhs, kernel_output_feature_dimension) % batch_group_count != 0`.
C16: a) `dim(rhs, kernel_output_feature_dimension) % feature_group_count != 0`.
C17: a) size(`kernel_spatial_dimensions`) != N - 2.
C18: a) Given `kernel_dimensions = kernel_spatial_dimensions +
     [kernel_input_feature_dimension] + [kernel_output_feature_dimension]`:
     * Any dimensions in `kernel_dimensions` are not unique.
     b) Given `kernel_dimensions = kernel_spatial_dimensions +
     [kernel_input_feature_dimension] + [kernel_output_feature_dimension]`:
     * For any i in `kernel_dimensions`, i < 0.
     c) Given `kernel_dimensions = kernel_spatial_dimensions +
     [kernel_input_feature_dimension] + [kernel_output_feature_dimension]`:
     * For any i in `kernel_dimensions`, i >= N.
C19: a) size(`output_spatial_dimensions`) != N - 2.
C20: a) Given `output_dimensions = [output_batch_dimension] +
     output_spatial_dimensions + [output_feature_dimension]`:
     * Any dimensions in `output_dimensions` are not unique.
     b) Given `output_dimensions = [output_batch_dimension] +
     output_spatial_dimensions + [output_feature_dimension]`:
     * For any i in `output_dimensions`, i < 0.
     c) Given `output_dimensions = [output_batch_dimension] +
     output_spatial_dimensions + [output_feature_dimension]`:
     * For any i in `output_dimensions`, i >= N.
C21: a) `feature_group_count <= 0`.
C22: a) `batch_group_count <= 0`.
C23: a) `feature_group_count` != 1 and `batch_group_count` != 1.
C24: a) size(`precision_config`) != 2.
C25: a) For result_dim in [0, N):
        `dim(result, result_dim)` != `dim(lhs, input_batch_dimension) / batch_group_count`, if `result_dim = output_batch_dimension`.
     b) For result_dim in [0, N):
        `dim(result, result_dim)` != `dim(rhs, kernel_output_feature_dimension)`, if `result_dim = output_feature_dimension`.
     c) For result_dim in [0, N):
        `dim(result, result_dim)` != `num_windows` otherwise, where:
       * `output_spatial_dimensions[spatial_dim] = result_dim`.
       * `lhs_dim = input_spatial_dimensions[spatial_dim]`.
       * `rhs_dim = kernel_spatial_dimensions[spatial_dim]`.
       * `dilated_input_shape[lhs_dim] = dim(lhs, lhs_dim) == 0 ? 0 : (dim(lhs, lhs_dim) - 1) * lhs_dilation[spatial_dim] + 1`.
       * `padded_input_shape[lhs_dim] = padding[spatial_dim, 0] + dilated_input_shape[lhs_dim] + padding[spatial_dim, 1]`.
       * `dilated_window_shape[lhs_dim] = dim(rhs, rhs_dim) == 0 ? 0 : (dim(rhs, rhs_dim) - 1) * rhs_dilation[spatial_dim] + 1`.
       * `num_windows = (padded_input_shape[lhs_dim] == 0 || dilated_window_shape[lhs_dim] > padded_input_shape[lhs_dim]) ? 0 : floor((padded_input_shape[lhs_dim] - dilated_window_shape[lhs_dim]) / window_strides[spatial_dim]) + 1`.
C26: a) rank(result) != N.
C27: a) element_type(`lhs`) != element_type(`rhs`).
```

If we drop the "Covered by ODS" pieces, this will leave us with the
following test cases:

```
I4a: `padding` is not a 2-dimensional tensor.
C1a: rank(`lhs`) != rank(`rhs`).
C2a: size(`window_strides`) != N - 2.
C3a: `window_strides[i]` <= 0 for any i in [0, size(`window_strides`)).
C4a: dim(`padding`, 0) != N - 2.
C4b: dim(`padding`, 1) != 2.
C5a: size(`lhs_dilation`) != N - 2.
C6a: `lhs_dilation[i]` <= 0 for any i in [0, size(`lhs_dilation`)).
C7a: size(`rhs_dilation`) != N - 2.
C8a: `rhs_dilation[i]` <= 0 for any i in [0, size(`rhs_dilation`)).
C9a: size(`window_reversal`) != N - 2.
C10a: `dim(lhs, input_batch_dimension) % batch_group_count != 0`.
C11a: `dim(lhs, input_feature_dimension) % feature_group_count != 0`.
C12a: size(`input_spatial_dimensions`) != N - 2.
C13a: Given `input_dimensions = [input_batch_dimension] +
      input_spatial_dimensions + [input_feature_dimension]`:
      * Any dimensions in `input_dimensions` are not unique.
C13b: Given `input_dimensions = [input_batch_dimension] +
      input_spatial_dimensions + [input_feature_dimension]`:
      * For any i in `input_dimensions`, i < 0.
C13c: Given `input_dimensions = [input_batch_dimension] +
      input_spatial_dimensions + [input_feature_dimension]`:
      * For any i in `input_dimensions`, i >= N.
C14a: `dim(rhs, kernel_input_feature_dimension) != dim(lhs, input_feature_dimension) / feature_group_count`.
C15a: `dim(rhs, kernel_output_feature_dimension) % batch_group_count != 0`.
C16a: `dim(rhs, kernel_output_feature_dimension) % feature_group_count != 0`.
C17a: size(`kernel_spatial_dimensions`) != N - 2.
C18a: Given `kernel_dimensions = kernel_spatial_dimensions +
      [kernel_input_feature_dimension] + [kernel_output_feature_dimension]`:
      * Any dimensions in `kernel_dimensions` are not unique.
C18b: Given `kernel_dimensions = kernel_spatial_dimensions +
      [kernel_input_feature_dimension] + [kernel_output_feature_dimension]`:
      * For any i in `kernel_dimensions`, i < 0.
C18c: Given `kernel_dimensions = kernel_spatial_dimensions +
      [kernel_input_feature_dimension] + [kernel_output_feature_dimension]`:
      * For any i in `kernel_dimensions`, i >= N.
C19a: size(`output_spatial_dimensions`) != N - 2.
C20a: Given `output_dimensions = [output_batch_dimension] +
      output_spatial_dimensions + [output_feature_dimension]`:
      * Any dimensions in `output_dimensions` are not unique.
C20b: Given `output_dimensions = [output_batch_dimension] +
      output_spatial_dimensions + [output_feature_dimension]`:
      * For any i in `output_dimensions`, i < 0.
C20c: Given `output_dimensions = [output_batch_dimension] +
      output_spatial_dimensions + [output_feature_dimension]`:
      * For any i in `output_dimensions`, i >= N.
C21a: `feature_group_count <= 0`.
C22a: `batch_group_count <= 0`.
C23a: `feature_group_count` != 1 and `batch_group_count` != 1.
C24a: size(`precision_config`) != 2.
C25a: For result_dim in [0, N):
      `dim(result, result_dim)` != `dim(lhs, input_batch_dimension) / batch_group_count`, if `result_dim = output_batch_dimension`.
C25b: For result_dim in [0, N):
      `dim(result, result_dim)` != `dim(rhs, kernel_output_feature_dimension)`, if `result_dim = output_feature_dimension`.
C25c: For result_dim in [0, N):
      `dim(result, result_dim)` != `num_windows` otherwise, where:
        * `output_spatial_dimensions[spatial_dim] = result_dim`.
        * `lhs_dim = input_spatial_dimensions[spatial_dim]`.
        * `rhs_dim = kernel_spatial_dimensions[spatial_dim]`.
        * `dilated_input_shape[lhs_dim] = dim(lhs, lhs_dim) == 0 ? 0 : (dim(lhs, lhs_dim) - 1) * lhs_dilation[spatial_dim] + 1`.
        * `padded_input_shape[lhs_dim] = padding[spatial_dim, 0] + dilated_input_shape[lhs_dim] + padding[spatial_dim, 1]`.
        * `dilated_window_shape[lhs_dim] = dim(rhs, rhs_dim) == 0 ? 0 : (dim(rhs, rhs_dim) - 1) * rhs_dilation[spatial_dim] + 1`.
        * `num_windows = (padded_input_shape[lhs_dim] == 0 || dilated_window_shape[lhs_dim] > padded_input_shape[lhs_dim]) ? 0 : floor((padded_input_shape[lhs_dim] - dilated_window_shape[lhs_dim]) / window_strides[spatial_dim]) + 1`.
C26a: rank(result) != N.
C27a: element_type(`lhs`) != element_type(`rhs`).
```
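
As an illustration, a negative test for C2a could look like the following sketch. The attribute syntax follows the generic form (it varies across StableHLO versions), the shapes are hypothetical, and the error message is a placeholder rather than the verifier's actual wording.

```
// C2a: N = 4, so size(window_strides) must be N - 2 = 2; three strides are invalid.
func.func @c2a(%lhs: tensor<1x8x8x1xf32>, %rhs: tensor<3x3x1x1xf32>) -> tensor<1x6x6x1xf32> {
  // expected-error@+1 {{hypothetical: expects window_strides to have size 2}}
  %0 = "stablehlo.convolution"(%lhs, %rhs) {
    window_strides = dense<1> : tensor<3xi64>,
    padding = dense<0> : tensor<2x2xi64>,
    lhs_dilation = dense<1> : tensor<2xi64>,
    rhs_dilation = dense<1> : tensor<2xi64>,
    window_reversal = dense<false> : tensor<2xi1>,
    dimension_numbers = #stablehlo.conv<[b, 0, 1, f]x[0, 1, i, o]->[b, 0, 1, f]>,
    feature_group_count = 1 : i64,
    batch_group_count = 1 : i64
  } : (tensor<1x8x8x1xf32>, tensor<3x3x1x1xf32>) -> tensor<1x6x6x1xf32>
  func.return %0 : tensor<1x6x6x1xf32>
}
```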

Notes:
* (new C24) is left untouched as there are still pending action items
regarding the number of precision_config values allowed in #879.

closes #2092