Commit
syntax fixes
317brian committed Jan 9, 2025
1 parent 91ec717 commit 854dccb
Showing 13 changed files with 49 additions and 46 deletions.
4 changes: 2 additions & 2 deletions docs/configuration/index.md
@@ -2127,8 +2127,8 @@ The `druid.query.default.context.{query_context_key}` runtime property prefix ap

The precedence chain for query context values is as follows:

hard-coded default value in Druid code <- runtime property not prefixed with `druid.query.default.context`
<- runtime property prefixed with `druid.query.default.context` <- context parameter in the query
hard-coded default value in Druid code `<-` runtime property not prefixed with `druid.query.default.context`
`<-` runtime property prefixed with `druid.query.default.context` `<-` context parameter in the query
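
For illustration only (the property values below are made up), the chain means that a `druid.query.default.context` property overrides the corresponding plain runtime property, and a value set in the query context overrides both:

```
# Plain runtime property: overrides the hard-coded default.
druid.broker.http.maxQueuedBytes=25000000

# Prefixed runtime property: overrides the plain property above.
druid.query.default.context.maxQueuedBytes=50000000

# A "maxQueuedBytes" value set in the query context itself overrides both properties.
```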

Note that not every query context key has a runtime property without the `druid.query.default.context` prefix that can
override the hard-coded default value. For example, `maxQueuedBytes` has `druid.broker.http.maxQueuedBytes`
2 changes: 1 addition & 1 deletion docs/development/extensions-contrib/k8s-jobs.md
@@ -509,7 +509,7 @@ strategy. To explicitly select this strategy, set the `podTemplateSelectStrategy
```

Task-specific pod templates can be specified as the runtime property
`druid.indexer.runner.k8s.podTemplate.{taskType}: /path/to/taskSpecificPodSpec.yaml` where {taskType} is the name of the
`druid.indexer.runner.k8s.podTemplate.{taskType}: /path/to/taskSpecificPodSpec.yaml` where `{taskType}` is the name of the
task type. For example, `index_parallel`.
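
For instance, to give the `index_parallel` task type its own template, you could set something like the following (the file path is only a placeholder):

```
druid.indexer.runner.k8s.podTemplate.index_parallel=/path/to/indexParallelPodSpec.yaml
```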

If you are trying to use the default image's environment variable parsing feature to set runtime properties, you need to add an extra escape underscore when specifying pod templates.
4 changes: 2 additions & 2 deletions docs/development/extensions-core/catalog.md
@@ -49,8 +49,8 @@ A tableSpec defines a table
| Property | Type | Description | Required | Default |
|--------------|---------------------------------|---------------------------------------------------------------------------|----------|---------|
| `type` | String | The type of table. The only value supported at this time is `datasource`. | yes | null |
| `properties` | Map<String, Object> | the table's defined properties. see [table properties](#table-properties) | no | null |
| `columns` | List<[ColumnSpec](#columnspec)> | the table's defined columns | no | null |
| `properties` | Map\<String, Object\> | The table's defined properties. See [table properties](#table-properties). | no | null |
| `columns` | List\<[ColumnSpec](#columnspec)\> | The table's defined columns. | no | null |
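
As a rough sketch of the shape described above, a `tableSpec` might look like the following; the `segmentGranularity` property is shown only as an illustration, see [table properties](#table-properties) below for the supported keys:

```json
{
  "type": "datasource",
  "properties": {
    "segmentGranularity": "P1D"
  }
}
```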

#### Table Properties

42 changes: 21 additions & 21 deletions docs/development/extensions-core/druid-basic-security.md
@@ -462,23 +462,23 @@ To use these APIs, a user needs read/write permissions for the CONFIG resource t

Root path: `/druid-ext/basic-security/authentication`

Each API endpoint includes {authenticatorName}, specifying which Authenticator instance is being configured.
Each API endpoint includes \{authenticatorName}, specifying which Authenticator instance is being configured.
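
For example, with an authenticator named `MyBasicMetadataAuthenticator` (an illustrative name), the user-listing endpoint described below resolves to:

```
GET /druid-ext/basic-security/authentication/db/MyBasicMetadataAuthenticator/users
```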

##### User/Credential Management
`GET(/druid-ext/basic-security/authentication/db/{authenticatorName}/users)`<br />
Return a list of all user names.

`GET(/druid-ext/basic-security/authentication/db/{authenticatorName}/users/{userName})`<br />
Return the name and credentials information of the user with name {userName}
Return the name and credentials information of the user with name \{userName}

`POST(/druid-ext/basic-security/authentication/db/{authenticatorName}/users/{userName})`<br />
Create a new user with name {userName}
Create a new user with name \{userName}

`DELETE(/druid-ext/basic-security/authentication/db/{authenticatorName}/users/{userName})`<br />
Delete the user with name {userName}
Delete the user with name \{userName}

`POST(/druid-ext/basic-security/authentication/db/{authenticatorName}/users/{userName}/credentials)`<br />
Assign a password used for HTTP basic authentication for {userName}
Assign a password used for HTTP basic authentication for \{userName}
Content: JSON password request object

Example request body:
@@ -497,14 +497,14 @@ Return the current load status of the local caches of the authentication Druid m

Root path: `/druid-ext/basic-security/authorization`<br />

Each API endpoint includes {authorizerName}, specifying which Authorizer instance is being configured.
Each API endpoint includes \{authorizerName}, specifying which Authorizer instance is being configured.

##### User Creation/Deletion
`GET(/druid-ext/basic-security/authorization/db/{authorizerName}/users)`<br />
Return a list of all user names.

`GET(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName})`<br />
Return the name and role information of the user with name {userName}
Return the name and role information of the user with name \{userName}

Example output:

@@ -591,20 +591,20 @@ The `resourceNamePattern` is a compiled version of the resource name regex. It i
```

`POST(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName})`<br />
Create a new user with name {userName}
Create a new user with name \{userName}

`DELETE(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName})`<br />
Delete the user with name {userName}
Delete the user with name \{userName}

##### Group mapping Creation/Deletion
`GET(/druid-ext/basic-security/authorization/db/{authorizerName}/groupMappings)`<br />
Return a list of all group mappings.

`GET(/druid-ext/basic-security/authorization/db/{authorizerName}/groupMappings/{groupMappingName})`<br />
Return the group mapping and role information of the group mapping with name {groupMappingName}
Return the group mapping and role information of the group mapping with name \{groupMappingName}

`POST(/druid-ext/basic-security/authorization/db/{authorizerName}/groupMappings/{groupMappingName})`<br />
Create a new group mapping with name {groupMappingName}
Create a new group mapping with name \{groupMappingName}
Content: JSON group mapping object
Example request body:

@@ -619,14 +619,14 @@ Example request body:
```

`DELETE(/druid-ext/basic-security/authorization/db/{authorizerName}/groupMappings/{groupMappingName})`<br />
Delete the group mapping with name {groupMappingName}
Delete the group mapping with name \{groupMappingName}

#### Role Creation/Deletion
`GET(/druid-ext/basic-security/authorization/db/{authorizerName}/roles)`<br />
Return a list of all role names.

`GET(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName})`<br />
Return name and permissions for the role named {roleName}.
Return name and permissions for the role named \{roleName}.

Example output:

@@ -680,30 +680,30 @@ Example output:


`POST(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName})`<br />
Create a new role with name {roleName}.
Create a new role with name \{roleName}.
Content: username string

`DELETE(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName})`<br />
Delete the role with name {roleName}.
Delete the role with name \{roleName}.


#### Role Assignment
`POST(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName}/roles/{roleName})`<br />
Assign role {roleName} to user {userName}.
Assign role \{roleName} to user \{userName}.

`DELETE(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName}/roles/{roleName})`<br />
Unassign role {roleName} from user {userName}
Unassign role \{roleName} from user \{userName}

`POST(/druid-ext/basic-security/authorization/db/{authorizerName}/groupMappings/{groupMappingName}/roles/{roleName})`<br />
Assign role {roleName} to group mapping {groupMappingName}.
Assign role \{roleName} to group mapping \{groupMappingName}.

`DELETE(/druid-ext/basic-security/authorization/db/{authorizerName}/groupMappings/{groupMappingName}/roles/{roleName})`<br />
Unassign role {roleName} from group mapping {groupMappingName}
Unassign role \{roleName} from group mapping \{groupMappingName}


#### Permissions
`POST(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName}/permissions)`<br />
Set the permissions of {roleName}. This replaces the previous set of permissions on the role.
Set the permissions of \{roleName}. This replaces the previous set of permissions on the role.

Content: List of JSON Resource-Action objects, e.g.:

@@ -732,4 +732,4 @@ Please see [Defining permissions](../../operations/security-user-auth.md#definin

##### Cache Load Status
`GET(/druid-ext/basic-security/authorization/loadStatus)`<br />
Return the current load status of the local caches of the authorization Druid metadata store.
Return the current load status of the local caches of the authorization Druid metadata store.
7 changes: 5 additions & 2 deletions docs/development/extensions-core/test-stats.md
@@ -31,13 +31,16 @@ Make sure to include `druid-stats` extension in order to use these aggregators.

Please refer to [https://www.isixsigma.com/tools-templates/hypothesis-testing/making-sense-two-proportions-test/](https://www.isixsigma.com/tools-templates/hypothesis-testing/making-sense-two-proportions-test/) and [http://www.ucs.louisiana.edu/~jcb0773/Berry_statbook/Berry_statbook_chpt6.pdf](http://www.ucs.louisiana.edu/~jcb0773/Berry_statbook/Berry_statbook_chpt6.pdf) for more details.

z = (p1 - p2) / S.E. (assuming null hypothesis is true)
```
z = (p1 - p2) / S.E. (assuming null hypothesis is true)
```

Please see below for the definitions of p1 and p2.
Please note that S.E. stands for standard error, where

```
S.E. = sqrt( p1 * (1 - p1)/n1 + p2 * (1 - p2)/n2 )
```

(p1 - p2) is the observed difference between two sample proportions.
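
As a worked illustration with made-up numbers, suppose p1 = 0.5 with n1 = 100 and p2 = 0.4 with n2 = 100:

```
S.E. = sqrt( 0.5 * 0.5 / 100 + 0.4 * 0.6 / 100 ) = sqrt(0.0049) = 0.07
z    = (0.5 - 0.4) / 0.07 ≈ 1.43
```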

### zscore2sample post aggregator
6 changes: 3 additions & 3 deletions docs/ingestion/data-formats.md
@@ -399,7 +399,7 @@ For details, see the Schema Registry [documentation](http://docs.confluent.io/cu
| type | String | Set value to `schema_registry`. | no |
| url | String | Specifies the URL endpoint of the Schema Registry. | yes |
| capacity | Integer | Specifies the max size of the cache (default = Integer.MAX_VALUE). | no |
| urls | Array<String\> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
| urls | Array\<String\> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
| config | Json | Specifies additional configuration to send to the Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md). | no |
| headers | Json | Specifies headers to send to the Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md). | no |
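
For illustration (the URL is a placeholder), a minimal single-instance decoder configuration built from the properties above might look like:

```json
{
  "type": "schema_registry",
  "url": "http://schema-registry.example.com:8081"
}
```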

@@ -937,7 +937,7 @@ Each entry in the `fields` list can have the following components:
| sum() | Provides the sum value of an array of numbers | Double | &#10003; | &#10003; | &#10003; | &#10003; |
| concat(X) | Provides a concatenated version of the path output with a new item | like input | &#10003; | &#10007; | &#10007; | &#10007; |
| append(X) | Adds an item to the JSON path output array | like input | &#10003; | &#10007; | &#10007; | &#10007; |
| keys() | Provides the property keys (An alternative for terminal tilde ~) | Set<E\> | &#10007; | &#10007; | &#10007; | &#10007; |
| keys() | Provides the property keys (An alternative for terminal tilde ~) | Set\<E\> | &#10007; | &#10007; | &#10007; | &#10007; |

## Parser

@@ -1654,7 +1654,7 @@ For details, see the Schema Registry [documentation](http://docs.confluent.io/cu
| type | String | Set value to `schema_registry`. | yes |
| url | String | Specifies the URL endpoint of the Schema Registry. | yes |
| capacity | Integer | Specifies the max size of the cache (default = Integer.MAX_VALUE). | no |
| urls | Array<String\> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
| urls | Array\<String\> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
| config | Json | Specifies additional configuration to send to the Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md). | no |
| headers | Json | Specifies headers to send to the Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md). | no |

2 changes: 1 addition & 1 deletion docs/ingestion/ingestion-spec.md
@@ -243,7 +243,7 @@ Dimension objects can have the following components:

| Field | Description | Default |
|-------|-------------|---------|
| type | Either `auto`, `string`, `long`, `float`, `double`, or `json`. For the `auto` type, Druid determines the most appropriate type for the dimension and assigns one of the following: STRING, ARRAY<STRING\>, LONG, ARRAY<LONG\>, DOUBLE, ARRAY<DOUBLE\>, or COMPLEX<json\> columns, all sharing a common 'nested' format. When Druid infers the schema with schema auto-discovery, the type is `auto`. | `string` |
| type | Either `auto`, `string`, `long`, `float`, `double`, or `json`. For the `auto` type, Druid determines the most appropriate type for the dimension and assigns one of the following: STRING, ARRAY\<STRING\>, LONG, ARRAY\<LONG\>, DOUBLE, ARRAY\<DOUBLE\>, or COMPLEX\<json\> columns, all sharing a common 'nested' format. When Druid infers the schema with schema auto-discovery, the type is `auto`. | `string` |
| name | The name of the dimension. This will be used as the field name to read from input records, as well as the column name stored in generated segments.<br /><br />Note that you can use a [`transformSpec`](#transformspec) if you want to rename columns during ingestion time. | none (required) |
| createBitmapIndex | For `string` typed dimensions, whether or not bitmap indexes should be created for the column in generated segments. Creating a bitmap index requires more storage, but speeds up certain kinds of filtering (especially equality and prefix filtering). Only supported for `string` typed dimensions. | `true` |
| multiValueHandling | For `string` typed dimensions, specifies the type of handling for [multi-value fields](../querying/multi-value-dimensions.md). Possible values are `array` (ingest string arrays as-is), `sorted_array` (sort string arrays during ingestion), and `sorted_set` (sort and de-duplicate string arrays during ingestion). This parameter is ignored for types other than `string`. | `sorted_array` |
4 changes: 2 additions & 2 deletions docs/ingestion/input-sources.md
@@ -1167,8 +1167,8 @@ It is strongly recommended to apply filtering only on Iceberg partition columns.
|filterColumn|The column from the Iceberg table schema on which the range filter is applied.|None|yes|
|lower|Lower bound value to match.|None|no. At least one of `lower` or `upper` must not be null.|
|upper|Upper bound value to match. |None|no. At least one of `lower` or `upper` must not be null.|
|lowerOpen|Boolean indicating if lower bound is open in the interval of values defined by the range (">" instead of ">="). |false|no|
|upperOpen|Boolean indicating if upper bound is open on the interval of values defined by range ("<" instead of "<="). |false|no|
|lowerOpen|Boolean indicating if lower bound is open in the interval of values defined by the range (`>` instead of `>=`). |false|no|
|upperOpen|Boolean indicating if upper bound is open on the interval of values defined by range (`<` instead of `<=`). |false|no|
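
A hedged sketch of such a filter, assuming the filter type is registered as `range` and using a made-up partition column and bounds, might look like:

```json
{
  "type": "range",
  "filterColumn": "order_date",
  "lower": "2024-01-01",
  "upper": "2024-02-01",
  "upperOpen": true
}
```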

## Delta Lake input source

10 changes: 5 additions & 5 deletions docs/querying/filters.md
@@ -235,8 +235,8 @@ greater than, less than, greater than or equal to, less than or equal to, and "be
| `dimension` | Input column or virtual column name to filter on. | Yes |
| `lower` | The lower bound string match value for the filter. | No |
| `upper`| The upper bound string match value for the filter. | No |
| `lowerStrict` | Boolean indicating whether to perform strict comparison on the `lower` bound (">" instead of ">="). | No, default: `false` |
| `upperStrict` | Boolean indicating whether to perform strict comparison on the upper bound ("<" instead of "<="). | No, default: `false`|
| `lowerStrict` | Boolean indicating whether to perform strict comparison on the `lower` bound (`>` instead of `>=`). | No, default: `false` |
| `upperStrict` | Boolean indicating whether to perform strict comparison on the upper bound (`<` instead of `<=`). | No, default: `false`|
| `ordering` | String that specifies the sorting order to use when comparing values against the bound. Can be one of the following values: `"lexicographic"`, `"alphanumeric"`, `"numeric"`, `"strlen"`, `"version"`. See [Sorting Orders](./sorting-orders.md) for more details. | No, default: `"lexicographic"`|
| `extractionFn` | [Extraction function](./dimensionspecs.md#extraction-functions) to apply to `dimension` prior to value matching. See [filtering with extraction functions](#filtering-with-extraction-functions) for details. | No |
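
For instance, a bound filter equivalent to `21 <= age <= 31` on a hypothetical `age` dimension with numeric ordering might be written as:

```json
{
  "type": "bound",
  "dimension": "age",
  "lower": "21",
  "upper": "31",
  "ordering": "numeric"
}
```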

@@ -319,8 +319,8 @@ Druid's SQL planner uses the range filter by default instead of bound filter whe
| `matchValueType` | String specifying the type of bounds to match. For example `STRING`, `LONG`, `DOUBLE`, `FLOAT`, `ARRAY<STRING>`, `ARRAY<LONG>`, or any other Druid type. The `matchValueType` determines how Druid interprets the `matchValue` to assist in converting to the type of the matched `column` and also defines the type of comparison used when matching values. | Yes |
| `lower` | Lower bound value to match. | No. At least one of `lower` or `upper` must not be null. |
| `upper` | Upper bound value to match. | No. At least one of `lower` or `upper` must not be null. |
| `lowerOpen` | Boolean indicating if lower bound is open in the interval of values defined by the range (">" instead of ">="). | No |
| `upperOpen` | Boolean indicating if upper bound is open on the interval of values defined by range ("<" instead of "<="). | No |
| `lowerOpen` | Boolean indicating if lower bound is open in the interval of values defined by the range (`>` instead of `>=`). | No |
| `upperOpen` | Boolean indicating if upper bound is open on the interval of values defined by range (`<` instead of `<=`). | No |

**Example**: equivalent to `WHERE 21 <= age <= 31`

Expand Down Expand Up @@ -496,7 +496,7 @@ The `arrayContainsElement` filter checks if an `ARRAY` contains a specific eleme

The Interval filter enables range filtering on columns that contain long millisecond values, with the boundaries specified as ISO 8601 time intervals. It is suitable for the `__time` column, long metric columns, and dimensions with values that can be parsed as long milliseconds.

This filter converts the ISO 8601 intervals to long millisecond start/end ranges and translates to an OR of Bound filters on those millisecond ranges, with numeric comparison. The Bound filters will have left-closed and right-open matching (i.e., start <= time < end).
This filter converts the ISO 8601 intervals to long millisecond start/end ranges and translates to an OR of Bound filters on those millisecond ranges, with numeric comparison. The Bound filters will have left-closed and right-open matching (i.e., start `<=` time `<` end).
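
As a sketch with an illustrative interval value, an Interval filter on the `__time` column might look like:

```json
{
  "type": "interval",
  "dimension": "__time",
  "intervals": [
    "2014-10-01T00:00:00.000Z/2014-10-07T00:00:00.000Z"
  ]
}
```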

| Property | Description | Required |
| -------- | ----------- | -------- |