Commit

chore(docs): fix typos through stdlib documentation (#5454)
* chore(docs): fix typos through stdlib documentation

* chore(docs): fix broken post examples in http/requests packages
sanderson authored Nov 28, 2023
1 parent 505186b commit b440835
Showing 15 changed files with 51 additions and 51 deletions.
28 changes: 14 additions & 14 deletions libflux/go/libflux/buildinfo.gen.go

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion stdlib/contrib/chobbs/discord/discord.flux
@@ -120,7 +120,7 @@ send = (
// ```
//
// ## Metadata
-// tags: notifcation endpoints, transformations
+// tags: notification endpoints, transformations
//
endpoint = (webhookToken, webhookID, username, avatar_url="") =>
(mapFn) =>
2 changes: 1 addition & 1 deletion stdlib/experimental/array/array.flux
@@ -58,7 +58,7 @@ from = array.from

// concat appends two arrays and returns a new array.
//
-// **Deprecated**: Experimetnal `array.concat()` is deprecated in favor of
+// **Deprecated**: Experimental `array.concat()` is deprecated in favor of
// [`array.concat()`](https://docs.influxdata.com/flux/v0.x/stdlib/array/concat).
//
// ## Parameters
8 changes: 4 additions & 4 deletions stdlib/experimental/experimental.flux
@@ -412,7 +412,7 @@ builtin join : (left: stream[A], right: stream[B], fn: (left: A, right: B) => C)
// ##### Applicable use cases
// - Write to an InfluxDB bucket and query the written data in a single Flux script.
//
-// _**Note:** `experimental.chain()` does not gaurantee that data written to
+// _**Note:** `experimental.chain()` does not guarantee that data written to
// InfluxDB is immediately queryable. A delay between when data is written and
// when it is queryable may cause a query using `experimental.chain()` to fail.
//
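
To make the note above concrete, here is a minimal sketch of the write-then-query pattern `experimental.chain()` is designed for; the bucket names and the `to()` destination are placeholder assumptions, not part of this commit:

```
import "experimental"

// Write recent data to a downsampled bucket (placeholder bucket names).
written =
    from(bucket: "example-bucket")
        |> range(start: -1h)
        |> to(bucket: "example-downsampled")

// Query the bucket the first query wrote to.
queried =
    from(bucket: "example-downsampled")
        |> range(start: -1h)

// chain() runs `first` to completion, then returns the results of `second` --
// subject to the delay caveat described above.
experimental.chain(first: written, second: queried)
```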
@@ -938,7 +938,7 @@ builtin spread : (<-tables: stream[{T with _value: A}]) => stream[{T with _value
// column for each input table.
//
// ## Standard deviation modes
-// The following modes are avaialable when calculating the standard deviation of data.
+// The following modes are available when calculating the standard deviation of data.
//
// ##### sample
// Calculate the sample standard deviation where the data is considered to be
@@ -1198,7 +1198,7 @@ builtin min : (<-tables: stream[{T with _value: A}]) => stream[{T with _value: A
// - Outputs a single table for each input table.
// - Outputs a single record for each unique value in an input table.
// - Leaves group keys, columns, and values unmodified.
-// - Drops emtpy tables.
+// - Drops empty tables.
//
// ## Parameters
// - tables: Input data. Default is piped-forward data (`<-`).
@@ -1254,7 +1254,7 @@ builtin unique : (<-tables: stream[{T with _value: A}]) => stream[{T with _value
// - tables: Input data. Default is piped-forward data (`<-`).
//
// ## Examples
-// ### Create a histgram from input data
+// ### Create a histogram from input data
// ```
// import "experimental"
// import "sampledata"
2 changes: 1 addition & 1 deletion stdlib/experimental/geo/geo.flux
@@ -179,7 +179,7 @@ package geo
import "experimental"
import "influxdata/influxdb/v1"

-// units defines the unit of measurment used in geotemporal operations.
+// units defines the unit of measurement used in geotemporal operations.
//
// ## Metadata
// introduced: 0.78.0
6 changes: 3 additions & 3 deletions stdlib/experimental/http/requests/requests.flux
@@ -2,7 +2,7 @@
//
// **Deprecated**: This package is deprecated in favor of [`requests`](https://docs.influxdata.com/flux/v0.x/stdlib/http/requests/).
// Do not mix usage of this experimental package with the `requests` package as the `defaultConfig` is not shared between the two packages.
-// This experimental package is completely superceded by the `requests` package so there should be no need to mix them.
+// This experimental package is completely superseded by the `requests` package so there should be no need to mix them.
//
// ## Metadata
// introduced: 0.152.0
@@ -181,8 +181,8 @@ do =
//
// response =
// requests.post(
-//         url: "https://goolnk.com/api/v1/shorten",
-//         body: json.encode(v: {url: "http://www.influxdata.com"}),
+//         url: "https://reqres.in/api/users",
+//         body: json.encode(v: {name: "doc brown", job: "time traveler"}),
// headers: ["Content-Type": "application/json"],
// )
//
4 changes: 2 additions & 2 deletions stdlib/http/requests/requests.flux
@@ -173,8 +173,8 @@ do = (
//
// response =
// requests.post(
-//         url: "https://goolnk.com/api/v1/shorten",
-//         body: json.encode(v: {url: "http://www.influxdata.com"}),
+//         url: "https://reqres.in/api/users",
+//         body: json.encode(v: {name: "doc brown", job: "time traveler"}),
// headers: ["Content-Type": "application/json"],
// )
//
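
Assembled from the corrected lines above, a runnable sketch of the full fixed example; the trailing `requests.peek()` call, which renders the response as a table, is an assumption added here for illustration:

```
import "http/requests"
import "json"

response =
    requests.post(
        url: "https://reqres.in/api/users",
        body: json.encode(v: {name: "doc brown", job: "time traveler"}),
        headers: ["Content-Type": "application/json"],
    )

// Convert the HTTP response into a displayable table.
requests.peek(response: response)
```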
2 changes: 1 addition & 1 deletion stdlib/influxdata/influxdb/secrets/secrets.flux
@@ -14,7 +14,7 @@ package secrets
//
// ## Examples
//
-// ### Retrive a key from the InfluxDB secret store
+// ### Retrieve a key from the InfluxDB secret store
// ```no_run
// import "influxdata/influxdb/secrets"
//
2 changes: 1 addition & 1 deletion stdlib/influxdata/influxdb/tasks/tasks.flux
@@ -25,7 +25,7 @@ builtin _lastSuccess : (orTime: T, lastSuccessTime: time) => time where T: Timea
//
// ## Examples
//
-// ### Return the time an InfluxDB task last succesfully ran
+// ### Return the time an InfluxDB task last successfully ran
// ```no_run
// import "influxdata/influxdb/tasks"
//
8 changes: 4 additions & 4 deletions stdlib/math/math.flux
@@ -537,7 +537,7 @@ builtin cosh : (x: float) => float
//
// ## Examples
//
-// ### Return the maximum difference betwee two values
+// ### Return the maximum difference between two values
// ```no_run
// import "math"
//
@@ -1191,7 +1191,7 @@ builtin isNaN : (f: float) => bool
//
builtin j0 : (x: float) => float

-// j1 is a funciton that returns the order-one Bessel function for the first kind.
+// j1 is a function that returns the order-one Bessel function for the first kind.
//
// ## Parameters
// - x: Value to operate on.
@@ -1223,7 +1223,7 @@ builtin j0 : (x: float) => float
//
builtin j1 : (x: float) => float

-// jn returns the order-n Bessel funciton of the first kind.
+// jn returns the order-n Bessel function of the first kind.
//
// ## Parameters
// - n: Order number.
@@ -1566,7 +1566,7 @@ builtin logb : (x: float) => float
//
builtin mMax : (x: float, y: float) => float

-// mMin is a function taht returns the lessser of `x` or `y`.
+// mMin is a function that returns the lesser of `x` or `y`.
//
// ## Parameters
// - x: x-value to use in the operation.
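
A one-line illustration of the corrected sentence, with arbitrary example values:

```
import "math"

math.mMin(x: 1.23, y: 4.56) // returns 1.23, the lesser of the two values
```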
2 changes: 1 addition & 1 deletion stdlib/pagerduty/pagerduty.flux
@@ -54,7 +54,7 @@ option defaultURL = "https://events.pagerduty.com/v2/enqueue"
//
// ## Examples
//
-// ### Convert a status level to a PagerDuty serverity
+// ### Convert a status level to a PagerDuty severity
// ```no_run
// import "pagerduty"
//
2 changes: 1 addition & 1 deletion stdlib/sql/sql.flux
@@ -60,7 +60,7 @@
// authentication credentials using one of the following methods:
//
// - The `GOOGLE_APPLICATION_CREDENTIALS` environment variable that identifies the
-//   location of yur credential JSON file.
+//   location of your credential JSON file.
// - Provide your BigQuery credentials using the `credentials` URL parameters in your BigQuery DSN.
//
// #### BigQuery credential URL parameter
2 changes: 1 addition & 1 deletion stdlib/strings/strings.flux
@@ -391,7 +391,7 @@ builtin compare : (v: string, t: string) => int
// ## Parameters
//
// - v: String value to search.
-// - substr: Substring to count occurences of.
+// - substr: Substring to count occurrences of.
//
// The function counts only non-overlapping instances of `substr`.
//
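
A short sketch of the non-overlapping counting described above; the input string is an arbitrary assumption:

```
import "strings"

strings.countStr(v: "ooooo", substr: "oo") // returns 2, not 4: matches may not overlap
```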
2 changes: 1 addition & 1 deletion stdlib/types/types.flux
@@ -130,7 +130,7 @@ builtin isType : (v: A, type: string) => bool where A: Basic
// isNumeric tests if a value is a numeric type (int, uint, or float).
//
// This is a helper function to test or filter for values that can be used in
-// arithmatic operations or aggregations.
+// arithmetic operations or aggregations.
//
// ## Parameters
// - v: Value to test.
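
As a sketch of the filtering use case this doc comment describes, assuming a placeholder bucket:

```
import "types"

from(bucket: "example-bucket")
    |> range(start: -1h)
    |> filter(fn: (r) => types.isNumeric(v: r._value)) // keep only int, uint, or float values
```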
30 changes: 15 additions & 15 deletions stdlib/universe/universe.flux
@@ -700,7 +700,7 @@ builtin first : (<-tables: stream[A], ?column: string) => stream[A] where A: Rec

// group regroups input data by modifying group key of input tables.
//
-// **Note**: Group does not gaurantee sort order.
+// **Note**: Group does not guarantee sort order.
// To ensure data is sorted correctly, use `sort()` after `group()`.
//
// ## Parameters
@@ -711,7 +711,7 @@
//
// - mode: Grouping mode. Default is `by`.
//
-//   **Avaliable modes**:
+//   **Available modes**:
// - **by**: Group by columns defined in the `columns` parameter.
//   - **except**: Group by all columns _except_ those defined in the
// `columns` parameter.
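
Tying the two corrected notes together, a minimal sketch that groups by a column and then sorts explicitly, since `group()` does not guarantee sort order; `sampledata` stands in for real input:

```
import "sampledata"

sampledata.int()
    |> group(columns: ["tag"], mode: "by") // regroup by the tag column
    |> sort(columns: ["_value"]) // group() does not guarantee order, so sort explicitly
```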
@@ -1196,7 +1196,7 @@ builtin join : (<-tables: A, ?method: string, ?on: [string]) => stream[B] where
//
// ## Examples
//
-// ### Caclulate Kaufman's Adaptive Moving Average for input data
+// ### Calculate Kaufman's Adaptive Moving Average for input data
// ```
// import "sampledata"
//
@@ -1215,7 +1215,7 @@ builtin kaufmansAMA : (<-tables: stream[A], n: int, ?column: string) => stream[B

// keep returns a stream of tables containing only the specified columns.
//
-// Columns in the group key that are not specifed in the `columns` parameter or
+// Columns in the group key that are not specified in the `columns` parameter or
// identified by the `fn` parameter are removed from the group key and dropped
// from output tables. `keep()` is the inverse of `drop()`.
//
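
A minimal sketch of the behavior described above, using `sampledata` as stand-in input; columns not listed are dropped:

```
import "sampledata"

sampledata.int()
    |> keep(columns: ["_time", "_value"]) // inverse of drop(): everything else is removed
```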
@@ -1696,7 +1696,7 @@ builtin limit : (<-tables: stream[A], n: int, ?offset: int) => stream[A]
// the group key.
//
// #### Preserve columns
-// `map()` drops any columns that are not mapped explictly by column label or
+// `map()` drops any columns that are not mapped explicitly by column label or
// implicitly using the `with` operator in the `fn` function.
// The `with` operator updates a record property if it already exists, creates
// a new record property if it doesn’t exist, and includes all existing
@@ -1917,7 +1917,7 @@ builtin movingAverage : (
// - q: Quantile to compute. Must be between `0.0` and `1.0`.
// - method: Computation method. Default is `estimate_tdigest`.
//
-//   **Avaialable methods**:
+//   **Available methods**:
//
// - **estimate_tdigest**: Aggregate method that uses a
// [t-digest data structure](https://github.com/tdunning/t-digest) to
@@ -2465,7 +2465,7 @@ builtin skew : (<-tables: stream[A], ?column: string) => stream[B] where A: Reco
//
builtin spread : (<-tables: stream[A], ?column: string) => stream[B] where A: Record, B: Record

-// sort orders rows in each intput table based on values in specified columns.
+// sort orders rows in each input table based on values in specified columns.
//
// #### Output data
// One output table is produced for each input table.
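
A minimal sketch of the corrected description, using `sampledata`; the descending order is an arbitrary choice:

```
import "sampledata"

sampledata.int()
    |> sort(columns: ["_value"], desc: true) // one sorted output table per input table
```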
@@ -2585,7 +2585,7 @@ builtin stateTracking : (
// - mode: Standard deviation mode or type of standard deviation to calculate.
// Default is `sample`.
//
-//   **Availble modes:**
+//   **Available modes:**
//
// - **sample**: Calculate the sample standard deviation where the data is
// considered part of a larger population.
@@ -3122,7 +3122,7 @@ builtin findRecord : (<-tables: stream[A], fn: (key: B) => bool, idx: int) => A
// ### Convert all values in a column to booleans
// If converting the `_value` column to boolean types, use `toBool()`.
// If converting columns other than `_value`, use `map()` to iterate over each
-// row and `bool()` to covert a column value to a boolean type.
+// row and `bool()` to convert a column value to a boolean type.
//
// ```
// # import "sampledata"
@@ -3234,7 +3234,7 @@ builtin duration : (v: A) => duration
// ### Convert all values in a column to floats
// If converting the `_value` column to float types, use `toFloat()`.
// If converting columns other than `_value`, use `map()` to iterate over each
-// row and `float()` to covert a column value to a float type.
+// row and `float()` to convert a column value to a float type.
//
// ```
// # import "sampledata"
@@ -3294,7 +3294,7 @@ builtin _vectorizedFloat : (v: vector[A]) => vector[float]
// ### Convert all values in a column to integers
// If converting the `_value` column to integer types, use `toInt()`.
// If converting columns other than `_value`, use `map()` to iterate over each
-// row and `int()` to covert a column value to a integer type.
+// row and `int()` to convert a column value to a integer type.
//
// ```
// # import "sampledata"
@@ -3331,7 +3331,7 @@ builtin int : (v: A) => int
// ### Convert all values in a column to strings
// If converting the `_value` column to string types, use `toString()`.
// If converting columns other than `_value`, use `map()` to iterate over each
-// row and `string()` to covert a column value to a string type.
+// row and `string()` to convert a column value to a string type.
//
// ```
// # import "sampledata"
@@ -3373,7 +3373,7 @@ builtin string : (v: A) => string
// ### Convert all values in a column to time
// If converting the `_value` column to time types, use `toTime()`.
// If converting columns other than `_value`, use `map()` to iterate over each
-// row and `time()` to covert a column value to a time type.
+// row and `time()` to convert a column value to a time type.
//
// ```
// # import "sampledata"
@@ -3422,7 +3422,7 @@ builtin time : (v: A) => time
// ### Convert all values in a column to unsigned integers
// If converting the `_value` column to uint types, use `toUInt()`.
// If converting columns other than `_value`, use `map()` to iterate over each
-// row and `uint()` to covert a column value to a uint type.
+// row and `uint()` to convert a column value to a uint type.
//
// ```
// # import "sampledata"
@@ -3951,7 +3951,7 @@ increase = (tables=<-, columns=["_value"]) =>
// - column: Column to use to compute the median. Default is `_value`.
// - method: Computation method. Default is `estimate_tdigest`.
//
-//   **Avaialable methods**:
+//   **Available methods**:
//
// - **estimate_tdigest**: Aggregate method that uses a
// [t-digest data structure](https://github.com/tdunning/t-digest) to
