Docs: Miscellaneous typo fixes #1909

Open · wants to merge 1 commit into base: master
@@ -328,9 +328,9 @@ Preparing Snowflake for use with Estuary Flow involves the following steps:

1\. Keep the Flow web app open and open a new tab or window to access your Snowflake console.

3\. Create a new SQL worksheet. This provides a platform to execute queries.
2\. Create a new SQL worksheet. This provides a platform to execute queries.

4\. Paste the provided script into the SQL console, adjusting the value for `estuary_password` to a strong password.
3\. Paste the provided script into the SQL console, adjusting the value for `estuary_password` to a strong password.

```sql
set database_name = 'ESTUARY_DB';
@@ -373,11 +373,11 @@
use role sysadmin;
COMMIT;
```

5\. Execute all the queries by clicking the drop-down arrow next to the Run button and selecting "Run All."
4\. Execute all the queries by clicking the drop-down arrow next to the Run button and selecting "Run All."

6\. Snowflake will process the queries, setting up the necessary roles, databases, schemas, users, and warehouses for Estuary Flow.
5\. Snowflake will process the queries, setting up the necessary roles, databases, schemas, users, and warehouses for Estuary Flow.

7\. Once the setup is complete, return to the Flow web application to continue with the integration process.
6\. Once the setup is complete, return to the Flow web application to continue with the integration process.
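
For orientation, here is a minimal sketch of the kinds of statements such a setup script contains: creating a role, warehouse, database, schema, and user, and granting the role access to them. All object names, the warehouse sizing, and the password below are placeholders, and this is not the provided script; run the script supplied in the Flow web app rather than this sketch.

```sql
-- Illustrative sketch only; run the script provided in the Flow web app.
-- Object names, sizing, and the password are placeholders.
create role if not exists ESTUARY_ROLE;
create warehouse if not exists ESTUARY_WH
  with warehouse_size = 'XSMALL' auto_suspend = 60 initially_suspended = true;
create database if not exists ESTUARY_DB;
create schema if not exists ESTUARY_DB.ESTUARY_SCHEMA;
create user if not exists ESTUARY_USER
  password = 'replace-with-a-strong-password'
  default_role = ESTUARY_ROLE
  default_warehouse = ESTUARY_WH;
grant role ESTUARY_ROLE to user ESTUARY_USER;
grant usage on warehouse ESTUARY_WH to role ESTUARY_ROLE;
grant all on database ESTUARY_DB to role ESTUARY_ROLE;
grant all on schema ESTUARY_DB.ESTUARY_SCHEMA to role ESTUARY_ROLE;
```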

Back in Flow, head over to the **Destinations** page, where you can [create a new Materialization](https://dashboard.estuary.dev/materializations/create).

2 changes: 1 addition & 1 deletion site/docs/guides/dekaf_reading_collections_from_kafka.md
@@ -101,7 +101,7 @@ kcat -C \
-X sasl.username="{}" \
-X sasl.password="Your_Estuary_Refresh_Token" \
-b dekaf.estuary-data.com:9092 \
-t "full/nameof/estuarycolletion" \
-t "full/nameof/estuarycollection" \
-p 0 \
-o beginning \
-s avro \
2 changes: 1 addition & 1 deletion site/docs/reference/Configuring-task-shards.md
@@ -3,7 +3,7 @@ sidebar_position: 2
---
# Configuring task shards

For some catalog tasks, it's helpful to control the behavior of [shards](../concepts/advanced/shards.md)
For some catalog tasks, it's helpful to control the behavior of [shards](../concepts/advanced/shards.md).
You do this by adding the `shards` configuration to the capture or materialization configuration.

## Properties
@@ -19,9 +19,9 @@ If the dataset has a natural cursor that can identify only new or updated rows,
1. Ensure that [Estuary's IP addresses are allowlisted](/reference/allow-ip-addresses) to allow access. You can do this by
following [these steps](https://docs.singlestore.com/cloud/reference/management-api/#control-access-to-the-api).
2. Grab the following details from the SingleStore workspace.
3. Workspace URL
4. Username
5. Password
6. Database
7. Configure the Connector with the appropriate values. Make sure to specify the database name under the "Advanced"
1. Workspace URL
2. Username
3. Password
4. Database
3. Configure the Connector with the appropriate values. Make sure to specify the database name under the "Advanced"
section.
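
To make the "natural cursor" mentioned above concrete: a cursor is typically a monotonically increasing column such as an updated-at timestamp or an auto-incrementing ID. The sketch below shows the kind of incremental query such a column enables; the table and column names are hypothetical, and this is not the connector's internal query.

```sql
-- Hypothetical table and cursor column, shown only to illustrate the idea
-- of a natural cursor for incremental capture.
SELECT id, customer_id, total, updated_at
FROM orders
WHERE updated_at > '2024-05-01 00:00:00'  -- cursor value saved from the previous poll
ORDER BY updated_at;
```
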
@@ -183,7 +183,7 @@ store them separately.

TOASTed values can sometimes present a challenge for systems that rely on the PostgreSQL write-ahead log (WAL), like this connector.
If a change event occurs on a row that contains a TOASTed value, _but the TOASTed value itself is unchanged_, it is omitted from the WAL.
As a result, the connector emits a row update with the a value omitted, which might cause
As a result, the connector emits a row update with the value omitted, which might cause
unexpected results in downstream catalog tasks if adjustments are not made.
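
As an aside, one general-purpose PostgreSQL mitigation, independent of this connector, is to set a table's replica identity to FULL so that complete old row images (including TOASTed columns) are written to the WAL, at the cost of extra WAL volume. A sketch with a hypothetical table follows; it is not required here, since the connector handles TOASTed values for you as described below.

```sql
-- Illustrative only: the table is hypothetical, and REPLICA IDENTITY FULL is a
-- general PostgreSQL option rather than a requirement of this connector.
CREATE TABLE IF NOT EXISTS public.events (
    id      BIGINT PRIMARY KEY,
    payload JSONB   -- large values in this column are candidates for TOAST storage
);

-- Log complete old row images so unchanged TOASTed values still appear in the
-- WAL on updates, at the cost of additional WAL volume.
ALTER TABLE public.events REPLICA IDENTITY FULL;
```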

The PostgreSQL connector handles TOASTed values for you when you follow the [standard discovery workflow](/concepts/connectors.md#flowctl-discover)
@@ -139,7 +139,7 @@ store them separately.

TOASTed values can sometimes present a challenge for systems that rely on the PostgreSQL write-ahead log (WAL), like this connector.
If a change event occurs on a row that contains a TOASTed value, _but the TOASTed value itself is unchanged_, it is omitted from the WAL.
As a result, the connector emits a row update with the a value omitted, which might cause
As a result, the connector emits a row update with the value omitted, which might cause
unexpected results in downstream catalog tasks if adjustments are not made.

The PostgreSQL connector handles TOASTed values for you when you follow the [standard discovery workflow](../../../../concepts/connectors.md#flowctl-discover)
@@ -195,7 +195,7 @@ store them separately.

TOASTed values can sometimes present a challenge for systems that rely on the PostgreSQL write-ahead log (WAL), like this connector.
If a change event occurs on a row that contains a TOASTed value, _but the TOASTed value itself is unchanged_, it is omitted from the WAL.
As a result, the connector emits a row update with the a value omitted, which might cause
As a result, the connector emits a row update with the value omitted, which might cause
unexpected results in downstream catalog tasks if adjustments are not made.

The PostgreSQL connector handles TOASTed values for you when you follow the [standard discovery workflow](../../../../concepts/connectors.md#flowctl-discover)
@@ -24,7 +24,7 @@ See [limitations](#limitations) to learn more about reconciling historical and r

## Supported data resources

Alpaca supports over 8000 stocks and EFTs. You simply supply a list of [symbols](https://eoddata.com/symbols.aspx) to Flow when you configure the connector.
Alpaca supports over 8000 stocks and ETFs. You simply supply a list of [symbols](https://eoddata.com/symbols.aspx) to Flow when you configure the connector.
To check whether Alpaca supports a symbol, you can use the [Alpaca Broker API](https://alpaca.markets/docs/api-references/broker-api/assets/#retrieving-an-asset-by-symbol).

You can use this connector to capture data from up to 20 stock symbols into Flow collections in a single capture
@@ -25,7 +25,7 @@ See the steps below to set up access.

### Setup: Public buckets

For a public buckets, the bucket access policy must allow anonymous reads on the whole bucket or a specific prefix.
For a public bucket, the bucket access policy must allow anonymous reads on the whole bucket or a specific prefix.

1. Create a bucket policy using the templates below.

@@ -112,7 +112,7 @@ See [connectors](../../../concepts/connectors.md#using-connectors) to learn more
| **`schema_registry/schema_registry_type`** | Schema Registry Type | Either `confluent_schema_registry` or `no_schema_registry`. | object | Required |
| `/schema_registry/endpoint` | Schema Registry Endpoint | Schema registry API endpoint. For example: https://registry-id.us-east-2.aws.confluent.cloud. | string | |
| `/schema_registry/username` | Schema Registry Username | Schema registry username to use for authentication. If you are using Confluent Cloud, this will be the 'Key' from your schema registry API key. | string | |
| `/schema_registry/password` | Schema Registry Password | Schema registry password to use for authentication. If you are using Confluent Cloud, this will be the 'Secret' from your schema registry API key.. string | |
| `/schema_registry/password` | Schema Registry Password | Schema registry password to use for authentication. If you are using Confluent Cloud, this will be the 'Secret' from your schema registry API key. | string | |
| `/schema_registry/enable_json_only` | Capture Messages in JSON Format Only | If no schema registry is configured the capture will attempt to parse all data as JSON, and discovered collections will use a key of the message partition & offset. | boolean | |


@@ -32,7 +32,7 @@ the manual method is the only supported method using the command line.

### Signing in with OAuth2

To use OAuth2 in the Flow web app, you'll need A Facebook Business account and its [Ad Account ID](https://www.facebook.com/business/help/1492627900875762).
To use OAuth2 in the Flow web app, you'll need a Facebook Business account and its [Ad Account ID](https://www.facebook.com/business/help/1492627900875762).

### Configuring manually with an access token

8 changes: 4 additions & 4 deletions site/docs/reference/Connectors/capture-connectors/gitlab.md
@@ -9,7 +9,7 @@ This connector is based on an open-source connector from a third party, with mod

## Supported data resources

When you [configure the connector](#endpoint), you may a list of GitLab Groups or Projects from which to capture data.
When you [configure the connector](#endpoint), you may provide a list of GitLab Groups or Projects from which to capture data.

From your selection, the following data resources are captured:

@@ -32,7 +32,7 @@ From your selection, the following data resources are captured:
- [Releases](https://docs.gitlab.com/ee/api/releases/index.html)
- [Group Labels](https://docs.gitlab.com/ee/api/group_labels.html)
- [Project Labels](https://docs.gitlab.com/ee/api/labels.html)
- [Epics](https://docs.gitlab.com/ee/api/epics.html)(only available for GitLab Ultimate and GitLab.com Gold accounts)
- [Epics](https://docs.gitlab.com/ee/api/epics.html) (only available for GitLab Ultimate and GitLab.com Gold accounts)
- [Epic Issues](https://docs.gitlab.com/ee/api/epic_issues.html) (only available for GitLab Ultimate and GitLab.com Gold accounts)

Each resource is mapped to a Flow collection through a separate binding.
@@ -43,7 +43,7 @@ There are two ways to authenticate with GitLab when capturing data into Flow: us
Their prerequisites differ.

OAuth is recommended for simplicity in the Flow web app;
the access token method is the only supported method using the command line. Which authentication method you choose depends on the policies of your organization. Github has special organization settings that need to be enabled in order for users to be able to access repos that are part of an organization.
the access token method is the only supported method using the command line. Which authentication method you choose depends on the policies of your organization. GitLab has special organization settings that need to be enabled in order for users to be able to access repos that are part of an organization.

### Using OAuth2 to authenticate with GitLab in the Flow web app

@@ -53,7 +53,7 @@ the access token method is the only supported method using the command line. Whi

* A GitLab user account with access to all entities of interest.

* A GitLab [personal access token](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html)).
* A GitLab [personal access token](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html).

## Configuration

@@ -30,7 +30,7 @@ You add these to the [endpoint configuration](#endpoint) in the format `{"name":
Each report is mapped to an additional Flow collection.

:::caution
Custom reports involve an integration with Google Universal Analytics, which Google will deprecate in July 2023.
Custom reports involve an integration with Google Universal Analytics, which Google deprecated in July 2023.
:::

## Prerequisites
@@ -59,7 +59,7 @@ You'll need:

Follow the steps below to meet these prerequisites:

1. Create a [service account and generate a JSON key](https://developers.google.com/identity/protocols/oauth2/service-account#creatinganaccount)
1. Create a [service account and generate a JSON key](https://developers.google.com/identity/protocols/oauth2/service-account#creatinganaccount).
You'll copy the contents of the downloaded key file into the Service Account Credentials parameter when you configure the connector.

2. [Set up domain-wide delegation for the service account](https://developers.google.com/workspace/guides/create-credentials#optional_set_up_domain-wide_delegation_for_a_service_account).
@@ -93,7 +93,7 @@ so many of these properties aren't required.

| Property | Title | Description | Type | Required/Default |
|---|---|---|---|---|
| **`/stream`** | Stream | Google Search Consol resource from which a collection is captured. | string | Required |
| **`/stream`** | Stream | Google Search Console resource from which a collection is captured. | string | Required |
| **`/syncMode`** | Sync Mode | Connection method. | string | Required |

### Sample
@@ -27,10 +27,10 @@ spreadsheet:
1. The first row must be frozen and contain header names for each column.
1. If the first row is not frozen or does not contain header names, header names will
be set using uppercase letters (A,B,C,D...Z).
2. Sheet is not a image sheet or contains images.
2. Sheet is not an image sheet or contains images.
3. Sheet is not empty.
1. If a Sheet is empty, the connector will not break and will wait for changes
inside the Sheet. When new data arrives, you will be prompted by flow to allow
inside the Sheet. When new data arrives, you will be prompted by Flow to allow
for schema changes.
4. Sheet does not contain `formulaValue` inside any cell.

3 changes: 2 additions & 1 deletion site/docs/reference/Connectors/capture-connectors/harvest.md
@@ -36,6 +36,7 @@ The following data resources are supported through the Harvest APIs:
* [Uninvoiced Report](https://help.getharvest.com/api-v2/reports-api/reports/uninvoiced-report/)
* [Time Reports](https://help.getharvest.com/api-v2/reports-api/reports/time-reports/)
* [Project Budget Report](https://help.getharvest.com/api-v2/reports-api/reports/project-budget-report/)

By default, each resource is mapped to a Flow collection through a separate binding.

## Prerequisites
@@ -55,7 +56,7 @@ See [connectors](../../../concepts/connectors.md#using-connectors) to learn more
|---|---|---|---|---|
| `/account_id` | Account ID | Harvest account ID. Required for all Harvest requests in pair with Personal Access Token. | string | Required |
| `/start_date` | Start Date | UTC date and time in the format 2021-01-25T00:00:00Z. Any data before this date will not be replicated. | string | Required |
| `/end_date` | End Date | UTC date and time in the format 2021-01-25T00:00:00Z. Any data before this date will not be replicated. | string | Default |
| `/end_date` | End Date | UTC date and time in the format 2021-01-25T00:00:00Z. Any data after this date will not be replicated. | string | Default |

#### Bindings

@@ -11,7 +11,7 @@ This connector is based on an open-source connector from a third party, with mod

This connector can be used to sync the following tables from Marketo:

* **activities\_X** where X is an activity type contains information about lead activities of the type X. For example, activities\_send\_email contains information about lead activities related to the activity type `send_email`. See the [Marketo docs](https://developers.marketo.com/rest-api/endpoint-reference/lead-database-endpoint-reference/#!/Activities/getLeadActivitiesUsingGET) for a detailed explanation of what each column means.
* **activities\_X** where X is an activity type. Contains information about lead activities of the type X. For example, activities\_send\_email contains information about lead activities related to the activity type `send_email`. See the [Marketo docs](https://developers.marketo.com/rest-api/endpoint-reference/lead-database-endpoint-reference/#!/Activities/getLeadActivitiesUsingGET) for a detailed explanation of what each column means.
* **activity\_types.** Contains metadata about activity types. See the [Marketo docs](https://developers.marketo.com/rest-api/endpoint-reference/lead-database-endpoint-reference/#!/Activities/getAllActivityTypesUsingGET) for a detailed explanation of columns.
* **campaigns.** Contains info about your Marketo campaigns. [Marketo docs](https://developers.marketo.com/rest-api/endpoint-reference/lead-database-endpoint-reference/#!/Campaigns/getCampaignsUsingGET).
* **leads.** Contains info about your Marketo leads. [Marketo docs](https://developers.marketo.com/rest-api/endpoint-reference/lead-database-endpoint-reference/#!/Leads/getLeadByIdUsingGET).
2 changes: 0 additions & 2 deletions site/docs/reference/Connectors/capture-connectors/snapchat.md
@@ -23,8 +23,6 @@ This connector can be used to sync the following tables from Snapchat:
* AdaccountsStatsLifetime
* AdsStatsHourly
* AdsStatsDaily
* AdsStatsHourly
* AdsStatsDaily
* AdsStatsLifetime
* AdsquadsStatsDaily
* AdsquadsStatsLifetime
@@ -89,7 +89,7 @@ so many of these properties aren't required.
| `/credentials/access_token` | Access Token | The long-term authorized access token. | string | |
| `/end_date` | End Date | The date until which you'd like to replicate data for all incremental streams, in the format YYYY-MM-DD. All data generated between `start_date` and this date will be replicated. Not setting this option will result in always syncing the data till the current date. | string | |
| `/report_granularity` | Report Aggregation Granularity | The granularity used for [aggregating performance data in reports](#report-aggregation). Choose `DAY`, `LIFETIME`, or `HOUR`.| string | |
| `/start_date` | Start Date | Replication Start Date | The Start Date in format: YYYY-MM-DD. Any data before this date will not be replicated. If this parameter is not set, all data will be replicated. | string | |
| `/start_date` | Replication Start Date | The Start Date in format: YYYY-MM-DD. Any data before this date will not be replicated. If this parameter is not set, all data will be replicated. | string | |

#### Bindings

2 changes: 1 addition & 1 deletion site/docs/reference/Connectors/dekaf/dekaf-clickhouse.md
@@ -17,7 +17,7 @@ array of sources supported by Estuary Flow directly into ClickHouse, using Dekaf

## Step 1: Configure Data Source in Estuary Flow

1. **Generate a [Refresh Token](Estuary Refresh Token ([Generate a refresh token](/guides/how_to_generate_refresh_token))**:
1. **Generate an [Estuary Refresh Token](/guides/how_to_generate_refresh_token)**:
- To access the Kafka-compatible topics, create a refresh token in the Estuary Flow dashboard. This token will act
as the password for both the broker and schema registry.

@@ -74,7 +74,7 @@ To allow SSH tunneling to a database instance hosted on AWS, you'll need to crea
1. Refer to the [guide](/guides/connect-network/) to configure an SSH server on the cloud platform of your choice.

2. Configure your connector as described in the [configuration](#configuration) section above,
with the additional of the `networkTunnel` stanza to enable the SSH tunnel, if using.
with the addition of the `networkTunnel` stanza to enable the SSH tunnel, if using.
See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks)
for additional details and a sample.
