Deprecate SQS queue global setting #10672

Merged
23 changes: 13 additions & 10 deletions packages/carbon_black_cloud/_dev/build/docs/README.md
@@ -31,7 +31,7 @@ This module has been tested against `Alerts API (v7) [Beta]`, `Alerts API (v6)`,
### In order to ingest data from the AWS S3 bucket you must:
1. Configure the [Data Forwarder](https://docs.vmware.com/en/VMware-Carbon-Black-Cloud/services/carbon-black-cloud-user-guide/GUID-F68F63DD-2271-4088-82C9-71D675CD0535.html) to ingest data into an AWS S3 bucket.
2. Create an [AWS Access Keys and Secret Access Keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys).
3. The default value of the "Bucket List Prefix" is listed below. However, the user can set the parameter "Bucket List Prefix" according to the requirement.
3. The default values of the "Bucket List Prefix" are listed below. However, users can set the parameter "Bucket List Prefix" according to their requirements.

| Data Stream Name | Bucket List Prefix |
| ----------------- | ---------------------- |
@@ -42,17 +42,20 @@ This module has been tested against `Alerts API (v7) [Beta]`, `Alerts API (v6)`,

### To collect data from AWS SQS, follow the below steps:
1. If data forwarding to an AWS S3 Bucket hasn't been configured, then first setup an AWS S3 Bucket as mentioned in the above documentation.
2. To set up an SQS queue, follow "Step 1: Create an Amazon SQS queue" mentioned in the [Documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html).
- While creating an SQS Queue, please provide the same bucket ARN that has been generated after creating an AWS S3 Bucket.
3. Set up event notification for an S3 bucket. Follow this [Link](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html).
- The user has to perform Step 3 for all the data streams individually, and each time prefix parameter should be set the same as the S3 Bucket List Prefix as created earlier. (for example, `alert_logs/` for the alert data stream.)
- For all the event notifications that have been created, select the event type as s3:ObjectCreated:*, select the destination type SQS Queue, and select the queue that has been created in Step 2.
2. Follow the steps below for each data stream that has been enabled:
1. Create an SQS queue
- To setup an SQS queue, follow "Step 1: Create an Amazon SQS queue" mentioned in the [Amazon documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html).
- While creating an SQS Queue, please provide the same bucket ARN that has been generated after creating an AWS S3 Bucket.
2. Setup event notification from the S3 bucket using the instructions [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html). Use the following settings:
- Event type: `All object create events` (`s3:ObjectCreated:*`)
- Destination: SQS Queue
- Prefix (filter): enter the prefix for this data stream, e.g. `alert_logs/`
- Select the SQS queue that has been created for this data stream

**Note**:
- Credentials for the above AWS S3 and SQS input types should be configured using the [link](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#aws-credentials-config).
- A separate SQS queue and S3 bucket notification is required for each enabled data stream.
- Permissions for the above AWS S3 bucket and SQS queues should be configured according to the [Filebeat S3 input documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#_aws_permissions_2)
- Data collection via AWS S3 Bucket and AWS SQS are mutually exclusive in this case.
- When configuring SQS queues, separate queues should be used for each data stream instead of the global SQS queue from version 1.21 onwards to avoid data
loss. File selectors should not be used to filter out data stream logs using the global queue as it was in versions prior.
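
For illustration, the per-data-stream notification setup described above can also be scripted. The sketch below assumes boto3; the bucket name, queue ARNs, and every prefix other than `alert_logs/` are placeholders to be replaced with your own values.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder names -- substitute your own bucket and the dedicated queue created
# for each enabled data stream. Only the alert_logs/ prefix comes from the docs above;
# the other prefixes are illustrative.
BUCKET = "my-carbon-black-cloud-bucket"
STREAMS = {
    "alert": ("alert_logs/", "arn:aws:sqs:us-east-1:123456789012:cbc-alert"),
    "endpoint_event": ("endpoint_event_logs/", "arn:aws:sqs:us-east-1:123456789012:cbc-endpoint-event"),
    "watchlist_hit": ("watchlist_hit_logs/", "arn:aws:sqs:us-east-1:123456789012:cbc-watchlist-hit"),
}

# One s3:ObjectCreated:* notification per data stream, filtered on that stream's
# prefix and delivered to that stream's own SQS queue (no shared/global queue).
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "Id": f"{stream}-object-created",
                "QueueArn": queue_arn,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {"Key": {"FilterRules": [{"Name": "prefix", "Value": prefix}]}},
            }
            for stream, (prefix, queue_arn) in STREAMS.items()
        ]
    },
)
```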

### In order to ingest data from the APIs you must generate API keys and API Secret Keys:
1. In Carbon Black Cloud, On the left navigation pane, click **Settings > API Access**.
@@ -127,4 +130,4 @@ This is the `asset_vulnerability_summary` dataset.

{{event "asset_vulnerability_summary"}}

{{fields "asset_vulnerability_summary"}}
{{fields "asset_vulnerability_summary"}}
5 changes: 5 additions & 0 deletions packages/carbon_black_cloud/changelog.yml
@@ -1,4 +1,9 @@
# newer versions go on top
- version: "2.4.0"
changes:
- description: Deprecate global SQS Queue URL to avoid data loss.
type: bugfix
link: https://github.com/elastic/integrations/pull/10672
- version: "2.3.0"
changes:
- description: Improve error reporting for API request failures.
4 changes: 2 additions & 2 deletions packages/carbon_black_cloud/data_stream/alert/manifest.yml
@@ -62,13 +62,13 @@ streams:
vars:
- name: queue_url_alert
type: text
title: "[Alert][SQS] Queue URL"
title: "[SQS] Queue URL"
multi: false
required: false
show_user: true
description: |-
URL of the AWS SQS queue that messages will be received from. This is only required if you want to collect logs via AWS SQS.
This is an alert data stream specific queue URL. This will override the global queue URL if provided.
This is an alert data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream.
- name: bucket_list_prefix
type: text
title: "[S3] Bucket Prefix"
@@ -62,13 +62,13 @@ streams:
vars:
- name: queue_url_alert
type: text
title: "[Alert][SQS] Queue URL"
title: "[SQS] Queue URL"
multi: false
required: false
show_user: true
description: |-
URL of the AWS SQS queue that messages will be received from. This is only required if you want to collect logs via AWS SQS.
This is an alert data stream specific queue URL. This will override the global queue URL if provided.
This is an alert data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream.
- name: bucket_list_prefix
type: text
title: "[S3] Bucket Prefix"
@@ -8,13 +8,13 @@ streams:
vars:
- name: queue_url_endpoint_event
type: text
title: "[Endpoint Event][SQS] Queue URL"
title: "[SQS] Queue URL"
multi: false
required: false
show_user: true
description: |-
URL of the AWS SQS queue that messages will be received from. This is only required if you want to collect logs via AWS SQS.
This is an endpoint event data stream specific queue URL. This will override the global queue URL if provided.
This is an endpoint event data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream.
- name: bucket_list_prefix
type: text
title: "[S3] Bucket Prefix"
@@ -8,13 +8,13 @@ streams:
vars:
- name: queue_url_watchlist_hit
type: text
title: "[Watchlist Hit][SQS] Queue URL"
title: "[SQS] Queue URL"
multi: false
required: false
show_user: true
description: |-
URL of the AWS SQS queue that messages will be received from. This is only required if you want to collect logs via AWS SQS.
This is a watchlist hit data stream specific queue URL. This will override the global queue URL if provided.
This is a watchlist hit data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream.
- name: bucket_list_prefix
type: text
title: "[S3] Bucket Prefix"
22 changes: 13 additions & 9 deletions packages/carbon_black_cloud/docs/README.md
@@ -31,7 +31,7 @@ This module has been tested against `Alerts API (v7) [Beta]`, `Alerts API (v6)`,
### In order to ingest data from the AWS S3 bucket you must:
1. Configure the [Data Forwarder](https://docs.vmware.com/en/VMware-Carbon-Black-Cloud/services/carbon-black-cloud-user-guide/GUID-F68F63DD-2271-4088-82C9-71D675CD0535.html) to ingest data into an AWS S3 bucket.
2. Create an [AWS Access Keys and Secret Access Keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys).
3. The default value of the "Bucket List Prefix" is listed below. However, the user can set the parameter "Bucket List Prefix" according to the requirement.
3. The default values of the "Bucket List Prefix" are listed below. However, users can set the parameter "Bucket List Prefix" according to their requirements.

| Data Stream Name | Bucket List Prefix |
| ----------------- | ---------------------- |
@@ -42,17 +42,20 @@ This module has been tested against `Alerts API (v7) [Beta]`, `Alerts API (v6)`,

### To collect data from AWS SQS, follow the below steps:
1. If data forwarding to an AWS S3 Bucket hasn't been configured, then first setup an AWS S3 Bucket as mentioned in the above documentation.
2. To set up an SQS queue, follow "Step 1: Create an Amazon SQS queue" mentioned in the [Documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html).
- While creating an SQS Queue, please provide the same bucket ARN that has been generated after creating an AWS S3 Bucket.
3. Set up event notification for an S3 bucket. Follow this [Link](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html).
- The user has to perform Step 3 for all the data streams individually, and each time prefix parameter should be set the same as the S3 Bucket List Prefix as created earlier. (for example, `alert_logs/` for the alert data stream.)
- For all the event notifications that have been created, select the event type as s3:ObjectCreated:*, select the destination type SQS Queue, and select the queue that has been created in Step 2.
2. Follow the steps below for each data stream that has been enabled:
1. Create an SQS queue
- To setup an SQS queue, follow "Step 1: Create an Amazon SQS queue" mentioned in the [Amazon documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html).
- While creating an SQS Queue, please provide the same bucket ARN that has been generated after creating an AWS S3 Bucket.
2. Setup event notification from the S3 bucket using the instructions [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html). Use the following settings:
- Event type: `All object create events` (`s3:ObjectCreated:*`)
- Destination: SQS Queue
- Prefix (filter): enter the prefix for this data stream, e.g. `alert_logs/`
- Select the SQS queue that has been created for this data stream

**Note**:
- Credentials for the above AWS S3 and SQS input types should be configured using the [link](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#aws-credentials-config).
- A separate SQS queue and S3 bucket notification is required for each enabled data stream.
- Permissions for the above AWS S3 bucket and SQS queues should be configured according to the [Filebeat S3 input documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#_aws_permissions_2)
- Data collection via AWS S3 Bucket and AWS SQS are mutually exclusive in this case.
- When configuring SQS queues, separate queues should be used for each data stream instead of the global SQS queue from version 1.21 onwards to avoid data
loss. File selectors should not be used to filter out data stream logs using the global queue as it was in versions prior.
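
For illustration, the dedicated queues can be created with boto3; the queue names below are placeholders, and each resulting URL goes into the corresponding data stream's "[SQS] Queue URL" setting.

```python
import boto3

sqs = boto3.client("sqs")

# Placeholder queue names -- one dedicated queue per enabled data stream,
# replacing the deprecated global queue.
for name in ("cbc-alert", "cbc-endpoint-event", "cbc-watchlist-hit"):
    queue_url = sqs.create_queue(QueueName=name)["QueueUrl"]
    # Paste each URL into that data stream's "[SQS] Queue URL" setting.
    print(name, queue_url)
```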

### In order to ingest data from the APIs you must generate API keys and API Secret Keys:
1. In Carbon Black Cloud, On the left navigation pane, click **Settings > API Access**.
@@ -1190,3 +1193,4 @@ An example event for `asset_vulnerability_summary` looks as following:
| host.os.codename | OS codename, if any. | keyword |
| input.type | Input type | keyword |
| log.offset | Log offset | long |

12 changes: 1 addition & 11 deletions packages/carbon_black_cloud/manifest.yml
@@ -1,7 +1,7 @@
format_version: "3.0.2"
name: carbon_black_cloud
title: VMware Carbon Black Cloud
version: "2.3.0"
version: "2.4.0"
description: Collect logs from VMWare Carbon Black Cloud with Elastic Agent.
type: integration
categories:
@@ -191,16 +191,6 @@ policy_templates:
required: false
show_user: true
description: It is a required parameter for collecting logs via the AWS S3 Bucket.
- name: queue_url
type: text
title: "[Global][SQS] Queue URL"
multi: false
required: false
show_user: true
description: |-
URL of the AWS SQS queue that messages will be received from.
This is only required if you want to collect logs via AWS SQS.
This is a global queue URL, i.e this can be overridden by specific local queue URLs for each data stream if required.
- name: access_key_id
type: password
title: Access Key ID
29 changes: 16 additions & 13 deletions packages/cloudflare_logpush/_dev/build/docs/README.md
@@ -66,8 +66,8 @@ This module has been tested against **Cloudflare version v4**.
## Setup

### To collect data from AWS S3 Bucket, follow the below steps:
- Configure the [Data Forwarder](https://developers.cloudflare.com/logs/get-started/enable-destinations/aws-s3/) to ingest data into an AWS S3 bucket.
- The default value of the "Bucket List Prefix" is listed below. However, the user can set the parameter "Bucket List Prefix" according to the requirement.
- Configure [Cloudflare Logpush to Amazon S3](https://developers.cloudflare.com/logs/get-started/enable-destinations/aws-s3/) to send Cloudflare's data to an AWS S3 bucket.
- The default values of the "Bucket List Prefix" are listed below. However, users can set the parameter "Bucket List Prefix" according to their requirements.

| Data Stream Name | Bucket List Prefix |
| -------------------------- | ---------------------- |
@@ -91,19 +91,22 @@ This module has been tested against **Cloudflare version v4**.
| Workers Trace Events | workers_trace |

### To collect data from AWS SQS, follow the below steps:
1. If data forwarding to an AWS S3 Bucket hasn't been configured, then first setup an AWS S3 Bucket as mentioned in the above documentation.
2. To setup an SQS queue, follow "Step 1: Create an Amazon SQS queue" mentioned in the [Documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html).
- While creating an SQS Queue, please provide the same bucket ARN that has been generated after creating an AWS S3 Bucket.
3. Setup event notification for an S3 bucket. Follow this [Link](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html).
- The user has to perform Step 3 for all the data-streams individually, and each time prefix parameter should be set the same as the S3 Bucket List Prefix as created earlier. (for example, `audit_logs/` for audit data stream.)
- For all the event notifications that have been created, select the event type as s3:ObjectCreated:*, select the destination type SQS Queue, and select the queue that has been created in Step 2.

**Note**:
1. If Logpush forwarding to an AWS S3 Bucket hasn't been configured, then first setup an AWS S3 Bucket as mentioned in the above documentation.
2. Follow the steps below for each Logpush data stream that has been enabled:
1. Create an SQS queue
- To setup an SQS queue, follow "Step 1: Create an Amazon SQS queue" mentioned in the [Amazon documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html).
- While creating an SQS Queue, please provide the same bucket ARN that has been generated after creating an AWS S3 Bucket.
2. Setup event notification from the S3 bucket using the instructions [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html). Use the following settings:
- Event type: `All object create events` (`s3:ObjectCreated:*`)
- Destination: SQS Queue
- Prefix (filter): enter the prefix for this Logpush data stream, e.g. `audit_logs/`
- Select the SQS queue that has been created for this data stream

**Note**:
- A separate SQS queue and S3 bucket notification is required for each enabled data stream.
- Permissions for the above AWS S3 bucket and SQS queues should be configured according to the [Filebeat S3 input documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#_aws_permissions_2)
- Credentials for the above AWS S3 and SQS input types should be configured using the [link](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#aws-credentials-config).
- Data collection via AWS S3 Bucket and AWS SQS are mutually exclusive in this case.
- You can configure a global SQS queue for all data streams or a local SQS queue for each data stream. Configuring
data stream specific SQS queues will enable better performance and scalability. Data stream specific SQS queues
will always override any global queue definitions for that specific data stream.
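
For illustration, the sketch below grants an S3 bucket permission to deliver event notifications to one data-stream-specific queue, assuming boto3; the queue URL/ARN and bucket ARN are placeholders, and the same step would be repeated for each enabled data stream.

```python
import json

import boto3

sqs = boto3.client("sqs")

# Placeholder identifiers for one Logpush data stream (repeat per enabled data stream).
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/cloudflare-audit"
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:cloudflare-audit"
BUCKET_ARN = "arn:aws:s3:::my-cloudflare-logpush-bucket"

# Allow the S3 bucket's event notifications to be delivered to this queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": QUEUE_ARN,
            "Condition": {"ArnLike": {"aws:SourceArn": BUCKET_ARN}},
        }
    ],
}

sqs.set_queue_attributes(QueueUrl=QUEUE_URL, Attributes={"Policy": json.dumps(policy)})
```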

### To collect data from Cloudflare R2 Buckets, follow the below steps:
- Configure the [Data Forwarder](https://developers.cloudflare.com/logs/get-started/enable-destinations/r2/) to push logs to Cloudflare R2.
5 changes: 5 additions & 0 deletions packages/cloudflare_logpush/changelog.yml
@@ -1,4 +1,9 @@
# newer versions go on top
- version: "1.22.0"
changes:
- description: Deprecate global SQS Queue URL to avoid data loss.
type: bugfix
link: https://github.com/elastic/integrations/pull/10672
- version: "1.21.0"
changes:
- description: Update the kibana constraint to ^8.13.0. Modified the field definitions to remove ECS fields made redundant by the ecs@mappings component template.
@@ -62,11 -62,11 @@ streams:
vars:
- name: queue_url_access_request
type: text
title: "[Access Request][SQS] Queue URL"
title: "[SQS] Queue URL"
multi: false
required: false
show_user: true
description: "URL of the AWS SQS queue that messages will be received from.\nThis is only required if you want to collect logs via AWS SQS.\nThis is a Access Request data stream specific queue URL. This will override the global queue URL if provided."
description: "URL of the AWS SQS queue that messages will be received from.\nThis is only required if you want to collect logs via AWS SQS.\nThis is a Access Request data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream."
- name: bucket_list_prefix
type: text
title: '[S3] Bucket Prefix'
4 changes: 2 additions & 2 deletions packages/cloudflare_logpush/data_stream/audit/manifest.yml
@@ -62,11 +62,11 @@ streams:
vars:
- name: queue_url_audit
type: text
title: "[Audit][SQS] Queue URL"
title: "[SQS] Queue URL"
multi: false
required: false
show_user: true
description: "URL of the AWS SQS queue that messages will be received from. \nThis is only required if you want to collect logs via AWS SQS.\nThis is an audit data stream specific queue URL. This will override the global queue URL if provided."
description: "URL of the AWS SQS queue that messages will be received from. \nThis is only required if you want to collect logs via AWS SQS.\nThis is an audit data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream."
- name: bucket_list_prefix
type: text
title: '[S3] Bucket Prefix'