Commit

Drop scalers-related dead code (kedacore#871)
chalin authored Aug 9, 2022
1 parent 425a316 commit 7b508c8
Showing 413 changed files with 477 additions and 896 deletions.
3 changes: 1 addition & 2 deletions archetypes/scaler.md
@@ -3,7 +3,6 @@ title = "{{ replace .Name "-" " " | title }}"
availability = ""
maintainer = ""
description = "Insert description here"
-layout = "scaler"
+++

### Trigger Specification
@@ -33,4 +32,4 @@ The user will need access to read data from Huawei Cloudeye.

### Example

-*Provide an example of how to configure the trigger, preferably using TriggerAuthentication*
+*Provide an example of how to configure the trigger, preferably using TriggerAuthentication*
5 changes: 2 additions & 3 deletions content/docs/1.4/scalers/apache-kafka.md
@@ -1,14 +1,13 @@
+++
title = "Apache Kafka"
-layout = "scaler"
availability = "v1.0+"
maintainer = "Microsoft"
description = "Scale applications based on an Apache Kafka topic or other services that support Kafka protocol."
go_file = "kafka_scaler"
+++

-> **Notice:**
-> - No. of replicas will not exceed the number of partitions on a topic. That is, if `maxReplicaCount` is set more than number of partitions, the scaler won't scale up to target maxReplicaCount.
+> **Notice:**
+> - No. of replicas will not exceed the number of partitions on a topic. That is, if `maxReplicaCount` is set more than number of partitions, the scaler won't scale up to target maxReplicaCount.
> - This is so because if there are more number of consumers than the number of partitions in a topic, then extra consumer will have to sit idle.
### Trigger Specification
5 changes: 2 additions & 3 deletions content/docs/1.4/scalers/aws-cloudwatch.md
@@ -1,6 +1,5 @@
+++
title = "AWS CloudWatch"
-layout = "scaler"
availability = "v1.0+"
maintainer = "Community"
description = "Scale applications based on AWS CloudWatch."
@@ -41,7 +40,7 @@ triggers:

### Authentication Parameters

-> These parameters are relevant only when `identityOwner` is set to `pod`.
+> These parameters are relevant only when `identityOwner` is set to `pod`.

You can use `TriggerAuthentication` CRD to configure the authenticate by providing either a role ARN or a set of IAM credentials.

@@ -70,7 +69,7 @@ metadata:
data:
AWS_ACCESS_KEY_ID: <encoded-user-id>
AWS_SECRET_ACCESS_KEY: <encoded-key>
----
+---
apiVersion: keda.k8s.io/v1alpha1
kind: TriggerAuthentication
metadata:
5 changes: 2 additions & 3 deletions content/docs/1.4/scalers/aws-kinesis.md
@@ -1,6 +1,5 @@
+++
title = "AWS Kinesis Stream"
-layout = "scaler"
availability = "v1.1+"
maintainer = "Community"
description = "Scale applications based on AWS Kinesis Stream."
@@ -36,7 +35,7 @@ triggers:

### Authentication Parameters

-> These parameters are relevant only when `identityOwner` is set to `pod`.
+> These parameters are relevant only when `identityOwner` is set to `pod`.

You can use `TriggerAuthentication` CRD to configure the authenticate by providing either a role ARN or a set of IAM credentials, or use other [KEDA supported authentication methods](https://keda.sh/docs/1.4/concepts/authentication/).

@@ -64,7 +63,7 @@ metadata:
data:
AWS_ACCESS_KEY_ID: <encoded-user-id>
AWS_SECRET_ACCESS_KEY: <encoded-key>
----
+---
apiVersion: keda.k8s.io/v1alpha1
kind: TriggerAuthentication
metadata:
9 changes: 4 additions & 5 deletions content/docs/1.4/scalers/aws-sqs.md
@@ -1,6 +1,5 @@
+++
title = "AWS SQS Queue"
-layout = "scaler"
availability = "v1.0+"
maintainer = "Community"
description = "Scale applications based on AWS SQS Queue."
@@ -19,7 +18,7 @@ triggers:
queueURL: https://sqs.eu-west-1.amazonaws.com/account_id/QueueName
queueLength: "5" # Default: "5"
# Required: awsRegion
-awsRegion: "eu-west-1"
+awsRegion: "eu-west-1"
identityOwner: pod | operator # Optional. Default: pod
```
**Parameter list:**
@@ -35,7 +34,7 @@ triggers:

### Authentication Parameters

-> These parameters are relevant only when `identityOwner` is set to `pod`.
+> These parameters are relevant only when `identityOwner` is set to `pod`.

You can use `TriggerAuthentication` CRD to configure the authenticate by providing either a role ARN or a set of IAM credentials.

@@ -64,7 +63,7 @@ metadata:
data:
AWS_ACCESS_KEY_ID: <encoded-user-id>
AWS_SECRET_ACCESS_KEY: <encoded-key>
----
+---
apiVersion: keda.k8s.io/v1alpha1
kind: TriggerAuthentication
metadata:
@@ -96,5 +95,5 @@ spec:
metadata:
queueURL: myQueue
queueLength: "5"
-awsRegion: "eu-west-1"
+awsRegion: "eu-west-1"
```
3 changes: 1 addition & 2 deletions content/docs/1.4/scalers/azure-event-hub.md
@@ -1,6 +1,5 @@
+++
title = "Azure Event Hubs"
-layout = "scaler"
availability = "v1.0+"
maintainer = "Community"
description = "Scale applications based on Azure Event Hubs."
@@ -17,7 +16,7 @@ triggers:
- type: azure-eventhub
metadata:
connection: EVENTHUB_CONNECTIONSTRING_ENV_NAME # Connection string for Event Hub namespace appended with "EntityPath=<event_hub_name>"
-storageConnection: STORAGE_CONNECTIONSTRING_ENV_NAME # Connection string for account used to store checkpoint. As of now the Event Hub scaler only reads from Azure Blob Storage.
+storageConnection: STORAGE_CONNECTIONSTRING_ENV_NAME # Connection string for account used to store checkpoint. As of now the Event Hub scaler only reads from Azure Blob Storage.
consumerGroup: $Default # Optional. Consumer group of event hub consumer. Default: $Default
unprocessedEventThreshold: '64' # Optional. Target number of unprocessed events across all partitions in Event Hub for HPA. Default: 64 events.
blobContainer: 'name_of_container' # Optional. Container name to store checkpoint. This is needed when a using an Event Hub application written in dotnet or java, and not an Azure function.
5 changes: 2 additions & 3 deletions content/docs/1.4/scalers/azure-monitor.md
@@ -1,6 +1,5 @@
+++
title = "Azure Monitor"
-layout = "scaler"
availability = "v1.3+"
maintainer = "Community"
description = "Scale applications based on Azure Monitor metrics."
@@ -65,7 +64,7 @@ data:
---
apiVersion: keda.k8s.io/v1alpha1
kind: TriggerAuthentication
-metadata:
+metadata:
name: azure-monitor-trigger-auth
spec:
secretTargetRef:
@@ -90,7 +89,7 @@ spec:
triggers:
- type: azure-monitor
metadata:
-resourceURI: Microsoft.ContainerService/managedClusters/azureMonitorCluster
+resourceURI: Microsoft.ContainerService/managedClusters/azureMonitorCluster
tenantId: xxx-xxx-xxx-xxx-xxx
subscriptionId: yyy-yyy-yyy-yyy-yyy
resourceGroupName: azureMonitor
1 change: 0 additions & 1 deletion content/docs/1.4/scalers/azure-service-bus.md
@@ -1,6 +1,5 @@
+++
title = "Azure Service Bus"
-layout = "scaler"
maintainer = "Microsoft"
description = "Scale applications based on Azure Service Bus Queues or Topics."
availability = "v1.0+"
3 changes: 1 addition & 2 deletions content/docs/1.4/scalers/azure-storage-blob.md
@@ -1,6 +1,5 @@
+++
title = "Azure Blob Storage"
-layout = "scaler"
availability = "v1.1+"
maintainer = "Community"
description = "Scale applications based on the count of blobs in a given Azure Blob Storage container."
@@ -17,7 +16,7 @@ triggers:
- type: azure-blob
metadata:
blobContainerName: functions-blob # Required: Name of Azure Blob Storage container
-blobCount: '5' # Optional. Amount of blobs to scale out on. Default: 5 blobs
+blobCount: '5' # Optional. Amount of blobs to scale out on. Default: 5 blobs
connection: STORAGE_CONNECTIONSTRING_ENV_NAME # Optional if TriggerAuthentication defined with pod identity or connection string authentication.
blobPrefix: # Optional. Prefix for the Blob. Use this to specify sub path for the blobs if required. Default : ""
blobDelimiter: # Optional. Delimiter for identifying the blob Prefix. Default: "/"
1 change: 0 additions & 1 deletion content/docs/1.4/scalers/azure-storage-queue.md
@@ -1,6 +1,5 @@
+++
title = "Azure Storage Queue"
-layout = "scaler"
availability = "v1.0+"
maintainer = "Microsoft"
description = "Scale applications based on Azure Storage Queues."
1 change: 0 additions & 1 deletion content/docs/1.4/scalers/external.md
@@ -1,6 +1,5 @@
+++
title = "External"
-layout = "scaler"
availability = "v1.0+"
maintainer = "Microsoft"
description = "Scale applications based on an external scaler."
5 changes: 2 additions & 3 deletions content/docs/1.4/scalers/gcp-pub-sub.md
@@ -1,6 +1,5 @@
+++
title = "Google Cloud Platform‎ Pub/Sub"
-layout = "scaler"
availability = "v1.0+"
maintainer = "Community"
description = "Scale applications based on Google Cloud Platform‎ Pub/Sub."
@@ -16,7 +15,7 @@ triggers:
- type: gcp-pubsub
metadata:
subscriptionSize: "5" # Optional - Default is 5
-subscriptionName: "mysubscription" # Required
+subscriptionName: "mysubscription" # Required
credentials: GOOGLE_APPLICATION_CREDENTIALS_JSON # Required
```
@@ -45,6 +44,6 @@ spec:
- type: gcp-pubsub
metadata:
subscriptionSize: "5"
-subscriptionName: "mysubscription" # Required
+subscriptionName: "mysubscription" # Required
credentials: GOOGLE_APPLICATION_CREDENTIALS_JSON # Required
```
5 changes: 2 additions & 3 deletions content/docs/1.4/scalers/huawei-cloudeye.md
@@ -1,6 +1,5 @@
+++
title = "Huawei Cloudeye"
-layout = "scaler"
availability = "v1.1+"
maintainer = "Community"
description = "Scale applications based on a Huawei Cloudeye."
@@ -58,7 +57,7 @@ The user will need access to read data from Huawei Cloudeye.
apiVersion: v1
kind: Secret
metadata:
-name: keda-huawei-secrets
+name: keda-huawei-secrets
namespace: keda-test
data:
IdentityEndpoint: <IdentityEndpoint>
@@ -68,7 +67,7 @@ data:
Domain: <Domain>
AccessKey: <AccessKey>
SecretKey: <SecretKey>
----
+---
apiVersion: keda.k8s.io/v1alpha1
kind: TriggerAuthentication
metadata:
1 change: 0 additions & 1 deletion content/docs/1.4/scalers/liiklus-topic.md
@@ -1,6 +1,5 @@
+++
title = "Liiklus Topic"
-layout = "scaler"
availability = "v1.0+"
maintainer = "Community"
description = "Scale applications based on Liiklus Topic."
3 changes: 1 addition & 2 deletions content/docs/1.4/scalers/mysql.md
@@ -1,6 +1,5 @@
+++
title = "MySQL"
-layout = "scaler"
availability = "v1.2+"
maintainer = "Community"
description = "Scale applications based on MySQL query result."
@@ -16,7 +15,7 @@ The trigger always requires the following information:
- `query` - A MySQL query that should return single numeric value.
- `queryValue` - A threshold that is used as `targetAverageValue` in HPA.

-To provide information about how to connect to MySQL you can provide:
+To provide information about how to connect to MySQL you can provide:
- `connectionString` - MySQL connection string that should point to environment variable with valid value.

Or provide more detailed information:
3 changes: 1 addition & 2 deletions content/docs/1.4/scalers/nats-streaming.md
@@ -1,6 +1,5 @@
+++
title = "NATS Streaming"
-layout = "scaler"
availability = "v1.0+"
maintainer = "Community"
description = "Scale applications based on NATS Streaming."
@@ -38,7 +37,7 @@ spec:
pollingInterval: 10 # Optional. Default: 30 seconds
cooldownPeriod: 30 # Optional. Default: 300 seconds
minReplicaCount: 0 # Optional. Default: 0
-maxReplicaCount: 30 # Optional. Default: 100
+maxReplicaCount: 30 # Optional. Default: 100
scaleTargetRef:
deploymentName: gonuts-sub
triggers:
7 changes: 3 additions & 4 deletions content/docs/1.4/scalers/postgresql.md
@@ -1,6 +1,5 @@
+++
title = "PostgreSQL"
-layout = "scaler"
availability = "v1.2+"
maintainer = "Community"
description = "Scale applications based on a PostgreSQL query."
@@ -13,13 +12,13 @@ This specification describes the `postgresql` trigger that scales based on a pos

The Postgresql scaler allows for two connection options:

-A user can offer a full connection string
+A user can offer a full connection string
(often in the form of an environment variable secret)

- `connection` - PostgreSQL connection string that should point to environment variable with valid value.

Alternatively, a user can specify individual
arguments (host, userName, password, etc.), and the scaler will form a connection string
arguments (host, userName, password, etc.), and the scaler will form a connection string
internally.

- `host:` - Service URL to postgresql. Note that you should use a full svc URL as KEDA will need to contact postgresql from a different namespace.
@@ -53,7 +52,7 @@ triggers:
metadata:
userName: "kedaUser"
password: PG_PASSWORD
-host: postgres-svc.namespace.cluster.local #use the cluster-wide namespace as KEDA
+host: postgres-svc.namespace.cluster.local #use the cluster-wide namespace as KEDA
#lives in a different namespace from your postgres
port: "5432"
dbName: postgresql
1 change: 0 additions & 1 deletion content/docs/1.4/scalers/prometheus.md
@@ -1,6 +1,5 @@
+++
title = "Prometheus"
-layout = "scaler"
availability = "v1.0+"
maintainer = "Community"
description = "Scale applications based on Prometheus."
1 change: 0 additions & 1 deletion content/docs/1.4/scalers/rabbitmq-queue.md
@@ -1,6 +1,5 @@
+++
title = "RabbitMQ Queue"
-layout = "scaler"
availability = "v1.0+"
maintainer = "Microsoft"
description = "Scale applications based on RabbitMQ Queue."
3 changes: 1 addition & 2 deletions content/docs/1.4/scalers/redis-lists.md
@@ -1,6 +1,5 @@
+++
title = "Redis Lists"
-layout = "scaler"
availability = "v1.0+"
maintainer = "Community"
description = "Scale applications based on Redis Lists."
@@ -25,7 +24,7 @@ triggers:
The `address` field in the spec holds the host and port of the redis server. This could be an external redis server or one running in the kubernetes cluster.

-As an alternative to the `address` field the user can specify `host` and `port` parameters.
+As an alternative to the `address` field the user can specify `host` and `port` parameters.

Provide the `password` field if the redis server requires a password. Both the hostname and password fields need to be set to the names of the environment variables in the target deployment that contain the host name and password respectively.

5 changes: 2 additions & 3 deletions content/docs/1.5/scalers/apache-kafka.md
@@ -1,14 +1,13 @@
+++
title = "Apache Kafka"
-layout = "scaler"
availability = "v1.0+"
maintainer = "Microsoft"
description = "Scale applications based on an Apache Kafka topic or other services that support Kafka protocol."
go_file = "kafka_scaler"
+++

-> **Notice:**
-> - No. of replicas will not exceed the number of partitions on a topic. That is, if `maxReplicaCount` is set more than number of partitions, the scaler won't scale up to target maxReplicaCount.
+> **Notice:**
+> - No. of replicas will not exceed the number of partitions on a topic. That is, if `maxReplicaCount` is set more than number of partitions, the scaler won't scale up to target maxReplicaCount.
> - This is so because if there are more number of consumers than the number of partitions in a topic, then extra consumer will have to sit idle.
### Trigger Specification
7 changes: 3 additions & 4 deletions content/docs/1.5/scalers/artemis.md
@@ -1,6 +1,5 @@
+++
title = "ActiveMQ Artemis"
-layout = "scaler"
availability = "v1.5+"
maintainer = "Community"
description = "Scale applications based on ActiveMQ Artemis queues"
@@ -15,11 +14,11 @@ This specification describes the `artemis-queue` trigger for ActiveMQ Artemis qu
triggers:
- type: artemis-queue
metadata:
-managementEndpoint: "artemis-activemq.artemis:8161"
+managementEndpoint: "artemis-activemq.artemis:8161"
queueName: "test"
brokerName: "artemis-activemq"
brokerAddress: "test"
-queueLength: '10'
+queueLength: '10'
username: 'ARTEMIS_USERNAME'
password: 'ARTEMIS_PASSWORD'
```
@@ -31,7 +30,7 @@ triggers:
- `brokerName` - Name of the broker as defined in Artemis.
- `brokerAddress` - Address name of the broker.
- `queueLength` - How much messages are in the queue. (Default: `10`, Optional.)

### Authentication Parameters

You can use `TriggerAuthentication` CRD to configure the `username` and `password` to connect to the management endpoint.
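The same two mechanical fixes repeat across all 413 files: delete the dead `layout = "scaler"` front-matter key and strip trailing whitespace. A bulk edit of this shape is typically scripted rather than done by hand; a minimal sketch of the idea (the paths and sample file below are illustrative, not taken from the commit):

```shell
# Create a sample scaler doc carrying the dead key and a trailing space (illustrative).
mkdir -p /tmp/keda-docs/content
printf '+++\ntitle = "Apache Kafka"\nlayout = "scaler"\navailability = "v1.0+" \n+++\n' \
  > /tmp/keda-docs/content/example.md

# Drop the dead front-matter key and trim trailing whitespace, as this commit does.
find /tmp/keda-docs/content -name '*.md' \
  -exec sed -i -e '/^layout = "scaler"$/d' -e 's/[[:space:]]*$//' {} +

# Confirm no file still references the dropped key.
grep -r 'layout = "scaler"' /tmp/keda-docs/content || echo "clean"
```

The `sed -i` flag shown is the GNU form; BSD/macOS `sed` would need `-i ''`.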