Adding K8s remote backend to BYOK #878

Merged: 16 commits, May 17, 2022
11 changes: 8 additions & 3 deletions config/registry/helm/index.yaml
@@ -2,10 +2,13 @@ required_providers:
helm:
source: "hashicorp/helm"
version: "2.4.1"
kubernetes:
source: "hashicorp/kubernetes"
version: "2.11.0"
backend:
local:
# this is consistent with terraform generator
path: "./tfstate/{layer_name}.tfstate"
kubernetes:
secret_suffix: "{state_storage}"
config_path: "{kubeconfig}"
Comment on lines +9 to +11
Collaborator


How is this change going to affect existing usage of the helm cloud provider?

Collaborator Author


  1. No one is using it
  2. It will probably lose the current state, trying to recreate everything

validator:
name: str()
org_name: regex('^[a-z0-9-]{,15}$', name="Valid identifier, regex='[a-z0-9-]{,15}'")
@@ -22,3 +25,5 @@ output_providers:
helm:
kubernetes:
config_path: "{kubeconfig}"
kubernetes:
config_path: "{kubeconfig}"
3 changes: 3 additions & 0 deletions config/registry/schemas/opta-config-files/azure-env.json
@@ -49,6 +49,9 @@
{
"$ref": "https://app.runx.dev/modules/azure-k8s-base"
},
{
"$ref": "https://app.runx.dev/modules/custom-terraform"
},
{
"$ref": "https://app.runx.dev/modules/datadog"
},
3 changes: 3 additions & 0 deletions config/registry/schemas/opta-config-files/local-env.json
@@ -27,6 +27,9 @@
"description": "The Opta modules to run in this environment",
"items": {
"oneOf": [
{
"$ref": "https://app.runx.dev/modules/custom-terraform"
},
{
"$ref": "https://app.runx.dev/modules/local-base"
},
2 changes: 1 addition & 1 deletion examples/byok-eks/README.md
@@ -157,7 +157,7 @@ Instead of having to define a Helm chart folder for each service, you can define
This step will:
- Generate the helm chart for your service
- Use the terraform helm provider to release the service
- Generate the terraform state files in `tfstate` - please commit these files after each apply.
- Store the state of the deployment in K8s as a secret.
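
Under the hood, this step relies on Terraform's `kubernetes` backend, which the registry change in this PR configures via `secret_suffix` and `config_path`. As a rough sketch (the values below are placeholders for whatever opta substitutes for `{state_storage}` and `{kubeconfig}`), the generated backend block looks something like:

```hcl
terraform {
  backend "kubernetes" {
    # State is stored as a Kubernetes secret whose name is derived from this suffix
    secret_suffix = "opta-myenv-myservice" # placeholder for {state_storage}
    config_path   = "~/.kube/config"       # placeholder for {kubeconfig}
  }
}
```

If you want to inspect the stored state, something like `kubectl get secrets | grep tfstate` should list the state secrets in the target cluster.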

3. You can test that your service is deployed by using `kubectl` or `curl` to the public endpoint.
```shell
150 changes: 150 additions & 0 deletions modules/custom_terraform/azure-custom-terraform.md
@@ -0,0 +1,150 @@
---
title: "custom-terraform"
linkTitle: "custom-terraform"
date: 2021-12-7
draft: false
weight: 1
description: Allows user to bring in their own custom terraform module
---

This module allows you to bring your own custom terraform code into the opta ecosystem, use it in tandem with
your other opta modules, and even reference them. All you need to do is specify the
[source](https://www.terraform.io/language/modules/sources#module-sources)
of your module with the `source` input and pass the desired inputs to your module (if any) via the
`terraform_inputs` input.

## Use local terraform files
Suppose you have an opta azure environment written in `azure-env.yaml` and you want to deploy your custom terraform
module "blah", which creates something you want (in our case a vm instance). You could create a service for your
environment which uses custom-terraform to call your module (NOTE: custom-terraform doesn't need to be in an opta
service; it can be in the environment too). For our example, let's say that the file structure looks like so:

```
.
├── README.md
├── azure-env.yaml
└── dummy-service
├── blah
│ └── main.tf
└── opta.yaml
```

The new service is written in `dummy-service/opta.yaml` and looks like this:

```yaml
environments:
- name: azure-example
path: "../azure-env.yaml"
name: customtf
modules:
- type: custom-terraform
name: vm1
source: "./blah"
terraform_inputs:
env_name: "{env}"
```

You can see that the path to your module is specified by `source` (you can use relative or absolute paths),
as is the expected input `env_name`, passed via `terraform_inputs`. Note that you can use opta interpolation to pass
variables or the outputs of the parent environment or other modules as inputs.

Lastly, you can use the following as the content of the `main.tf` file of the blah module to complete the example/demo:

```hcl
variable "env_name" {
type = string
}

variable "prefix" {
type = string
default = "placeholder"
}

data "azurerm_resource_group" "opta" {
name = "opta-${var.env_name}"
}

data "azurerm_subnet" "opta" {
name = "opta-${var.env_name}-subnet"
virtual_network_name = "opta-${var.env_name}"
resource_group_name = data.azurerm_resource_group.opta.name
}

resource "azurerm_network_interface" "main" {
name = "${var.prefix}-nic"
location = data.azurerm_resource_group.opta.location
resource_group_name = data.azurerm_resource_group.opta.name

ip_configuration {
name = "testconfiguration1"
subnet_id = data.azurerm_subnet.opta.id
private_ip_address_allocation = "Dynamic"
}
}

resource "azurerm_virtual_machine" "main" {
name = "${var.prefix}-vm"
location = data.azurerm_resource_group.opta.location
resource_group_name = data.azurerm_resource_group.opta.name
network_interface_ids = [azurerm_network_interface.main.id]
vm_size = "Standard_DS1_v2"

storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "Password1234!"
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "staging"
}
}
```

Once you run `opta apply` on the service, you should see your new compute instance up and running in the Azure console
and be able to ssh into it.

## Use a remote terraform module
The `source` input uses terraform's [module source](https://www.terraform.io/language/modules/sources#module-sources)
logic behind the scenes and so follows the same format/limitations. Thus, you can use this for locally available modules,
or modules available remotely.

**WARNING** Be very, very careful about what remote modules you are using, as they leave you wide open to supply chain
attacks, depending on the security and character of the owner of said module. It's highly advised to use either
[official modules](https://registry.terraform.io/browse/modules) or modules under your company's control.

## Using Outputs from your Custom Terraform Module
Currently you can use outputs of your custom terraform module in the same yaml, like so:
```yaml
environments:
- name: azure-example
path: "../azure-env.yaml"
name: customtf
modules:
- type: custom-terraform
name: hi1
source: "./blah1" # <-- This module has an output called output1
- type: custom-terraform
name: hi2
source: "./blah2"
terraform_inputs:
input1: "${{module.hi1.output1}}" # <-- HERE. Note the ${{}} wrapping
```
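
For this to work, the referenced module must actually declare the output. A minimal sketch of what `blah1/main.tf` could contain for this demo (the output name matches the reference above; the value is just a placeholder):

```hcl
# ./blah1/main.tf (hypothetical): expose a value for other modules in the yaml to consume
output "output1" {
  value = "some-useful-value"
}
```

Opta then resolves `${{module.hi1.output1}}` and passes the result to `blah2` as its `input1` input.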

These outputs, however, currently cannot be used in other yamls (e.g., if you put custom terraform in an environment
yaml, its outputs can't be used in the services) and will not show up in the `opta output` command. Work on supporting
this is ongoing.
4 changes: 3 additions & 1 deletion modules/custom_terraform/custom-terraform.yaml
@@ -27,4 +27,6 @@ output_providers: {}
output_data: {}
clouds:
- gcp
- aws
- aws
- azure
- local
93 changes: 93 additions & 0 deletions modules/custom_terraform/local-custom-terraform.md
@@ -0,0 +1,93 @@
---
title: "custom-terraform"
linkTitle: "custom-terraform"
date: 2021-12-7
draft: false
weight: 1
description: Allows user to bring in their own custom terraform module
---

This module allows you to bring your own custom terraform code into the opta ecosystem, use it in tandem with
your other opta modules, and even reference them. All you need to do is specify the
[source](https://www.terraform.io/language/modules/sources#module-sources)
of your module with the `source` input and pass the desired inputs to your module (if any) via the
`terraform_inputs` input.

## Use local terraform files
Suppose you have an opta k8s service and you wish to write the name of its current image to a file. For this
you have written a small terraform module that just writes its input to a local file. You could add a
custom-terraform module to that service to call your module. For our example, let's say that the file
structure looks like so:

```
.
├── blah
│ └── main.tf
└── opta.yaml
```

The service is written in `opta.yaml` and looks like this:

```yaml
name: customtf
modules:
- type: k8s-service
name: hello
port:
http: 80
image: "ghcr.io/run-x/hello-opta/hello-opta:main"
healthcheck_path: "/"
public_uri: "/hello"
- type: custom-terraform
name: currentimage
source: "./blah"
terraform_inputs:
to_write: "${{module.hello.current_image}}"
```

You can see that the path to your module is specified by `source` (you can use relative or absolute paths),
as is the expected input `to_write`. Note that you can use opta interpolation to pass variables or
the outputs of the parent environment or other modules as inputs.

Lastly, you can use the following as the content of the `main.tf` file of the blah module to complete the example/demo:

```hcl
variable "to_write" {
type = string
}

resource "local_file" "foo" {
content = var.to_write
filename = "${path.module}/foo.bar"
}
```

Once you run `opta apply` on the service, you should see your new file locally!

## Use a remote terraform module
The `source` input uses terraform's [module source](https://www.terraform.io/language/modules/sources#module-sources)
logic behind the scenes and so follows the same format/limitations. Thus, you can use this for locally available modules,
or modules available remotely.

**WARNING** Be very, very careful about what remote modules you are using, as they leave you wide open to supply chain
attacks, depending on the security and character of the owner of said module. It's highly advised to use either
[official modules](https://registry.terraform.io/browse/modules) or modules under your company's control.

## Using Outputs from your Custom Terraform Module
Currently you can use outputs of your custom terraform module in the same yaml, like so:
```yaml
name: customtf
modules:
- type: custom-terraform
name: hi1
source: "./blah1" # <-- This module has an output called output1
- type: custom-terraform
name: hi2
source: "./blah2"
terraform_inputs:
input1: "${{module.hi1.output1}}" # <-- HERE. Note the ${{}} wrapping
```
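
On the consuming side, `blah2` needs a matching variable to receive the interpolated value. A minimal sketch of what `blah2/main.tf` could look like for this demo (the variable name matches the `input1` key above; the file-writing resource is just an illustration):

```hcl
# ./blah2/main.tf (hypothetical): receive the value produced by blah1
variable "input1" {
  type = string
}

# For the demo, write the received value to a local file, mirroring the earlier example
resource "local_file" "received" {
  content  = var.input1
  filename = "${path.module}/received.txt"
}
```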

These outputs, however, currently cannot be used in other yamls (e.g., if you put custom terraform in an environment
yaml, its outputs can't be used in the services) and will not show up in the `opta output` command. Work on supporting
this is ongoing.