
Add a new benchmark service #122

Merged
merged 91 commits into from
Aug 31, 2023

Conversation

sudo-shashank
Contributor

@sudo-shashank sudo-shashank commented Jun 22, 2023

Summary of changes
Changes introduced in this pull request:

Added a new benchmark service:

  • Continuously runs benchmarks
ruby bench.rb --chain calibnet --tempdir ./snapshots --daily
ruby bench.rb --chain mainnet --tempdir ./snapshots --daily

Reference issue to close (if applicable)

Closes #92

Other information and links

@sudo-shashank sudo-shashank changed the title Benchmark db Added a new benchmark service Jun 22, 2023
@sudo-shashank sudo-shashank changed the title Added a new benchmark service Add a new benchmark service Jun 22, 2023
@github-actions

github-actions bot commented Jun 22, 2023

Forest: DB Benchmark Service Infrastructure Plan: success

Show Plan
module.benchmark_db.data.local_file.init: Reading...
module.benchmark_db.data.external.sources_tar: Reading...
module.benchmark_db.data.local_file.init: Read complete after 0s [id=8fb46e70b9e0adba8af36f9d881c29c10a70fe40]
module.benchmark_db.data.external.sources_tar: Read complete after 0s [id=-]
module.benchmark_db.data.local_file.sources: Reading...
module.benchmark_db.data.digitalocean_ssh_keys.keys: Reading...
module.benchmark_db.data.digitalocean_project.forest_project: Reading...
module.benchmark_db.data.local_file.sources: Read complete after 0s [id=ce32c0e2e80743f5a16c307c9d57ab240c4ed532]
module.benchmark_db.data.digitalocean_ssh_keys.keys: Read complete after 0s [id=ssh_keys/4003270040538543685]
module.benchmark_db.digitalocean_droplet.forest: Refreshing state... [id=363338060]
module.benchmark_db.null_resource.destroy_droplet: Refreshing state... [id=2189585105394316578]
module.benchmark_db.data.digitalocean_project.forest_project: Read complete after 1s [id=da5e6601-7fd9-4d02-951e-390f7feb3411]
module.benchmark_db.digitalocean_project_resources.connect_forest_project: Refreshing state... [id=da5e6601-7fd9-4d02-951e-390f7feb3411]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.benchmark_db.digitalocean_droplet.forest must be replaced
-/+ resource "digitalocean_droplet" "forest" {
      ~ created_at           = "2023-07-03T06:39:36Z" -> (known after apply)
      ~ disk                 = 300 -> (known after apply)
      ~ id                   = "363338060" -> (known after apply)
      ~ ipv4_address         = "159.89.3.201" -> (known after apply)
      ~ ipv4_address_private = "10.135.0.8" -> (known after apply)
      + ipv6_address         = (known after apply)
      ~ locked               = false -> (known after apply)
      ~ memory               = 16384 -> (known after apply)
        name                 = "forest-benchmark"
      ~ price_hourly         = 0.19494 -> (known after apply)
      ~ price_monthly        = 131 -> (known after apply)
      ~ private_networking   = true -> (known after apply)
      ~ status               = "active" -> (known after apply)
        tags                 = [
            "iac",
        ]
      ~ urn                  = "do:droplet:363338060" -> (known after apply)
      ~ user_data            = "d135373b155733c09f19887cdc0acc0e9aa6c7e6" -> "d605f7a874b6dda5de7860b5790b974a57e1e0c9" # forces replacement
      ~ vcpus                = 2 -> (known after apply)
      ~ volume_ids           = [] -> (known after apply)
      ~ vpc_uuid             = "46a525ac-fd37-47ea-bb10-95c1db0055f7" -> (known after apply)
        # (9 unchanged attributes hidden)
    }

  # module.benchmark_db.digitalocean_project_resources.connect_forest_project will be updated in-place
  ~ resource "digitalocean_project_resources" "connect_forest_project" {
        id        = "da5e6601-7fd9-4d02-951e-390f7feb3411"
      ~ resources = [
          - "do:droplet:363338060",
        ] -> (known after apply)
        # (1 unchanged attribute hidden)
    }

Plan: 1 to add, 1 to change, 1 to destroy.

Changes to Outputs:
  ~ ip = [
      - [
          - "159.89.3.201",
        ],
      + [
          + (known after apply),
        ],
    ]

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Contributor

@lemmih lemmih left a comment


Good start.

Comment on lines +7 to +12
RUN apt-get update && \
apt-get install --no-install-recommends -y \
ruby ruby-dev make gcc git git-lfs curl wget pkg-config clang build-essential \
mesa-opencl-icd ocl-icd-opencl-dev hwloc libhwloc-dev s3cmd aria2 zstd time bzr jq pkg-config && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
Member


I'm a bit puzzled. Why do we need all of those dependencies? Could we just use Docker? We want to use the latest releases of both Forest and Lotus, which are conveniently dockerized. This would reduce the amount of manual work in the benchmark itself.

That said, I'm okay with doing this in a separate issue. If so, let's create it.

Contributor Author


I'll create a separate issue for this.

Comment on lines +159 to +160
results[:import_time][key.to_sym] = "#{elapsed} sec"
results[:validation_time][key.to_sym] = "#{tpm} tipsets/sec"
Member


It's wasteful to include the unit in every row of the result. This should be a part of the header.

Contributor Author


import_time and validation_time come under the Metric header, and each metric has its own unit, so we would have to restructure the output format. I can take this on in a separate PR.
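As a rough illustration of moving the units into the header (the column names below are hypothetical; the real restructuring would land in that follow-up PR):

```shell
# Hypothetical restructured CSV: units live in the column names,
# so each row carries bare numbers only.
result_csv=/tmp/result_example.csv
echo "Timestamp,Forest Version,Lotus Version,Chain,Import Time (sec),Validation (tipsets/sec)" > "$result_csv"
echo "2023-07-06 16:07:48,0.11.1,1.23.4,calibnet,44.8,11.7" >> "$result_csv"
cat "$result_csv"
```

This keeps the rows machine-comparable while still documenting the units once per file.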

ruby bench.rb --chain mainnet --tempdir ./tmp --daily

## Upload benchmark result to s3 weekly file
week_number=$(date +%W) # Week starting on Monday
Member


Is overwriting benchmarks from January 2023 with results from January 2024 expected and by design?

Contributor Author


Added the year to the file name to avoid this condition.
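One way to do that (a sketch of the idea; the exact key format used in the PR may differ) is to fold the year into the week key:

```shell
# Include the year so week 01 of 2024 no longer clobbers week 01 of 2023.
week_key=$(date +%Y_%W)                       # e.g. 2023_35
weekly_csv="/tmp/weekly_result_${week_key}.csv"
echo "$weekly_csv"
```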


## Upload benchmark result to s3 weekly file
week_number=$(date +%W) # Week starting on Monday
s3cmd get s3://"$BENCHMARK_BUCKET"/benchmark-results/weekly-results/weekly_result_"$week_number".csv /tmp/weekly_result_"$week_number".csv --force ||
Member


The path /tmp/weekly_result_"$week_number".csv is repeated many times. Let's use a variable.

Contributor Author


fixed
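For illustration, the fix amounts to capturing the path once (the variable name and sample row here are hypothetical):

```shell
# Capture the repeated path in one variable; every s3cmd get/put and
# tail invocation then refers to "$WEEKLY_CSV" instead of re-spelling it.
week_number=$(date +%W)
WEEKLY_CSV="/tmp/weekly_result_${week_number}.csv"
echo "Timestamp,Forest Version,Lotus Version,Chain,Metric,Forest Value,Lotus Value" > "$WEEKLY_CSV"
echo "2023-08-30 01:40:28,0.11.1,1.23.4,calibnet,import_time,44.8,102" >> "$WEEKLY_CSV"
[ -s "$WEEKLY_CSV" ] && echo "wrote $WEEKLY_CSV"
```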

Member

@LesnyRumcajs LesnyRumcajs left a comment


I didn't look that much into the Ruby scripts themselves, given they're mostly imported from Forest. All in all, it'd be good to give them some more structure and potentially unit tests (and a way to easily run them locally). We can think about that in another PR.

Another thing is compiling Rust and Lotus manually. This shouldn't be needed with Docker images and would simplify the dependencies and initialization of the benchmark.

Also, I'd like to know the plans for CSV migrations, e.g., what we do when a column is added/removed/modified in the CSV. We can no longer easily compare the results in such a case (well, adding a column should be fine).

week_number=$(date +%W) # Week starting on Monday
s3cmd get s3://"$BENCHMARK_BUCKET"/benchmark-results/weekly-results/weekly_result_"$week_number".csv /tmp/weekly_result_"$week_number".csv --force ||
echo "Timestamp,Forest Version,Lotus Version,Chain,Metric,Forest Value,Lotus Value" > /tmp/weekly_result_"$week_number".csv
tail -n +2 -q /chainsafe/result_*.csv >> /tmp/weekly_result_"$week_number".csv && s3cmd --acl-public put /tmp/weekly_result_"$week_number".csv s3://"$BENCHMARK_BUCKET"/benchmark-results/weekly-results/weekly_result_"$week_number".csv || exit 1
Member


What is the plan to deal with breaking changes? Looking at the existing file at https://forest-benchmarks.fra1.digitaloceanspaces.com/benchmark-results/all_results.csv, there were some at one point:

2023-07-05 23:43:49,0.11.0,1.23.4,calibnet,validate_online,11.8,8.0
2023-07-06 01:48:32,0.11.0,1.23.4,mainnet,import,1443,2094
2023-07-06 01:48:32,0.11.0,1.23.4,mainnet,validate_online,0.209,0.175
2023-07-06 02:36:54,0.11.0,1.23.4,calibnet,import,41.1,96
2023-07-06 02:36:54,0.11.0,1.23.4,calibnet,validate_online,2.5,2.6
2023-07-06 04:43:39,0.11.0,1.23.4,mainnet,import,1480,2082
2023-07-06 04:43:39,0.11.0,1.23.4,mainnet,validate_online,0.275,0.184
2023-07-06 16:07:48,0.11.1,1.23.4,calibnet,import_time,44.8 sec,102 sec
2023-07-06 16:07:48,0.11.1,1.23.4,calibnet,validation_time,11.7 tipsets/sec,7.4 tipsets/sec
2023-07-06 18:15:00,0.11.1,1.23.4,mainnet,import_time,1415 sec,2054 sec
2023-07-06 18:15:00,0.11.1,1.23.4,mainnet,validation_time,0.159 tipsets/sec,0.092 tipsets/sec
2023-07-06 16:07:48,0.11.1,1.23.4,calibnet,import_time,44.8 sec,102 sec
2023-07-06 16:07:48,0.11.1,1.23.4,calibnet,validation_time,11.7 tipsets/sec,7.4 tipsets/sec
2023-07-06 18:15:00,0.11.1,1.23.4,mainnet,import_time,1415 sec,2054 sec
2023-07-06 18:15:00,0.11.1,1.23.4,mainnet,validation_time,0.159 tipsets/sec,0.092 tipsets/sec
2023-07-06 19:03:26,0.11.1,1.23.4,calibnet,import_time,43.6 sec,101 sec
2023-07-06 19:03:26,0.11.1,1.23.4,calibnet,validation_time,12.3 tipsets/sec,7.6 tipsets/sec
2023-07-06 21:07:41,0.11.1,1.23.4,mainnet,import_time,1400 sec,2110 sec
2023-07-06 21:07:41,0.11.1,1.23.4,mainnet,validation_time,0.267 tipsets/sec,0.192 tipsets/sec
2023-07-06 16:07:48,0.11.1,1.23.4,calibnet,import_time,44.8 sec,102 sec

Should the former ones be cleared? What is the plan for the future?

Contributor Author


Will take care of this in a separate PR.


s3cmd get s3://"$BENCHMARK_BUCKET"/benchmark-results/all_results.csv /tmp/all_results.csv --force ||
echo "Timestamp,Forest Version,Lotus Version,Chain,Metric,Forest Value,Lotus Value" > /tmp/all_results.csv
tail -n +2 -q /chainsafe/result_*.csv >> /tmp/all_results.csv && s3cmd --acl-public put /tmp/all_results.csv s3://"$BENCHMARK_BUCKET"/benchmark-results/all_results.csv || exit 1
Member


Why do we need to exit 1 in case of an error? Shouldn't the set -e that is already in the file work fine?

Contributor Author


corrected
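For context, this is why the trailing `|| exit 1` was redundant: with `set -e`, a failing command aborts the script on its own. A minimal standalone demo (not the PR's script):

```shell
# Demo: under `set -e` the script exits at the first failing command,
# so appending `|| exit 1` adds nothing.
cat > /tmp/demo_set_e.sh <<'EOF'
#!/bin/sh
set -e
false                # fails; the script exits here with status 1
echo "not reached"
EOF
status=0
sh /tmp/demo_set_e.sh > /tmp/demo_set_e.out 2>&1 || status=$?
echo "exit status: $status"
```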

# Current datetime, to append to the log files
datetime = Time.new.strftime '%FT%H:%M:%S'
health_log = "#{LOG_DIR}/benchmark_#{datetime}_health"
sync_log = "#{LOG_DIR}/benchmark_#{datetime}_sync"
Member


What are the health and sync logs exactly, and how are they created/populated?

Contributor Author

@sudo-shashank sudo-shashank Aug 29, 2023


Fixed. Now there is a log file benchmark_#{datetime}_run, which contains the actual benchmark run logs, and another, benchmark_#{datetime}_report, which contains a high-level log of the benchmark run, e.g., when it starts, fails, or completes.
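The naming convention from the reply can be sketched in shell (the actual service derives these names in Ruby via Time#strftime; LOG_DIR here is a placeholder):

```shell
# Two log files per run, keyed by the same timestamp:
#   benchmark_<datetime>_run    - full output of the benchmark run
#   benchmark_<datetime>_report - high-level events (started/failed/completed)
LOG_DIR=/tmp/benchmark-logs
mkdir -p "$LOG_DIR"
datetime=$(date +%FT%H:%M:%S)
run_log="$LOG_DIR/benchmark_${datetime}_run"
report_log="$LOG_DIR/benchmark_${datetime}_report"
touch "$run_log" "$report_log"
ls "$LOG_DIR"
```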

@sudo-shashank sudo-shashank mentioned this pull request Aug 29, 2023
@github-actions

Forest: Benchmark Service Infrastructure Plan: success

Show Plan
module.benchmark.data.external.sources_tar: Reading...
module.benchmark.data.local_file.init: Reading...
module.benchmark.data.local_file.init: Read complete after 0s [id=8a79718ce589edec7ac0c28bf0856a5b355e7588]
module.benchmark.data.external.sources_tar: Read complete after 0s [id=-]
module.benchmark.data.digitalocean_ssh_keys.keys: Reading...
module.benchmark.data.digitalocean_project.forest_project: Reading...
module.benchmark.data.local_file.sources: Reading...
module.benchmark.data.local_file.sources: Read complete after 0s [id=e0d61c4f8f973b46698d0c9f0d0ba4a7df7b6dc8]
module.benchmark.data.digitalocean_ssh_keys.keys: Read complete after 0s [id=ssh_keys/4003270040538543685]
module.benchmark.data.digitalocean_project.forest_project: Read complete after 2s [id=da5e6601-7fd9-4d02-951e-390f7feb3411]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.benchmark.digitalocean_droplet.forest will be created
  + resource "digitalocean_droplet" "forest" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + graceful_shutdown    = false
      + id                   = (known after apply)
      + image                = "docker-20-04"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "forest-benchmark"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "so-2vcpu-16gb"
      + ssh_keys             = [
          + "00:a0:c0:54:5f:40:22:10:52:8a:04:48:f9:c8:db:00",
          + "04:77:74:e8:81:92:9d:1e:cb:d3:5d:0d:fa:83:56:f6",
          + "31:fd:e9:da:70:df:ef:33:af:a2:ea:a1:fd:69:a7:9d",
          + "37:1e:1a:fc:25:2d:5a:a7:1f:49:b2:6d:53:5c:0e:45",
          + "41:91:6d:f9:f7:27:44:30:7f:a4:6f:36:e8:97:ad:cb",
          + "5a:a8:6d:02:66:21:e9:f7:27:b2:1c:6e:89:0f:65:77",
          + "5f:d6:ad:06:b8:2d:4a:ef:0a:ac:97:bf:37:b0:7a:4c",
          + "77:09:d9:32:61:65:81:08:d1:e2:50:9b:ec:28:02:62",
          + "99:ea:ec:bf:9f:d1:b2:52:02:b2:78:a2:57:25:a0:e7",
          + "9c:18:88:44:c4:d6:74:84:07:9a:3c:9a:f6:17:f3:e4",
          + "9d:f6:f8:05:f7:c2:72:a2:b7:ce:f8:3e:71:12:2e:09",
          + "b6:03:52:e0:49:14:03:90:19:37:69:c3:c7:d0:e7:69",
          + "bb:7a:cc:18:56:7a:cb:2b:07:d7:8b:30:86:b8:b5:41",
          + "c7:f9:b0:49:24:aa:30:36:4e:5f:d4:a3:ab:43:49:e8",
          + "cb:ca:60:61:a9:ba:e0:a0:ba:95:35:2d:48:f3:c2:05",
          + "d3:6d:af:8e:a4:b9:8f:b8:38:2b:56:06:5f:38:48:a7",
          + "f7:de:2d:83:ce:e7:c3:13:2c:ca:3a:f0:4b:4e:46:da",
          + "fa:48:60:7b:b0:c4:86:70:e9:fa:e9:f8:fb:c7:2e:72",
          + "fa:62:10:64:1b:77:eb:78:a5:ba:e0:86:ff:76:7e:97",
          + "fe:42:94:20:d0:a9:24:67:5f:de:78:c1:bb:8b:6c:92",
        ]
      + status               = (known after apply)
      + tags                 = [
          + "iac",
        ]
      + urn                  = (known after apply)
      + user_data            = "e3a8c847735fe729ce0c730cadbba6764dfcb2b8"
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

  # module.benchmark.digitalocean_project_resources.connect_forest_project will be created
  + resource "digitalocean_project_resources" "connect_forest_project" {
      + id        = (known after apply)
      + project   = "da5e6601-7fd9-4d02-951e-390f7feb3411"
      + resources = (known after apply)
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + ip = [
      + [
          + (known after apply),
        ],
    ]

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

@github-actions

Forest: Benchmark Service Infrastructure Plan: success

Show Plan
module.benchmark.data.local_file.init: Reading...
module.benchmark.data.local_file.init: Read complete after 0s [id=8a79718ce589edec7ac0c28bf0856a5b355e7588]
module.benchmark.data.external.sources_tar: Reading...
module.benchmark.data.external.sources_tar: Read complete after 0s [id=-]
module.benchmark.data.local_file.sources: Reading...
module.benchmark.data.digitalocean_ssh_keys.keys: Reading...
module.benchmark.data.digitalocean_project.forest_project: Reading...
module.benchmark.data.local_file.sources: Read complete after 0s [id=d9e09e5d9d459dfa9ebec32f9774780399dbed6b]
module.benchmark.data.digitalocean_ssh_keys.keys: Read complete after 1s [id=ssh_keys/4003270040538543685]
module.benchmark.data.digitalocean_project.forest_project: Read complete after 2s [id=da5e6601-7fd9-4d02-951e-390f7feb3411]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.benchmark.digitalocean_droplet.forest will be created
  + resource "digitalocean_droplet" "forest" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + graceful_shutdown    = false
      + id                   = (known after apply)
      + image                = "docker-20-04"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "forest-benchmark"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "so-2vcpu-16gb"
      + ssh_keys             = [
          + "00:a0:c0:54:5f:40:22:10:52:8a:04:48:f9:c8:db:00",
          + "04:77:74:e8:81:92:9d:1e:cb:d3:5d:0d:fa:83:56:f6",
          + "31:fd:e9:da:70:df:ef:33:af:a2:ea:a1:fd:69:a7:9d",
          + "37:1e:1a:fc:25:2d:5a:a7:1f:49:b2:6d:53:5c:0e:45",
          + "41:91:6d:f9:f7:27:44:30:7f:a4:6f:36:e8:97:ad:cb",
          + "5a:a8:6d:02:66:21:e9:f7:27:b2:1c:6e:89:0f:65:77",
          + "5f:d6:ad:06:b8:2d:4a:ef:0a:ac:97:bf:37:b0:7a:4c",
          + "77:09:d9:32:61:65:81:08:d1:e2:50:9b:ec:28:02:62",
          + "99:ea:ec:bf:9f:d1:b2:52:02:b2:78:a2:57:25:a0:e7",
          + "9c:18:88:44:c4:d6:74:84:07:9a:3c:9a:f6:17:f3:e4",
          + "9d:f6:f8:05:f7:c2:72:a2:b7:ce:f8:3e:71:12:2e:09",
          + "b6:03:52:e0:49:14:03:90:19:37:69:c3:c7:d0:e7:69",
          + "bb:7a:cc:18:56:7a:cb:2b:07:d7:8b:30:86:b8:b5:41",
          + "c7:f9:b0:49:24:aa:30:36:4e:5f:d4:a3:ab:43:49:e8",
          + "cb:ca:60:61:a9:ba:e0:a0:ba:95:35:2d:48:f3:c2:05",
          + "d3:6d:af:8e:a4:b9:8f:b8:38:2b:56:06:5f:38:48:a7",
          + "f7:de:2d:83:ce:e7:c3:13:2c:ca:3a:f0:4b:4e:46:da",
          + "fa:48:60:7b:b0:c4:86:70:e9:fa:e9:f8:fb:c7:2e:72",
          + "fa:62:10:64:1b:77:eb:78:a5:ba:e0:86:ff:76:7e:97",
          + "fe:42:94:20:d0:a9:24:67:5f:de:78:c1:bb:8b:6c:92",
        ]
      + status               = (known after apply)
      + tags                 = [
          + "iac",
        ]
      + urn                  = (known after apply)
      + user_data            = "b7efbc2b7aceda96c1ce7203c2a2a6cd08417ac2"
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

  # module.benchmark.digitalocean_project_resources.connect_forest_project will be created
  + resource "digitalocean_project_resources" "connect_forest_project" {
      + id        = (known after apply)
      + project   = "da5e6601-7fd9-4d02-951e-390f7feb3411"
      + resources = (known after apply)
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + ip = [
      + [
          + (known after apply),
        ],
    ]

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.


@github-actions

Forest: Benchmark Service Infrastructure Plan: success

Show Plan
module.benchmark.data.local_file.init: Reading...
module.benchmark.data.external.sources_tar: Reading...
module.benchmark.data.local_file.init: Read complete after 0s [id=8a79718ce589edec7ac0c28bf0856a5b355e7588]
module.benchmark.data.digitalocean_project.forest_project: Reading...
module.benchmark.data.digitalocean_ssh_keys.keys: Reading...
module.benchmark.data.external.sources_tar: Read complete after 0s [id=-]
module.benchmark.data.local_file.sources: Reading...
module.benchmark.data.local_file.sources: Read complete after 0s [id=d9e09e5d9d459dfa9ebec32f9774780399dbed6b]
module.benchmark.data.digitalocean_ssh_keys.keys: Read complete after 0s [id=ssh_keys/4003270040538543685]
module.benchmark.digitalocean_droplet.forest: Refreshing state... [id=372334296]
module.benchmark.data.digitalocean_project.forest_project: Read complete after 1s [id=da5e6601-7fd9-4d02-951e-390f7feb3411]
module.benchmark.digitalocean_project_resources.connect_forest_project: Refreshing state... [id=da5e6601-7fd9-4d02-951e-390f7feb3411]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.benchmark.digitalocean_droplet.forest must be replaced
-/+ resource "digitalocean_droplet" "forest" {
      ~ created_at           = "2023-08-30T01:40:28Z" -> (known after apply)
      ~ disk                 = 300 -> (known after apply)
      ~ id                   = "372334296" -> (known after apply)
      ~ ipv4_address         = "161.35.220.31" -> (known after apply)
      ~ ipv4_address_private = "10.135.0.8" -> (known after apply)
      + ipv6_address         = (known after apply)
      ~ locked               = false -> (known after apply)
      ~ memory               = 16384 -> (known after apply)
        name                 = "forest-benchmark"
      ~ price_hourly         = 0.19494 -> (known after apply)
      ~ price_monthly        = 131 -> (known after apply)
      ~ private_networking   = true -> (known after apply)
      ~ status               = "active" -> (known after apply)
        tags                 = [
            "iac",
        ]
      ~ urn                  = "do:droplet:372334296" -> (known after apply)
      ~ user_data            = "c56525e0fedf492793605d35f214c91e9293fbc2" -> "b7efbc2b7aceda96c1ce7203c2a2a6cd08417ac2" # forces replacement
      ~ vcpus                = 2 -> (known after apply)
      ~ volume_ids           = [] -> (known after apply)
      ~ vpc_uuid             = "46a525ac-fd37-47ea-bb10-95c1db0055f7" -> (known after apply)
        # (9 unchanged attributes hidden)
    }

  # module.benchmark.digitalocean_project_resources.connect_forest_project will be updated in-place
  ~ resource "digitalocean_project_resources" "connect_forest_project" {
        id        = "da5e6601-7fd9-4d02-951e-390f7feb3411"
      ~ resources = [
          - "do:droplet:372334296",
        ] -> (known after apply)
        # (1 unchanged attribute hidden)
    }

Plan: 1 to add, 1 to change, 1 to destroy.

Changes to Outputs:
  ~ ip = [
      - [
          - "161.35.220.31",
        ],
      + [
          + (known after apply),
        ],
    ]

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Member

@LesnyRumcajs LesnyRumcajs left a comment


Please add the concurrency as mentioned in one of the comments. Otherwise, I'm okay with merging this, provided we address improvements in a separate PR soon.

```

To deploy the service:
```bash
Contributor


Can you update the docs in case someone wants to run the service manually? You can check the sync check and snapshot services docs as a reference.

@samuelarogbonlo
Contributor

samuelarogbonlo commented Aug 30, 2023

@lemmih @LesnyRumcajs don’t you guys think we should also set up New Relic to monitor the droplet?

@LesnyRumcajs
Member

@lemmih @LesnyRumcajs don’t you guys think we should also set up New Relic to monitor the droplet?

@samuelarogbonlo let's do it in a separate PR and not increase the load on NR until we are sure that the fixes you made land us well below the free tier limit.

@LesnyRumcajs
Member

By the way, shall we delete the benchmark script from the Forest repository now? @sudo-shashank

@sudo-shashank
Contributor Author

By the way, shall we delete the benchmark script from the Forest repository now? @sudo-shashank

Yes, we can once this is merged.

@github-actions

Forest: Benchmark Service Infrastructure Plan: success

Show Plan
module.benchmark.data.local_file.init: Reading...
module.benchmark.data.digitalocean_ssh_keys.keys: Reading...
module.benchmark.data.digitalocean_project.forest_project: Reading...
module.benchmark.data.local_file.init: Read complete after 0s [id=8a79718ce589edec7ac0c28bf0856a5b355e7588]
module.benchmark.data.external.sources_tar: Reading...
module.benchmark.data.external.sources_tar: Read complete after 0s [id=-]
module.benchmark.data.local_file.sources: Reading...
module.benchmark.data.local_file.sources: Read complete after 0s [id=85a48c72c71c30d97d78a3c9cc1465ee18473bdc]
module.benchmark.data.digitalocean_ssh_keys.keys: Read complete after 0s [id=ssh_keys/4003270040538543685]
module.benchmark.digitalocean_droplet.forest: Refreshing state... [id=372334296]
module.benchmark.data.digitalocean_project.forest_project: Read complete after 2s [id=da5e6601-7fd9-4d02-951e-390f7feb3411]
module.benchmark.digitalocean_project_resources.connect_forest_project: Refreshing state... [id=da5e6601-7fd9-4d02-951e-390f7feb3411]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.benchmark.digitalocean_droplet.forest must be replaced
-/+ resource "digitalocean_droplet" "forest" {
      ~ created_at           = "2023-08-30T01:40:28Z" -> (known after apply)
      ~ disk                 = 300 -> (known after apply)
      ~ id                   = "372334296" -> (known after apply)
      ~ ipv4_address         = "161.35.220.31" -> (known after apply)
      ~ ipv4_address_private = "10.135.0.8" -> (known after apply)
      + ipv6_address         = (known after apply)
      ~ locked               = false -> (known after apply)
      ~ memory               = 16384 -> (known after apply)
        name                 = "forest-benchmark"
      ~ price_hourly         = 0.19494 -> (known after apply)
      ~ price_monthly        = 131 -> (known after apply)
      ~ private_networking   = true -> (known after apply)
      ~ status               = "active" -> (known after apply)
        tags                 = [
            "iac",
        ]
      ~ urn                  = "do:droplet:372334296" -> (known after apply)
      ~ user_data            = "c56525e0fedf492793605d35f214c91e9293fbc2" -> "d2b9bd646d980ab5573bd47673efa40a8362df50" # forces replacement
      ~ vcpus                = 2 -> (known after apply)
      ~ volume_ids           = [] -> (known after apply)
      ~ vpc_uuid             = "46a525ac-fd37-47ea-bb10-95c1db0055f7" -> (known after apply)
        # (9 unchanged attributes hidden)
    }

  # module.benchmark.digitalocean_project_resources.connect_forest_project will be updated in-place
  ~ resource "digitalocean_project_resources" "connect_forest_project" {
        id        = "da5e6601-7fd9-4d02-951e-390f7feb3411"
      ~ resources = [
          - "do:droplet:372334296",
        ] -> (known after apply)
        # (1 unchanged attribute hidden)
    }

Plan: 1 to add, 1 to change, 1 to destroy.

Changes to Outputs:
  ~ ip = [
      - [
          - "161.35.220.31",
        ],
      + [
          + (known after apply),
        ],
    ]

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
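The replacement in the plan above happens because `user_data` is immutable on DigitalOcean droplets: any change to the init script changes the stored hash, and the provider must destroy and recreate the droplet (hence `# forces replacement`). A hedged sketch of the relevant resource shape — attribute values taken from the plan output, the script path and `lifecycle` suggestion are assumptions:

```hcl
resource "digitalocean_droplet" "forest" {
  name = "forest-benchmark"
  tags = ["iac"]

  # Editing the init script changes this value; since user_data cannot be
  # updated in place, Terraform plans a destroy-and-recreate.
  user_data = file("${path.module}/service/init.sh") # assumed path; state stores its SHA-1 hash

  # If recreation on script edits is undesirable, drift here could be ignored:
  # lifecycle {
  #   ignore_changes = [user_data]
  # }
}
```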

@sudo-shashank
Copy link
Contributor Author

@lemmih can you approve this PR? It's waiting for your approval to be merged.

@sudo-shashank sudo-shashank enabled auto-merge (squash) August 31, 2023 10:49
@sudo-shashank sudo-shashank disabled auto-merge August 31, 2023 14:17
@lemmih lemmih dismissed their stale review August 31, 2023 14:17

stale review

@sudo-shashank sudo-shashank merged commit 32e092d into main Aug 31, 2023
@sudo-shashank sudo-shashank deleted the benchmark-db branch March 19, 2024 11:08
Development

Successfully merging this pull request may close these issues.

Run Daily Benchmark Script Automagically