
[DEV] cortx monitor single node VM provisioning Automated


Single Node VM provisioning

Component: SSPL

Solution: LR2

Cortx version: 2

SSPL version: 2.0.0

Steps to deploy SSPL on VM

Prepare template_values.1-node.txt

[root@ssc-vm-2518 ~]# cat template_values.1-node.txt
# 1-node config
TMPL_CLUSTER_ID=CC01
TMPL_NODE_ID=SN01
TMPL_RACK_ID=RC01
TMPL_SITE_ID=DC01
TMPL_MACHINE_ID=30512e5ae6df9f1ea02327bab45e499d
TMPL_HOSTNAME=ssc-vm-2217.colo.seagate.com
TMPL_NODE_NAME=srvnode-1
TMPL_SERVER_NODE_TYPE=VM
TMPL_MGMT_INTERFACE=eth0
TMPL_MGMT_PUBLIC_FQDN=srvnode-1.public.fqdn
TMPL_DATA_PRIVATE_FQDN=srvnode-1.data.private.fqdn
TMPL_DATA_PRIVATE_INTERFACE=
TMPL_DATA_PUBLIC_FQDN=srvnode-1.data.public.fqdn
TMPL_DATA_PUBLIC_INTERFACE=
TMPL_BMC_IP=
TMPL_BMC_USER=
TMPL_BMC_SECRET=
TMPL_ENCLOSURE_ID=enc_30512e5ae6df9f1ea02327bab45e499d
TMPL_ENCLOSURE_NAME=enclosure-1
TMPL_ENCLOSURE_TYPE=virtual
TMPL_PRIMARY_CONTROLLER_IP=10.0.0.2
TMPL_PRIMARY_CONTROLLER_PORT=80
TMPL_SECONDARY_CONTROLLER_IP=10.0.0.3
TMPL_SECONDARY_CONTROLLER_PORT=80
TMPL_CONTROLLER_USER=manage
TMPL_CONTROLLER_SECRET="gAAAAABgbcFLyZlF2EkDTTgIqFwd-KNSX_MWOJSdPI4xTIDdUPu11PtMbJpfzYKunjMTHmEsmHGzTTIK5CXkiY1H5cJCZZTCLQ=="
TMPL_CONTROLLER_TYPE=Gallium

Note:

  1. Use localhost if the management/data private and public FQDNs are not available and you wish to skip network validation on them.
  2. TMPL_MACHINE_ID must be set to your machine ID (see the snippet after this list).
  3. TMPL_HOSTNAME must be set to your machine hostname (see the snippet after this list).
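
Both values can be read directly from the VM; a minimal sketch, assuming a standard systemd-based Linux VM:

cat /etc/machine-id    # value for TMPL_MACHINE_ID
hostname --fqdn        # value for TMPL_HOSTNAME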

Execute the following commands on your VM shell


CORTX_MONITOR_BASE_URL="https://raw.githubusercontent.com/Seagate/cortx-monitor/main"

# Fetch the deployment helper script and make it executable
curl $CORTX_MONITOR_BASE_URL/low-level/files/opt/seagate/sspl/setup/sspl_dev_deploy -o sspl_dev_deploy

chmod a+x sspl_dev_deploy

# Clean up artifacts from any previous deployment
./sspl_dev_deploy --cleanup

BUILD_URL="http://cortx-storage.colo.seagate.com/releases/cortx/github/main/centos-7.8.2003/<build_number>/prod/"

# Install prerequisites from the build, then deploy SSPL using the template prepared above
./sspl_dev_deploy --prereq -T $BUILD_URL

./sspl_dev_deploy --deploy -T $BUILD_URL --variable_file /root/template_values.1-node.txt --storage_type RBOD --server_type HW

systemctl start sspl-ll
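
Once the deploy step and service start complete, you can optionally confirm the service state; this is a standard systemd status check, not part of the original steps:

systemctl status sspl-ll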

If you are installing local RPMs instead of using a build URL, use the -L option instead of -T.

Example:

./sspl_dev_deploy --prereq -L /root/MYRPMS

./sspl_dev_deploy --deploy -L /root/MYRPMS --variable_file /root/template_values.1-node.txt --storage_type RBOD --server_type HW

Steps to run sanity test


url=yaml:///etc/sspl.conf

# Read the global config copy URL recorded in /etc/sspl.conf and strip the surrounding brackets/quotes
global_config_url=$(conf $url get "SYSTEM_INFORMATION>global_config_copy_url")

global_config_url=$(echo $global_config_url | tr -d '[]"')

/opt/seagate/cortx/sspl/bin/sspl_setup test --config $global_config_url --plan alerts

/opt/seagate/cortx/sspl/bin/sspl_setup test --config $global_config_url --plan dev_sanity

Steps to verify alerts propagate through all servers in the cluster

On each node, execute the command below from the Kafka installation path, e.g. /opt/kafka/kafka_2.13-2.7.0/


./bin/kafka-console-consumer.sh --bootstrap-server <node-1_fqdn>:9092 --topic alerts 
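
If the consumer prints nothing, you can first confirm that the alerts topic exists on the broker; a minimal check, assuming the same Kafka installation path and the stock Kafka CLI tools:

./bin/kafka-topics.sh --list --bootstrap-server <node-1_fqdn>:9092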

Steps to generate Code Coverage Report

Add --coverage as an argument to the test command with any test plan.

Example:

/opt/seagate/cortx/sspl/bin/sspl_setup test --config $global_config_url --plan alerts --coverage

Set $global_config_url as shown in the sanity test section above.