CRC automation + tool deployment

CRC

CRC installation requires sudo: it creates a NetworkManager dispatcher file in /etc/NetworkManager/dispatcher.d/99-crc.sh, and a post step adds the CRC certificate to the system trust store so the image registry can be accessed from the host system.

  • Get the pull secret from https://cloud.redhat.com/openshift/create/local and save it as pull-secret.txt in the repo directory, or set the PULL_SECRET env var to point to a different location.
  • CRC_URL and KUBEADMIN_PWD can be used to customize the CRC install; the default KUBEADMIN_PWD is 12345678 (see the example after the commands below).
cd <install_yamls_root_path>/devsetup
CPUS=12 MEMORY=25600 DISK=100 make crc
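
To override the defaults described in the list above, export the relevant variables before running the target (a minimal sketch; the pull-secret path and password are examples):

export PULL_SECRET=~/Downloads/pull-secret.txt
export KUBEADMIN_PWD=secretpass
CPUS=12 MEMORY=25600 DISK=100 make crc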

Note: To configure an HTTP and/or HTTPS proxy on the CRC instance, use CRC_HTTP_PROXY and CRC_HTTPS_PROXY.
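
For example (hypothetical proxy URL, adjust for your environment):

CRC_HTTP_PROXY=http://proxy.example.com:3128 CRC_HTTPS_PROXY=http://proxy.example.com:3128 make crc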

After the installation is complete, proceed with the OpenStack service provisioning.

The make crc target runs the following steps:

# Pre-req
# verifies that the pull secret is located at $(pwd)/pull-secret.txt (get it from https://cloud.redhat.com/openshift/create/local)

# install crc
mkdir -p ~/bin
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz | tar -U --strip-components=1 -C ~/bin -xJf - --no-anchored crc

# config CRC
crc config set consent-telemetry no
crc config set kubeadmin-password ${KUBEADMIN_PWD}
crc config set pull-secret-file ${PULL_SECRET_FILE}
crc setup

crc start

# show kubeadmin and devel login details
crc console --credentials

# add crc provided oc client to PATH
eval $(${CRC_BIN} oc-env)

# login to crc env
oc login -u kubeadmin -p ${KUBEADMIN_PWD} https://api.crc.testing:6443

# make sure you can push to the internal registry; without this step you'll get x509 errors
echo -n "Adding router-ca to system certs to allow accessing the crc image registry"
oc extract secret/router-ca --keys=tls.crt -n openshift-ingress-operator --confirm
sudo cp -f tls.crt /etc/pki/ca-trust/source/anchors/crc-router-ca.pem
sudo update-ca-trust
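
To verify the certificate is trusted, a login against the internal registry can be attempted (a sketch; assumes podman is installed, the default registry route is enabled, and you are still logged in with oc):

podman login -u kubeadmin -p "$(oc whoami -t)" default-route-openshift-image-registry.apps-crc.testing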

Access OCP from external systems

On the local system, add the required entries to your local /etc/hosts. The previously used ansible playbook also outputs this information:

sudo tee -a /etc/hosts <<EOF
192.168.130.11 api.crc.testing canary-openshift-ingress-canary.apps-crc.testing console-openshift-console.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing downloads-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing
EOF

Note: validate that the IP address matches the one of the installed CRC VM.
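
The VM address can be checked on the virthost with the crc client and compared against the /etc/hosts entry:

crc ip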

To access the OCP console

On the local system, enable SSH proxying:

# on Fedora
sudo dnf install sshuttle

# on RHEL
sudo pip install sshuttle

sshuttle -r <user>@<virthost> 192.168.130.0/24

Now you can access the OCP environment from the local system.
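
For example, with sshuttle running you can log in to the API and reach the console from the local system (same credentials as above):

oc login -u kubeadmin -p ${KUBEADMIN_PWD} https://api.crc.testing:6443
# console: https://console-openshift-console.apps-crc.testing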

tool deployment

All tools, in the specific versions needed to develop operators for this Cloud Native OpenStack approach, can be deployed via the download_tools make target. Components which don't get installed via rpm are installed to $HOME/bin or /usr/local/bin (go/gofmt).

cd <install_yamls_root_path>/devsetup
make download_tools
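
If $HOME/bin is not already on your PATH, add it so the downloaded tools are picked up (a minimal sketch for bash):

export PATH="$HOME/bin:$PATH"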

EDPM deployment

The EDPM deployment creates additional VMs alongside the CRC VM and provides a mechanism to configure them using the ansibleee-operator.

After completing the devsetup, attach the crc VM to the default network:

make crc_attach_default_interface

This requires the controlplane and dataplane operators to be running; install them with:

pushd ..
make openstack
make openstack_init
popd

The controlplane has to be deployed before the dataplane:

pushd ..
make openstack_deploy
popd

Deploy a compute node VM:

# Creates edpm-compute-0:
make edpm_compute

Execute the edpm_deploy step:

pushd ..
make edpm_deploy
popd

You can also deploy additional compute node VMs:

# Set $EDPM_COMPUTE_SUFFIX to create additional VMs beyond 0:
make edpm_compute EDPM_COMPUTE_SUFFIX=1

The IP of each compute node is statically assigned, starting at 192.168.122.100 for the default EDPM_COMPUTE_SUFFIX=0.
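
A quick reachability check for the default node (a sketch; adjust the address if you used a different suffix):

ping -c 3 192.168.122.100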

Then edit the inventory in edpm/edpm-play.yaml.

Cleanup:

pushd ..
make edpm_deploy_cleanup
popd

# Will delete VMs!
make edpm_compute_cleanup

In case additional compute node VMs are deployed, run:

make edpm_compute_cleanup EDPM_COMPUTE_SUFFIX=1

EDPM virtual baremetal deployment

The EDPM virtual machines can be managed by the openstack-baremetal-operator and metal3, which interact with a virtual Redfish BMC provided by sushy-tools.

This requires the controlplane and dataplane operators to be running; install them with:

pushd ..
make openstack
popd

The controlplane has to be deployed before the dataplane:

pushd ..
make openstack_deploy
popd

Create and manage the virtual machines:

BM_NODE_COUNT=1 make edpm_baremetal_compute

The dataplane can then be deployed on these nodes as for other baremetal dataplane deployments:

pushd ..
DATAPLANE_TOTAL_NODES=1 make edpm_deploy_baremetal
popd

Cleanup:

pushd ..
make edpm_deploy_cleanup
popd
# Will delete VMs!
BM_NODE_COUNT=1 make edpm_baremetal_compute_cleanup

BMaaS LAB

The BMaaS LAB creates additional VMs alongside the CRC instance, as well as a virtual Redfish (sushy-emulator) service running in CRC. The VMs can be used as virtual baremetal nodes managed by Ironic deployed on CRC.

The VMs are attached to a separate libvirt network, crc-bmaas. This network is attached to the CRC instance, and a linux-bridge, crc-bmaas, is configured on the CRC node with a baremetal NetworkAttachmentDefinition.
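
The wiring can be inspected from the virthost and from the cluster, for example (a sketch; assumes the NetworkAttachmentDefinition lives in the openstack namespace):

sudo virsh net-info crc-bmaas
oc get net-attach-def baremetal -n openstack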

When deploying ironic, set up the networkAttachments, provisionNetwork and inspectionNetwork to use the baremetal NetworkAttachmentDefinition.

The MetalLB load-balancer is also configured with an address pool and an L2 advertisement for the baremetal network.
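
The MetalLB configuration can be checked with (a sketch; assumes MetalLB is installed in the metallb-system namespace):

oc get ipaddresspool,l2advertisement -n metallb-system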

The 172.20.1.0/24 subnet is split into pools as shown in the table below.

Address pool     Reservation
172.20.1.1/32    Router address
172.20.1.2/32    CRC bridge (crc-bmaas) address
172.20.1.0/26    Whereabouts IPAM (addresses for pods)
172.20.1.64/26   MetalLB IPAddressPool
172.20.1.128/25  Available for Ironic provisioning and inspection

Example:

  ---
  apiVersion: ironic.openstack.org/v1beta1
  kind: Ironic
  metadata:
    name: ironic
    namespace: openstack
  spec:
    < --- snip --->
    ironicConductors:
    - networkAttachments:
      - baremetal
      provisionNetwork: baremetal
      dhcpRanges:
      - name: netA
        cidr: 172.20.1.0/24
        start: 172.20.1.130
        end: 172.20.1.200
        gateway: 172.20.1.1
    ironicInspector:
      networkAttachments:
      - baremetal
      inspectionNetwork: baremetal
      dhcpRanges:
      - name: netA
        cidr: 172.20.1.0/24
        start: 172.20.1.201
        end: 172.20.1.220
        gateway: 172.20.1.1
    < --- snip --->

The Redfish service (sushy-emulator) is accessible via a route: http://sushy-emulator.apps-crc.testing

curl -u admin:password http://sushy-emulator.apps-crc.testing/redfish/v1/Systems/
{
    "@odata.type": "#ComputerSystemCollection.ComputerSystemCollection",
    "Name": "Computer System Collection",
    "[email protected]": 2,
    "Members": [

            {
                "@odata.id": "/redfish/v1/Systems/e5b1b096-f585-4f39-9174-e03bffe46a95"
            },

            {
                "@odata.id": "/redfish/v1/Systems/f91de773-c6a4-4a1b-b419-e0b3dbda3b84"
            }

    ],
    "@odata.context": "/redfish/v1/$metadata#ComputerSystemCollection.ComputerSystemCollection",
    "@odata.id": "/redfish/v1/Systems",
    "@Redfish.Copyright": "Copyright 2014-2016 Distributed Management Task Force, Inc. (DMTF). For the full DMTF copyright policy, see http://www.dmtf.org/about/policies/copyright."

Pre-requisites

Install CRC, the nmstate operator, and create the openstack namespace:

cd <install_yamls_root_path>/devsetup
make crc
cd <install_yamls_root_path>/
make nmstate
make namespace

Create the BMaaS LAB

cd <install_yamls_root_path>/devsetup
make bmaas BMAAS_NODE_COUNT=4  # Default node count is 1

Cleanup

cd <install_yamls_root_path>/devsetup
make bmaas_cleanup

Enroll nodes using node inventory yaml

TIP: make bmaas_generate_nodes_yaml | tail -n +2 will print the nodes YAML.

Example:

---
nodes:
- name: crc-bmaas-01
  driver: redfish
  driver_info:
    redfish_address: http://sushy-emulator.apps-crc.testing
    redfish_system_id: /redfish/v1/Systems/f91de773-c6a4-4a1b-b419-e0b3dbda3b84
    redfish_username: admin
    redfish_password: password
  ports:
  - address: 52:54:00:fa:a7:b1
- name: crc-bmaas-02
  driver: redfish
  driver_info:
    redfish_address: http://sushy-emulator.apps-crc.testing
    redfish_system_id: /redfish/v1/Systems/e5b1b096-f585-4f39-9174-e03bffe46a95
    redfish_username: admin
    redfish_password: password
  ports:
  - address: 52:54:00:8a:ea:14
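
The generated inventory can then be enrolled with the baremetal client, for example (a sketch; assumes a configured openstack client pointing at the deployed Ironic):

make bmaas_generate_nodes_yaml | tail -n +2 > nodes.yaml
openstack baremetal create nodes.yaml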

IPv6 LAB

Create the IPv6 LAB

Export vars:

export NETWORK_ISOLATION_NET_NAME=net-iso
export NETWORK_ISOLATION_IPV4=false
export NETWORK_ISOLATION_IPV6=true
export NETWORK_ISOLATION_INSTANCE_NAME=sno
export NETWORK_ISOLATION_IP_ADDRESS=fd00:aaaa::10
export NNCP_INTERFACE=enp7s0

Change to the devsetup directory:

cd <install_yamls_root_path>/devsetup

Set up the networking using NAT64 and deploy SNO (Single Node OpenShift):

make ipv6_lab

Create the network-isolation network with IPv6 enabled:

make network_isolation_bridge

Attach the network-isolation bridge to the SNO (Single Node OpenShift) instance:

make attach_default_interface

Login to the cluster:

oc login -u admin -p 12345678 https://api.sno.lab.example.com:6443