Release 4.8.0 - Beta 4 - E2E UX tests - Deployment on Kubernetes #22450

Closed · 1 of 2 tasks
davidjiglesias opened this issue Mar 12, 2024 · 5 comments

davidjiglesias commented Mar 12, 2024

End-to-End (E2E) Testing Guideline

  • Documentation: Always consult the development documentation for the current stage tag at this link. Be careful: some steps in the description might refer to the version currently in production, so always navigate using the development documentation for the stage under test. Also, visit the following pre-release package guide to understand how to modify certain links and URLs to correctly test the development packages.
  • Test Requirements: Ensure your test comprehensively covers a full stack and agent(s) deployment as per the Deployment requirements, detailing the machine OS, installed version, and revision.
  • Deployment Options: While deployments can be local (using VMs, Vagrant, etc.) or on the aws-dev account, opt for local deployments when feasible. For AWS access, coordinate with the DevOps team through this link.
  • External Accounts: If tests require third-party accounts (e.g., GitHub, Azure, AWS, GCP), request the necessary access through the DevOps team here.
  • Alerts: Every test should generate at least one end-to-end alert, from the agent to the dashboard, irrespective of test type (one way to trigger a test alert is sketched after this list).
  • Multi-node Testing: For multi-node wazuh-manager tests, ensure agents are connected to both the workers and the master node.
  • Package Verification: Use the pre-release package that matches the current TAG you're testing. Confirm its version and revision.
  • Filebeat Errors: If you encounter errors with Filebeat during testing, refer to this Slack discussion for insights and resolutions.
  • Known Issues: Familiarize yourself with previously reported issues in the Known Issues section. This helps in identifying already recognized errors during testing.
  • Reporting New Issues: Report any new errors discovered during testing that aren't listed under Known Issues. Assign the issue to the corresponding team (QA if unsure), add the Release testing objective, and set Very high priority. Communicate these to the team and QA via the c-release Slack channel.
  • Test Conduct: Be thorough in your testing and offer enough detail for reviewers. Incomplete tests might necessitate a redo.
  • Documentation Feedback: Encountered documentation gaps, unclear guidelines, or anything else that disrupts testing or the UX? Open an issue, especially if it's not listed under Known Issues. Answering the feedback section is a mandatory step.
  • Format: If this is your first time doing this, refer to the format (but not necessarily the content, as it may vary) of previous E2E tests; see Release 4.3.5 - Release Candidate 1 - E2E UX tests - Wazuh Indexer #13994 for an example.
  • Status and completion: Change the issue status within your team project accordingly. Once you finish testing and write the conclusions, move the issue to Pending review and notify the @wazuh/devel-devops team via Slack using the c-release channel. Be aware that the reviewers might request additional information or task repetitions.
  • For reviewers: If everything is OK, move the issue to Pending final review and notify via Slack using the same thread. Otherwise, update the issue with the requested changes, move it to On hold, increase the review_cycles field in the team project by one, and notify the issue assignee via Slack using the same thread.
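
For example, one quick way to satisfy the alert requirement (a hypothetical check, not part of the official guideline) is to write an sshd-style failure line into a log the agent already monitors and confirm the alert reaches the dashboard:

# On the agent host: forge a failed-login log entry (this normally matches rule 5710)
sudo logger -t sshd "Failed password for invalid user test from 192.168.1.50 port 22 ssh2"

# On the manager: confirm the corresponding alert was generated
sudo tail -n 5 /var/ossec/logs/alerts/alerts.log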

For the conclusions and the issue updates, use the following legend:

Status legend

  • 🟢 All checks passed
  • 🟡 Found a known issue
  • 🔴 Found a new error

Issue delivery and completion

  • Initial delivery: The issue's assignee must complete the testing, deliver the results by Mar 14, 2024, and notify the @wazuh/devel-devops team via Slack using the c-release channel.
  • Review: The @wazuh/devel-devops team will assign a reviewer and add them to the review_assignee field in the project. The reviewer must then review the test steps and results. Ensure that all iteration cycles are completed by Mar 15, 2024 (the issue must be in Pending final review status) and notify the QA team via Slack using the c-release channel.
  • Auditor: The QA team must audit, validate the results, and close the issue by Mar 16, 2024.

Deployment requirements

Component | Installation | Type | OS
Indexer | Deployment on Kubernetes | Single node | RHEL 9 x86_64
Server | Deployment on Kubernetes | Multi node | RHEL 9 x86_64
Dashboard | Deployment on Kubernetes | - | RHEL 9 x86_64
Agent | Wazuh WUI one-liner deploy using FQDN and GROUP (created beforehand, don't use default) | - | Oracle Linux 9 x86_64
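
For reference, the Agent one-liner in the table typically takes the form below (a sketch only; the real command is generated by the dashboard's "Deploy new agent" screen, and the package URL, FQDN, and group name here are placeholders):

curl -so wazuh-agent.rpm https://packages-dev.wazuh.com/pre-release/yum/wazuh-agent-4.8.0-1.x86_64.rpm \
  && sudo WAZUH_MANAGER='wazuh-server.example.com' WAZUH_AGENT_GROUP='k8s-e2e' rpm -ihv wazuh-agent.rpm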

Test description

Test Deployment on Kubernetes.

  • Test the installation guide in detail
  • Test changing the default passwords

Remember to update the version of the Docker image to point to the current v4.8.0-beta4 under test. Example: replace the vX.Y.Z tag with v4.8.0-beta4.
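
A quick way to retag everything at once (a sketch, assuming the manifests quote the images as in this run, e.g. image: 'wazuh/wazuh-manager:X.Y.Z'):

# Show the image tags currently referenced by the manifests
grep -rn "image: 'wazuh/" wazuh-kubernetes/wazuh/

# Point every wazuh/* image at the build under test
grep -rl "image: 'wazuh/" wazuh-kubernetes/wazuh/ \
  | xargs sed -i "s|\(wazuh/wazuh-[a-z]*\):[^']*|\1:4.8.0-beta4|"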

Known issues

Conclusions

Summarize the errors detected (Known Issues included). Illustrate using the table below. REMOVE CURRENT EXAMPLES:

Status | Test | Failure type | Notes
🔴 | Deployment with Kubernetes | Errors in Wazuh Manager | New issue opened: #22511

Feedback

We value your feedback. Please provide insights on your testing experience.

  • Was the testing guideline clear? Were there any ambiguities?
    • It was completely clear
  • Did you face any challenges not covered by the guideline?
    • I had no prior experience with Kubernetes, so my challenge was to research it in order to be able to run the test.
  • Suggestions for improvement:
    • No suggestions

Reviewers validation

The criterion for completing this task is the validation of the conclusions and the test results by all reviewers.

All the checkboxes below must be marked in order to close this issue.


Tostti commented Mar 14, 2024

Deployment environment

Host server
[root@test-server vagrant]# cat /etc/os-release 
NAME="Red Hat Enterprise Linux"
VERSION="9.3 (Plow)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="9.3"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Red Hat Enterprise Linux 9.3 (Plow)"
ANSI_COLOR="0;31"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:redhat:enterprise_linux:9::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 9"
REDHAT_BUGZILLA_PRODUCT_VERSION=9.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.3"
[root@test-server vagrant]# free -h
               total        used        free      shared  buff/cache   available
Mem:            30Gi       1.5Gi        25Gi        26Mi       4.1Gi        29Gi
Swap:          2.0Gi          0B       2.0Gi
[root@test-server vagrant]# df --total -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                     4.0M     0  4.0M   0% /dev
tmpfs                         16G   84K   16G   1% /dev/shm
tmpfs                        6.2G   17M  6.2G   1% /run
/dev/mapper/rhel_rhel9-root   70G  6.1G   64G   9% /
/dev/sda1                    960M  177M  784M  19% /boot
tmpfs                        3.1G     0  3.1G   0% /run/user/1000
shm                           63M   84K   63M   1% /var/lib/containers/storage/overlay-containers/54943e5211005ffe4925d95e95180f77f7046dc44f5f42b2112a49f36254ba23/userdata/shm
overlay                       70G  6.1G   64G   9% /var/lib/containers/storage/overlay/ff8057097939c2c0d5074318c29187708137140dcfdd113523e719705b4cea8f/merged
total                        166G   13G  154G   8% -
[root@test-server vagrant]# lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         46 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  16
  On-line CPU(s) list:   0-15
Vendor ID:               GenuineIntel
  Model name:            12th Gen Intel(R) Core(TM) i7-12700
    CPU family:          6
    Model:               151
    Thread(s) per core:  1
    Core(s) per socket:  16
    Socket(s):           1
    Stepping:            2
    BogoMIPS:            4223.99
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3
                          cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor lahf_lm abm 3dnowprefetch pti fsgsbase bmi1 avx2 bmi2 invpcid rdseed clflushopt arat md_clear flush_l1d arch_capabilities
Virtualization features: 
  Hypervisor vendor:     KVM
  Virtualization type:   full
Caches (sum of all):     
  L1d:                   768 KiB (16 instances)
  L1i:                   512 KiB (16 instances)
  L2:                    20 MiB (16 instances)
  L3:                    400 MiB (16 instances)
NUMA:                    
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-15
Vulnerabilities:         
  Gather data sampling:  Not affected
  Itlb multihit:         KVM: Mitigation: VMX unsupported
  L1tf:                  Mitigation; PTE Inversion
  Mds:                   Mitigation; Clear CPU buffers; SMT Host state unknown
  Meltdown:              Mitigation; PTI
  Mmio stale data:       Not affected
  Retbleed:              Not affected
  Spec store bypass:     Vulnerable
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
  Srbds:                 Not affected
  Tsx async abort:       Not affected
Kubernetes (minikube)
[vagrant@test-server ~]$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/vagrant/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 13 Mar 2024 23:47:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.49.2:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    extensions:
    - extension:
        last-update: Wed, 13 Mar 2024 23:47:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/vagrant/.minikube/profiles/minikube/client.crt
    client-key: /home/vagrant/.minikube/profiles/minikube/client.key

Deployment

Clone Wazuh repo 🟢
[vagrant@test-server ~]$ git clone https://github.com/wazuh/wazuh-kubernetes.git -b v4.8.0-beta4 --depth=1
Cloning into 'wazuh-kubernetes'...
remote: Enumerating objects: 63, done.
remote: Counting objects: 100% (63/63), done.
remote: Compressing objects: 100% (53/53), done.
remote: Total 63 (delta 24), reused 23 (delta 7), pack-reused 0
Receiving objects: 100% (63/63), 33.43 KiB | 633.00 KiB/s, done.
Resolving deltas: 100% (24/24), done.
Note: switching to '659d157bb67f718dcfddc6bc455dd5a74aab83ee'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false
Generate certificates 🟢
[vagrant@test-server ~]$ cd wazuh-kubernetes
[vagrant@test-server wazuh-kubernetes]$ wazuh/certs/indexer_cluster/generate_certs.sh
Root CA
Admin cert
create: admin-key-temp.pem
create: admin-key.pem
create: admin.csr
Ignoring -days without -x509; not generating a certificate
create: admin.pem
Certificate request self-signature ok
subject=C = US, L = California, O = Company, CN = admin
* Node cert
create: node-key-temp.pem
create: node-key.pem
create: node.csr
Ignoring -days without -x509; not generating a certificate
create: node.pem
Certificate request self-signature ok
subject=C = US, L = California, O = Company, CN = indexer
* dashboard cert
create: dashboard-key-temp.pem
create: dashboard-key.pem
create: dashboard.csr
Ignoring -days without -x509; not generating a certificate
create: dashboard.pem
Certificate request self-signature ok
subject=C = US, L = California, O = Company, CN = dashboard
* Filebeat cert
create: filebeat-key-temp.pem
create: filebeat-key.pem
create: filebeat.csr
Ignoring -days without -x509; not generating a certificate
create: filebeat.pem
Certificate request self-signature ok
subject=C = US, L = California, O = Company, CN = filebeat
[vagrant@test-server wazuh-kubernetes]$ wazuh/certs/dashboard_http/generate_certs.sh
(openssl key generation progress output)
-----
Edit envs/local-env/storage-class.yaml and configure the provisioner for a Minikube cluster 🟢
[vagrant@test-server wazuh-kubernetes]$ cat envs/local-env/storage-class.yaml 
# Copyright (C) 2019, Wazuh Inc.
#
# This program is a free software; you can redistribute it
# and/or modify it under the terms of the GNU General Public
# License (version 2) as published by the FSF - Free Software
# Foundation.

# Wazuh StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: wazuh-storage

# Microk8s is our standard for local development
#provisioner: microk8s.io/hostpath

# In case you're running Minikube you can comment the line above and use this one
provisioner: k8s.io/minikube-hostpath

# If you're using a different provider you can list storage classes
# with: "kubectl get sc" and look for the column "Provisioner"
Cluster deployment 🟢
[vagrant@test-server wazuh-kubernetes]$ kubectl apply -k envs/local-env/
namespace/wazuh created
storageclass.storage.k8s.io/wazuh-storage created
configmap/dashboard-conf-46kfc92gfm created
configmap/indexer-conf-t8tdh7thct created
configmap/wazuh-conf-9g4ffmc689 created
secret/dashboard-certs-68dt77h67d created
secret/dashboard-cred created
secret/indexer-certs-4fk24782k6 created
secret/indexer-cred created
secret/wazuh-api-cred created
secret/wazuh-authd-pass created
secret/wazuh-cluster-key created
service/dashboard created
service/indexer created
service/wazuh created
service/wazuh-cluster created
service/wazuh-indexer created
service/wazuh-workers created
deployment.apps/wazuh-dashboard created
statefulset.apps/wazuh-indexer created
statefulset.apps/wazuh-manager-master created
Warning: spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector: a null labelSelector results in matching no pod
statefulset.apps/wazuh-manager-worker created
Verifying the deployment 🟢

First attempt 🔴

[vagrant@test-server wazuh-kubernetes]$ kubectl get namespaces | grep wazuh
wazuh             Active   2m21s
[vagrant@test-server wazuh-kubernetes]$ kubectl get services -n wazuh
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                          AGE
dashboard       LoadBalancer   10.100.252.196   <pending>     443:30456/TCP                    2m28s
indexer         LoadBalancer   10.101.18.7      <pending>     9200:30364/TCP                   2m28s
wazuh           LoadBalancer   10.103.142.84    <pending>     1515:30564/TCP,55000:31494/TCP   2m28s
wazuh-cluster   ClusterIP      None             <none>        1516/TCP                         2m28s
wazuh-indexer   ClusterIP      None             <none>        9300/TCP                         2m28s
wazuh-workers   LoadBalancer   10.107.127.229   <pending>     1514:30100/TCP                   2m28s
[vagrant@test-server wazuh-kubernetes]$ kubectl get deployments -n wazuh
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
wazuh-dashboard   0/1     1            0           2m49s
[vagrant@test-server wazuh-kubernetes]$ kubectl get statefulsets -n wazuh
NAME                   READY   AGE
wazuh-indexer          0/1     3m3s
wazuh-manager-master   0/1     3m3s
wazuh-manager-worker   0/1     3m3s
[vagrant@test-server wazuh-kubernetes]$ kubectl get pods -n wazuh
NAME                              READY   STATUS             RESTARTS   AGE
wazuh-dashboard-bdcb5bd57-sztfg   0/1     ImagePullBackOff   0          3m17s
wazuh-indexer-0                   0/1     ImagePullBackOff   0          3m17s
wazuh-manager-master-0            0/1     ImagePullBackOff   0          3m17s
wazuh-manager-worker-0            0/1     ImagePullBackOff   0          3m17s
[vagrant@test-server wazuh-kubernetes]$ kubectl get services -n wazuh
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                          AGE
dashboard       LoadBalancer   10.100.252.196   <pending>     443:30456/TCP                    3m53s
indexer         LoadBalancer   10.101.18.7      <pending>     9200:30364/TCP                   3m53s
wazuh           LoadBalancer   10.103.142.84    <pending>     1515:30564/TCP,55000:31494/TCP   3m53s
wazuh-cluster   ClusterIP      None             <none>        1516/TCP                         3m53s
wazuh-indexer   ClusterIP      None             <none>        9300/TCP                         3m53s
wazuh-workers   LoadBalancer   10.107.127.229   <pending>     1514:30100/TCP                   3m53s

I proceeded to troubleshoot the issue. After some research and testing, I found that every pod was failing to pull its Docker image:

  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  69s                default-scheduler  Successfully assigned wazuh/wazuh-dashboard-666899794-pdzb6 to minikube
  Warning  Failed     54s                kubelet            Failed to pull image "wazuh/wazuh-dashboard:4.8.0-beta4": Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     54s                kubelet            Error: ErrImagePull
  Normal   BackOff    53s                kubelet            Back-off pulling image "wazuh/wazuh-dashboard:4.8.0-beta4"
  Warning  Failed     53s                kubelet            Error: ImagePullBackOff
  Normal   Pulling    38s (x2 over 69s)  kubelet            Pulling image "wazuh/wazuh-dashboard:4.8.0-beta4"
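
For reference, the events above were obtained by describing one of the failing pods (the pod name is the one from this run):

kubectl -n wazuh describe pod wazuh-dashboard-666899794-pdzb6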

Fixing the problem

Since the problem was limited to pulling images, I opted to first pull the images manually and store them in Minikube's cache:

[vagrant@test-server wazuh-kubernetes]$ minikube cache add wazuh/wazuh-dashboard:4.8.0-beta4
❗  "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
[vagrant@test-server wazuh-kubernetes]$ minikube cache add wazuh/wazuh-indexer:4.8.0-beta4
❗  "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
[vagrant@test-server wazuh-kubernetes]$ minikube cache add wazuh/wazuh-manager:4.8.0-beta4
❗  "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
[vagrant@test-server wazuh-kubernetes]$ minikube cache add busybox
❗  "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"

After that, I set imagePullPolicy: Never in the manifest files to prevent image pulls and use the locally cached images instead:

[vagrant@test-server ~]$ cat wazuh-kubernetes/wazuh/indexer_stack/wazuh-indexer/cluster/indexer-sts.yaml | grep -A 1 image
          image: busybox
          imagePullPolicy: Never
          resources:
--
          image: busybox
          imagePullPolicy: Never
          command:
--
          image: 'wazuh/wazuh-indexer:4.8.0-beta4'
          imagePullPolicy: Never
          resources:
[vagrant@test-server ~]$ cat wazuh-kubernetes/wazuh/indexer_stack/wazuh-dashboard/dashboard-deploy.yaml | grep -A 1 image
          image: 'wazuh/wazuh-dashboard:4.8.0-beta4'
          imagePullPolicy: Never
          resources:
[vagrant@test-server ~]$ cat wazuh-kubernetes/wazuh/wazuh_managers/wazuh-master-sts.yaml | grep -A 1 image
          image: 'wazuh/wazuh-manager:4.8.0-beta4'
          imagePullPolicy: Never
          resources:
[vagrant@test-server ~]$ wazuh-kubernetes/wazuh/wazuh_managers/wazuh-worker-sts.yaml | grep -A 1 image
-bash: wazuh-kubernetes/wazuh/wazuh_managers/wazuh-worker-sts.yaml: Permission denied
[vagrant@test-server ~]$ cat wazuh-kubernetes/wazuh/wazuh_managers/wazuh-worker-sts.yaml | grep -A 1 image
          image: 'wazuh/wazuh-manager:4.8.0-beta4'
          imagePullPolicy: Never
          resources:
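
A quick way to apply this edit to all four manifests at once (a sketch, assuming each container spec already carries an imagePullPolicy line, as in this repo):

for f in wazuh/indexer_stack/wazuh-indexer/cluster/indexer-sts.yaml \
         wazuh/indexer_stack/wazuh-dashboard/dashboard-deploy.yaml \
         wazuh/wazuh_managers/wazuh-master-sts.yaml \
         wazuh/wazuh_managers/wazuh-worker-sts.yaml; do
  sed -i 's/imagePullPolicy: .*/imagePullPolicy: Never/' "wazuh-kubernetes/$f"
done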

After fixing 🟢

[vagrant@test-server wazuh-kubernetes]$ kubectl get namespaces | grep wazuh
wazuh             Active   2m1s
[vagrant@test-server wazuh-kubernetes]$ kubectl get services -n wazuh
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                          AGE
dashboard       LoadBalancer   10.106.123.178   <pending>     443:30404/TCP                    2m7s
indexer         LoadBalancer   10.111.225.203   <pending>     9200:30976/TCP                   2m7s
wazuh           LoadBalancer   10.97.211.92     <pending>     1515:30542/TCP,55000:32601/TCP   2m7s
wazuh-cluster   ClusterIP      None             <none>        1516/TCP                         2m7s
wazuh-indexer   ClusterIP      None             <none>        9300/TCP                         2m7s
wazuh-workers   LoadBalancer   10.98.94.80      <pending>     1514:31688/TCP                   2m7s
[vagrant@test-server wazuh-kubernetes]$ kubectl get deployments -n wazuh
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
wazuh-dashboard   1/1     1            1           2m19s
[vagrant@test-server wazuh-kubernetes]$ kubectl get statefulsets -n wazuh
NAME                   READY   AGE
wazuh-indexer          1/1     2m26s
wazuh-manager-master   1/1     2m26s
wazuh-manager-worker   1/1     2m26s
[vagrant@test-server wazuh-kubernetes]$ kubectl get pods -n wazuh
NAME                               READY   STATUS    RESTARTS   AGE
wazuh-dashboard-696d77679c-np76w   1/1     Running   0          2m34s
wazuh-indexer-0                    1/1     Running   0          2m34s
wazuh-manager-master-0             1/1     Running   0          2m34s
wazuh-manager-worker-0             1/1     Running   0          2m34s
Accessing Wazuh Dashboard 🔴

Previous step

As suggested in the documentation, I forwarded the port to be able to access the dashboard:

kubectl -n wazuh port-forward service/dashboard 8443:443
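
With the forward in place, a quick sanity check from the host (a hypothetical step, not part of the original run) before opening the browser:

curl -k -s -o /dev/null -w '%{http_code}\n' https://localhost:8443
# 200 (or a 302 redirect to the login page) means the dashboard is serving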

Verifying the access

(screenshots of the dashboard access)

Since I was able to access the dashboard, I investigated why the connection with the Wazuh API was failing:
(screenshot of the API connection error)

[vagrant@test-server ~]$ kubectl exec --stdin --tty wazuh-manager-master-0 -- /bin/bash
bash-5.2# /var/ossec/bin/wazuh-control status
wazuh-clusterd is running...
wazuh-modulesd: Process 733 not used by Wazuh, removing...
wazuh-modulesd not running...
wazuh-monitord is running...
wazuh-logcollector is running...
wazuh-remoted is running...
wazuh-syscheckd is running...
wazuh-analysisd is running...
wazuh-maild not running...
wazuh-execd is running...
wazuh-db is running...
wazuh-authd is running...
wazuh-agentlessd not running...
wazuh-integratord not running...
wazuh-dbd not running...
wazuh-csyslogd not running...
wazuh-apid is running...

bash-5.2# /var/ossec/bin/wazuh-control restart
2024/03/14 04:03:47 wazuh-modulesd:router: INFO: Loaded router module.
2024/03/14 04:03:47 wazuh-modulesd:content_manager: INFO: Loaded content_manager module.
Killing wazuh-clusterd...
wazuh-modulesd not running...
Killing wazuh-monitord...
Killing wazuh-logcollector...
Killing wazuh-remoted...
Killing wazuh-syscheckd...
Killing wazuh-analysisd...
wazuh-maild not running...
Killing wazuh-execd...
Killing wazuh-db...
Killing wazuh-authd...
wazuh-agentlessd not running...
wazuh-integratord not running...
wazuh-dbd not running...
wazuh-csyslogd not running...
Killing wazuh-apid...
Wazuh v4.8.0 Stopped
Starting Wazuh v4.8.0...
Started wazuh-apid...
Started wazuh-csyslogd...
Started wazuh-dbd...
2024/03/14 04:03:53 wazuh-integratord: INFO: Remote integrations not configured. Clean exit.
Started wazuh-integratord...
Started wazuh-agentlessd...
Started wazuh-authd...
Started wazuh-db...
Started wazuh-execd...
Started wazuh-analysisd...
2024/03/14 04:03:56 wazuh-syscheckd: WARNING: The check_unixaudit option is deprecated in favor of the SCA module.
Started wazuh-syscheckd...
Started wazuh-remoted...
Started wazuh-logcollector...
Started wazuh-monitord...
2024/03/14 04:03:58 wazuh-modulesd:router: INFO: Loaded router module.
2024/03/14 04:03:58 wazuh-modulesd:content_manager: INFO: Loaded content_manager module.
Started wazuh-modulesd...
Started wazuh-clusterd...
Completed.

bash-5.2# /var/ossec/bin/wazuh-control status 
wazuh-clusterd is running...
wazuh-modulesd: Process 2917 not used by Wazuh, removing...
wazuh-modulesd not running...
wazuh-monitord is running...
wazuh-logcollector is running...
wazuh-remoted is running...
wazuh-syscheckd not running...
wazuh-analysisd is running...
wazuh-maild not running...
wazuh-execd is running...
wazuh-db is running...
wazuh-authd is running...
wazuh-agentlessd not running...
wazuh-integratord not running...
wazuh-dbd not running...
wazuh-csyslogd not running...
wazuh-apid is running...

bash-5.2# tail /var/ossec/logs/ossec.log 
2024/03/14 04:03:58 wazuh-modulesd:syscollector: INFO: Module started.
2024/03/14 04:03:58 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2024/03/14 04:03:59 wazuh-modulesd:syscollector: INFO: Evaluation finished.
2024/03/14 04:03:59 wazuh-analysisd: INFO: EPS limit disabled
2024/03/14 04:03:59 wazuh-analysisd: INFO: (7200): Logtest started
2024/03/14 04:03:59 indexer-connector: INFO: IndexerConnector initialized.
2024/03/14 04:04:00 wazuh-syscheckd: INFO: (6009): File integrity monitoring scan ended.
2024/03/14 04:04:00 wazuh-syscheckd: ERROR: Resource temporarily unavailable
2024/03/14 04:04:00 wazuh-syscheckd: ERROR: (1109): Unable to create new pthread. Resource temporarily unavailable (11)
2024/03/14 04:04:00 wazuh-syscheckd: CRITICAL: (1109): Unable to create new pthread.

bash-5.2# grep -iE 'WARN|ERR|CRIT' /var/ossec/logs/ossec.log 
2024/03/14 03:16:23 wazuh-syscheckd: WARNING: The check_unixaudit option is deprecated in favor of the SCA module.
2024/03/14 03:16:25 wazuh-logcollector: ERROR: (1103): Could not open file '/var/log/syslog' due to [(2)-(No such file or directory)].
2024/03/14 03:16:25 wazuh-logcollector: ERROR: (1103): Could not open file '/var/log/dpkg.log' due to [(2)-(No such file or directory)].
2024/03/14 03:16:28 indexer-connector: WARNING: Error initializing IndexerConnector: HTTP response code said error: 503, we will try again after 2 seconds.
2024/03/14 03:16:30 indexer-connector: WARNING: Error initializing IndexerConnector: HTTP response code said error: 503, we will try again after 4 seconds.
2024/03/14 04:03:56 wazuh-syscheckd: WARNING: The check_unixaudit option is deprecated in favor of the SCA module.
2024/03/14 04:03:57 wazuh-logcollector: ERROR: (1103): Could not open file '/var/log/syslog' due to [(2)-(No such file or directory)].
2024/03/14 04:03:57 wazuh-logcollector: ERROR: (1103): Could not open file '/var/log/dpkg.log' due to [(2)-(No such file or directory)].
2024/03/14 04:04:00 wazuh-syscheckd: ERROR: Resource temporarily unavailable
2024/03/14 04:04:00 wazuh-syscheckd: ERROR: (1109): Unable to create new pthread. Resource temporarily unavailable (11)
2024/03/14 04:04:00 wazuh-syscheckd: CRITICAL: (1109): Unable to create new pthread.

bash-5.2# /var/ossec/bin/wazuh-control info
WAZUH_VERSION="v4.8.0"
WAZUH_REVISION="40805"
WAZUH_TYPE="server"

I was not able to overcome this problem. New issue opened: #22511

Changing passwords ⚫

Indexer users 🟢

[vagrant@test-server ~]$ kubectl exec -it wazuh-indexer-0 -n wazuh -- /bin/bash
Defaulted container "wazuh-indexer" out of: wazuh-indexer, volume-mount-hack (init), increase-the-vm-max-map-count (init)
bash-5.2$ export JAVA_HOME=/usr/share/wazuh-indexer/jdk
bash-5.2$ bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/hash.sh
**************************************************************************
** This tool will be deprecated in the next major release of OpenSearch **
** https://github.com/opensearch-project/security/issues/1755           **
**************************************************************************
[Password:]
$2y$12$VJJG4ixJaoX05h.73.0xQerJyvDeAvpQw9.sggzU843VLPw4WFXiO
bash-5.2$ exit
exit
[vagrant@test-server ~]$ nano wazuh-kubernetes/wazuh/indexer_stack/wazuh-indexer/indexer_conf/internal_users.yml 
[vagrant@test-server ~]$ echo -n "Test1234" | base64
VGVzdDEyMzQ=
[vagrant@test-server ~]$ nano wazuh-kubernetes/wazuh/secrets/indexer-cred-secret.yaml 
[vagrant@test-server ~]$ nano wazuh-kubernetes/wazuh/secrets/dashboard-cred-secret.yaml
[vagrant@test-server wazuh-kubernetes]$ kubectl apply -k envs/local-env/
namespace/wazuh unchanged
storageclass.storage.k8s.io/wazuh-storage unchanged
configmap/dashboard-conf-46kfc92gfm unchanged
configmap/indexer-conf-7g75424b89 unchanged
configmap/wazuh-conf-9g4ffmc689 unchanged
secret/dashboard-certs-hbf5d4fc9b configured
secret/dashboard-cred unchanged
secret/indexer-certs-d6bbk6f792 configured
secret/indexer-cred unchanged
secret/wazuh-api-cred unchanged
secret/wazuh-authd-pass unchanged
secret/wazuh-cluster-key unchanged
service/dashboard unchanged
service/indexer unchanged
service/wazuh unchanged
service/wazuh-cluster unchanged
service/wazuh-indexer unchanged
service/wazuh-workers unchanged
deployment.apps/wazuh-dashboard configured
statefulset.apps/wazuh-indexer configured
statefulset.apps/wazuh-manager-master configured
statefulset.apps/wazuh-manager-worker configured
[vagrant@test-server wazuh-kubernetes]$ kubectl exec -it wazuh-indexer-0 -n wazuh -- /bin/bash
Defaulted container "wazuh-indexer" out of: wazuh-indexer, volume-mount-hack (init), increase-the-vm-max-map-count (init)
bash-5.2$ export INSTALLATION_DIR=/usr/share/wazuh-indexer
CACERT=$INSTALLATION_DIR/certs/root-ca.pem
KEY=$INSTALLATION_DIR/certs/admin-key.pem
CERT=$INSTALLATION_DIR/certs/admin.pem
export JAVA_HOME=/usr/share/wazuh-indexer/jdk
bash-5.2$ bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/securityadmin.sh -cd /usr/share/wazuh-indexer/opensearch-security/ -nhnv -cacert  $CACERT -cert $CERT -key $KEY -p 9200 -icl -h $NODE_NAME
**************************************************************************
** This tool will be deprecated in the next major release of OpenSearch **
** https://github.com/opensearch-project/security/issues/1755           **
**************************************************************************
Security Admin v7
Will connect to wazuh-indexer-0:9200 ... done
Connected as "CN=admin,O=Company,L=California,C=US"
OpenSearch Version: 2.10.0
Contacting opensearch cluster 'opensearch' and wait for YELLOW clusterstate ...
Clustername: wazuh
Clusterstate: GREEN
Number of nodes: 1
Number of data nodes: 1
.opendistro_security index already exists, so we do not need to create one.
Populate config from /usr/share/wazuh-indexer/opensearch-security/
Will update '/config' with /usr/share/wazuh-indexer/opensearch-security/config.yml 
   SUCC: Configuration for 'config' created or updated
Will update '/roles' with /usr/share/wazuh-indexer/opensearch-security/roles.yml 
   SUCC: Configuration for 'roles' created or updated
Will update '/rolesmapping' with /usr/share/wazuh-indexer/opensearch-security/roles_mapping.yml 
   SUCC: Configuration for 'rolesmapping' created or updated
Will update '/internalusers' with /usr/share/wazuh-indexer/opensearch-security/internal_users.yml 
   SUCC: Configuration for 'internalusers' created or updated
Will update '/actiongroups' with /usr/share/wazuh-indexer/opensearch-security/action_groups.yml 
   SUCC: Configuration for 'actiongroups' created or updated
Will update '/tenants' with /usr/share/wazuh-indexer/opensearch-security/tenants.yml 
   SUCC: Configuration for 'tenants' created or updated
Will update '/nodesdn' with /usr/share/wazuh-indexer/opensearch-security/nodes_dn.yml 
   SUCC: Configuration for 'nodesdn' created or updated
Will update '/whitelist' with /usr/share/wazuh-indexer/opensearch-security/whitelist.yml 
   SUCC: Configuration for 'whitelist' created or updated
Will update '/audit' with /usr/share/wazuh-indexer/opensearch-security/audit.yml 
   SUCC: Configuration for 'audit' created or updated
Will update '/allowlist' with /usr/share/wazuh-indexer/opensearch-security/allowlist.yml 
   SUCC: Configuration for 'allowlist' created or updated
SUCC: Expected 10 config types for node {"updated_config_types":["allowlist","tenants","rolesmapping","nodesdn","audit","roles","whitelist","internalusers","actiongroups","config"],"updated_config_size":10,"message":null} is 10 (["allowlist","tenants","rolesmapping","nodesdn","audit","roles","whitelist","internalusers","actiongroups","config"]) due to: null
Done with success
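
For context, the edits made with nano above look roughly like this (a sketch: the hash is the one generated earlier, the new password is Test1234, and the exact field names in the secret are assumed from this repo's conventions):

# wazuh/indexer_stack/wazuh-indexer/indexer_conf/internal_users.yml (excerpt)
admin:
  hash: "$2y$12$VJJG4ixJaoX05h.73.0xQerJyvDeAvpQw9.sggzU843VLPw4WFXiO"
  reserved: true
  backend_roles:
  - "admin"
  description: "Admin user"

# wazuh/secrets/indexer-cred-secret.yaml (excerpt)
apiVersion: v1
kind: Secret
metadata:
  name: indexer-cred
data:
  username: YWRtaW4=       # "admin"
  password: VGVzdDEyMzQ=   # "Test1234"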

Logging in with the new password ✅

Wazuh API users ⚫

Although I could follow the steps in the documentation, due to the error with the manager I was not able to verify whether the change worked.

[vagrant@test-server wazuh-kubernetes]$ echo -n "Test1234!" | base64
VGVzdDEyMzQh
[vagrant@test-server wazuh-kubernetes]$ nano wazuh/secrets/wazuh-api-cred-secret.yaml
[vagrant@test-server wazuh-kubernetes]$ kubectl apply -k envs/local-env/
namespace/wazuh unchanged
storageclass.storage.k8s.io/wazuh-storage unchanged
configmap/dashboard-conf-46kfc92gfm unchanged
configmap/indexer-conf-7g75424b89 unchanged
configmap/wazuh-conf-9g4ffmc689 unchanged
secret/dashboard-certs-hbf5d4fc9b configured
secret/dashboard-cred unchanged
secret/indexer-certs-d6bbk6f792 configured
secret/indexer-cred unchanged
secret/wazuh-api-cred unchanged
secret/wazuh-authd-pass unchanged
secret/wazuh-cluster-key unchanged
service/dashboard unchanged
service/indexer unchanged
service/wazuh unchanged
service/wazuh-cluster unchanged
service/wazuh-indexer unchanged
service/wazuh-workers unchanged
deployment.apps/wazuh-dashboard configured
statefulset.apps/wazuh-indexer configured
statefulset.apps/wazuh-manager-master configured
statefulset.apps/wazuh-manager-worker configured
[vagrant@test-server wazuh-kubernetes]$ kubectl delete pod wazuh-manager-master-0
pod "wazuh-manager-master-0" deleted
[vagrant@test-server wazuh-kubernetes]$ kubectl delete pod wazuh-manager-worker-0
pod "wazuh-manager-worker-0" deleted
[vagrant@test-server wazuh-kubernetes]$ kubectl exec --stdin --tty wazuh-manager-master-0 -- /bin/bash
bash-5.2# TOKEN=$(curl -u wazuh-wui:Test1234! -k -X POST "https://localhost:55000/security/user/authenticate?raw=true")
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   298  100   298    0     0  47376      0 --:--:-- --:--:-- --:--:-- 49666

bash-5.2# echo $TOKEN
{"title": "Bad Request", "detail": "Some Wazuh daemons are not ready yet in node \"wazuh-manager-master\" (wazuh-modulesd->failed)", "dapi_errors": {"wazuh-manager-master": {"error": "Some Wazuh daemons are not ready yet in node \"wazuh-manager-master\" (wazuh-modulesd->failed)"}}, "error": 1017}
Deploy the agent ⚫

I was not able to do this test due to the aforementioned issue.

wazuhci moved this from In progress to Pending review in Release 4.8.0 on Mar 14, 2024

vcerenu commented Mar 14, 2024

LGTM

wazuhci moved this from Pending review to Pending final review in Release 4.8.0 on Mar 14, 2024
juliamagan commented Mar 14, 2024

Review notes

There has been an error trying to download the images, but I don't see any issue about it.

wazuhci moved this from Pending final review to On hold in Release 4.8.0 on Mar 14, 2024

Tostti commented Mar 14, 2024

> Review notes
>
> There has been an error trying to download the images, but I don't see any issue about it.

That error originated in my Kubernetes cluster, which for some reason had no access to docker.io (I even tried pulling unrelated images, such as Docker's hello-world, without success). The images are available on the registry, which is why I was able to download them manually and load them into the cluster.

I did not create an issue for this, as the problem is not Wazuh-related.

juliamagan commented Mar 14, 2024

LGTM

wazuhci moved this from On hold to Done in Release 4.8.0 on Mar 14, 2024