Create new test for create multiple device classes #11096
base: master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: yitzhak12. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from e150cdf to a42d51d.
Force-pushed from a42d51d to 2726dc2.
Force-pushed from 9229e9a to b713c44.
- Create a new test for creating multiple device classes
- Create new YAML files for multiple device classes
- Create a new helper file for multiple device classes
- Create new functions for creating the LVS, adding a new device set in the storagecluster, creating a CephBlockPool, and creating a deviceclass StorageClass for multiple device classes

Signed-off-by: Itzhak Kave <[email protected]>
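As an illustration of the storagecluster change the commit message describes, here is a minimal sketch of the device-set entry such a change might append to spec.storageDeviceSets. The field names and values below are assumptions based on the upstream StorageCluster CRD, not the helper actually added in this PR.

# Hypothetical device-set entry for a new device class; names and sizes are
# placeholders, and the PR's helper may build this differently.
new_device_set = {
    "name": "ocs-deviceset-new-lvs",            # assumed device set name
    "count": 3,
    "replica": 1,
    "deviceClass": "new-device-class",          # the new Ceph device class
    "dataPVCTemplate": {
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": "100Gi"}},
            "storageClassName": "new-lvs",      # storage class created by the new LocalVolumeSet
            "volumeMode": "Block",
        }
    },
}
# The test would merge this entry into the StorageCluster spec and wait for the
# operator to reconcile, e.g. via an `oc patch` on the storagecluster resource.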
Force-pushed from b713c44 to 5b12522.
The test passed in my local run.
Great work, Itzhak!
If possible, I suggest breaking this PR into several smaller PRs.
That way it will be easier to merge.
@@ -5630,3 +5630,158 @@ def apply_custom_taint_and_toleration(taint_label="xyz"):
        )
    for pod_obj in pod_list:
        pod_obj.delete(wait=False)


def create_ceph_block_pool_for_deviceclass(
I suggest trying to integrate deviceclass into the existing helper function create_ceph_block_pool() - https://github.com/red-hat-storage/ocs-ci/blob/3d8e5d8f68210023671c50e10f0ba3b81bca3035/ocs_ci/helpers/helpers.py#L576C5-L576C27
This will prevent duplication.
Yes, I thought of that, but the YAML files have some differences.
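For illustration only, a hedged sketch of what the suggestion could look like if the YAML differences turn out to be limited to the device class field. This is not the actual ocs-ci code; the template name, field names, and signature below are assumptions.

# Hypothetical sketch: fold an optional device_class argument into the
# existing helper instead of adding a parallel one.
from ocs_ci.ocs import constants
from ocs_ci.utility import templating
from ocs_ci.helpers.helpers import create_resource, create_unique_resource_name


def create_ceph_block_pool_sketch(pool_name=None, device_class=None):
    """Create a CephBlockPool, optionally pinned to a specific device class."""
    cbp_data = templating.load_yaml(constants.CEPHBLOCKPOOL_YAML)
    cbp_data["metadata"]["name"] = pool_name or create_unique_resource_name(
        "test", "cbp"
    )
    if device_class:
        # possibly the only YAML difference for a device-class pool
        cbp_data["spec"]["deviceClass"] = device_class
    return create_resource(**cbp_data)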
    return lvs_obj


def create_deviceclass_storageclass(
Same here. Please try to integrate the deviceclass aspect into the existing storage class creation function, create_storage_class() (ocs-ci/ocs_ci/helpers/helpers.py, line 745 in 3d8e5d8).
I will try to see what the differences are.
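A hedged sketch of what the storage class side might reduce to if the only real difference is the backing pool. The names, provisioner string, and parameters below are assumptions about a typical ODF RBD StorageClass, and the CSI secret parameters that the real template carries are omitted for brevity.

# Hypothetical RBD StorageClass that targets the pool created on the new
# device class; in practice this may be achievable by passing the new pool
# name to the existing create_storage_class() helper.
deviceclass_storageclass = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "ocs-deviceclass-rbd"},           # assumed name
    "provisioner": "openshift-storage.rbd.csi.ceph.com",
    "parameters": {
        "clusterID": "openshift-storage",
        "pool": "deviceclass-blockpool",                   # CephBlockPool from the previous step
        "imageFormat": "2",
        "imageFeatures": "layering",
    },
    "reclaimPolicy": "Delete",
    "allowVolumeExpansion": True,
}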
@@ -0,0 +1,163 @@
import logging
This file could be used for anything related to device classes, not only the multiple-device-class aspect. Hence, I suggest renaming the file to device_class.py, and it would be better to place it under ocs_ci/ocs.
Yes, I will change the name.
""" | ||
osd_size = get_storage_size() | ||
log.info(f"the osd size is {osd_size}") | ||
old_lvs_max_size = sum_of_two_storage_sizes(osd_size, "30Gi") |
What's behind "30Gi"?
I thought it was a reasonable maximum limit—nothing special in that number.
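For readers unfamiliar with the helper used above, a minimal sketch of what the size arithmetic amounts to. The real sum_of_two_storage_sizes in ocs-ci may support more units and formats; this simplified version only handles matching "Gi"-style suffixes.

def sum_of_gi_sizes(size1, size2, unit="Gi"):
    """Add two sizes that both end with the given unit, e.g. '512Gi' + '30Gi' -> '542Gi'."""
    total = int(size1[: -len(unit)]) + int(size2[: -len(unit)])
    return f"{total}{unit}"


# With a 512Gi OSD, the LVS max size above becomes 542Gi: the 30Gi is simply a
# reasonable headroom over the OSD size, as noted in the discussion.
assert sum_of_gi_sizes("512Gi", "30Gi") == "542Gi"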
@@ -3823,3 +3823,51 @@ def get_age_of_cluster_in_days():
    seconds_per_day = 24 * 60 * 60
    time_diff_in_days = time_difference_in_sec / seconds_per_day
    return math.ceil(time_diff_in_days)


def get_ceph_pool_list():
We have a similar function, list_pools(self), in ocs-ci/ocs_ci/ocs/rados_utils.py (line 124 in 3d8e5d8).
I will check
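Whichever of the two is reused, the underlying operation is a single Ceph CLI query. Below is a hedged sketch, assuming toolbox-pod helpers that behave like ocs-ci's get_ceph_tools_pod() and exec_ceph_cmd(); the actual new function may take a different code path.

from ocs_ci.ocs.resources.pod import get_ceph_tools_pod  # assumed import path


def get_ceph_pool_list_sketch():
    """Return the Ceph pools as a list of dicts (pool name, id, etc.)."""
    toolbox = get_ceph_tools_pod()
    # `ceph osd pool ls detail` with JSON output yields one entry per pool
    return toolbox.exec_ceph_cmd(ceph_cmd="ceph osd pool ls detail")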
    if pvc:
        return pvc.get("claimName")

    return None
this is redundant
Ack
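One reading of "this is redundant" is that the explicit return None adds nothing, since a Python function already returns None when it falls off the end. A sketch of that simplification follows; the wrapper name is hypothetical.

def claim_name_of(pvc):
    # collapses the if/return-None block above into a single expression
    return pvc.get("claimName") if pvc else None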
@@ -4013,3 +4014,42 @@ def wait_for_ceph_cmd_execute_successfully(
        f"The ceph command failed to execute successfully after {num_of_retries} retries"
    )
    return False


def get_osd_pod_pvc(osd_pod):
You can use the existing function get_pvc_name(pod_obj) in ocs-ci/ocs_ci/ocs/resources/pod.py (line 1943 in 3d8e5d8) instead.
I saw the function get_pvc_name, but it will not work for getting the OSD PVC. Should I update that function with my change of searching for the first occurrence of persistentVolumeClaim?
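A hedged sketch of the generalization being discussed: scan the pod's volumes for the first persistentVolumeClaim instead of assuming a fixed volume index, so the lookup also works for OSD pods. The real get_pvc_name and the pod object API in ocs-ci may differ.

def get_pvc_name_sketch(pod_obj):
    """Return the claim name of the first PVC-backed volume of the pod, or None."""
    pod_dict = pod_obj.get()  # assumption: the pod object exposes its raw dict via .get()
    for volume in pod_dict.get("spec", {}).get("volumes", []):
        pvc = volume.get("persistentVolumeClaim")
        if pvc:
            return pvc.get("claimName")
    # falls through to an implicit None when the pod has no PVC-backed volume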
@brown_squad
@tier1
I don't think these tests cover basic product functionality. I would say these are more advanced scenarios, and we can mark them with tier2.
Why not? This is something we expect the customer to do, like adding capacity or resizing the OSD.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
See more details in the Jira issue: https://issues.redhat.com/browse/RHSTOR-4795 and in the test plan: https://docs.google.com/spreadsheets/d/1CL9rFM8kFcO4WNNOH2L_oTpH4qgGGTFm45GtoQshSDs/edit?gid=1729233278#gid=1729233278
The test will perform the following steps (a rough Python sketch of the flow follows the list):
1. Get the OSD nodes.
2. Create a new LocalVolumeSet for the new device class, as defined in the function 'create_new_lvs_for_new_deviceclass'.
3. Wait for the PVs in the LocalVolumeSet above to be available.
4. Add a new device set in the storagecluster for the new LocalVolumeSet above, which will also create a new device class.
5. Wait for the storagecluster to be ready.
6. Create a new CephBlockPool for the device class above.
7. Wait for the CephBlockPool to be ready.
8. Create a new StorageClass for the pool.
9. Run the verification steps defined in the function 'verification_steps_after_adding_new_deviceclass'.
10. Check the cluster and Ceph health.
11. Check basic cluster functionality by creating some resources.
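Taken together, a rough sketch of that flow in Python. The helper names come from this PR (their module paths and signatures are assumptions here), the imported functions exist in ocs-ci today, and the intermediate waits are summarized as comments.

from ocs_ci.ocs.node import get_osd_running_nodes
from ocs_ci.ocs.resources.storage_cluster import verify_storage_cluster
from ocs_ci.utility.utils import ceph_health_check


def multiple_device_classes_flow_sketch():
    osd_nodes = get_osd_running_nodes()                      # step 1
    lvs_obj = create_new_lvs_for_new_deviceclass(osd_nodes)  # step 2 (PR helper, assumed signature)
    # step 3: wait until the PVs backing the new LocalVolumeSet are Available
    # step 4: add the new device set (with its deviceClass) to the StorageCluster spec
    verify_storage_cluster()                                 # step 5
    pool = create_ceph_block_pool_for_deviceclass(           # steps 6-7 (PR helper, assumed signature)
        device_class="new-device-class"
    )
    create_deviceclass_storageclass(pool_name=pool.name)     # step 8 (PR helper, assumed signature)
    verification_steps_after_adding_new_deviceclass()        # step 9 (PR helper)
    ceph_health_check()                                      # step 10
    # step 11: create a PVC and a pod on the new storage class and run basic IO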
To do in the following PR: