
[azure arm] delete VM OS disk if it is a managed disk #1957

Merged
merged 9 commits into from Oct 28, 2023
Conversation

@vicwicker
Contributor

vicwicker commented Sep 23, 2023

Description

VMs on Azure can be launched with managed or unmanaged disks.
Unmanaged disks are typically launched from specific VHDs.
Managed disks are managed by Azure and do not have an associated
VHD that we can remove (e.g., the Ubuntu image offered by Azure).

Deleting a VM in the Azure web console lets you choose whether to
delete the OS disk as well, regardless of whether it is managed or
unmanaged. The original code only allowed that for unmanaged disks,
via the 'ex_destroy_vhd' parameter, which is incomplete because VMs
can also be launched from Azure managed disks.
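To make the distinction concrete, here is a rough sketch (not part of this PR's diff) of how the OS disk kind can be read from the raw ARM VM resource. The helper name is made up for illustration, and it assumes the driver has the raw VM document available (e.g., under node.extra in the azure_arm driver):

```python
# Sketch only: field names follow the Azure Compute REST API VM resource;
# os_disk_kind is a hypothetical helper, not an existing libcloud API.
def os_disk_kind(vm_resource):
    os_disk = vm_resource["properties"]["storageProfile"]["osDisk"]
    if os_disk.get("managedDisk"):
        # Managed disk: referenced by resource id, no VHD blob to delete.
        return "managed", os_disk["managedDisk"].get("id")
    if os_disk.get("vhd"):
        # Unmanaged disk: backed by a page blob identified by its URI.
        return "unmanaged", os_disk["vhd"].get("uri")
    return "unknown", None
```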

This change introduces a new parameter 'ex_destroy_os_disk' to
delete the VM OS disk if it is a managed disk. Ideally, this
parameter should default to True as (i) that is Azure's default
option and (ii) it would align with 'ex_destroy_vhd'. However, this
might pose backward compatibility issues if libcloud starts deleting
managed OS disks by default as well. I am not sure what the libcloud
maintainers' take is in this regard, but this would be my proposal.
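For reference, a minimal usage sketch, under the assumption that the new keyword lands on destroy_node next to the existing ex_destroy_vhd flag (credential values are placeholders):

```python
from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

cls = get_driver(Provider.AZURE_ARM)
driver = cls(
    tenant_id="<tenant id>",
    subscription_id="<subscription id>",
    key="<application id>",
    secret="<client secret>",
)

node = driver.list_nodes()[0]

# ex_destroy_vhd covers unmanaged (VHD-backed) OS disks, as before;
# ex_destroy_os_disk (proposed here) would also remove a managed OS disk.
driver.destroy_node(node, ex_destroy_vhd=True, ex_destroy_os_disk=True)
```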

Status

  • done, ready for review

Checklist (tick everything that applies)

  • Code linting (required, can be done after the PR checks)
  • Documentation
  • Tests
  • ICLA (required for bigger changes)

@codecov-commenter

codecov-commenter commented Sep 29, 2023

Codecov Report

Attention: Patch coverage is 96.15385% with 1 line in your changes missing coverage. Please review.

Project coverage is 83.19%. Comparing base (900e43e) to head (be8a977).
Report is 209 commits behind head on trunk.

Files Patch % Lines
libcloud/compute/drivers/azure_arm.py 88.89% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##            trunk    #1957   +/-   ##
=======================================
  Coverage   83.19%   83.19%           
=======================================
  Files         353      353           
  Lines       81387    81412   +25     
  Branches     8586     8591    +5     
=======================================
+ Hits        67707    67729   +22     
- Misses      10874    10875    +1     
- Partials     2806     2808    +2     
Files Coverage Δ
libcloud/test/compute/test_azure_arm.py 98.97% <100.00%> (+0.04%) ⬆️
libcloud/compute/drivers/azure_arm.py 60.14% <88.89%> (+0.32%) ⬆️

... and 1 file with indirect coverage changes

@vicwicker vicwicker requested a review from Kami October 3, 2023 10:29
@Kami
Member

Kami left a comment

LGTM.

Please just add an entry in the upgrade notes document and then I will go ahead and merge it into trunk.

@Kami
Member

Kami commented Oct 7, 2023

@vicwicker It looks like some lint checks (black) failed. Can you please also fix that when you get a chance?

I think running tox -eisort,black should do the trick.

Thanks.

@vicwicker
Contributor Author

Sorry for the delay. It seems I forgot to run black after addressing the first comments. I have now done so and also added the release notes. I created a new section 3.9.0, but I am not sure if that's what you were asking for. I have also run one last test against my use case. Thanks a lot for your help :)

@vicwicker
Contributor Author

@Kami It seems the validation is still failing, but I am not sure if this is directly related to the change. AFAIU, the error seems to be a 404 while uploading to Codecov. Please let me know if I am reading this wrong and/or my PR requires some fixing. Thanks!

@Kami
Member

Kami commented Oct 28, 2023

@vicwicker Sorry for the delay. I believe the issue is related to secrets not being passed to builds from forked repos for security reasons.

I think it should pass once I merge it into trunk.

asfgit pushed a commit that referenced this pull request Oct 28, 2023
@asfgit asfgit merged commit b5cad74 into apache:trunk Oct 28, 2023
18 checks passed
@Kami
Member

Kami commented Oct 28, 2023

PR has been merged into trunk. Thanks again for your contribution and patience :)

@Kami Kami added this to the v3.9.0 milestone Oct 28, 2023
asfgit pushed a commit that referenced this pull request Dec 3, 2023