
Consolidate Openstack refresh workers #16465

Merged

Conversation

@mansam (Contributor) commented Nov 13, 2017

Works in concert with ManageIQ/manageiq-providers-openstack#154 to do all of the Openstack refreshing in one worker.
@Ladas

Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1486657
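
For context, a minimal sketch of the consolidation idea (hypothetical names; the real implementation lives in the linked provider PR): instead of each manager running its own RefreshWorker, the cloud manager's single worker refreshes its child managers as well.

    # Hypothetical sketch, not the actual ManageIQ code: one refresh
    # worker covers the cloud manager and all of its child managers
    # (network and storage) instead of one worker per manager.
    class ConsolidatedRefreshWorker
      def initialize(cloud_manager)
        @cloud_manager = cloud_manager
      end

      # The cloud manager itself plus whatever child_managers returns
      # (NetworkManager, CinderManager, SwiftManager, ...)
      def managers_to_refresh
        [@cloud_manager] + @cloud_manager.child_managers
      end

      def refresh
        managers_to_refresh.each { |manager| EmsRefresh.refresh(manager) }
      end
    end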

@@ -47,7 +47,6 @@
 "ManageIQ::Providers::Openstack::InfraManager::RefreshWorker" => %i(manageiq_default),
 "ManageIQ::Providers::Openstack::NetworkManager::EventCatcher" => %i(manageiq_default),
 "ManageIQ::Providers::Openstack::NetworkManager::MetricsCollectorWorker" => %i(manageiq_default),
-"ManageIQ::Providers::Openstack::NetworkManager::RefreshWorker" => %i(manageiq_default),
Contributor:

We should probably do the same for "ManageIQ::Providers::StorageManager::CinderManager::RefreshWorker"? Or do we support that as a separate manager now?

Contributor:

Hm, also for ManageIQ::Providers::StorageManager::SwiftManager::RefreshWorker, since all of those are returned as .child_managers, right?

Contributor (Author):

They probably should be run in the same worker, since as far as I know the only use case we support right now is as part of an Openstack provider, but I know that Cinder and Swift have been kept separate for some reason.

Contributor:

Right, so right now, in the OpenStack PR, you will be doing all the work of .child_managers in that one worker, so the other workers need to be removed.

We will need to revisit that once we have separate Swift (SwiftStack) and Cinder (there was something like CinderStack in progress, about a year ago?) managers.
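
To illustrate the relationship being discussed, a hypothetical sketch (not the actual model code): the OpenStack CloudManager owns the network and storage managers, so one worker iterating .child_managers reaches all of them.

    # Hypothetical illustration of the child_managers relationship:
    # the OpenStack CloudManager owns its network and storage managers,
    # so a single worker iterating child_managers covers all of them.
    class CloudManager
      attr_accessor :network_manager, :cinder_manager, :swift_manager

      def child_managers
        [network_manager, cinder_manager, swift_manager].compact
      end
    end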

Contributor (Author):

Cool.

Contributor (Author):

As far as I know there's no way to add Swift or Cinder as standalone providers, and I'm not aware of any provider other than the Openstack cloud provider that uses them in ManageIQ. @Ladas might want to confirm that, though.

Member:

If we're going to tie these to Openstack, I wonder if we should just move them to the manageiq-providers-openstack repo? @Fryguy?

Contributor:

@agrare I am not sure why those are not under OpenStack tbh.

Member:

Okay. If we end up supporting these as standalone, we can change the combined_managers to exclude the storage managers.
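
A sketch of the escape hatch described above (hypothetical; combined_managers comes from the comment, the filtering logic is illustrative): if Cinder and Swift ever become standalone providers, the storage managers could simply be filtered back out.

    # Hypothetical sketch: if Cinder/Swift become standalone providers,
    # the combined list could exclude storage managers so they run in
    # their own refresh workers again.
    def combined_managers(cloud_manager)
      ([cloud_manager] + cloud_manager.child_managers).reject do |manager|
        manager.kind_of?(ManageIQ::Providers::StorageManager)
      end
    end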

@miq-bot (Member) commented Nov 14, 2017

Checked commits mansam/manageiq@beaf2cd~...a986e84 with ruby 2.3.3, rubocop 0.47.1, haml-lint 0.20.0, and yamllint 1.10.0
1 file checked, 0 offenses detected
Everything looks fine. 🍪

@agrare (Member) commented Dec 8, 2017

Kicking the tests

@agrare agrare closed this Dec 8, 2017
@agrare agrare reopened this Dec 8, 2017
@agrare agrare merged commit 8c9200d into ManageIQ:master Dec 8, 2017
@agrare agrare added this to the Sprint 75 Ending Dec 11, 2017 milestone Dec 8, 2017
@agrare (Member) commented Dec 8, 2017

@mansam I'm going to mark this gaprindashvili/no to be on the safe side; if there is something specific that this fixes, please correct the labels.

@Ladas (Contributor) commented Dec 8, 2017

@agrare I think we want this as gaprindashvili/yes and a blocker BZ; the side effect of this is avoiding duplication with targeted refresh (same use case as AWS).

@mansam can you create the blocker BZ and run it through Loic? The problem here is that both the CloudManager targeted refresh and the NetworkManager full refresh will create network models (so 2 workers). This can lead to duplication, since we do not have the DB unique indexes in place.
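
For illustration, the missing safeguard mentioned here could eventually look like a unique index on the natural key (a hypothetical migration, not part of this PR; table and column names assumed from ManageIQ conventions):

    # Hypothetical Rails migration, not part of this PR: a unique index
    # on (ems_id, ems_ref) would make a second insert of the same
    # network row fail instead of silently duplicating it.
    class AddUniqueIndexToCloudNetworks < ActiveRecord::Migration[5.0]
      def change
        add_index :cloud_networks, [:ems_id, :ems_ref], :unique => true
      end
    end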

@mansam (Contributor, Author) commented Dec 8, 2017

@agrare @Ladas Even beyond the duplication issue, it fixes https://bugzilla.redhat.com/show_bug.cgi?id=1486657

@agrare (Member) commented Dec 8, 2017

Okay, cool. Can you add that to the main PR description, @mansam?

simaishi pushed a commit that referenced this pull request Dec 11, 2017
@simaishi (Contributor):

Gaprindashvili backport details:

$ git log -1
commit dd404b1e1d80bf8d4e54a2ae16e5fd01c000f7e0
Author: Adam Grare <[email protected]>
Date:   Fri Dec 8 09:32:28 2017 -0500

    Merge pull request #16465 from mansam/consolidate-openstack-refresh-workers
    
    Consolidate Openstack refresh workers
    (cherry picked from commit 8c9200dcd80682c073bf7a0f8a5eae06f673c20f)
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1524590
