
fix error when updating/creating lb in openstack #6431

Merged: 2 commits merged from the useifexist branch into kubernetes:master on Feb 19, 2019

Conversation

zetaab (Member) commented Feb 1, 2019:

before:

% ~/go/bin/kops update cluster --name sre.k8s.local
I0201 15:19:36.050171   35045 apply_cluster.go:558] Gossip DNS: skipping DNS validation
I0201 15:19:36.127396   35045 executor.go:103] Tasks: 0 done / 89 total; 41 can run
I0201 15:19:37.793851   35045 executor.go:103] Tasks: 41 done / 89 total; 21 can run
I0201 15:19:38.834867   35045 executor.go:103] Tasks: 62 done / 89 total; 12 can run
I0201 15:19:39.658100   35045 executor.go:103] Tasks: 74 done / 89 total; 6 can run
I0201 15:19:40.227216   35045 executor.go:103] Tasks: 80 done / 89 total; 1 can run
I0201 15:19:40.607063   35045 executor.go:103] Tasks: 81 done / 89 total; 4 can run
I0201 15:19:45.773743   35045 context.go:231] hit maximum retries 4 with error GetFloatingIP: fetching floating IP failed: Resource not found
W0201 15:19:45.774159   35045 executor.go:130] error running task "Keypair/master" (9m54s remaining to succeed): error finding address for *openstacktasks.FloatingIP {"Name":"fip-api.sre.k8s.local","ID":"29883b66-733b-4c0a-8ce1-9e1ae13cc2f8","Server":null,"LB":{"ID":"2753cab4-4d02-4422-849e-0c1139cd4e6e","Name":"api.sre.k8s.local","Subnet":"zone-1.sre.k8s.local","VipSubnet":null,"Lifecycle":"Sync","PortID":"5d1d3d2a-9fd6-4d0e-81b2-5610b7e3d9d7"},"Lifecycle":"Sync"}: GetFloatingIP: fetching floating IP failed: Resource not found
I0201 15:19:45.774206   35045 executor.go:103] Tasks: 84 done / 89 total; 5 can run
I0201 15:19:47.352172   35045 executor.go:103] Tasks: 89 done / 89 total; 0 can run

after:

% ~/go/bin/kops update cluster --name sre.k8s.local
I0201 15:21:13.892666   35247 apply_cluster.go:558] Gossip DNS: skipping DNS validation
I0201 15:21:13.959864   35247 executor.go:103] Tasks: 0 done / 89 total; 41 can run
I0201 15:21:15.910621   35247 executor.go:103] Tasks: 41 done / 89 total; 21 can run
I0201 15:21:17.205824   35247 executor.go:103] Tasks: 62 done / 89 total; 12 can run
I0201 15:21:18.090179   35247 executor.go:103] Tasks: 74 done / 89 total; 6 can run
I0201 15:21:18.921335   35247 executor.go:103] Tasks: 80 done / 89 total; 1 can run
I0201 15:21:19.200064   35247 executor.go:103] Tasks: 81 done / 89 total; 4 can run
I0201 15:21:19.695067   35247 executor.go:103] Tasks: 85 done / 89 total; 4 can run
I0201 15:21:20.882301   35247 executor.go:103] Tasks: 89 done / 89 total; 0 can run
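The difference between the two runs is that the floating-IP lookup no longer turns "Resource not found" into a retried task failure. The real change is in the PR diff; the snippet below is only a rough sketch of that error-handling pattern, assuming gophercloud's Neutron floating-IP API and a hypothetical findFloatingIP helper (names are illustrative, not kops' actual code):

```go
// Minimal sketch only (an assumption, not the actual PR diff): treat a 404 from
// Neutron as "floating IP does not exist yet" so the caller can create it,
// instead of surfacing an error and exhausting the executor's retry budget.
package openstack

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips"
)

// findFloatingIP is a hypothetical helper; kops' real openstacktasks code differs.
// It returns (nil, nil) when the floating IP is simply missing, which lets the
// caller fall through to the "create" path.
func findFloatingIP(client *gophercloud.ServiceClient, id string) (*floatingips.FloatingIP, error) {
	fip, err := floatingips.Get(client, id).Extract()
	if err != nil {
		if _, ok := err.(gophercloud.ErrDefault404); ok {
			// Resource not found: nothing to reuse yet, but not a failure either.
			return nil, nil
		}
		return nil, fmt.Errorf("fetching floating IP failed: %v", err)
	}
	return fip, nil
}
```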

/sig openstack

@drekle could you test this? I do not currently have the possibility to recreate lbaas, so I was thinking of tests along these lines (roughly the commands sketched below): 1) create a new kops cluster (is lbaas working correctly?), 2) update something in the cluster (e.g. scale and run update --yes), 3) delete the cluster. I have only executed the commands without --yes.
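For reference, that test sequence would look roughly like the commands below (cluster name, zone, and instance group name are only examples; adjust them and add whatever flags your environment needs):

```
# 1) create a new cluster and check that the lbaas resources come up
kops create cluster --name sre.k8s.local --zones zone-1
kops update cluster --name sre.k8s.local --yes

# 2) change something, e.g. scale the "nodes" instance group, then apply it
kops edit ig nodes --name sre.k8s.local
kops update cluster --name sre.k8s.local --yes

# 3) tear the cluster down again
kops delete cluster --name sre.k8s.local --yes
```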

@k8s-ci-robot k8s-ci-robot added area/provider/openstack Issues or PRs related to openstack provider cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Feb 1, 2019
chrisz100 (Contributor) commented:

Except for the added comment, this looks good to me. Since it's just a nit:
/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 2, 2019
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 3, 2019
drekle (Contributor) commented Feb 3, 2019:

I'll be able to test in the next 24 hours. Looks good however.

zetaab (Member, Author) commented Feb 16, 2019:

/test pull-kops-e2e-kubernetes-aws

zetaab (Member, Author) commented Feb 18, 2019:

@dims could you approve if ok?

dims (Member) commented Feb 18, 2019:

/approve

k8s-ci-robot (Contributor) commented:
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dims, zetaab

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 18, 2019
zetaab (Member, Author) commented Feb 19, 2019:

/lgtm

k8s-ci-robot (Contributor) commented:
@zetaab: you cannot LGTM your own PR.

In response to this:

/lgtm

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

zetaab (Member, Author) commented Feb 19, 2019:

@drekle can you review and add lgtm if OK?

drekle approved these changes Feb 19, 2019
@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 19, 2019
drekle (Contributor) commented Feb 19, 2019:

/lgtm

@k8s-ci-robot k8s-ci-robot merged commit 487ac63 into kubernetes:master Feb 19, 2019
@zetaab zetaab deleted the useifexist branch September 21, 2019 07:31