INCIDENT: Prow currently down #2460
This is likely happening now because of the switch of the OBC to smaug.
This would cause issues in PR checks and in finding logs.
Another instance of this happening despite the OBC being moved back to smaug: https://prow.operate-first.cloud/view/s3/ci-prow/prow-logs/pr-logs/pull/operate-first_apps/2385/kubeval-validation/1567159471027785728 (corresponding PR: #2385)
Yesterday I tested connectivity to the bucket and found no issues. I tested listing, uploading, and downloading:
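For reference, a minimal sketch of such a check using boto3; the endpoint, credentials, and bucket name below are placeholders for illustration, not the real values from the Prow secret:

```python
# Connectivity smoke test against the CI bucket: list, upload, download, clean up.
# All values below are placeholders; substitute the endpoint, keys, and bucket
# configured for the opf-ci-prow OBC.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://<ionos-s3-endpoint>",  # assumed placeholder
    aws_access_key_id="<ACCESS_KEY>",
    aws_secret_access_key="<SECRET_KEY>",
)
bucket = "<bucket-name>"

# Listing: confirm read access to object metadata.
for obj in s3.list_objects_v2(Bucket=bucket, MaxKeys=5).get("Contents", []):
    print(obj["Key"])

# Uploading and downloading: round-trip a small object, then clean up.
s3.put_object(Bucket=bucket, Key="connectivity-check.txt", Body=b"ok")
print(s3.get_object(Bucket=bucket, Key="connectivity-check.txt")["Body"].read())
s3.delete_object(Bucket=bucket, Key="connectivity-check.txt")
```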
Is everything in Prow reconfigured to the new IONOS storage? Maybe some controllers need to be restarted to pick up the new config?
Each job attaches to the credentials via secrets.
I read through the docs on IONOS storage, but I couldn't find information on configuring two of the five properties available in that secret, namely whether to set "s3_force_path_style" and whether the connection is secure or insecure. Maybe that could be related?
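As a sanity check on those properties, here is a sketch that loads the credentials file a job would mount and reports which fields are present; the mount path and field names are assumptions based on a Prow-style S3 credentials JSON, not values confirmed in this thread:

```python
# Inspect the mounted S3 credentials JSON and report which fields are set.
# The path and field names below are assumptions for illustration.
import json

CREDS_PATH = "/etc/s3-credentials/service-account.json"  # assumed mount path

with open(CREDS_PATH) as f:
    creds = json.load(f)

# "s3_force_path_style" toggles path-style addressing; "insecure" toggles
# plain HTTP vs. TLS. Both need to match what the IONOS endpoint expects.
for key in ("endpoint", "region", "access_key", "secret_key",
            "s3_force_path_style", "insecure"):
    print(f"{key}: {'set' if key in creds else 'MISSING'}")
```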
We surely want a secure connection. A question I have (disclaimer: I don't know/have context about the IONOS usage, so this might be off-topic) is about the endpoint. It probably does not cause fatal issues, though, but it might be better to use a "local" endpoint.
Well, we currently have storage issues on smaug (the slow OBC connection to opf-ci-prow times out, producing frequent flakes) and on infra, as we are still trying to hook it up with storage from the NESE folks. It is a bucket, so I want to try to make it work first rather than provision a whole new one, redo all the storage routing changes, and go through this whole debugging jig again.
Unfortunately IONOS doesn't have a US S3 location |
Prow jobs seem to be running successfully at the moment.
@Gregory-Pereira, can we close this incident?
All Prow jobs are currently failing with a 503 internal server error, and a hex dump is piped to the logs (see the log below).
prow.log
It just spits out a fat hex dump... More info to come.
/assign
/cc @harshad16