BigQuery: 'test_undelete_table' fails with 500 #9633
I've filed internal issue 144114317 and am escalating to BigQuery on-call.
I'm able to reproduce when I run the test locally, but not when I try to perform the same actions with the BQ CLI or even by manually pasting similar code into IPython.

```python
import time

from google.cloud import bigquery

client = bigquery.Client()

dataset_id = "swast-scratch.test_dataset_b144114317"
table_id = "swast-scratch.test_dataset_b144114317.b144114317"

client.delete_dataset(dataset_id, delete_contents=True, not_found_ok=True)
dataset = client.create_dataset(dataset_id)

SCHEMA = [
    bigquery.SchemaField("full_name", "STRING"),
    bigquery.SchemaField("age", "INTEGER"),
]
table = bigquery.Table(table_id, schema=SCHEMA)
table = client.create_table(table)

# TODO(developer): Choose an appropriate snapshot point as epoch
# milliseconds. For this example, we choose the current time as we're about
# to delete the table immediately afterwards.
snapshot_epoch = int(time.time() * 1000)

# Due to the very short lifecycle of the table, ensure we're not picking a
# time prior to the table creation due to time drift between backend and
# client.
created_epoch = int(table.created.timestamp() * 1000)
if created_epoch > snapshot_epoch:
    snapshot_epoch = created_epoch

# "Accidentally" delete the table.
client.delete_table(table_id)  # API request

# Construct the restore-from table ID using a snapshot decorator.
snapshot_table_id = "{}@{}".format(table_id, snapshot_epoch)

# Choose a new table ID for the recovered table data.
recovered_table_id = "{}_recovered".format(table_id)

# Construct and run a copy job.
job = client.copy_table(
    snapshot_table_id,
    recovered_table_id,
    # Location must match that of the source and destination tables.
    location="US",
)  # API request

job.result()  # Waits for job to complete.

print(
    "Copied data from deleted table {} to {}".format(table_id, recovered_table_id)
)
```
Oops. Not a backend issue. The difference between my manual code and the test is that the test is trying to use microsecond precision for the snapshot decorator, but only milliseconds are supported (https://cloud.google.com/bigquery/table-decorators#snapshot_decorators). I'll need to update this code sample. I don't know why this just started happening, but it's not a backend issue.
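For reference, a minimal sketch of the suspected precision mismatch; the table ID and variable names here are hypothetical, not taken from the test:

```python
import time

table_id = "my-project.my_dataset.my_table"  # hypothetical table ID

# Snapshot decorators accept epoch *milliseconds*:
snapshot_ms = int(time.time() * 1000)
ok_decorator = "{}@{}".format(table_id, snapshot_ms)

# Passing epoch *microseconds* yields a timestamp outside the supported
# range, which is what surfaced here as a 500 from the copy job:
snapshot_us = int(time.time() * 1000000)
bad_decorator = "{}@{}".format(table_id, snapshot_us)
```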
It has failed for every run today. See internal fusion link.
I don't see any reported outages for BigQuery at https://status.cloud.google.com/.