
sql: cluster unusable while foreign keys validated #32118

Closed

damienhollis opened this issue Nov 2, 2018 · 27 comments
Labels: A-schema-changes · C-bug (Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior.) · S-2-temp-unavailability (Temp crashes or other availability problems. Can be worked around or resolved by restarting.)

Comments

@damienhollis (Contributor)

Describe the problem


We exported our database and reimported it into a different cluster. The import worked up to the point where it attempted to validate the foreign key constraints. The validation ran for several hours and then our console died (unrelated to this issue). The database is not that big, but some tables have about 300k rows. In the end it turned out the data had been loaded, but the foreign key validation caused the import to take an extremely long time and made the cluster unusable.

To Reproduce

Export databases and import into a new cluster.

Expected behavior

The foreign key validation would finish in a reasonable time. Perhaps the validation is not really necessary at all, given that we assume the exported data was valid, and even if it wasn't, we would still need it imported into the other database.

Additional data / screenshots

Environment:

  • CockroachDB version 2.1.0
  • Server OS: Container Optimized OS. Cockroach is running within a container on kubernetes.
  • Client app: cockroach sql

Additional context

The import took an extremely long time and the cluster was unusable.

@tim-o (Contributor) commented Nov 2, 2018

Hi @damienhollis - can you provide details about the DDL you're attempting to import or the file itself? Feel free to email me at [email protected], assuming you don't want to post it on a public forum.

tim-o self-assigned this Nov 2, 2018
@damienhollis (Contributor, Author)

Provided the DDL via email.

@tim-o (Contributor) commented Nov 8, 2018

Thanks @damienhollis - I saw your update and responded with a folder to store the import. Looking forward to assisting further.

@roncrdb commented Nov 10, 2018

Wanted to add my findings after doing some tests on my end as well.

When dumping data into a cluster on a Kubernetes-hosted GKE instance, I was able to reproduce the issue described by the user.

QPS started out at about 35 and quickly degraded to as low as 6.

Once the dump reached the ALTER TABLE statements, the cluster became unusable.
Running SHOW JOBS does not return any running jobs.

Running SHOW TABLES hangs and never returns the tables.

However, I can still get results from queries such as SELECT * FROM <TABLE> LIMIT 1;

It appears that VALIDATE CONSTRAINT is taking a very long time, over an hour in this case:

ALTER TABLE node_node_rel VALIDATE CONSTRAINT fkg3xjjybm6owwjsu2bhv3f25bp

Also, running locally, after the 18th ALTER TABLE statement the SQL client hangs and outputs the following error:

Time: 171.991015ms
SIGTRAP: trace trap
PC=0x405e8af m=0 sigcode=1
goroutine 0 [idle]:
invalid spdelta runtime.sigtramp 0x405e870 0x405e8af 0xea66a -1
runtime: unexpected return pc for runtime.sigtramp called from 0x500
stack: frame={sp:0xc420009a88, fp:0xc420009a8f} stack=[0x7ffeefb80410,0x7ffeefbff890)
runtime.sigtramp(0xc420009ee000, 0xc420009f4800, 0x1e00, 0xc420009f4800, 0xc420009ab000, 0x0, 0x0, 0x0, 0x0, 0x300, ...)
    ?:0 +0x3f
goroutine 1 [IO wait, 10 minutes]:
internal/poll.runtime_pollWait(0x7c83f00, 0x72, 0xc420b132c0)
    /usr/local/go/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc420466618, 0x72, 0xffffffffffffff00, 0x61262e0, 0x6d6f2b0)
    /usr/local/go/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc420466618, 0xc4206c4000, 0x1000, 0x1000)
    /usr/local/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Read(0xc420466600, 0xc4206c4000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
    /usr/local/go/src/internal/poll/fd_unix.go:157 +0x1dc
net.(*netFD).Read(0xc420466600, 0xc4206c4000, 0x1000, 0x1000, 0xc420b13408, 0x408df6d, 0x408d9ef)
    /usr/local/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc42000e1e0, 0xc4206c4000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
    /usr/local/go/src/net/net.go:176 +0x6a
bufio.(*Reader).Read(0xc42058bf80, 0xc4204f7360, 0x5, 0x200, 0x40a50a3, 0xc420466600, 0xc4204f7360)
    /usr/local/go/src/bufio/bufio.go:216 +0x238
io.ReadAtLeast(0x611fe60, 0xc42058bf80, 0xc4204f7360, 0x5, 0x200, 0x5, 0x4013dc9, 0xc420344160, 0x20)
    /usr/local/go/src/io/io.go:309 +0x86
io.ReadFull(0x611fe60, 0xc42058bf80, 0xc4204f7360, 0x5, 0x200, 0x0, 0xc42049c000, 0xc420466600)
    /usr/local/go/src/io/io.go:327 +0x58
github.com/cockroachdb/cockroach/vendor/github.com/lib/pq.(*conn).recvMessage(0xc4204f7340, 0xc420344160, 0x5aa3640, 0x4fe8a01, 0x7355b60)
    /go/src/github.com/cockroachdb/cockroach/vendor/github.com/lib/pq/conn.go:947 +0xfe
github.com/cockroachdb/cockroach/vendor/github.com/lib/pq.(*conn).recv1Buf(0xc4204f7340, 0xc420344160, 0x0)
    /go/src/github.com/cockroachdb/cockroach/vendor/github.com/lib/pq/conn.go:997 +0x39
github.com/cockroachdb/cockroach/vendor/github.com/lib/pq.(*conn).recv1(0xc4204f7340, 0xc420b13690, 0x4b)
    /go/src/github.com/cockroachdb/cockroach/vendor/github.com/lib/pq/conn.go:1018 +0x7c
github.com/cockroachdb/cockroach/vendor/github.com/lib/pq.(*conn).simpleQuery(0xc4204f7340, 0xc420394280, 0x4a, 0x0, 0x0, 0x0)
    /go/src/github.com/cockroachdb/cockroach/vendor/github.com/lib/pq/conn.go:651 +0x1bf
github.com/cockroachdb/cockroach/vendor/github.com/lib/pq.(*conn).query(0xc4204f7340, 0xc420394280, 0x4a, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
    /go/src/github.com/cockroachdb/cockroach/vendor/github.com/lib/pq/conn.go:841 +0x3ce
github.com/cockroachdb/cockroach/vendor/github.com/lib/pq.(*conn).Query(0xc4204f7340, 0xc420394280, 0x4a, 0x0, 0x0, 0x0, 0xc420b13970, 0x5400f98, 0xc420778180, 0xc420b138a0)
    /go/src/github.com/cockroachdb/cockroach/vendor/github.com/lib/pq/conn.go:826 +0x64
github.com/cockroachdb/cockroach/pkg/cli.(*sqlConn).Query(0xc420466400, 0xc420394280, 0x4a, 0x0, 0x0, 0x0, 0x57afafa13f, 0xc420b138c0, 0x45ac2b6)
    /go/src/github.com/cockroachdb/cockroach/pkg/cli/sql_util.go:331 +0xbd
github.com/cockroachdb/cockroach/pkg/cli.makeQuery.func1(0xc420466400, 0xed37803ae, 0x0, 0x5c26000)
    /go/src/github.com/cockroachdb/cockroach/pkg/cli/sql_util.go:614 +0x16c
github.com/cockroachdb/cockroach/pkg/cli.runQueryAndFormatResults(0xc420466400, 0x61242c0, 0xc42000e018, 0xc420449740, 0x0, 0x0)
    /go/src/github.com/cockroachdb/cockroach/pkg/cli/sql_util.go:705 +0x81
github.com/cockroachdb/cockroach/pkg/cli.(*cliState).doRunStatement(0xc420703110, 0x2, 0x4)
    /go/src/github.com/cockroachdb/cockroach/pkg/cli/sql.go:994 +0xae
github.com/cockroachdb/cockroach/pkg/cli.runInteractive(0xc420466400, 0x0, 0x0)
    /go/src/github.com/cockroachdb/cockroach/pkg/cli/sql.go:1121 +0x192
github.com/cockroachdb/cockroach/pkg/cli.runTerm(0x72d7fe0, 0xc4207759b0, 0x0, 0x3, 0x0, 0x0)
    /go/src/github.com/cockroachdb/cockroach/pkg/cli/sql.go:1183 +0x159
github.com/cockroachdb/cockroach/pkg/cli.MaybeDecorateGRPCError.func1(0x72d7fe0, 0xc4207759b0, 0x0, 0x3, 0x0, 0x0)
    /go/src/github.com/cockroachdb/cockroach/pkg/cli/error.go:40 +0x5a
github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).execute(0x72d7fe0, 0xc420775950, 0x3, 0x3, 0x72d7fe0, 0xc420775950)
    /go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:698 +0x46d
github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x72da840, 0x6, 0x0, 0xc420855ee8)
    /go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:783 +0x2e4
github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).Execute(0x72da840, 0x18, 0xc420855f00)
    /go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:736 +0x2b
github.com/cockroachdb/cockroach/pkg/cli.Run(0xc42003a150, 0x4, 0x4, 0xc4200420a8, 0xc420744000)
    /go/src/github.com/cockroachdb/cockroach/pkg/cli/cli.go:156 +0x6d
github.com/cockroachdb/cockroach/pkg/cli.Main()
    /go/src/github.com/cockroachdb/cockroach/pkg/cli/cli.go:51 +0x15d
main.main()
    /go/src/github.com/cockroachdb/cockroach/main.go:27 +0x20
goroutine 6 [syscall, 17 minutes]:
os/signal.signal_recv(0x0)
    /usr/local/go/src/runtime/sigqueue.go:139 +0xa7
os/signal.loop()
    /usr/local/go/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.0
    /usr/local/go/src/os/signal/signal_unix.go:28 +0x41
goroutine 11 [runnable]:
github.com/cockroachdb/cockroach/pkg/util/log.flushDaemon()
    /go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:1161 +0xf1
created by github.com/cockroachdb/cockroach/pkg/util/log.init.0
    /go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:592 +0x110
goroutine 12 [chan receive, 17 minutes]:
github.com/cockroachdb/cockroach/pkg/util/log.signalFlusher()
    /go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:601 +0x105
created by github.com/cockroachdb/cockroach/pkg/util/log.init.0
    /go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:593 +0x128
goroutine 25 [select, 17 minutes, locked to thread]:
runtime.gopark(0x5f37a90, 0x0, 0x5e047f4, 0x6, 0x18, 0x1)
    /usr/local/go/src/runtime/proc.go:291 +0x11a
runtime.selectgo(0xc420497f50, 0xc4200b62a0)
    /usr/local/go/src/runtime/select.go:392 +0xe50
runtime.ensureSigM.func1()
    /usr/local/go/src/runtime/signal_unix.go:549 +0x1c6
runtime.goexit()
    /usr/local/go/src/runtime/asm_amd64.s:2361 +0x1
rax    0x16
rbx    0x7315700
rcx    0x405e8ad
rdx    0x7356220
rdi    0xc420009f48
rsi    0x1e
rbp    0xc420009ab0
rsp    0xc420009a88
r8     0xc420009f48
r9     0x76536c7da15d1b89
r10    0x1095472656
r11    0x213
r12    0x0
r13    0xc42005d7a0
r14    0x69
r15    0x100
rip    0x405e8af
rflags 0x213
cs     0x2b
fs     0x0
gs     0x0

In my local debug zip, I was also seeing liveness errors like these:
W181109 22:29:04.137701 25 sql/jobs/registry.go:286 [n3] unable to get node liveness: node not in the liveness table

@tim-o (Contributor) commented Nov 10, 2018

@vivekmenezes - can you take a look? The issue here is with ALTER TABLE ... VALIDATE CONSTRAINT, so I think this falls under schema changes? If not, let me know who should take a look. We have the SQL files and DDL on a private store; let @roncrdb know if you need a link.

tim-o added the C-bug and A-schema-changes labels Nov 10, 2018
tim-o assigned vivekmenezes and roncrdb and unassigned tim-o Nov 10, 2018
@tbg (Member) commented Nov 10, 2018

That "trace trap" thing looks like the bug we fixed in #31520. But you were almost certainly running v2.1 which wouldn't have this bug. @benesch WDYT?

@damienhollis (Contributor, Author)

@tschottdorf we are using v2.1. The problem occurred on both a single-node cluster and a 3-node cluster.

@benesch (Contributor) commented Nov 11, 2018 via email

@damienhollis (Contributor, Author)

@benesch we have had this issue in two environments, and both had 2.1 clients. Also, I'm pretty sure we had to restart the CockroachDB cluster to make it usable again, not just connect a new client.

@benesch (Contributor) commented Nov 11, 2018 via email

@damienhollis (Contributor, Author)

@benesch no worries, just thought I'd give more info. A couple of other things: the first instance of this was against a Kubernetes cluster; we left it running overnight, but the client had crashed by the morning. The second instance was on my development machine, and it never finished validating the foreign keys because I killed it after it had been running all night. My laptop is set not to sleep when plugged in, so I'm pretty sure it ran for over 8 hours.

knz changed the title from "Cluster unusable while foreign keys validated" to "sql: cluster unusable while foreign keys validated" Nov 12, 2018
knz added the S-2-temp-unavailability label Nov 12, 2018
@roncrdb commented Nov 12, 2018

@tkschmidt

You were correct; I updated to 2.1 locally, so there is no more trace error, but it still hangs on that same ALTER TABLE statement.

@benesch it was local, not Kubernetes.

I ran SHOW QUERIES; and it showed that this query was still executing under the app name internal-validate-fk:

SELECT s.parent_node_id FROM defaultdb.public.node_node_rel@idx_node_node_parent_node_id AS s LEFT JOIN defaultdb.public.node@primary AS t ON ((s.parent_node_id = t.id) OR ((s.parent_node_id IS NULL) AND (t.id IS NULL))) WHERE ((s.parent_node_id IS NOT NULL) AND (t.id IS NULL)) LIMIT 1;

I then ran EXPLAIN (OPT) on that query:

explain(opt) SELECT s.parent_node_id FROM defaultdb.public.node_node_rel@idx_node_node_parent_node_id AS s LEFT JOIN defaultdb.public.node@primary AS t ON ((s.parent_node_id = t.id) OR ((s.parent_node_id IS NULL) AND (t.id IS NULL))) WHERE ((s.parent_node_id IS NOT NULL) AND (t.id IS NULL)) LIMIT 1;
                                                    text
+----------------------------------------------------------------------------------------------------------+
  project
   ├── columns: parent_node_id:10(uuid!null)
   ├── cardinality: [0 - 1]
   ├── stats: [rows=1]
   ├── cost: 8846.68667
   ├── key: ()
   ├── fd: ()-->(10)
   ├── prune: (10)
   └── limit
        ├── columns: parent_node_id:10(uuid!null) node.id:11(uuid)
        ├── cardinality: [0 - 1]
        ├── stats: [rows=1]
        ├── cost: 8846.67667
        ├── key: ()
        ├── fd: ()-->(10,11)
        ├── select
        │    ├── columns: parent_node_id:10(uuid!null) node.id:11(uuid)
        │    ├── stats: [rows=333.333333, distinct(11)=1]
        │    ├── cost: 8846.66667
        │    ├── fd: ()-->(11)
        │    ├── left-join
        │    │    ├── columns: parent_node_id:10(uuid!null) node.id:11(uuid)
        │    │    ├── stats: [rows=333333.333, distinct(11)=1000]
        │    │    ├── cost: 5513.33333
        │    │    ├── scan node_node_rel@idx_node_node_parent_node_id
        │    │    │    ├── columns: parent_node_id:10(uuid!null)
        │    │    │    ├── flags: force-index=idx_node_node_parent_node_id
        │    │    │    ├── stats: [rows=1000]
        │    │    │    ├── cost: 1030
        │    │    │    └── prune: (10)
        │    │    ├── scan node
        │    │    │    ├── columns: node.id:11(uuid!null)
        │    │    │    ├── flags: force-index=primary
        │    │    │    ├── stats: [rows=1000, distinct(11)=1000]
        │    │    │    ├── cost: 1120
        │    │    │    ├── key: (11)
        │    │    │    └── prune: (11)
        │    │    └── filters [type=bool, outer=(10,11)]
        │    │         └── or [type=bool, outer=(10,11)]
        │    │              ├── eq [type=bool, outer=(10,11)]
        │    │              │    ├── variable: parent_node_id [type=uuid, outer=(10)]
        │    │              │    └── variable: node.id [type=uuid, outer=(11)]
        │    │              └── and [type=bool, outer=(10,11)]
        │    │                   ├── is [type=bool, outer=(10)]
        │    │                   │    ├── variable: parent_node_id [type=uuid, outer=(10)]
        │    │                   │    └── null [type=unknown]
        │    │                   └── is [type=bool, outer=(11), constraints=(/11: [/NULL - /NULL]; tight)]
        │    │                        ├── variable: node.id [type=uuid, outer=(11)]
        │    │                        └── null [type=unknown]
        │    └── filters [type=bool, outer=(11), constraints=(/11: [/NULL - /NULL]; tight), fd=()-->(11)]
        │         └── is [type=bool, outer=(11), constraints=(/11: [/NULL - /NULL]; tight)]
        │              ├── variable: node.id [type=uuid, outer=(11)]
        │              └── null [type=unknown]
        └── const: 1 [type=int]
(54 rows)

Let me know if you need anything else.

@tim-o (Contributor) commented Nov 13, 2018

Zendesk ticket #2801 has been linked to this issue.

vivekmenezes assigned dt and unassigned vivekmenezes Nov 14, 2018
@vivekmenezes (Contributor)

@dt I hope you can follow up on this one.

@dt (Member) commented Nov 14, 2018

Doesn't look like the VALIDATE code is doing anything unexpected here -- it runs a SELECT statement and then, based on the (lack of) results, flips a bit in the descriptor. If the SELECT statement is making the cluster unstable, any similar user-run SELECT would do the same thing, so if that is happening, it isn't something the schema-change or foreign-key code can do much to fix, but rather a SQL execution issue?
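
For reference, the validation query captured in the SHOW QUERIES output above can be run directly as an ordinary statement (table and index names taken from that output), which is a quick way to check whether plain SQL execution shows the same slowness:

SELECT s.parent_node_id
FROM defaultdb.public.node_node_rel@idx_node_node_parent_node_id AS s
LEFT JOIN defaultdb.public.node@primary AS t
  ON (s.parent_node_id = t.id) OR ((s.parent_node_id IS NULL) AND (t.id IS NULL))
WHERE (s.parent_node_id IS NOT NULL) AND (t.id IS NULL)
LIMIT 1;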

@brucemcpherson commented Nov 20, 2018

adding a "me too" to this issue.
I'm running v2.1.0 on kubernetes, and rebuilding a 400 table database with a workflow separated into a create table stage (constraints omitted), an insert stage to populate the tables, then an alter stage ( to add, then validate constraints). The first 10 or so tables, which are small, pass through ok, but then it sits forever on a table with about 10000 rows in a validate state. I dropped the database and started again, and it hung at exactly the same place.
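
A minimal sketch of that pattern, with hypothetical table and constraint names rather than the real 400-table schema:

-- create stage: tables without foreign keys; the referencing column gets an index
-- up front, since older versions like v2.1 expect one for the foreign key
CREATE TABLE parent (id UUID PRIMARY KEY);
CREATE TABLE child (id UUID PRIMARY KEY, parent_id UUID);
CREATE INDEX idx_child_parent_id ON child (parent_id);

-- insert stage: bulk INSERTs to populate both tables
INSERT INTO parent (id) VALUES (gen_random_uuid());
-- ... many more rows ...

-- alter stage: add the constraint, then validate it; the VALIDATE is the step that hangs
ALTER TABLE child ADD CONSTRAINT fk_child_parent FOREIGN KEY (parent_id) REFERENCES parent (id);
ALTER TABLE child VALIDATE CONSTRAINT fk_child_parent;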

While the validate is executing, the table being altered is inaccessible (you can't, for example, run a count on it from another session), but the other tables, including the one associated by the foreign key, are accessible normally. You also can't run SHOW TABLES while in this state.

I haven't provided debug logs here, as they would be similar to what was already provided by @roncrdb.

  • An update: I upgraded to v2.1.1 and the issue is still there. In fact, now most ALTERs just hang.

vivekmenezes assigned vivekmenezes and unassigned dt Nov 20, 2018
vivekmenezes added a commit to vivekmenezes/cockroach that referenced this issue Dec 4, 2018
This is present because of the call to the InternalExecutor
which has a limitation that while it can reuse a user transaction
it cannot reuse a TableCollection associated with a transaction.
Therefore if a user runs a schema change before a VALIDATE
in the same transaction the transaction can get deadlocked on:
the transaction having an outstanding intent on the table, and
the InternalExecutor triggering a table lease acquisition on the
table.

Stop using the InternalExecutor in VALIDATE CONSTRAINT.

Added the missing call to rows.Close() in validateCheckExpr()

related to cockroachdb#32118

Release note (sql change): Fix deadlock when using
ALTER TABLE VALIDATE CONSTRAINT in a transaction with a schema change.
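
For illustration only (hypothetical table, column, and constraint names, not taken from the DDL in this issue), the deadlock pattern the commit message describes is roughly:

BEGIN;
-- the schema change leaves an outstanding intent on the table descriptor...
ALTER TABLE child ADD COLUMN note STRING;
-- ...and the InternalExecutor behind VALIDATE CONSTRAINT then triggers a lease
-- acquisition on the same table, so the transaction deadlocks against itself
ALTER TABLE child VALIDATE CONSTRAINT fk_child_parent;
COMMIT;
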
vivekmenezes added a commit to vivekmenezes/cockroach that referenced this issue Dec 5, 2018
vivekmenezes added a commit to vivekmenezes/cockroach that referenced this issue Dec 5, 2018
@vivekmenezes (Contributor)

This problem has not been fixed. The FK validation happens in three steps: 1. read the schema, 2. run the validation, 3. write the validated schema. CockroachDB requires that this all be done at the same timestamp.

There is some contention on the schema, because all nodes refresh the schema every 5 minutes. Any of these reads on the schema will necessarily push the above schema change, requiring it to be retried.

I've come to the conclusion that it's best that we run validation outside of this schema change. The validation command itself should schedule a schema change that runs steps 1, 2, and 3 as separate transactions.

@awoods187 (Contributor)

We pushed this out of 19.1 because it requires a refactor.

@thoszhang (Contributor)

I'm going to close this issue. As of 19.2, we've fixed the problem of using a bad join for the validation query, and we now validate FKs by default when they're created, which makes VALIDATE CONSTRAINT the non-default option, so most of the impact here is already mitigated. The one remaining potential improvement is #37712, and it's easier to track that as a separate issue.
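
For context, the difference shows up in a dump-style script (hypothetical constraint name, tables as in this thread): as of 19.2, adding the foreign key validates the existing rows as part of the ALTER itself, so the separate VALIDATE CONSTRAINT step is no longer needed in the common case:

ALTER TABLE node_node_rel
    ADD CONSTRAINT fk_node_node_rel_parent FOREIGN KEY (parent_node_id) REFERENCES node (id);
-- a separate VALIDATE CONSTRAINT is now only the non-default path, for constraints
-- that were created without validation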
