
planner: check schema stale for plan cache when forUpdateRead #22381

Merged
16 commits merged into pingcap:master on Mar 19, 2021

Conversation

@you06 (Contributor) commented Jan 13, 2021

What problem does this PR solve?

Issue Number: #21498

Problem Summary: the bug in #21498 also affects the plan cache.

What is changed and how it works?

What's Changed: when a plan is executed with forUpdateRead, the latest schema must be used to check whether the cached plan is still valid.

How it Works:

Update the schema version used by the plan-cache check when required.
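For illustration only, a minimal sketch of the idea follows, assuming TiDB's domain, infoschema, and sessionctx packages; the helper pickSchemaVersionForPlanCache is hypothetical and not the code in this PR:

package plannerutil // hypothetical package, for illustration only

import (
	"github.com/pingcap/tidb/domain"
	"github.com/pingcap/tidb/infoschema"
	"github.com/pingcap/tidb/sessionctx"
)

// pickSchemaVersionForPlanCache chooses the schema version used to validate a
// cached plan. For a pessimistic for-update read, the latest schema must be
// consulted, because the schema captured when the transaction started may
// already be stale.
func pickSchemaVersionForPlanCache(sctx sessionctx.Context, forUpdateRead bool) int64 {
	if forUpdateRead {
		// Latest schema owned by the domain: a DDL that committed after the
		// transaction began is visible here.
		return domain.GetDomain(sctx).InfoSchema().SchemaMetaVersion()
	}
	// Otherwise keep the schema the statement was planned with (assumes an
	// active transaction, so TxnCtx.InfoSchema is populated).
	is := sctx.GetSessionVars().TxnCtx.InfoSchema.(infoschema.InfoSchema)
	return is.SchemaMetaVersion()
}

In the PR itself this decision is wired into the existing plan-cache path rather than a standalone helper.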

Related changes

  • Needs to be cherry-picked to the release-4.0 branch

Check List

Tests

  • Unit test

Release note

  • Fix a bug that the plan cache may use a stale schema version.

you06 added 4 commits January 12, 2021 19:24
@you06 you06 requested review from a team as code owners January 13, 2021 11:26
@you06 you06 requested review from XuHuaiyu and removed request for a team January 13, 2021 11:26
@github-actions github-actions bot added sig/execution SIG execution sig/sql-infra SIG: SQL Infra labels Jan 13, 2021
@you06 you06 added the type/bugfix This PR fixes a bug. label Jan 14, 2021
@winoros (Member) left a comment

lgtm

@ti-srebot ti-srebot added the status/LGT1 Indicates that a PR has LGTM 1. label Jan 14, 2021
@you06 you06 changed the title planner: check schema stale for forUpdateRead planner: check schema stale for plan cache when forUpdateRead Jan 14, 2021
@lysu (Contributor) commented Jan 14, 2021

LGTM

@ti-srebot ti-srebot removed the status/LGT1 Indicates that a PR has LGTM 1. label Jan 14, 2021
@ti-srebot previously approved these changes Jan 14, 2021
@ti-srebot ti-srebot added the status/LGT2 Indicates that a PR has LGTM 2. label Jan 14, 2021
@you06 (Contributor, Author) commented Jan 15, 2021

Do not merge this PR now.

c.Assert(err, IsNil)
tk2.CheckExecResult(1, 0)
// FIXME: should hit plan cache here
tk2.MustQuery("select @@last_plan_from_cache").Check(testkit.Rows("0"))
@you06 (Contributor, Author) left a comment

There is still a problem: the plan cache will not be hit here.

@github-actions github-actions bot added the sig/planner SIG: Planner label Jan 15, 2021
delete(sessVars.IsolationReadEngines, kv.TiFlash)
cacheKey = NewPSTMTPlanCacheKey(sctx.GetSessionVars(), e.ExecID, prepared.SchemaVersion)
sessVars.IsolationReadEngines[kv.TiFlash] = struct{}{}
}
@you06 (Contributor, Author) left a comment

When the binary protocol is used and the plan cache is not hit, a new cache entry is generated inside the getPhysicalPlan function, which uses the following code to exclude the TiFlash engine for write statements.

tidb/planner/optimize.go, lines 85 to 88 in 2364fec:

delete(sessVars.IsolationReadEngines, kv.TiFlash)
defer func() {
sessVars.IsolationReadEngines[kv.TiFlash] = struct{}{}
}()

After the plan is built, the deferred call adds the TiFlash engine back, and the plan-cache key is then written with TiFlash included. As a result, we cache a write plan under a key that contains the TiFlash engine, and that entry can never be used correctly.
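To make the ordering concrete, here is a rough sketch that reuses the identifiers from the snippets above (sessVars, kv.TiFlash, NewPSTMTPlanCacheKey); it illustrates the intent, not the literal patch:

// For a write statement the plan is built with TiFlash excluded, so the
// plan-cache key must also be computed while TiFlash is still excluded;
// otherwise the stored key and the looked-up key disagree and the cached
// entry is never hit.
delete(sessVars.IsolationReadEngines, kv.TiFlash)
cacheKey = NewPSTMTPlanCacheKey(sctx.GetSessionVars(), e.ExecID, prepared.SchemaVersion)
// Restore TiFlash only after the key has been built.
sessVars.IsolationReadEngines[kv.TiFlash] = struct{}{}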

A contributor left a comment

😮 @winoros PTAL

@you06 (Contributor, Author) commented Jan 21, 2021

The plan-cache-miss problem is solved and the cause is explained above. @winoros, @lysu PTAL again.

@you06 (Contributor, Author) commented Jan 30, 2021

I think this bug is serious. @winoros, can you help review it? Thanks!

@ti-chi-bot ti-chi-bot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Mar 18, 2021
@cfzjywxk (Contributor) commented

@winoros @XuHuaiyu @eurekaka, PTAL; help is needed.

@cfzjywxk cfzjywxk added the priority/release-blocker This issue blocks a release. Please solve it ASAP. label Mar 18, 2021
@ti-chi-bot ti-chi-bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 18, 2021
@cfzjywxk (Contributor) commented

@you06 please rebase and resolve conflicts.

@ti-chi-bot ti-chi-bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 18, 2021
@winoros (Member) commented Mar 19, 2021

/merge

@ti-chi-bot (Member) commented

This pull request has been accepted and is ready to merge.

Commit hash: 5702b0e

@ti-chi-bot ti-chi-bot added the status/can-merge Indicates a PR has been approved by a committer. label Mar 19, 2021
@ti-chi-bot ti-chi-bot merged commit 92b1b8e into pingcap:master Mar 19, 2021
ti-srebot pushed a commit to ti-srebot/tidb that referenced this pull request Mar 19, 2021
@ti-srebot (Contributor) commented

cherry pick to release-5.0 in PR #23435

Labels: needs-cherry-pick-release-5.0, priority/release-blocker, sig/execution, sig/planner, sig/sql-infra, size/L, status/can-merge, status/LGT2, type/bugfix