Enable Continuous auto kill #14831
Conversation
server/src/test/java/org/apache/druid/server/coordinator/duty/KillUnusedSegmentsTest.java
Personally, I think #12599 is a must have before enabling continuous auto kill. Otherwise one tiny mistake in your loadRule can result in data being permanently deleted from s3 right away.
I have not had a chance to look through the tests yet, but this is some early feedback.
docs/configuration/index.md
Outdated
@@ -853,7 +853,7 @@ These Coordinator static configurations can be defined in the `coordinator/runti
|`druid.coordinator.load.timeout`|The timeout duration for when the Coordinator assigns a segment to a Historical process.|PT15M|
|`druid.coordinator.kill.pendingSegments.on`|Boolean flag for whether or not the Coordinator clean up old entries in the `pendingSegments` table of metadata store. If set to true, Coordinator will check the created time of most recently complete task. If it doesn't exist, it finds the created time of the earliest running/pending/waiting tasks. Once the created time is found, then for all dataSources not in the `killPendingSegmentsSkipList` (see [Dynamic configuration](#dynamic-configuration)), Coordinator will ask the Overlord to clean up the entries 1 day or more older than the found created time in the `pendingSegments` table. This will be done periodically based on `druid.coordinator.period.indexingPeriod` specified.|true|
|`druid.coordinator.kill.on`|Boolean flag for whether or not the Coordinator should submit kill task for unused segments, that is, permanently delete them from metadata store and deep storage. If set to true, then for all whitelisted dataSources (or optionally all), Coordinator will submit tasks periodically based on `period` specified. A whitelist can be set via dynamic configuration `killDataSourceWhitelist` described later.<br /><br />When `druid.coordinator.kill.on` is true, segments are eligible for permanent deletion once their data intervals are older than `druid.coordinator.kill.durationToRetain` relative to the current time. If a segment's data interval is older than this threshold at the time it is marked unused, it is eligible for permanent deletion immediately after being marked unused.|false|
-|`druid.coordinator.kill.period`|How often to send kill tasks to the indexing service. Value must be greater than `druid.coordinator.period.indexingPeriod`. Only applies if kill is turned on.|P1D (1 Day)|
+|`druid.coordinator.kill.period`|How often to send kill tasks to the indexing service. Value must be greater than `druid.coordinator.period.indexingPeriod` if set. Only applies if kill is turned on.|NULL (indexing period is used)|
This change should be release noted. Can you update the PR description to make sure it gets picked up in the release notes please.
thanks done.
server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinator.java
Outdated
return new KillUnusedSegments(segmentsMetadataManager, overlordClient, config);
} else {
  if (killUnusedSegmentsDutyFromCustomGroups.size() > 1) {
    log.warn(
Can you throw an exception here instead. This is a misconfiguration, and it would be better to fail startup so the operator can fix the config issue.
In the exception message can you add the names of the custom groups where the duty is configured to make it easy for the operator to fix the config issue.
done.
server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinatorConfig.java
Outdated
server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinatorConfig.java
Outdated
/**
 * Used to keep track of the last interval end time that was killed for each
 * datasource.
 */
private final Map<String, DateTime> datasourceToLastKillIntervalEnd;
I'm not sure if this approach is the best way. If the leader changes, the in-memory map is essentially wiped out when a new leader comes online, so it is possible to re-schedule an interval that is already in the process of being killed. So 2 questions on this
- What is the impact if the same interval is scheduled to be killed twice? Can we test / validate that nothing bad happens? If the task just waits on a lock and then succeeds as there is no data to delete, I think that is reasonable.
- Is it possible to get the list of intervals that are being killed from the kill task payloads? Would that put a lot of stress on the master nodes if we polled the task payloads?
Testing revealed that 1 is what happens, so I thought it was not the end of the world that we still have the potential of submitting duplicate tasks.
@@ -144,6 +144,12 @@ Optional<Iterable<DataSegment>> iterateAllUsedNonOvershadowedSegmentsForDatasour
 */
List<Interval> getUnusedSegmentIntervals(String dataSource, DateTime maxEndTime, int limit);
Is there a reason not to get rid of this function from the interface? It looks like there are no callers other than the KillUnusedSegmentsDuty
done
> Personally, I think #12599 is a must have before enabling continuous auto kill. Otherwise one tiny mistake in your loadRule can result in data being permanently deleted from s3 right away.
I think this patch is actually just providing the ability to schedule kill tasks continuously. By default kill tasks run every indexing period, which is every 30 minutes. I agree #12599 is good to have, but I do not think it needs to block this change. Hopefully, both will be merged soon enough that the order of merging these changes won't matter too much.
As discussed with @zachjsh, I am not sure I see the appeal of making `KillUnusedSegments` a custom duty. The only motivation seems to be the ability to have very low periods for scheduling kill tasks.
My primary argument against the current approach is the concept of having the kill duty be continuous while also needing to specify a period for it. If we want it to be truly continuous, users should only be required to enable/disable kill and not have to specify any period for it.
As a compromise, we could just tie kill tasks to the `indexingPeriod` (typically 30 to 60 min) for the time being, as that is frequent enough for the purposes of cleaning up unused segments. In the future, if we want even more frequent kill tasks, we could simply tie it to the coordinator period itself (typically 1 min), which might be a little too aggressive, but it's a thought.
cc: @suneet-s
server/src/test/java/org/apache/druid/server/coordinator/TestDruidCoordinatorConfig.java
Outdated
final long currentTimeMillis = System.currentTimeMillis();
- if (lastKillTime + period > currentTimeMillis) {
+ if (null != period && (lastKillTime + period > currentTimeMillis)) {
I think we have already validated that `period` is not null in the constructor.
removed
Preconditions.checkArgument(
-   this.period > config.getCoordinatorIndexingPeriod().getMillis(),
+   null == this.period || this.period > config.getCoordinatorIndexingPeriod().getMillis(),
Null handling and validation can be done in the `DruidCoordinatorConfig` itself.
Now checking against the dutyPeriod, which is not always the same as the indexing period; consequently, I think this check needs to be done here.
Co-authored-by: Suneet Saldanha <[email protected]> Co-authored-by: Kashif Faraz <[email protected]>
Reviewed changes to `docs/configuration/index.md`.
Docs LGTM
Thanks for the changes, @zachjsh ! The changes look good to me.
DateTime maxEndTime,
int limit,
DateTime maxUsedFlagLastUpdatedTime
);
DateTime maxUsedFlagLastUpdatedTime);
Nit: style
Suggested change:
- DateTime maxUsedFlagLastUpdatedTime);
+ DateTime maxUsedFlagLastUpdatedTime
+ );
Description
This change enables the `KillUnusedSegments` coordinator duty to be scheduled continuously. Things that prevented this, or made it difficult, before were the following:

1. If scheduled at a fast enough rate, the duty would find the same intervals to kill for the same datasources while kill tasks submitted for those same datasources and intervals were already underway, thus wasting task slots on duplicated work.
2. The task resources used by auto kill were previously unbounded. Each duty run period, if unused segments were found for any datasource, a kill task would be submitted to kill them.
This PR solves both of these issues:

1. The duty keeps track of the end time of the last interval found when killing unused segments for each datasource, in an in-memory map. The end time for each datasource, if present, is used as the lower bound on the start time when searching for unused intervals for that same datasource. On each duty run, we remove from this map any datasource keys that no longer match datasources in the system or in the whitelist, and we also remove a datasource's entry if no unused segments are found for it, which happens when we fail to find an interval that includes unused segments. Removing the datasource's entry from the map allows searching for unused segments in that datasource from the beginning of time once again (see the sketch after this list).
2. The unbounded task resource usage can be mitigated with the coordinator dynamic config added as part of ba957a9.
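For illustration, here is a minimal sketch of the per-datasource tracking described in point 1. The class, loop structure, and helper methods (`findIntervalOfUnusedSegments`, `submitKillTask`) are hypothetical stand-ins, not the actual `KillUnusedSegments` code; only the map-based bookkeeping is the point.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import org.joda.time.DateTime;
import org.joda.time.Interval;

// Sketch only: helper methods are hypothetical, not the real KillUnusedSegments API.
abstract class LastKillTrackingSketch
{
  // End time of the last killed interval per datasource; this in-memory state is lost on leader change.
  private final Map<String, DateTime> datasourceToLastKillIntervalEnd = new HashMap<>();

  void runOnce(Set<String> dataSourcesToKill, DateTime maxEndTime, int limit)
  {
    // Drop entries for datasources that no longer exist or are no longer whitelisted.
    datasourceToLastKillIntervalEnd.keySet().retainAll(dataSourcesToKill);

    for (String dataSource : dataSourcesToKill) {
      // Use the end of the last killed interval (if any) as the lower bound for the next search.
      DateTime lowerBound = datasourceToLastKillIntervalEnd.get(dataSource);
      Interval intervalToKill = findIntervalOfUnusedSegments(dataSource, lowerBound, maxEndTime, limit);

      if (intervalToKill == null) {
        // No unused segments found: remove the entry so the next run searches from the beginning of time again.
        datasourceToLastKillIntervalEnd.remove(dataSource);
        continue;
      }

      submitKillTask(dataSource, intervalToKill);
      datasourceToLastKillIntervalEnd.put(dataSource, intervalToKill.getEnd());
    }
  }

  // Hypothetical helpers standing in for the metadata query and Overlord task submission.
  abstract Interval findIntervalOfUnusedSegments(String dataSource, DateTime lowerBound, DateTime maxEndTime, int limit);

  abstract void submitKillTask(String dataSource, Interval interval);
}
```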
Operators can configure continuous auto kill by providing coordinator runtime properties similar to the following:
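For example, a configuration along these lines could be used; the values below are illustrative only, not recommendations:

```properties
# Enable automatic kill of unused segments.
druid.coordinator.kill.on=true

# Leave druid.coordinator.kill.period unset so kill runs every indexing period,
# or set it to a value greater than druid.coordinator.period.indexingPeriod.
druid.coordinator.period.indexingPeriod=PT1800S

# Segments become eligible for permanent deletion once their data interval is older than this.
druid.coordinator.kill.durationToRetain=P30D

# Upper bound on the number of unused segments deleted per kill task.
druid.coordinator.kill.maxSegments=100
```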
And providing sensible limits to the killTask usage via coordinator dynamic properties.
Release Note
- `druid.coordinator.kill.period` can now be as low as `druid.coordinator.period.indexingPeriod`. It was earlier required to be strictly greater than the `indexingPeriod`.
- The duty now keeps track of the last interval submitted in a `kill` task for a given datasource to avoid submitting duplicate kill tasks.

This PR has: