notifications from resolver CF updates #56
Comments
@jerch I feel the proposed structure would require a bit of work if we want to know which model instances have been changed. I was thinking of a structure more like:
i.e. 1st level structure is a
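A hypothetical sketch of such a per-instance shape, inferred from the reply below rather than quoted from the original comment (model labels, pks and CF names are made up; real keys would be model classes):

```python
# hypothetical shape: model -> changed pk -> names of CFs updated on that instance
changed = {
    'app.ModelA': {
        1: {'comp_x', 'comp_y'},   # CFs recomputed for pk 1
        2: {'comp_x'},
    },
    'app.ModelB': {},
}
```

This answers "which fields changed on which instance" directly, at the cost of one entry per changed pk.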
@mobiware I am afraid that a data structure shaped this way will bloat the memory by holding CF names in individual lists for every changed PK. In #58 I have implemented a first version, which simply aggregates as follows:

```
{
    model1: {
        fieldset1: set_of_pks_1,
        fieldset2: set_of_pks_2,
    },
    model2: {...}
}
```

While this data structure is quite efficient for aggregation and being passed along, it is still somewhat hard to extract certain bits of information.
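For example, answering "which CFs were updated for a given pk" means walking every fieldset entry of that model. A sketch, assuming the fieldset keys are frozensets of CF names and using string labels instead of model classes:

```python
# aggregated shape from #58 (as described above): model -> fieldset -> set of pks
data = {
    'app.ModelA': {
        frozenset({'comp_x', 'comp_y'}): {1, 2, 3},
        frozenset({'comp_x'}): {4},
    },
}

def cfs_updated_for_pk(data, model_label, pk):
    # no direct lookup by pk - every fieldset of the model has to be scanned
    found = set()
    for fieldset, pks in data.get(model_label, {}).items():
        if pk in pks:
            found |= set(fieldset)
    return found

print(cfs_updated_for_pk(data, 'app.ModelA', 2))   # {'comp_x', 'comp_y'}
```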
I also thought about a signal to register on a CF update directly, something like that:

```python
def handler(sender, pks, **kwargs):
    pass

post_field_update.connect(handler, sender=(model, 'compX'))
```

Thus it gets all pks readily aggregated from updates of that field (currently not implemented). But at that point I have no clue about the typical signal usage here. I see several levels of interesting data here.
Coming from the discussion in #49. We should establish some way of getting notified when CFs are getting updated by the resolver. There are several ways to achieve this, mainly custom signals and/or method hooks/overloads.
To get reliable behavior any follow-up code can work with, several things have to be considered before we can place a hook or trigger a custom signal:
general resolver layout
The resolver currently works as a DFS, resolving and updating CFs in the dependency graph on descent. There is no explicit backtracking / ascending work done atm. The logic is as follows (pseudocode):
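A rough sketch of that descent; `update_dependent` and `bulk_updater` are the names used later in this issue, everything else (helper names, signatures) is invented for illustration and does not mirror the real implementation:

```python
# Pseudocode sketch of the DFS descent - NOT the actual resolver code.
def update_dependent(changed_qs, model, fields):
    # which (dependent model, computed fields) combinations depend on
    # (model, fields), and under which filter constraint
    for dep_model, dep_fields, constraint in dependencies_of(model, fields):
        # the atomic unit: a queryset filtered by the dependency constraint,
        # to be updated for certain computed fields on dep_model
        qs = dep_model.objects.filter(constraint(changed_qs))
        bulk_updater(qs, dep_fields)

def bulk_updater(qs, dep_fields):
    for instance in qs:
        recompute(instance, dep_fields)       # recalculate CF values
    write_back(qs, dep_fields)                # persist the new values
    # descend: the freshly updated CFs may themselves be inputs of other CFs
    update_dependent(qs, qs.model, dep_fields)
```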
atomicity of resolver updates
As shown in the resolver layout above, the atomic update data type is a queryset filtered by dependency constraints, updating certain computed fields on that model. This could be an entry for a custom signal / method hook. But there are several issues in doing it at that level.
Imho hooking into DFS runs is a bad idea; it should not be the official way offered by the API. It still could be done by overloading `update_dependent` or `bulk_updater` yourself (if you know what you are doing).

From an outer perspective, resolver updates should be atomic for a full update tree, not its single nodes. Due to the nature of updates spread across several models, this is somewhat hard to achieve. (Directly linked to this is the question about concurrent access and whether the sync state of computed fields can be trusted. Also see #55.)
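For the "overload it yourself" route, the least invasive form is probably a thin wrapper around the public entry point rather than touching the DFS internals. A sketch, assuming `update_dependent` is importable from `computedfields.models` as documented; the callback and its arguments are made up:

```python
from computedfields.models import update_dependent

def update_dependent_notifying(changeset, *args, on_done=None, **kwargs):
    """Run a dependent update and fire a callback once the call returns.

    Granularity is the whole call - per-node data from deeper in the
    update tree is not visible from the outside.
    """
    update_dependent(changeset, *args, **kwargs)
    if on_done is not None:
        on_done(changeset)
```

Which CFs and pks actually got touched further down the tree is exactly the data that would have to be collected inside the resolver itself.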
To get atomicity for whole tree updates working with custom signals/method hooks, we basically could call into it at the end of the top level `update_dependent` invocation. The work data would have to be collected somehow (now with backtracking).

needed data in signal follow-up code
With a signal on whole tree update level, this question gets ugly, since the backtracking would have to carry the updated data along. My suggestion, to not waste too much memory: every node in the update tree simply places entries with pks into a container. The entries could look like the container sketched below.
The signal finally could get the container as an argument and work with it.
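What that could look like, with a container shaped like the aggregation described in the comment from #58 above; both the signal name `post_update_dependent` and the container layout are assumptions here, not an existing API:

```python
# Hypothetical sketch - neither the signal nor the container layout exists
# as an API at this point.
from django.dispatch import Signal, receiver

post_update_dependent = Signal()   # sent once per top level resolver run

@receiver(post_update_dependent)
def handle_resolver_run(sender, changed, **kwargs):
    # changed: model -> fieldset of CF names -> set of pks
    for model, fieldsets in changed.items():
        for fieldset, pks in fieldsets.items():
            print(f'{model}: {sorted(fieldset)} updated for pks {sorted(pks)}')

# inside the resolver, at the end of the top level update_dependent invocation:
# post_update_dependent.send(sender=None, changed=collected_container)
```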
where and how to declare the signal tracking
Some ideas regarding this:

- a flag on `@computed` like `resolver_signal=True` to indicate collecting its updates during resolver runs (see the sketch below)

If we run into memory issues for quite big updates (really huge pk lists), we might have to find a plan B on how to aggregate the updated data.
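Declared per field, that flag might read like this; `resolver_signal` is only the proposed knob from this issue, not an implemented argument of `@computed`:

```python
# Hypothetical declaration - resolver_signal=True is NOT an existing argument.
from django.db import models
from computedfields.models import ComputedFieldsModel, computed

class Item(ComputedFieldsModel):
    name = models.CharField(max_length=32)

    # opt this CF in to update collection during resolver runs (proposal only)
    @computed(models.CharField(max_length=32), resolver_signal=True)
    def upper_name(self):
        return self.name.upper()
```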
@mobiware, @olivierdalang Up for discussion.