[LDAP] improve processing nested groups #30613
Conversation
blizzz commented Jan 11, 2022
- split `walkNestedGroups` into two distinct methods, as the `$recordMode` determination was always wrong in subsequent runs. This also simplifies the code (a sketch follows below).
- add runtime caching to avoid duplicated LDAP requests. Later on, members are cached in Redis.
Signed-off-by: Arthur Schiwon <[email protected]>
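For illustration, a minimal sketch of the described split; the method names (`walkNestedGroupsReturnDNs`, `walkNestedGroupsReturnRecords`) and the runtime-cache property are assumptions, not necessarily what the PR uses:

```php
// Hypothetical sketch: two dedicated methods replace a single
// walkNestedGroups() whose $recordMode determination went wrong on
// subsequent runs. Names and the runtime cache property are assumptions.

/** @return string[] all member DNs, nested groups resolved recursively */
private function walkNestedGroupsReturnDNs(string $dn, array &$seen = []): array {
	if (isset($seen[$dn])) {
		return []; // break cycles in nested group memberships
	}
	$seen[$dn] = true;

	// runtime cache: avoid issuing the same LDAP request twice per run
	if (isset($this->runtimeGroupMembers[$dn])) {
		$members = $this->runtimeGroupMembers[$dn];
	} else {
		$members = $this->_groupMembers($dn); // direct members via LDAP
		$this->runtimeGroupMembers[$dn] = $members;
	}

	$allMembers = [];
	foreach ($members as $memberDN) {
		$allMembers[] = $memberDN;
		// recurse into members that are themselves groups
		$allMembers = array_merge(
			$allMembers,
			$this->walkNestedGroupsReturnDNs($memberDN, $seen)
		);
	}
	return array_values(array_unique($allMembers));
}

// walkNestedGroupsReturnRecords() would do the same but return full
// records instead of DNs, removing the need for a $recordMode flag.
```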
@CarlSchwan @come-nc there is something nagging me about the memory cache: `server/apps/user_ldap/lib/Group_LDAP.php`, lines 78 to 80 in 04bafe7.
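Assuming the referenced lines are the constructor's in-memory cache initialisations, they look roughly like this (a reconstruction, not a verbatim quote):

```php
// Reconstruction (assumption): the three capped in-memory caches in the
// Group_LDAP constructor; CappedMemoryCache caps at 512 entries by default.
$this->cachedGroupMembers = new CappedMemoryCache();
$this->cachedGroupsByMember = new CappedMemoryCache();
$this->cachedNestedGroups = new CappedMemoryCache();
```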
This defaults to 512 stored values, which is an arbitrary value in this context. My first urge was to remove the cap. Then I thought about increasing the size, although with calculated maximum values it would seem safer to decrease the capacity of the latter two caches! Since there are no known issues I am aware of, the assumptions are rather conservative.

For each entry of the first cache I am calculating with 64 bytes for the key (group ids), and the value being an array with 20 times up to 255 characters/bytes (DNs). I do not consider the array overhead. That is about 0.3 MB per entry. Accepting 256 MB in total, it would allow for 822 entries. The second cache is "misused" by the PR to store more information (i.e. it could be split), and it would also store DNs as keys and lists of DNs as values. Each entry might come to about 1.3 MB, allowing 206 entries. Same for the last cache. Please check whether my math is wrong :D

But given it is not, I am a bit unsure how to move on. Re-evaluating the assumptions may be one option. The max values are already not production ones (but the number of groups is an assumed average). With the assumption of the DN being 100 chars on average, 1342 entries could be kept already. And assuming the gid is also rather a third of that, the first cache could hold more than 6000 entries. Such calculations could be applied at runtime (perhaps running once weekly or via occ), as sketched below.
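A minimal sketch of such a sizing calculation, assuming a fixed memory budget and a measured (or assumed) average entry size; the function name and parameters are hypothetical, nothing here is part of the PR:

```php
<?php
// Hypothetical helper: derive a CappedMemoryCache capacity from a memory
// budget and an average entry size (key bytes plus serialized value bytes).
// Could run periodically (e.g. weekly) or be exposed via an occ command.
function calculateCacheCapacity(int $memoryBudgetBytes, int $avgEntryBytes): int {
	return max(1, intdiv($memoryBudgetBytes, $avgEntryBytes));
}

// With a 256 MB budget and the ~0.3 MB-per-entry estimate above, this
// yields 853 entries, the same ballpark as the 822 figure in the comment.
$capacity = calculateCacheCapacity(256 * 1024 * 1024, (int)(0.3 * 1024 * 1024));
```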
And I think, nevertheless, the newly used cache should be split (especially since two use cases come into question here), as well as the third cache 😒 I shall revise them, after about 7 years.
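For illustration, one hypothetical shape such a split could take; the property names and capacities are assumptions, not code from this PR:

```php
// Hypothetical sketch: one capped cache per use case instead of reusing
// $cachedGroupsByMember for two different key/value shapes.
/** @var CappedMemoryCache uid => group ids */
private $cachedGroupsByMember;
/** @var CappedMemoryCache group DN => list of member DNs */
private $cachedNestedGroupsByDN;

// in the constructor: a smaller cap for the DN-list cache, whose entries
// are large (~1.3 MB with the assumptions above, hence the 206 estimate)
$this->cachedGroupsByMember = new CappedMemoryCache();
$this->cachedNestedGroupsByDN = new CappedMemoryCache(206);
```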