Immutable user object #5212
Conversation
// user.addSecurityRoles(Arrays.asList("sr1", "sr2"));
// user.addAttributes(ImmutableMap.of("a", "v_a", "b", "v_b"));
// System.out.println(Base64JDKHelper.serializeObject(user));
String serialized =
Now that main targets JDK 21, we should start to look for opportunities in this repo to use Text Blocks. They would be particularly useful in some test classes where we define JSON payloads over multiple lines.
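For illustration, a minimal sketch of how such a multi-line JSON payload could look as a JDK text block; the field names and values are illustrative, not code from this PR:

// Hypothetical test snippet using a text block (available since JDK 15, so fine on JDK 21):
String userJson = """
    {
      "name": "test_user",
      "securityRoles": ["sr1", "sr2"],
      "attributes": {"a": "v_a", "b": "v_b"}
    }
    """;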
synchronized (user.getSecurityRoles()) {
    securityRoles.addAll(user.getSecurityRoles());
}
final Set<String> securityRoles = new HashSet<>(user.getSecurityRoles());
FYI: The synchronization here can be removed again, because user.getSecurityRoles() can no longer be concurrently modified, as observed in #4441.
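For context, a minimal sketch of the accessor shape this relies on; the constructor and field initialization shown here are assumptions about the immutable User, not the PR's actual code:

// Illustrative sketch only: an immutable User can hold its roles in an unmodifiable
// set created once at construction time, so callers never need to synchronize on it.
private final Set<String> securityRoles;

public ExampleUser(Collection<String> securityRoles) {
    this.securityRoles = Set.copyOf(securityRoles);   // unmodifiable defensive copy
}

public Set<String> getSecurityRoles() {
    return securityRoles;
}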
@@ -66,54 +62,7 @@ public UserInjector(Settings settings, ThreadPool threadPool, AuditLog auditLog,

}

public static class InjectedUser extends User {
FYI: The class InjectedUser is no longer necessary, as it was only used to transport the value transportAddress back to its direct caller, which would set it into the thread context. Afterwards, the additional fields in InjectedUser would not be used any more. Additionally, the InjectedUser was serialized to a normal User object. Thus, we have replaced InjectedUser by the Result class below, which is evaluated by the direct caller and then discarded.
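A minimal sketch of what such a result type could look like; apart from User and transportAddress, which are mentioned above, the names here are assumptions rather than the PR's actual code:

// Illustrative sketch only: a small carrier object that is evaluated by the direct
// caller (which puts the transport address into the thread context) and then discarded.
public static class Result {
    private final User user;
    private final TransportAddress transportAddress;

    public Result(User user, TransportAddress transportAddress) {
        this.user = user;
        this.transportAddress = transportAddress;
    }

    public User getUser() {
        return user;
    }

    public TransportAddress getTransportAddress() {
        return transportAddress;
    }
}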
@Serial
private static final ObjectStreamField[] serialPersistentFields = {
    new ObjectStreamField("name", String.class),
    new ObjectStreamField("roles", Collections.synchronizedSet(Collections.emptySet()).getClass()),
    new ObjectStreamField("securityRoles", Collections.synchronizedSet(Collections.emptySet()).getClass()),
    new ObjectStreamField("requestedTenant", String.class),
    new ObjectStreamField("attributes", Collections.synchronizedMap(Collections.emptyMap()).getClass()),
    new ObjectStreamField("isInjected", Boolean.TYPE)
};

/**
 * Creates a backwards compatible object that can be used for serialization
 */
@Serial
private void writeObject(ObjectOutputStream out) throws IOException {
    ObjectOutputStream.PutField fields = out.putFields();
    fields.put("name", name);
    fields.put("roles", Collections.synchronizedSet(new HashSet<>(this.roles)));
    fields.put("securityRoles", Collections.synchronizedSet(new HashSet<>(this.securityRoles)));
    fields.put("requestedTenant", requestedTenant);
    fields.put("attributes", Collections.synchronizedMap(new HashMap<>(this.attributes)));
    fields.put("isInjected", this.isInjected);

    out.writeFields();
}
This is not particularly nice, but it is necessary to keep backwards compatibility with nodes on older OpenSearch versions. To a certain extent, I am again inclined to use a different serialization format on clusters with only new nodes in order to get rid of these synchronizedSet() calls.
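Purely to illustrate that idea (nothing like this is in the PR): a hypothetical version gate that only falls back to the legacy layout while older nodes may still be in the cluster. LEGACY_USER_FORMAT_VERSION, writeLegacyUser() and writeCompactUser() are made-up names.

// Hypothetical sketch, not code from this PR: choose the wire format based on the
// oldest node version in the cluster instead of always writing the legacy layout.
void writeUser(User user, Version minNodeVersion, ObjectOutputStream out) throws IOException {
    if (minNodeVersion.before(LEGACY_USER_FORMAT_VERSION)) {
        // mixed cluster: keep the backwards compatible synchronizedSet()/synchronizedMap() layout
        writeLegacyUser(user, out);
    } else {
        // only new nodes: plain immutable collections, no legacy wrappers needed
        writeCompactUser(user, out);
    }
}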
Benchmark Results
We did some initial benchmarks on the new implementation; here are the results.

Disclaimer
The usual benchmarking disclaimer applies: of course, these figures can only represent very specific scenarios. Other scenarios can look quite different, as there are many variables that can differ. Yet, especially the "slow parts" of the benchmark results can give an impression of where real performance issues lie.

Test setup
K8s resources per node

Tested dimensions

Users
We compared four different users:

The number of internal attributes influences the size of the user object, both unserialized and serialized. Thus, it influences the resources needed for serialization and deserialization, and also the network traffic for inter-node communication. It would also have been possible to test with different numbers of roles; both variables would influence the serialization performance in similar ways. Users with 1000 attributes might be an extreme case. Still, users authenticated by Active Directory/LDAP in particular might get (sometimes unknown to administrators) quite a lot of attributes from the LDAP backend. The user authenticated by the admin certificate bypasses most parts of the security plugin code (although not the serialization code) and can thus be seen as an upper threshold for what can be achieved.

Operations
We tested four different operations:

Test results
A commented summary of the results follows in the upcoming sections. The raw test results can also be reviewed at https://docs.google.com/spreadsheets/d/1HeJ8HFOqF5S4FEDiLXZ4d8eniU9UCp_-FJ3XBrojVME/edit?usp=sharing

Indexing throughput

Bulk size 10
In this and the following charts, the dashed lines represent OpenSearch with the standard security plugin. The solid lines represent OpenSearch with the security plugin with the optimized privilege evaluation code. The blue line represents requests authenticated by the super admin certificate, which bypasses most of the security plugin. Thus, the blue line forms a kind of "hull curve"; it can be seen as a rough theoretical maximum from a security plugin point of view.
We see that users with 10 attributes experience a throughput improvement of about 6% with the new implementation. Users with 100 attributes get an 18% improvement. The extreme case where users have 1000 attributes sees a 46% improvement.
While the new implementation shows significant improvements, we still see a declining throughput with a growing number of user attributes. This is expected, as we only optimize the process of serialization/deserialization; the increased network load stays unchanged compared to the old implementation.

Bulk size 100
Bulk requests with more items can be processed more efficiently. Thus, as we are looking at docs/s throughput, the improvements are a bit smaller in this case. Users with 10 attributes experience a 2.5% improvement with the new implementation. Users with 100 attributes see an 8% improvement. For 1000 attributes, we get a 32% improvement.

Search throughput

Single index
In this benchmark, we compare the operation throughput for search requests performed on a single index. For users with 10 attributes, we only see a small throughput improvement of 1.2%. Users with 100 attributes get an improvement of 13%. Users with 1000 attributes get an improvement of 83%.

20 indices
This benchmark tests search operations that search through 20 indices at once (specified by an index pattern). Compared to the single index case, we see much more pronounced improvements. This is likely due to the increased inter-node communication required by searching several indices with several shards distributed over a cluster. Users with 10 attributes see a 21% improvement. Users with 100 attributes get a throughput that is 39% higher than on the old version. Users with 1000 attributes see an improvement of 384%.
Description
For now, this is only a draft PR. Its purpose is to demonstrate the necessary changes and to trigger a discussion on whether we can continue with this approach - possibly with certain precautions. It is not complete yet; especially the LDAP code still requires adaptations.
Fixes #5168
This PR makes the User object immutable. This brings a number of significant advantages:
An immutable object means that the binary data created by serialization does not change either. That means the serialized data can be computed once per user object and then re-used again and again. The existing authentication cache even allows us to re-use the serialized data across different requests by the same user.
Additionally, a cache similar to the cache maintained by BackendRegistry can be utilized to also cut down the number of de-serialization operations to a single one per user per node. Such a cache would map the serialized binary data to the actual user object (see the sketch below).
Immutable objects are inherently thread-safe. That makes any synchronization or locking mechanisms on the user object unnecessary.
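A minimal sketch of the caching idea described above; the cache fields, the method names, and the assumption that Base64JDKHelper.serializeObject returns a String are all illustrative, not the PR's implementation:

// Illustrative sketch only. Because User is immutable, its serialized form never
// changes, so it can be memoized per object; a second cache can map the serialized
// form back to a shared User instance, giving at most one deserialization per user
// per node. Requires java.util.concurrent.ConcurrentHashMap.
private final ConcurrentHashMap<User, String> serializedFormCache = new ConcurrentHashMap<>();
private final ConcurrentHashMap<String, User> deserializationCache = new ConcurrentHashMap<>();

String toWireFormat(User user) {
    // computed at most once per distinct immutable User object
    return serializedFormCache.computeIfAbsent(user, u -> Base64JDKHelper.serializeObject(u));
}

User fromWireFormat(String serialized) {
    // deserializeUser() is a placeholder for the actual deserialization routine
    return deserializationCache.computeIfAbsent(serialized, s -> deserializeUser(s));
}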
Issues Resolved
[List any issues this PR will resolve]
Is this a backport? If so, please add backport PR # and/or commits #, and remove the backport-failed label from the original PR.
Do these changes introduce new permission(s) to be displayed in the static dropdown on the front-end? If so, please open a draft PR in the security dashboards plugin and link the draft PR here.
Testing
[Please provide details of testing done: unit testing, integration testing and manual testing]
Check List
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.