Network optimizations #3
Open
Optimizing requests even more!
Along with caching requests, I'm caching responses as well.
How it works
I created a simple component to demonstrate how to load multiple users in an optimized way. The component generates a list of 30 elements with user ids from [0, 4]. For every id in the list, we call `userService.getUserWithId`, which returns the name of the user. To make it more interesting, we delay each method call randomly, anywhere between 0 and 10 seconds. The goal is to show that we only invoke the network request once per unique user id.
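A minimal sketch of what such a demo component's core could look like (the stub `userService` and the exact delay mechanics here are assumptions for illustration, not the PR's actual code):

```typescript
import { Observable, from, of } from 'rxjs';
import { delay, mergeMap } from 'rxjs/operators';

// Stub service standing in for the real one, which hits the network;
// the server is assumed to take ~2 seconds to answer.
const userService = {
  getUserWithId(id: number): Observable<string> {
    return of(`user-${id}`).pipe(delay(2_000));
  },
};

// 30 elements with user ids drawn from [0, 4].
const ids = Array.from({ length: 30 }, () => Math.floor(Math.random() * 5));

from(ids)
  .pipe(
    mergeMap(id =>
      // Delay each call randomly, anywhere between 0 and 10 seconds.
      of(id).pipe(
        delay(Math.random() * 10_000),
        mergeMap(i => userService.getUserWithId(i))
      )
    )
  )
  .subscribe(name => console.log(name));
```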
There are 2 scenarios:
Scenario 1.
Two requests with the same ID happen more than 2 seconds apart. At this point the first request has already completed and we have the response, so there is no need to trigger another request; we just return the cached value.
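In code, scenario 1 reduces to a lookup before hitting the network. This is a sketch under assumed names (`responseCache` and `fetchUserFromApi` are illustrative stand-ins, not the PR's actual identifiers):

```typescript
import { Observable, of } from 'rxjs';

// Illustrative response cache keyed by user id.
const responseCache = new Map<number, string>();

// Stand-in for the code path that actually hits the network.
declare function fetchUserFromApi(id: number): Observable<string>;

function getUser(id: number): Observable<string> {
  const cached = responseCache.get(id);
  // Scenario 1: an earlier request already completed, so we answer
  // straight from the cache instead of firing a new request.
  return cached !== undefined ? of(cached) : fetchUserFromApi(id);
}
```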
Scenario 2.
Two requests with the same ID happen less than 2 seconds apart. When the second request happens, we don't have a cached value yet because the server takes 2 seconds to return the name, which means the first request is still in flight. We don't want to trigger another, completely separate request; instead, we want to piggyback on the in-flight one and get its value when it finishes. That's why we cache the multicasted request as well: if the response isn't cached, we return the multicasted observable. This ensures that if someone already called the same route, we don't create another request but observe the existing one.
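One way to express both scenarios together is to multicast the in-flight request with RxJS's `shareReplay`. A sketch, again with assumed names (`inFlight`, `fetchUserFromApi`), not necessarily the exact multicasting operator the PR uses:

```typescript
import { Observable, of } from 'rxjs';
import { shareReplay, tap } from 'rxjs/operators';

const responseCache = new Map<number, string>();        // finished responses
const inFlight = new Map<number, Observable<string>>(); // multicasted requests

declare function fetchUserFromApi(id: number): Observable<string>;

function getUser(id: number): Observable<string> {
  const cached = responseCache.get(id);
  if (cached !== undefined) {
    return of(cached); // scenario 1: the response already arrived
  }

  let request$ = inFlight.get(id);
  if (!request$) {
    // Scenario 2: no response yet and nothing in flight, so we create
    // the request, multicast it, and remember it until it settles.
    request$ = fetchUserFromApi(id).pipe(
      tap(name => {
        responseCache.set(id, name);
        inFlight.delete(id);
      }),
      // shareReplay(1) multicasts the request: subscribers arriving
      // while it runs join the same network call, and later ones get
      // the replayed result instead of triggering a new request.
      shareReplay(1)
    );
    inFlight.set(id, request$);
  }
  return request$;
}
```

Removing the entry from `inFlight` once the response lands keeps the two caches consistent: subsequent callers hit the response cache directly.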
Finally, the caching strategy is not trivial. We use an LRU cache that fits up to 100 user objects and ensures entries are not too old (cached less than an hour ago).
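A compact sketch of such an LRU cache with a max-age check (capacity 100, entries older than an hour treated as misses; the class name and internals are assumptions, relying on the fact that a JavaScript `Map` iterates keys in insertion order):

```typescript
class LruCache<K, V> {
  private entries = new Map<K, { value: V; storedAt: number }>();

  constructor(
    private readonly maxSize = 100,
    private readonly maxAgeMs = 60 * 60 * 1000 // one hour
  ) {}

  get(key: K): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() - entry.storedAt > this.maxAgeMs) {
      // Entry is older than an hour: evict it and treat as a miss.
      this.entries.delete(key);
      return undefined;
    }
    // Re-insert so the Map's insertion order tracks recency of use.
    this.entries.delete(key);
    this.entries.set(key, entry);
    return entry.value;
  }

  set(key: K, value: V): void {
    this.entries.delete(key);
    this.entries.set(key, { value, storedAt: Date.now() });
    if (this.entries.size > this.maxSize) {
      // The first key in insertion order is the least recently used.
      const oldest = this.entries.keys().next().value as K;
      this.entries.delete(oldest);
    }
  }
}
```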