Blazor JS Interop Limits #37
Replies: 2 comments
-
If you saw the original post, I did remove a lot of text that is now unnecessary, since a PR removed the need for that talking point. But Blazor JS interop limits are still a very valid conversation to have.
-
Ahh... I hadn't noticed there was a size limit here at all. Perhaps I would prefer the simplest solution, to "Utilize Stream On Every Call". I believe it's acceptable compared with the network delay. But the methods you mentioned in "Dynamic Payload Send Back Protocol" also sound nice, since they can guarantee speed for both small and large payloads. Perhaps we can run some related tests to find the best solution.
-
@yueyinqiu
I removed a lot of what was initially written because this PR resolved several of the talking points:
#38
CallJsAsync
I'd like to consider refactoring around the JS communication payload size limitations. When Blazor WebAssembly, or Blazor Server via SignalR, uses "IJSRuntime.Invoke", there is a payload size limit.
The default maximum payload size in WASM and in Blazor Server's SignalR connection is 32 KB. You can raise the payload limit somewhat, but doing so also introduces major performance issues.
So, the following is a basic serialization test I made:
Each person I add to the JSON serialization is ~317 bytes. This is a very basic example serializing very short text, and it shows the payload is sufficient for simple, small use cases: in theory we could serialize 103 people and stay within the 32 KB limit, but add a 104th person to the list and we officially have issues.
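As a rough sketch of that math (the `person` shape here is hypothetical, not the exact object from my test), you can measure a serialized record's UTF-8 size and work out how many fit under the 32 KB limit:

```javascript
// Hypothetical sketch: measure one serialized record and estimate how
// many ~317-byte records fit under the default 32 KB payload limit.
const person = { id: 1, firstName: "Jane", lastName: "Doe", email: "jane@example.com" };

// UTF-8 byte length of one serialized record (shape is illustrative).
const measured = new TextEncoder().encode(JSON.stringify(person)).length;

const bytesPerPerson = 317;   // the ~317 bytes measured in my test
const limitBytes = 32 * 1024; // 32 KB default limit

const maxPeople = Math.floor(limitBytes / bytesPerPerson);
console.log(maxPeople); // 103 -> a 104th person overflows the limit
```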
The numbers would be much worse in production scenarios, especially if you utilize non-relational data-grab tactics similar to what I do in my own applications. Recognizing this issue, MagicIndexedDB under the current payload limits would not be sufficient for my use, as I have use cases where 2.5 MB worth of content must be shipped between my app and IndexedDB. My use case is a bit extreme, but it's the point that matters.
Now the obvious solution to this problem is to use the IJSRuntime invoke with stream capabilities. Then any time we communicate with JS we're streaming, and we can handle effectively infinite payload sizes since everything is sent in chunks.
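As a rough illustration of why chunking removes the size ceiling (this is not Blazor's actual streaming interop, which goes through `DotNetStreamReference`/`IJSStreamReference`; it's just a sketch of the principle):

```javascript
// Sketch of the chunking principle: any payload can be moved in
// fixed-size pieces that each fit under the 32 KB limit.
const CHUNK_SIZE = 32 * 1024;

function* chunks(bytes) {
  for (let offset = 0; offset < bytes.length; offset += CHUNK_SIZE) {
    yield bytes.subarray(offset, offset + CHUNK_SIZE);
  }
}

// Simulate my 2.5 MB IndexedDB payload, far beyond one 32 KB call.
const payload = new Uint8Array(2.5 * 1024 * 1024);
const received = [...chunks(payload)];

console.log(received.length); // 80 chunks, each within the limit
```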
But there's always a negative to such a trade. Streaming is amazing when the payload would have exceeded 32 KB, but payloads under 32 KB not only don't benefit, they also incur the stream setup cost: streaming requires an async pipeline, which adds latency to small requests. That added latency is likely 1-5 ms, though this would have to be verified, so we'd need to decide whether that's acceptable.
If 1 to 5 milliseconds of latency is acceptable, then the easiest solution by far is to make the CallJsAsync method in IndexDbManager use a stream.
The 1-5 ms figure is my assumption; I'm not actually sure, and it assumes WASM. Blazor Server over SignalR would certainly have significantly more latency. If I had to guess, streaming every call on Blazor Server would add roughly 34-60 ms per call, and that assumes the user is close to the server. The further a user is from where the Blazor Server app is hosted, the more latency is added, so 34-60 ms is generous. I got that estimate by testing the round-trip time between my PC and my Blazor Server app hosted roughly 190 miles away. Imagine the latency for users who aren't just a few hours from the hosted location.
Solutions
Utilize Stream On Every Call
Using a stream on every call is the simplest way to make CallJsAsync scale to larger payloads.
Dynamic Payload Send Back Protocol
Going from C# to JS, the only time you'd need to stream is when bulk adding or bulk updating items. More importantly, we can't assume or know from the C# side how much data is coming back.
So the system would need to work on both ends. First, recognize the serialized size of the payload, which isn't hard. If, going from C# to JS, we see that the serialized string will be larger than the payload capacity, we can simply start a stream and stream the results back over it, since the stream is already open. The JS side would need to be refactored so each call can be initiated with or without a stream.
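A minimal sketch of that size check on the JS side (`needsStream` and the constant are hypothetical names, not part of the current code):

```javascript
// Hypothetical sketch: decide per call whether the serialized payload
// exceeds the limit and therefore has to go over a stream.
const PAYLOAD_LIMIT = 32 * 1024; // default 32 KB interop limit

function needsStream(data) {
  const bytes = new TextEncoder().encode(JSON.stringify(data)).length;
  return bytes > PAYLOAD_LIMIT;
}

console.log(needsStream({ msg: "small" }));            // false
console.log(needsStream({ blob: "x".repeat(40000) })); // true
```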
But what if the serialized item is under the payload size on the C# side, yet in JS the returning payload is recognized to be too large?
I can think of two scenarios. Both involve returning a protocol response that informs C# the response does not contain the payload. Once that response comes back:
1.) Potentially have JS hold the results while waiting for the C# side to re-open the request with a stream. I can see limitations here, plus the downside that we must spend latency time telling C# we can't return the payload, then wait for the stream to initiate. That latency may be negligible, though Blazor Server could still be an issue.
2.) Likely the more difficult but fastest solution. When informing C# that the payload requires a stream, JS also provides an Id. Instead of waiting for C# to initiate a stream, JS calls a static invokable method and streams the payload with that Id to C#. The invoked C# method receives the payload and adds it to a concurrent bag. The original method that requested the payload, having received only the Id, then retrieves the payload by Id from this concurrent bag location/service that would be set up. I'd also likely need an event-based system to prevent looped latency delays while waiting for the Id to be added to the concurrent bag.
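A minimal sketch of the JS half of Option 2, under my assumptions (the names `prepareResponse` and `pendingPayloads` are hypothetical, and the actual streaming to a static `[JSInvokable]` C# method is stubbed out as a comment):

```javascript
// Hypothetical sketch of the Id-based send-back protocol (Option 2).
// Oversized responses are parked under an Id; only the Id goes back
// over the normal interop call.
const pendingPayloads = new Map();
let nextId = 0;

function prepareResponse(data) {
  const bytes = new TextEncoder().encode(JSON.stringify(data)).length;
  if (bytes <= 32 * 1024) {
    return { streamed: false, data }; // small enough to return inline
  }
  const id = String(++nextId); // key C# later uses to claim the payload
  pendingPayloads.set(id, data);
  // Real implementation: immediately stream pendingPayloads.get(id) to
  // a static [JSInvokable] C# method, which drops it in the concurrent
  // bag keyed by this Id, then delete the entry here.
  return { streamed: true, id };
}

const small = prepareResponse({ ok: true });
const large = prepareResponse({ blob: "x".repeat(40000) });
console.log(small.streamed, large.streamed); // false true
```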
Option 2 is likely the fastest, but it may also carry a lot of potential bugs and traps, whereas Option 1 is obviously the simplest, especially in a WASM application where the latency is likely negligible.
Maybe a blend of both options should exist, or users could be allowed to manually choose one or the other. Either way, this is something to seriously consider for Magic IndexedDB to make sure it scales with payload size without too much performance loss.