Initial support work for dist-connected ractor nodes #32
Update: Some bugs were found in the native TCP handling, along with some other tracing issues in the dist protocol. A recent update fixes many problems there and adds an integration test getting two nodes dist-connected (at least authenticated). The test spawns two node servers and runs through the authentication protocol, logging the various steps in the process at trace and debug levels.
Codecov Report: Base: 86.18% // Head: 86.70% // Increases project coverage by +0.52%.
Additional details and impacted files:

@@ Coverage Diff @@
## main #32 +/- ##
==========================================
+ Coverage 86.18% 86.70% +0.52%
==========================================
Files 25 27 +2
Lines 2331 2942 +611
==========================================
+ Hits 2009 2551 +542
- Misses 322 391 +69
…y on the Erlang protocol. This is a collection of TCP-managing actors and session management for automated session handling. Related issue: #16
This is the initial workings of dist-connected nodes in `ractor-cluster`, as well as a lot of associated changes in `ractor` necessary to support the `node()` protocol. This PR includes the following changes.

First, it changes how the `ractor::Message` trait is handled globally when the `cluster` feature is active, adding a `BoxedMessage` trait and extension to support either a `Box` of the raw message (for local actor communications) or a `Vec<u8>` of binary data representing a "remote" message, which needs to be deserialized since it is transmitted over a network link. This leaves deserialization up to each message implementation.
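To make the local-vs-remote split concrete, here is a minimal, self-contained sketch of the idea (the `MessagePayload` enum, `NetworkMessage` trait, and `Ping` type are illustrative stand-ins, not ractor's actual definitions):

```rust
use std::any::Any;

// Hypothetical stand-in for the boxed-message support: a message either
// stays boxed in-process, or travels as bytes over the wire.
enum MessagePayload {
    // Local delivery: the concrete type never leaves the process.
    Local(Box<dyn Any + Send>),
    // Remote delivery: serialized bytes; the receiver must deserialize.
    Remote(Vec<u8>),
}

// Each message type opts into network transport by providing its own
// (de)serialization, mirroring "deserialization is up to each message".
trait NetworkMessage: Sized + Send + 'static {
    fn serialize(&self) -> Vec<u8>;
    fn deserialize(bytes: &[u8]) -> Option<Self>;
}

#[derive(Debug, PartialEq)]
struct Ping(u32);

impl NetworkMessage for Ping {
    fn serialize(&self) -> Vec<u8> {
        self.0.to_be_bytes().to_vec()
    }
    fn deserialize(bytes: &[u8]) -> Option<Self> {
        Some(Ping(u32::from_be_bytes(bytes.try_into().ok()?)))
    }
}

fn main() {
    // Local path: downcast the box back to the concrete type.
    let local = MessagePayload::Local(Box::new(Ping(1)));
    if let MessagePayload::Local(b) = local {
        assert_eq!(b.downcast::<Ping>().ok().map(|p| *p), Some(Ping(1)));
    }

    // Remote path: round-trip through bytes instead.
    let remote = MessagePayload::Remote(Ping(2).serialize());
    if let MessagePayload::Remote(bytes) = remote {
        assert_eq!(Ping::deserialize(&bytes), Some(Ping(2)));
    }
}
```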
Second, `ActorCell` and `ActorRef` now support creating an actor without a dynamically generated `ActorId`, since remote actor ids will already have been created and transmitted from the remote system.
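As a sketch of why accepting a pre-made id matters, assume a hypothetical `ActorId` that distinguishes locally minted ids from ids received off the wire (the names here are illustrative; ractor's real type may differ):

```rust
// Hypothetical actor id: local ids are minted from a process-wide
// counter, while remote ids arrive fully formed from the peer node.
use std::sync::atomic::{AtomicU64, Ordering};

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum ActorId {
    Local(u64),
    Remote { node_id: u64, pid: u64 },
}

static NEXT_LOCAL_ID: AtomicU64 = AtomicU64::new(0);

impl ActorId {
    // Normal spawn path: generate the next local id dynamically.
    fn new_local() -> Self {
        ActorId::Local(NEXT_LOCAL_ID.fetch_add(1, Ordering::Relaxed))
    }

    // Remote spawn path: the id was created on the other node and
    // transmitted to us, so we must accept it as-is.
    fn from_remote(node_id: u64, pid: u64) -> Self {
        ActorId::Remote { node_id, pid }
    }
}

fn main() {
    let a = ActorId::new_local();
    let b = ActorId::from_remote(7, 42);
    println!("local {:?}, remote {:?}", a, b);
}
```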
Third, there is a registry which `NodeSession` will utilize to route messages from a remote system to a local system.
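One plausible shape for that routing registry, as a hedged sketch (the `PidRegistry` type and its API are invented for illustration and are not ractor's implementation):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Hypothetical pid-keyed registry: maps an actor's pid to a handler
// that can deliver raw bytes to the matching local actor.
type Delivery = Box<dyn Fn(Vec<u8>) + Send + Sync>;

#[derive(Clone, Default)]
struct PidRegistry {
    inner: Arc<Mutex<HashMap<u64, Delivery>>>,
}

impl PidRegistry {
    fn register(&self, pid: u64, deliver: Delivery) {
        self.inner.lock().unwrap().insert(pid, deliver);
    }

    // What a NodeSession-like task would call when a wire message
    // arrives: look up the target pid and hand over the payload.
    fn route(&self, pid: u64, payload: Vec<u8>) -> bool {
        match self.inner.lock().unwrap().get(&pid) {
            Some(deliver) => {
                deliver(payload);
                true
            }
            None => false, // unknown pid: drop or report upstream
        }
    }
}

fn main() {
    let registry = PidRegistry::default();
    registry.register(42, Box::new(|bytes| {
        println!("actor 42 received {} byte(s)", bytes.len());
    }));
    assert!(registry.route(42, vec![1, 2, 3]));
    assert!(!registry.route(99, vec![]));
}
```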
Finally, timeout support is threaded through `RpcReplyPort` so downstream handlers can also time out within a reasonable window. This is helpful when we are talking to a remote actor: the RPC might be of some complex type, so the reply will come back serialized on a different RPC port of `Vec<u8>`, be decoded, and then the original port will receive the decoded reply. It is therefore like linking RPC ports together with a converter in the middle, and that linkage is spawned as a background `tokio` task. We don't want this potentially living forever (though it should auto-clean-up when the channel is dropped), so this allows us to thread the timeout where necessary, including over the network link (see `ractor-cluster/src/protocol/node.proto`).
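The port-linking pattern can be illustrated with plain `tokio` oneshot channels. This is only a sketch of the converter-task-plus-timeout idea, not ractor's actual `RpcReplyPort` code, and it assumes a `tokio` dependency with the `full` feature set:

```rust
use std::time::Duration;
use tokio::sync::oneshot;
use tokio::time::timeout;

// The caller wants a typed reply, but the network delivers bytes on a
// separate channel. A background task bridges the two, converting the
// serialized reply and enforcing a timeout so it cannot live forever.
fn link_ports<T, F>(
    typed_tx: oneshot::Sender<T>,
    deadline: Duration,
    decode: F,
) -> oneshot::Sender<Vec<u8>>
where
    T: Send + 'static,
    F: Fn(Vec<u8>) -> Option<T> + Send + 'static,
{
    let (bytes_tx, bytes_rx) = oneshot::channel::<Vec<u8>>();
    tokio::spawn(async move {
        // Wait for the wire reply, but only up to the deadline; if the
        // remote never answers, the task exits and the ports drop.
        if let Ok(Ok(bytes)) = timeout(deadline, bytes_rx).await {
            if let Some(decoded) = decode(bytes) {
                let _ = typed_tx.send(decoded);
            }
        }
    });
    bytes_tx
}

#[tokio::main]
async fn main() {
    let (typed_tx, typed_rx) = oneshot::channel::<u32>();
    let wire_port = link_ports(typed_tx, Duration::from_secs(5), |bytes| {
        Some(u32::from_be_bytes(bytes.try_into().ok()?))
    });

    // Simulate the remote reply arriving as raw bytes.
    wire_port.send(7u32.to_be_bytes().to_vec()).unwrap();
    assert_eq!(typed_rx.await.unwrap(), 7);
}
```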
Beyond the changes to `ractor` listed above, this includes the initial design for `ractor-cluster`, which owns and maintains the inter-node links and remoting protocols. So far we have sketched out the `node()` protocol locally, which allows for communication to a remote actor. However it is not complete, and requires much more:

- Control logic for inter-node control messages (list actors, actor lifecycle event forwarding, do we want to handle remote-link supervision?, etc.). UPDATE: everything here is done except remote supervision, and actor death doesn't report exit or panic, just that an actor exited on the remote host.
- … synchronizes the local `RemoteActor`s (… `ractor::registry`).

Associated to issue #16