Description
With the stabilization of async fn in trait as of Rust 1.75, I think it would be more ergonomic to have a single piece of state implementing a trait that handles all events, rather than having to register each callback individually; it would also reduce heap allocations. Default implementations on the trait can preserve the current behaviour for any events the user doesn't handle, since they all return impl Future<Output = ()>.
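For context, here is a rough sketch of what such a default method desugars to (my approximate reading of the feature, not the exact compiler output):
trait Example {
    // An `async fn` in a trait is sugar for a method returning an anonymous
    // `impl Future`; an empty default body is just a future that completes
    // immediately, with no boxing needed for static dispatch.
    async fn on_event(&mut self) {}
    // roughly equivalent to:
    // fn on_event(&mut self) -> impl Future<Output = ()> { async {} }
}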
Below is a basic example of what I'm suggesting (keep in mind I have not tested any of the code below):
trait PeerConnectionEventHandle {
    async fn on_ice_connection_state_change(&mut self, connection_state: RTCIceConnectionState) {}
    async fn on_signaling_state_change(&mut self, signaling_state: RTCSignalingState) {}
    async fn on_data_channel(&mut self, data_channel: Arc<RTCDataChannel>) {}
    async fn on_negotiation_needed(&mut self) {}
    async fn on_ice_candidate(&mut self, ice_candidate: Option<RTCIceCandidate>) {}
    async fn on_ice_gathering_state_change(&mut self, gatherer_state: RTCIceGathererState) {}
    async fn on_track(&mut self, track: Arc<TrackRemote>, receiver: Arc<RTCRtpReceiver>, transceiver: Arc<RTCRtpTransceiver>) {}
    async fn on_peer_connection_state_change(&mut self, peer_connection_state: RTCPeerConnectionState) {}
}
changing the implementation to something like:
struct PeerConnectionInternal {
    /* keeping all other fields as before, except removing the individual on_* handler fields */
    event_handlers: Arc<ArcSwapOption<Mutex<dyn PeerConnectionEventHandle>>>,
}

impl RTCPeerConnection {
    /* keeping all other fns as before, except removing the on_* fns */
    fn with_handler(&self, handler: impl PeerConnectionEventHandle) {
        self.internal.event_handlers.store(Some(Arc::new(Mutex::new(handler))));
    }
}
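For illustration, the internals could then dispatch events through the stored handler roughly like this (trigger_ice_connection_state_change is a name made up for this sketch; the real internals will differ):
impl PeerConnectionInternal {
    // Hypothetical dispatch helper: load the currently registered handler,
    // if any, and await its callback.
    async fn trigger_ice_connection_state_change(&self, state: RTCIceConnectionState) {
        if let Some(handler) = self.event_handlers.load_full() {
            handler.lock().await.on_ice_connection_state_change(state).await;
        }
    }
}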
and would change the play-from-disk-h264 example to something like:
struct PeerHandler {
    done_tx: tokio::sync::mpsc::Sender<()>,
    notify_tx: Arc<Notify>,
}
impl PeerConnectionEventHandle for PeerHandler {
    // Set the handler for Peer connection state
    // This will notify you when the peer has connected/disconnected
    async fn on_peer_connection_state_change(&mut self, peer_connection_state: RTCPeerConnectionState) {
        println!("Peer Connection State has changed: {peer_connection_state}");

        if peer_connection_state == RTCPeerConnectionState::Failed {
            // Wait until PeerConnection has had no network activity for 30 seconds or another failure. It may be reconnected using an ICE Restart.
            // Use webrtc.PeerConnectionStateDisconnected if you are interested in detecting a faster timeout.
            // Note that the PeerConnection may come back from PeerConnectionStateDisconnected.
            println!("Peer Connection has gone to failed exiting");
            let _ = self.done_tx.try_send(());
        }
    }

    // Set the handler for ICE connection state
    // This will notify you when the peer has connected/disconnected
    async fn on_ice_connection_state_change(&mut self, connection_state: RTCIceConnectionState) {
        println!("Connection State has changed {connection_state}");

        if connection_state == RTCIceConnectionState::Connected {
            self.notify_tx.notify_waiters();
        }
    }
}
#[tokio::main]
async fn main() -> Result<()> {
    /* everything above is the same */

    let notify_tx = Arc::new(Notify::new());
    let notify_video = notify_tx.clone();
    let notify_audio = notify_tx.clone();

    let (done_tx, mut done_rx) = tokio::sync::mpsc::channel::<()>(1);
    let video_done_tx = done_tx.clone();
    let audio_done_tx = done_tx.clone();

    // Create a new RTCPeerConnection and register the event handler
    let peer_connection = Arc::new(api.new_peer_connection(config).await?);
    peer_connection.with_handler(PeerHandler { notify_tx, done_tx });

    /* everything below is the same, except removing the on_* callbacks */
}
I'll follow this up with a PR, but given the scope of what needs to change it might take a while. Also, since async fn in trait is still a very new feature, I wanted to open this up for discussion first.