
Use multicast to wait for new scan results #36

Open — wants to merge 1 commit into main from feat/wait-for-new-scan-results

Conversation

@Sh3Rm4n (Contributor) commented Jul 10, 2025

This extends the example to wait for new scan results via the multicast object.

This might be too low-level, but it provides a use case for abstracting multicast attachment.

A proper abstraction probably shouldn't be part of this crate but rather of genetlink, I assume.

Sh3Rm4n force-pushed the feat/wait-for-new-scan-results branch from b356ed2 to 0a027c1 on July 10, 2025 at 12:35
cathay4t previously approved these changes Jul 10, 2025

@cathay4t (Member) commented Jul 10, 2025

Nice work.

The iw code says we should use another socket for multicast events, even though I don't fully understand why:

```c
/*
 * WARNING: DO NOT COPY THIS CODE INTO YOUR APPLICATION
 *
 * This code has a bug, which requires creating a separate
 * nl80211 socket to fix:
 * It is possible for a NL80211_CMD_NEW_SCAN_RESULTS or
 * NL80211_CMD_SCAN_ABORTED message to be sent by the kernel
 * before (!) we listen to it, because we only start listening
 * after we send our scan request.
 *
 * Doing it the other way around has a race condition as well,
 * if you first open the events socket you may get a notification
 * for a previous scan.
 *
 * The only proper way to fix this would be to listen to events
 * before sending the command, and for the kernel to send the
 * scan request along with the event, so that you can match up
 * whether the scan you requested was finished or aborted (this
 * may result in processing a scan that another application
 * requested, but that doesn't seem to be a problem).
 *
 * Alas, the kernel doesn't do that (yet).
 */
```

It seems to me your example code is doing exactly what the iw developers suggest not to do.

cathay4t dismissed their stale review July 10, 2025 13:03

The iw comment indicates we should not reuse the same socket both for requesting the scan and for waiting for scan results.

@cathay4t (Member) commented:

For abstraction, you can do:

- `genetlink::new_multicast_connection()`
- `wl_nl80211::new_multicast_connection()`

@Sh3Rm4n (Contributor, Author) commented Jul 10, 2025

> Nice work.
>
> The iw code says we should use another socket for multicast events, even though I don't fully understand why:
>
> // ...
>
> It seems to me your example code is doing exactly what the iw developers suggest not to do.

I've also stumbled upon this comment. But I assume wl_nl80211 does not have this problem, as

  1. the connection runs concurrently via `tokio::spawn`, which is responsible for pushing the messages to the `UnboundedReceiver` (a.k.a. `messages`), and
  2. the scan is requested after the socket (connection) is already being polled.

From my experience with this approach, I receive the messages consistently.

> For abstraction, you can do:
>
> - `genetlink::new_multicast_connection()`
> - `wl_nl80211::new_multicast_connection()`

I've only enabled the nl80211 "scan" group for now, but there are many more groups available.

Should I just enable all groups, as `iw event` does, or should this function take an argument specifying which groups should be enabled?

Something similar to this?

```rust
enum Family {
    Nl80211,
    // ...
}

enum Nl80211Groups {
    Config,
    Scan,
    Regulatory,
    // ...
}

wl_nl80211::new_multicast_connection(groups: HashSet<Nl80211Groups>) -> Result<...>;

genetlink::new_multicast_connection(family: Family, groups: GenlGroups<Nl80211Groups>) -> Result<...>;
```

@Sh3Rm4n (Contributor, Author) commented Jul 11, 2025

rust-netlink/genetlink#12 seems to be very useful to have before introducing a `new_multicast_connection` method.
