bpf: veth driver panics when xdp prog attached before veth_open
The following panic is observed when bringing up (veth_open) a veth device
that has an XDP program attached.
[ 61.519185] kernel BUG at net/core/dev.c:6442!
[ 61.519456] invalid opcode: 0000 [#1] PREEMPT SMP PTI
[ 61.519752] CPU: 0 PID: 408 Comm: ip Tainted: G W 6.1.0-rc2-185930-gd9095f92950b-dirty vvfedorenko#26
[ 61.520288] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
[ 61.520806] RIP: 0010:napi_enable+0x3d/0x40
[ 61.521077] Code: f6 f6 80 61 08 00 00 02 74 0d 48 83 bf 88 01 00 00 00 74 03 80 cd 01 48 89 d0 f0 48 0f b1 4f 10 48 39 c2 75 c8 c3 cc cc cc cc <0f> 0b 90 48 8b 87 b0 00 00 00 48 81 c7 b0 00 00 00 45 31 c0 48 39
[ 61.522226] RSP: 0018:ffffbc9800cc36f8 EFLAGS: 00010246
[ 61.522557] RAX: 0000000000000001 RBX: 0000000000000300 RCX: 0000000000000001
[ 61.523004] RDX: 0000000000000010 RSI: ffffffff8b0de852 RDI: ffff9f03848e5000
[ 61.523452] RBP: 0000000000000000 R08: 0000000000000800 R09: 0000000000000000
[ 61.523899] R10: ffff9f0384a96800 R11: ffffffffffa48061 R12: ffff9f03849c3000
[ 61.524345] R13: 0000000000000300 R14: ffff9f03848e5000 R15: 0000001000000100
[ 61.524792] FS: 00007f58cb64d2c0(0000) GS:ffff9f03bbc00000(0000) knlGS:0000000000000000
[ 61.525301] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 61.525673] CR2: 00007f6cc629b498 CR3: 000000010498c000 CR4: 00000000000006f0
[ 61.526121] Call Trace:
[ 61.526284] <TASK>
[ 61.526425] __veth_napi_enable_range+0xd6/0x230
[ 61.526723] veth_enable_xdp+0xd0/0x160
[ 61.526969] veth_open+0x2e/0xc0
[ 61.527180] __dev_open+0xe2/0x1b0
[ 61.527405] __dev_change_flags+0x1a1/0x210
[ 61.527673] dev_change_flags+0x1c/0x60
This happens because we call veth_napi_enable() on already-enabled
queues. The root cause is that commit 2e0de63 changed the control
logic to drop this case,
if (priv->_xdp_prog) {
err = veth_enable_xdp(dev);
if (err)
return err;
- } else if (veth_gro_requested(dev)) {
+ /* refer to the logic in veth_xdp_set() */
+ if (!rtnl_dereference(peer_rq->napi)) {
+ err = veth_napi_enable(peer);
+ if (err)
+ return err;
+ }
so that veth_napi_enable() is now called if the peer has not yet
initialized its peer_rq->napi. The issue is that this happens even
if the NIC is not up. Then in veth_enable_xdp(), just above, we have
a similar path,
veth_enable_xdp
napi_already_on = (dev->flags & IFF_UP) && rcu_access_pointer(rq->napi)
err = veth_enable_xdp_range(dev, 0, dev->real_num_rx_queues, napi_already_on);
The trouble is that when an xdp prog is assigned before bringing the
device up, the veth_open path will enable the peer's xdp napi structs.
But then when we bring the peer up it will similarly try to enable them
again, because from veth_open the IFF_UP flag is not set until after
the ndo_open op in __dev_open, so we believe napi_already_on = false.
To fix this, just drop the IFF_UP test and rely on checking whether the
napi struct is enabled. This also matches the peer check in veth_xdp
for disabling.
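Concretely, the change amounts to dropping the flag test from the line
quoted above, sketched here (illustrative, not the exact hunk):

```diff
- napi_already_on = (dev->flags & IFF_UP) && rcu_access_pointer(rq->napi);
+ napi_already_on = rcu_access_pointer(rq->napi);
```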
To reproduce, run ./test_xdp_meta.sh. I found this while adding
Cilium/Tetragon tests for XDP.
Fixes: 2e0de63 ("veth: Avoid drop packets when xdp_redirect performs")
Signed-off-by: John Fastabend <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Acked-by: Jakub Kicinski <[email protected]>
Signed-off-by: Martin KaFai Lau <[email protected]>