In the Linux kernel, the following vulnerability has been resolved:
veth: reduce XDP no_direct return section to fix race
As explained in commit fa349e396e48 ("veth: Fix race with AF_XDP exposing
old or uninitialized descriptors"), for veth there is a window after
napi_complete_done() in which another CPU can manage to start another NAPI
instance running veth_poll(). For NAPI itself this is handled correctly,
as the napi_schedule_prep() check prevents multiple instances from getting
scheduled, but the remaining code in veth_poll() can run concurrently with
the newly started NAPI instance.
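For orientation, here is a simplified sketch of the shape of veth_poll() before the fix, paraphrasing drivers/net/veth.c (locals, arguments, and error handling are elided, so treat this as an illustration rather than the exact upstream code). The key point is that napi_complete_done() sits inside the no_direct section, so everything after it can race with a freshly scheduled NAPI instance.

```c
static int veth_poll(struct napi_struct *napi, int budget)
{
	struct veth_rq *rq = container_of(napi, struct veth_rq, xdp_napi);
	int done;

	/* Start of the no_direct return section. */
	xdp_set_return_frame_no_direct();
	done = veth_xdp_rcv(rq, budget, /* ... */);

	if (done < budget && napi_complete_done(napi, done)) {
		/* From here on, another CPU can pass napi_schedule_prep()
		 * and start a new NAPI instance re-entering veth_poll(). */
		/* ... re-check ring, possibly reschedule ... */
	}

	/* ... flush XDP tx/redirect ... */

	/* Too late: this clear can race with the new instance's
	 * xdp_set_return_frame_no_direct(). */
	xdp_clear_return_frame_no_direct();

	return done;
}
```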
The underlying problem is that xdp_clear_return_frame_no_direct() isn't
designed to be nested.
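The helpers are plain flag set/clear operations on the bpf_redirect_info of the current BPF net context. The sketch below approximates include/net/xdp.h after commit 401cb7dae813 (check the tree you care about for exact details). Because the clear is unconditional rather than a counted "pop", two overlapping set/clear sections on the same storage corrupt each other: the exiting instance's clear wipes the flag the new instance just set.

```c
/* Approximation of include/net/xdp.h after commit 401cb7dae813. */
static inline void xdp_set_return_frame_no_direct(void)
{
	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();

	ri->kern_flags |= BPF_RI_F_RF_NO_DIRECT;
}

static inline void xdp_clear_return_frame_no_direct(void)
{
	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();

	/* Unconditional clear: not safe if another set/clear section
	 * overlaps this one on the same storage. */
	ri->kern_flags &= ~BPF_RI_F_RF_NO_DIRECT;
}
```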
Prior to commit 401cb7dae813 ("net: Reference bpf_redirect_info via
task_struct on PREEMPT_RT.") the temporary BPF net context
bpf_redirect_info was stored per CPU, so this wasn't an issue. Since
that commit, the BPF context is stored in the 'current' task_struct. When
running veth in threaded-NAPI mode, the kthread becomes the storage
area. A race therefore exists between two concurrent veth_poll() calls,
one exiting NAPI and one running a new NAPI instance, both using the same
BPF net context.
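The storage change can be summarized as follows (a hedged sketch; ri_old and ri_new are illustrative names, while the accessors are those used before and after the commit cited above):

```c
/* Before commit 401cb7dae813: per-CPU storage, so two CPUs could
 * never observe each other's no_direct flag. */
struct bpf_redirect_info *ri_old = this_cpu_ptr(&bpf_redirect_info);

/* After the commit: the lookup goes through the BPF net context
 * hung off 'current'. With threaded NAPI, the NAPI kthread's
 * task_struct is that storage, shared by every veth_poll()
 * invocation on that queue. */
struct bpf_redirect_info *ri_new = bpf_net_ctx_get_ri();
```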
The race occurs when another CPU gets inside the
xdp_set_return_frame_no_direct() section before the exiting veth_poll()
has called the clear function xdp_clear_return_frame_no_direct().
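The patch title implies the remedy: shrink the no_direct section so it is closed before napi_complete_done() can let a new instance in. The reordering below is a minimal sketch of that idea, an assumption about the shape of the fix rather than the verbatim diff.

```c
	xdp_set_return_frame_no_direct();
	done = veth_xdp_rcv(rq, budget, /* ... */);

	/* Flush and close the no_direct section first ... */
	/* ... flush XDP tx/redirect ... */
	xdp_clear_return_frame_no_direct();

	/* ... and only then allow a new NAPI instance to start. */
	if (done < budget && napi_complete_done(napi, done)) {
		/* ... re-check ring, possibly reschedule ... */
	}
```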
History
Tue, 23 Dec 2025 14:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Description | | In the Linux kernel, the following vulnerability has been resolved: veth: reduce XDP no_direct return section to fix race … |
| Title | | veth: reduce XDP no_direct return section to fix race |
| First Time appeared | | Linux, Linux linux Kernel |
| CPEs | | cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* |
| Vendors & Products | | Linux, Linux linux Kernel |
| References | | |
Status: PUBLISHED
Assigner: Linux
Published:
Updated: 2025-12-23T13:58:26.749Z
Reserved: 2025-12-16T14:48:05.298Z
Link: CVE-2025-68341
Status: Awaiting Analysis
Published: 2025-12-23T14:16:40.683
Modified: 2025-12-23T14:51:52.650
Link: CVE-2025-68341
OpenCVE Enrichment
No data.