In the Linux kernel, the following vulnerability has been resolved:
mm: call ->free_folio() directly in folio_unmap_invalidate()
We can only call filemap_free_folio() if we have a reference to (or hold a
lock on) the mapping. Otherwise, we've already removed the folio from the
mapping so it no longer pins the mapping and the mapping can be removed,
causing a use-after-free when accessing mapping->a_ops.
Follow the same pattern as __remove_mapping() and load the free_folio
function pointer before dropping the lock on the mapping. That lets us
make filemap_free_folio() static as this was the only caller outside
filemap.c.
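
A minimal sketch of the pattern, assuming names as in mm/filemap.c (the
exact call sequence here is illustrative, not the verbatim patch):

    void (*free_folio)(struct folio *folio);

    /* Load the method pointer while the lock still pins the mapping;
     * after the unlock, the mapping itself may be freed. */
    free_folio = mapping->a_ops->free_folio;
    __filemap_remove_folio(folio, NULL);
    xa_unlock_irq(&mapping->i_pages);

    if (free_folio)
        free_folio(folio);    /* no mapping->a_ops dereference here */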

In the Linux kernel, the following vulnerability has been resolved:
can: gw: fix OOB heap access in cgw_csum_crc8_rel()
cgw_csum_crc8_rel() correctly computes bounds-safe indices via calc_idx():
int from = calc_idx(crc8->from_idx, cf->len);
int to = calc_idx(crc8->to_idx, cf->len);
int res = calc_idx(crc8->result_idx, cf->len);
if (from < 0 || to < 0 || res < 0)
return;
However, the loop and the result write then use the raw s8 fields directly
instead of the computed variables:
for (i = crc8->from_idx; ...) /* BUG: raw negative index */
cf->data[crc8->result_idx] = ...; /* BUG: raw negative index */
With from_idx = to_idx = result_idx = -64 on a 64-byte CAN FD frame,
calc_idx(-64, 64) = 0 so the guard passes, but the loop iterates with
i = -64, reading cf->data[-64], and the write goes to cf->data[-64].
This write can land up to 56 (7.0-rc) or 40 (<= 6.19) bytes before the
start of the canfd_frame on the heap.
The companion function cgw_csum_xor_rel() uses `from`/`to`/`res`
correctly throughout; fix cgw_csum_crc8_rel() to match.
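
A minimal userspace demonstration of the guard/use mismatch, with
calc_idx() modeled on net/can/gw.c:

    #include <stdio.h>

    /* Modeled on calc_idx(): a negative index counts back from the
     * end of the frame data. */
    static int calc_idx(int idx, int rx_len)
    {
        if (idx < 0)
            return rx_len + idx;
        return idx;
    }

    int main(void)
    {
        signed char from_idx = -64;    /* attacker-controlled s8 field */
        int len = 64;                  /* 64-byte CAN FD frame */
        int from = calc_idx(from_idx, len);

        /* The guard checks the computed value (0 here, so it passes),
         * but the buggy loop indexed cf->data[] with the raw field,
         * i.e. data[-64], 64 bytes before the buffer. */
        printf("guard sees %d, raw index is %d\n", from, from_idx);
        return 0;
    }
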
Confirmed with KASAN on linux-7.0-rc2:
BUG: KASAN: slab-out-of-bounds in cgw_csum_crc8_rel+0x515/0x5b0
Read of size 1 at addr ffff8880076619c8 by task poc_cgw_oob/62
Configuring can-gw crc8 checksums requires CAP_NET_ADMIN.

In the Linux kernel, the following vulnerability has been resolved:
LoongArch: KVM: Handle the case that EIOINTC's coremap is empty
EIOINTC's coremap in eiointc_update_sw_coremap() can be empty. Currently
we get a cpuid of -1 in this case, but we actually need 0, similar to the
case that cpuid >= 4.
This fixes an out-of-bounds access to kvm_arch::phyid_map::phys_map[].
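
A minimal userspace sketch of the failure mode, assuming the coremap bit
is decoded with ffs()-style logic (the in-kernel details differ):

    #include <stdio.h>
    #include <strings.h>

    int main(void)
    {
        unsigned int coremap = 0;      /* empty coremap */
        int cpuid = ffs(coremap) - 1;  /* ffs(0) == 0, so cpuid == -1 */

        printf("raw cpuid = %d\n", cpuid);

        /* Clamp the empty (-1) case to 0, just like the existing
         * cpuid >= 4 case, before indexing phys_map[]. */
        if (cpuid < 0 || cpuid >= 4)
            cpuid = 0;
        printf("clamped cpuid = %d\n", cpuid);
        return 0;
    }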

In the Linux kernel, the following vulnerability has been resolved:
drm/amdgpu: Fix fence put before wait in amdgpu_amdkfd_submit_ib
amdgpu_amdkfd_submit_ib() submits a GPU job and gets a fence
from amdgpu_ib_schedule(). This fence is used to wait for job
completion.
Currently, the code drops the fence reference using dma_fence_put()
before calling dma_fence_wait().
If dma_fence_put() releases the last reference, the fence may be
freed before dma_fence_wait() is called. This can lead to a
use-after-free.
Fix this by waiting on the fence first and releasing the reference
only after dma_fence_wait() completes.
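
An illustrative sketch of the corrected ordering (simplified; the
surrounding submission context is assumed):

    struct dma_fence *f = NULL;
    int ret;

    /* ... amdgpu_ib_schedule() succeeded and returned a fence in f ... */

    ret = dma_fence_wait(f, false);  /* 1. wait while we hold a reference */
    dma_fence_put(f);                /* 2. only then drop the reference   */
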
Fixes the below:
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c:697 amdgpu_amdkfd_submit_ib() warn: passing freed memory 'f' (line 696)
(cherry picked from commit 8b9e5259adc385b61a6590a13b82ae0ac2bd3482)

In the Linux kernel, the following vulnerability has been resolved:
net: macb: Use dev_consume_skb_any() to free TX SKBs
The napi_consume_skb() function is not intended to be called in an IRQ
disabled context. However, after commit 6bc8a5098bf4 ("net: macb: Fix
tx_ptr_lock locking"), the freeing of TX SKBs is performed with IRQs
disabled. To resolve the following call trace, use dev_consume_skb_any()
for freeing TX SKBs:
WARNING: kernel/softirq.c:430 at __local_bh_enable_ip+0x174/0x188, CPU#0: ksoftirqd/0/15
Modules linked in:
CPU: 0 UID: 0 PID: 15 Comm: ksoftirqd/0 Not tainted 7.0.0-rc4-next-20260319-yocto-standard-dirty #37 PREEMPT
Hardware name: ZynqMP ZCU102 Rev1.1 (DT)
pstate: 200000c5 (nzCv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : __local_bh_enable_ip+0x174/0x188
lr : local_bh_enable+0x24/0x38
sp : ffff800082b3bb10
x29: ffff800082b3bb10 x28: ffff0008031f3c00 x27: 000000000011ede0
x26: ffff000800a7ff00 x25: ffff800083937ce8 x24: 0000000000017a80
x23: ffff000803243a78 x22: 0000000000000040 x21: 0000000000000000
x20: ffff000800394c80 x19: 0000000000000200 x18: 0000000000000001
x17: 0000000000000001 x16: ffff000803240000 x15: 0000000000000000
x14: ffffffffffffffff x13: 0000000000000028 x12: ffff000800395650
x11: ffff8000821d1528 x10: ffff800081c2bc08 x9 : ffff800081c1e258
x8 : 0000000100000301 x7 : ffff8000810426ec x6 : 0000000000000000
x5 : 0000000000000001 x4 : 0000000000000001 x3 : 0000000000000000
x2 : 0000000000000008 x1 : 0000000000000200 x0 : ffff8000810428dc
Call trace:
__local_bh_enable_ip+0x174/0x188 (P)
local_bh_enable+0x24/0x38
skb_attempt_defer_free+0x190/0x1d8
napi_consume_skb+0x58/0x108
macb_tx_poll+0x1a4/0x558
__napi_poll+0x50/0x198
net_rx_action+0x1f4/0x3d8
handle_softirqs+0x16c/0x560
run_ksoftirqd+0x44/0x80
smpboot_thread_fn+0x1d8/0x338
kthread+0x120/0x150
ret_from_fork+0x10/0x20
irq event stamp: 29751
hardirqs last enabled at (29750): [<ffff8000813be184>] _raw_spin_unlock_irqrestore+0x44/0x88
hardirqs last disabled at (29751): [<ffff8000813bdf60>] _raw_spin_lock_irqsave+0x38/0x98
softirqs last enabled at (29150): [<ffff8000800f1aec>] handle_softirqs+0x504/0x560
softirqs last disabled at (29153): [<ffff8000800f2fec>] run_ksoftirqd+0x44/0x80
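
A minimal sketch of the change (the surrounding macb TX-completion
context is assumed):

    /* The TX completion path now runs with IRQs disabled, so: */
    /* napi_consume_skb(skb, budget); */  /* NAPI/BH context only */
    dev_consume_skb_any(skb);             /* safe in any context  */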

In the Linux kernel, the following vulnerability has been resolved:
LoongArch: KVM: Make kvm_get_vcpu_by_cpuid() more robust
kvm_get_vcpu_by_cpuid() takes a cpuid parameter whose type is int, so
cpuid can be negative. Let kvm_get_vcpu_by_cpuid() return NULL for this
case so as to make it more robust.
This fixes an out-of-bounds access to kvm_arch::phyid_map::phys_map[].
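
A sketch of the hardening, with the bound name assumed (the exact check
in arch/loongarch/kvm may differ):

    struct kvm_vcpu *kvm_get_vcpu_by_cpuid(struct kvm *kvm, int cpuid)
    {
        if (cpuid < 0 || cpuid >= KVM_MAX_PHYID)
            return NULL;    /* reject negative ids before indexing */

        /* ... look up the vcpu via kvm->arch.phyid_map->phys_map[cpuid] ... */
    }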

In the Linux kernel, the following vulnerability has been resolved:
nvmet: move async event work off nvmet-wq
On the target side, nvmet_ctrl_free() flushes ctrl->async_event_work.
If nvmet_ctrl_free() runs on nvmet-wq, the flush re-enters workqueue
completion for the same worker:
A. Async event work queued on nvmet-wq (prior to disconnect):
nvmet_execute_async_event()
queue_work(nvmet_wq, &ctrl->async_event_work)
nvmet_add_async_event()
queue_work(nvmet_wq, &ctrl->async_event_work)
B. Full pre-work chain (RDMA CM path):
nvmet_rdma_cm_handler()
nvmet_rdma_queue_disconnect()
__nvmet_rdma_queue_disconnect()
queue_work(nvmet_wq, &queue->release_work)
process_one_work()
lock((wq_completion)nvmet-wq) <--------- 1st
nvmet_rdma_release_queue_work()
C. Recursive path (same worker):
nvmet_rdma_release_queue_work()
nvmet_rdma_free_queue()
nvmet_sq_destroy()
nvmet_ctrl_put()
nvmet_ctrl_free()
flush_work(&ctrl->async_event_work)
__flush_work()
touch_wq_lockdep_map()
lock((wq_completion)nvmet-wq) <--------- 2nd
Lockdep splat:
============================================
WARNING: possible recursive locking detected
6.19.0-rc3nvme+ #14 Tainted: G N
--------------------------------------------
kworker/u192:42/44933 is trying to acquire lock:
ffff888118a00948 ((wq_completion)nvmet-wq){+.+.}-{0:0}, at: touch_wq_lockdep_map+0x26/0x90
but task is already holding lock:
ffff888118a00948 ((wq_completion)nvmet-wq){+.+.}-{0:0}, at: process_one_work+0x53e/0x660
3 locks held by kworker/u192:42/44933:
#0: ffff888118a00948 ((wq_completion)nvmet-wq){+.+.}-{0:0}, at: process_one_work+0x53e/0x660
#1: ffffc9000e6cbe28 ((work_completion)(&queue->release_work)){+.+.}-{0:0}, at: process_one_work+0x1c5/0x660
#2: ffffffff82d4db60 (rcu_read_lock){....}-{1:3}, at: __flush_work+0x62/0x530
Workqueue: nvmet-wq nvmet_rdma_release_queue_work [nvmet_rdma]
Call Trace:
__flush_work+0x268/0x530
nvmet_ctrl_free+0x140/0x310 [nvmet]
nvmet_cq_put+0x74/0x90 [nvmet]
nvmet_rdma_free_queue+0x23/0xe0 [nvmet_rdma]
nvmet_rdma_release_queue_work+0x19/0x50 [nvmet_rdma]
process_one_work+0x206/0x660
worker_thread+0x184/0x320
kthread+0x10c/0x240
ret_from_fork+0x319/0x390
Move async event work to a dedicated nvmet-aen-wq to avoid reentrant
flush on nvmet-wq.
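
A sketch of the fix, assuming the usual workqueue setup boilerplate:

    /* A dedicated queue: flushing async event work from code running
     * on nvmet-wq no longer re-enters nvmet-wq's own completion. */
    nvmet_aen_wq = alloc_workqueue("nvmet-aen-wq", WQ_MEM_RECLAIM, 0);

    /* Async event work is queued on it instead of nvmet_wq: */
    queue_work(nvmet_aen_wq, &ctrl->async_event_work);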

In the Linux kernel, the following vulnerability has been resolved:
futex: Require sys_futex_requeue() to have identical flags
Nicholas reported that his LLM found it was possible to create a UaF
when sys_futex_requeue() is used with different flags. The initial
motivation for allowing different flags was the variable sized futex,
but since that hasn't been merged (yet), simply mandate the flags are
identical, as is the case for the old style sys_futex() requeue
operations.
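
A sketch of the mandated check; comparing the raw flags of the two
futex_waitv entries like this is an assumption about the patch's shape:

    /* Both entries must describe the same futex flavour. */
    if (futexes[0].flags != futexes[1].flags)
        return -EINVAL;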

In the Linux kernel, the following vulnerability has been resolved:
KVM: arm64: Fix the descriptor address in __kvm_at_swap_desc()
Using "(u64 __user *)hva + offset" to get the virtual addresses of S1/S2
descriptors looks really wrong, if offset is not zero. What we want to get
for swapping is hva + offset, not hva + offset*8. ;-)
Fix it.
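
A minimal userspace demonstration of the scaling bug (kernel types
mocked with stdint equivalents):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        char buf[128];
        uintptr_t hva = (uintptr_t)buf;
        size_t offset = 8;

        /* Buggy: arithmetic on a u64 * scales the offset by sizeof(u64). */
        char *wrong = (char *)((uint64_t *)hva + offset);  /* hva + 64 */
        /* Intended: a plain byte offset. */
        char *right = (char *)(hva + offset);              /* hva + 8  */

        printf("wrong delta = %td, right delta = %td\n",
               wrong - buf, right - buf);
        return 0;
    }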

In the Linux kernel, the following vulnerability has been resolved:
wifi: wlcore: Return -ENOMEM instead of -EAGAIN if there is not enough headroom
Since upstream commit e75665dd0968 ("wifi: wlcore: ensure skb headroom
before skb_push"), wl1271_tx_allocate() and with it
wl1271_prepare_tx_frame() returns -EAGAIN if pskb_expand_head() fails.
However, in wlcore_tx_work_locked(), a return value of -EAGAIN from
wl1271_prepare_tx_frame() is interpreted as the aggregation buffer being
full. This causes the code to flush the buffer, put the skb back at the
head of the queue, and immediately retry the same skb in a tight while
loop.
Because wlcore_tx_work_locked() holds wl->mutex, and the retry happens
immediately with GFP_ATOMIC, this will result in an infinite loop and a
CPU soft lockup. Return -ENOMEM instead so the packet is dropped and
the loop terminates.
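
A sketch of the change in wl1271_tx_allocate() (the exact surrounding
code and variable names are assumed):

    if (pskb_expand_head(skb, headroom, 0, GFP_ATOMIC)) {
        /* An allocation failure, not a full aggregation buffer:
         * -EAGAIN would make the caller retry the same skb forever
         * under wl->mutex, so report -ENOMEM and drop the packet. */
        return -ENOMEM;
    }
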
The problem was found by an experimental code review agent based on
gemini-3.1-pro while reviewing backports into v6.18.y.

In the Linux kernel, the following vulnerability has been resolved:
wifi: cfg80211: cancel pmsr_free_wk in cfg80211_pmsr_wdev_down
When the nl80211 socket that originated a PMSR request is
closed, cfg80211_release_pmsr() sets the request's nl_portid
to zero and schedules pmsr_free_wk to process the abort
asynchronously. If the interface is concurrently torn down
before that work runs, cfg80211_pmsr_wdev_down() calls
cfg80211_pmsr_process_abort() directly. However, the already-scheduled
pmsr_free_wk work item remains pending and may run
after the interface has been removed from the driver. This
could cause the driver's abort_pmsr callback to operate on a
torn-down interface, leading to undefined behavior and
potential crashes.
Cancel pmsr_free_wk synchronously in cfg80211_pmsr_wdev_down()
before calling cfg80211_pmsr_process_abort(). This ensures any
pending or in-progress work is drained before interface teardown
proceeds, preventing the work from invoking the driver abort
callback after the interface is gone.
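
A sketch of the fix shape, assuming the work item lives on the wdev as
pmsr_free_wk:

    void cfg80211_pmsr_wdev_down(struct wireless_dev *wdev)
    {
        /* Drain any abort work queued by cfg80211_release_pmsr() so
         * it cannot fire after the interface is torn down. */
        cancel_work_sync(&wdev->pmsr_free_wk);

        cfg80211_pmsr_process_abort(wdev);
        /* ... existing teardown continues ... */
    }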

In the Linux kernel, the following vulnerability has been resolved:
smb: smbdirect: introduce smbdirect_socket.recv_io.credits.available
The logic of managing recv credits by counting posted recv_io and
granted credits is racy.
That's because the peer might have already consumed a credit; between
the hardware receiving the incoming recv and the 'recv_done' function
processing the completion, we have a window where we may grant credits
that don't really exist.
So we better have a dedicated counter for the available credits, which
is incremented when we post new recv buffers and drained when we grant
credits to the peer.
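
A sketch of the counter, with names taken from the patch title (the
exact struct layout and helpers are assumptions):

    /* smbdirect_socket.recv_io.credits.available */
    atomic_t available;

    /* A recv buffer was successfully posted: a credit now truly exists. */
    atomic_inc(&sc->recv_io.credits.available);

    /* Granting credits to the peer drains only what really exists. */
    grant = atomic_xchg(&sc->recv_io.credits.available, 0);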

In the Linux kernel, the following vulnerability has been resolved:
smb: server: make use of smbdirect_socket.recv_io.credits.available
The logic of managing recv credits by counting posted recv_io and
granted credits is racy.
That's because the peer might have already consumed a credit; between
the hardware receiving the incoming recv and the 'recv_done' function
processing the completion, we have a window where we may grant credits
that don't really exist.
So we better have a dedicated counter for the available credits, which
is incremented when we post new recv buffers and drained when we grant
credits to the peer.
This fixes a regression Namjae reported with the 6.18 release.

In the Linux kernel, the following vulnerability has been resolved:
smb: server: let send_done handle a completion without IB_SEND_SIGNALED
With smbdirect_send_batch processing we likely have requests without
IB_SEND_SIGNALED; these are destroyed when the final request, which has
IB_SEND_SIGNALED set, completes.
If the connection is broken, all requests are signaled even without an
explicit IB_SEND_SIGNALED.

In the Linux kernel, the following vulnerability has been resolved:
net/tls: fix use-after-free in -EBUSY error path of tls_do_encryption
The -EBUSY handling in tls_do_encryption(), introduced by commit
859054147318 ("net: tls: handle backlogging of crypto requests"), has
a use-after-free due to double cleanup of encrypt_pending and the
scatterlist entry.
When crypto_aead_encrypt() returns -EBUSY, the request is enqueued to
the cryptd backlog and the async callback tls_encrypt_done() will be
invoked upon completion. That callback unconditionally restores the
scatterlist entry (sge->offset, sge->length) and decrements
ctx->encrypt_pending. However, if tls_encrypt_async_wait() returns an
error, the synchronous error path in tls_do_encryption() performs the
same cleanup again, double-decrementing encrypt_pending and
double-restoring the scatterlist.
The double-decrement corrupts the encrypt_pending sentinel (initialized
to 1), making tls_encrypt_async_wait() permanently skip the wait for
pending async callbacks. A subsequent sendmsg can then free the
tls_rec via bpf_exec_tx_verdict() while a cryptd callback is still
pending, resulting in a use-after-free when the callback fires on the
freed record.
Fix this by skipping the synchronous cleanup when the -EBUSY async
wait returns an error, since the callback has already handled
encrypt_pending and sge restoration.
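
A sketch of the corrected -EBUSY path (simplified; function names from
the description above, surrounding context assumed):

    rc = crypto_aead_encrypt(aead_req);
    if (rc == -EBUSY) {
        rc = tls_encrypt_async_wait(ctx);
        if (rc < 0) {
            /* tls_encrypt_done() already restored sge->offset and
             * sge->length and decremented ctx->encrypt_pending, so
             * skip the synchronous cleanup here. */
            return rc;
        }
    }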

In the Linux kernel, the following vulnerability has been resolved:
perf: Make sure to use pmu_ctx->pmu for groups
Oliver reported that x86_pmu_del() ended up doing an out-of-bounds memory access
when group_sched_in() fails and needs to roll back.
This *should* be handled by the transaction callbacks, but he found that when
the group leader is a software event, the transaction handlers of the wrong PMU
are used, despite the move_group case in perf_event_open() and
group_sched_in() using pmu_ctx->pmu.
Turns out, inherit uses event->pmu to clone the events, effectively undoing the
move_group case for all inherited contexts. Fix this by also making inherit use
pmu_ctx->pmu, ensuring all inherited counters end up in the same pmu context.
Similarly, __perf_event_read() should equally use pmu_ctx->pmu for the
group case.

In the Linux kernel, the following vulnerability has been resolved:
xfrm: prevent policy_hthresh.work from racing with netns teardown
A XFRM_MSG_NEWSPDINFO request can queue the per-net work item
policy_hthresh.work onto the system workqueue.
The queued callback, xfrm_hash_rebuild(), retrieves the enclosing
struct net via container_of(). If the net namespace is torn down
before that work runs, the associated struct net may already have
been freed, and xfrm_hash_rebuild() may then dereference stale memory.
xfrm_policy_fini() already flushes policy_hash_work during teardown,
but it does not synchronize policy_hthresh.work.
Synchronize policy_hthresh.work in xfrm_policy_fini() as well, so the
queued work cannot outlive the net namespace teardown and access a
freed struct net.
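
A sketch of the teardown change (exact placement within
xfrm_policy_fini() is assumed):

    /* Already done for the hash rebuild work: */
    flush_work(&net->xfrm.policy_hash_work);
    /* Also synchronize the hthresh work so it cannot outlive the netns: */
    flush_work(&net->xfrm.policy_hthresh.work);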

In the Linux kernel, the following vulnerability has been resolved:
Bluetooth: L2CAP: Fix stack-out-of-bounds read in l2cap_ecred_conn_req
Syzbot reported a KASAN stack-out-of-bounds read in l2cap_build_cmd()
that is triggered by a malformed Enhanced Credit Based Connection Request.
The vulnerability stems from l2cap_ecred_conn_req(). The function allocates
a local stack buffer (`pdu`) designed to hold a maximum of 5 Source Channel
IDs (SCIDs), totaling 18 bytes. When an attacker sends a request with more
than 5 SCIDs, the function calculates `rsp_len` based on this unvalidated
`cmd_len` before checking if the number of SCIDs exceeds
L2CAP_ECRED_MAX_CID.
If the SCID count is too high, the function correctly jumps to the
`response` label to reject the packet, but `rsp_len` retains the
attacker's oversized value. Consequently, l2cap_send_cmd() is instructed
to read past the end of the 18-byte `pdu` buffer, triggering a
KASAN panic.
Fix this by moving the assignment of `rsp_len` to after the `num_scid`
boundary check. If the packet is rejected, `rsp_len` will safely
remain 0, and the error response will only read the 8-byte base header
from the stack.
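
A sketch of the reordering (simplified; identifiers follow the
description above):

    num_scid = (cmd_len - sizeof(*req)) / sizeof(__le16);

    if (num_scid > L2CAP_ECRED_MAX_CID) {
        result = L2CAP_CR_LE_INVALID_PARAMS;
        goto response;  /* rsp_len stays 0: only the base header is sent */
    }

    /* Size the response from cmd_len only after it has been validated. */
    rsp_len = sizeof(*rsp) + num_scid * sizeof(__le16);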

In the Linux kernel, the following vulnerability has been resolved:
Bluetooth: MGMT: Fix dangling pointer on mgmt_add_adv_patterns_monitor_complete
This fixes the condition checking so that mgmt_pending_valid is executed
whenever status != -ECANCELED; otherwise, calling mgmt_pending_free(cmd)
would kfree(cmd) without unlinking it from the list first, leaving a
dangling pointer. Any subsequent list traversal (e.g.,
mgmt_pending_foreach during __mgmt_power_off, or another
mgmt_pending_valid call) would dereference freed memory.

In the Linux kernel, the following vulnerability has been resolved:
net/smc: fix double-free of smc_spd_priv when tee() duplicates splice pipe buffer
smc_rx_splice() allocates one smc_spd_priv per pipe_buffer and stores
the pointer in pipe_buffer.private. The pipe_buf_operations for these
buffers used .get = generic_pipe_buf_get, which only increments the page
reference count when tee(2) duplicates a pipe buffer. The smc_spd_priv
pointer itself was not handled, so after tee() both the original and the
cloned pipe_buffer share the same smc_spd_priv *.
When both pipes are subsequently released, smc_rx_pipe_buf_release() is
called twice against the same object:
1st call: kfree(priv) sock_put(sk) smc_rx_update_cons() [correct]
2nd call: kfree(priv) sock_put(sk) smc_rx_update_cons() [UAF]
KASAN reports a slab-use-after-free in smc_rx_pipe_buf_release(), which
then escalates to a NULL-pointer dereference and kernel panic via
smc_rx_update_consumer() when it chases the freed priv->smc pointer:
BUG: KASAN: slab-use-after-free in smc_rx_pipe_buf_release+0x78/0x2a0
Read of size 8 at addr ffff888004a45740 by task smc_splice_tee_/74
Call Trace:
<TASK>
dump_stack_lvl+0x53/0x70
print_report+0xce/0x650
kasan_report+0xc6/0x100
smc_rx_pipe_buf_release+0x78/0x2a0
free_pipe_info+0xd4/0x130
pipe_release+0x142/0x160
__fput+0x1c6/0x490
__x64_sys_close+0x4f/0x90
do_syscall_64+0xa6/0x1a0
entry_SYSCALL_64_after_hwframe+0x77/0x7f
</TASK>
BUG: kernel NULL pointer dereference, address: 0000000000000020
RIP: 0010:smc_rx_update_consumer+0x8d/0x350
Call Trace:
<TASK>
smc_rx_pipe_buf_release+0x121/0x2a0
free_pipe_info+0xd4/0x130
pipe_release+0x142/0x160
__fput+0x1c6/0x490
__x64_sys_close+0x4f/0x90
do_syscall_64+0xa6/0x1a0
entry_SYSCALL_64_after_hwframe+0x77/0x7f
</TASK>
Kernel panic - not syncing: Fatal exception
Beyond the memory-safety problem, duplicating an SMC splice buffer is
semantically questionable: smc_rx_update_cons() would advance the
consumer cursor twice for the same data, corrupting receive-window
accounting. A refcount on smc_spd_priv could fix the double-free, but
the cursor-accounting issue would still need to be addressed separately.
The .get callback is invoked by both tee(2) and splice_pipe_to_pipe()
for partial transfers; both will now return -EFAULT. Users who need
to duplicate SMC socket data must use a copy-based read path.
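
A sketch of the resulting behavior, assuming a .get callback that
refuses duplication (the patch's exact shape may differ): tee() and
splice_pipe_to_pipe() call pipe_buf_get(), and those callers map a
false return to -EFAULT.

    /* Hypothetical .get: never share smc_spd_priv between two
     * pipe_buffers; duplication attempts fail instead. */
    static bool smc_rx_pipe_buf_get(struct pipe_inode_info *pipe,
                                    struct pipe_buffer *buf)
    {
        return false;
    }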