In the Linux kernel, the following vulnerability has been resolved:
wifi: rtw89: chan: fix soft lockup in rtw89_entity_recalc_mgnt_roles()
During rtw89_entity_recalc_mgnt_roles(), there is a normalizing process
that re-orders the list if an entry with the target pattern is found.
Once such an entry is found, the list_for_each_entry should be aborted,
but `break` only aborted the inner for-loop; the outer list_for_each_entry
still continued. Normally, only the first entry matches the target
pattern and the re-ordering changes nothing, so there is no soft lockup.
However, in some special cases, a soft lockup can happen.
Fix it by using `goto fill` to break out of the list_for_each_entry.
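To make the control-flow issue concrete, here is a minimal, self-contained C
sketch (illustrative names only, not the actual rtw89 code) showing why a
break in the inner loop never terminates the outer iteration, while a goto
past both loops does:
#include <stdio.h>

int main(void)
{
	int items[4] = { 0, 0, 1, 0 };	/* 1 = "matches the target pattern" */

	for (int i = 0; i < 4; i++) {
		for (int j = 0; j < 1; j++) {
			if (items[i]) {
				printf("found at %d, aborting both loops\n", i);
				/* a plain `break` here would only leave the
				 * inner loop; the outer loop keeps running */
				goto fill;
			}
		}
	}
fill:
	return 0;
}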
The following is a sample of kernel log for this problem.
watchdog: BUG: soft lockup - CPU#1 stuck for 26s! [wpa_supplicant:2055]
[...]
RIP: 0010:rtw89_entity_recalc ([...] chan.c:392 chan.c:479) rtw89_core
[...] |
In the Linux kernel, the following vulnerability has been resolved:
wifi: mt76: mt7925: fix off by one in mt7925_load_clc()
This comparison should be >= instead of > to prevent an out-of-bounds
read and write. |
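As a hedged illustration (generic buffer code, not the actual
mt7925_load_clc() logic), the off-by-one comes from accepting an index equal
to the array length:
#include <string.h>

#define CLC_BUF_LEN 16

/* Returns 0 on success, -1 if idx would land outside buf[]. */
static int write_entry(char buf[CLC_BUF_LEN], size_t idx, char val)
{
	if (idx >= CLC_BUF_LEN)	/* was: idx > CLC_BUF_LEN, letting idx == CLC_BUF_LEN through */
		return -1;
	buf[idx] = val;		/* buf[CLC_BUF_LEN] would be one past the end */
	return 0;
}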
In the Linux kernel, the following vulnerability has been resolved:
wifi: mt76: mt7925: fix NULL deref check in mt7925_change_vif_links
In mt7925_change_vif_links(), devm_kzalloc() may return NULL, but the
returned value is not checked. |
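A devm_* allocation can fail, so its return value must be checked before use.
A minimal kernel-style sketch of the pattern (the variable name and size are
illustrative, not the actual mt7925 code):
	mconf = devm_kzalloc(dev, sizeof(*mconf), GFP_KERNEL);
	if (!mconf)
		return -ENOMEM;	/* previously the NULL return was dereferenced */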
In the Linux kernel, the following vulnerability has been resolved:
Bluetooth: btbcm: Fix NULL deref in btbcm_get_board_name()
devm_kstrdup() can return a NULL pointer on failure, but the returned
value in btbcm_get_board_name() is not checked.
Add a NULL check in btbcm_get_board_name() to avoid a kernel NULL
pointer dereference. |
In the Linux kernel, the following vulnerability has been resolved:
Bluetooth: btrtl: check for NULL in btrtl_setup_realtek()
If a USB dongle whose chip is not maintained in ic_id_table is inserted,
a NULL pointer is accessed. Add a NULL pointer check to avoid the kernel
Oops. |
In the Linux kernel, the following vulnerability has been resolved:
mailbox: th1520: Fix memory corruption due to incorrect array size
The functions th1520_mbox_suspend_noirq and th1520_mbox_resume_noirq are
intended to save and restore the interrupt mask registers in the MBOX
ICU0. However, the array used to store these registers was incorrectly
sized, leading to memory corruption when accessing all four registers.
This commit corrects the array size to accommodate all four interrupt
mask registers, preventing memory corruption during suspend and resume
operations. |
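A hedged sketch of the shape of the fix (the macro and struct member names
below are assumptions for illustration, not the actual th1520 driver
definitions): the backing array must be sized for all four interrupt mask
registers that the noirq suspend/resume helpers save and restore.
#define TH1520_MBOX_ICU_IRQ_MASK_REGS	4	/* four mask registers, per the text above */

struct th1520_mbox_priv {
	/* was sized smaller than the number of registers iterated over in
	 * th1520_mbox_suspend_noirq()/th1520_mbox_resume_noirq(), so the
	 * save/restore loop wrote past the end of the array */
	u32 irq_mask_saved[TH1520_MBOX_ICU_IRQ_MASK_REGS];
};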
In the Linux kernel, the following vulnerability has been resolved:
xfrm: state: fix out-of-bounds read during lookup
lookup and resize can run in parallel.
The xfrm_state_hash_generation seqlock ensures a retry, but the hash
functions can observe a hmask value that is too large for the new hlist
array.
rehash does:
rcu_assign_pointer(net->xfrm.state_bydst, ndst) [..]
net->xfrm.state_hmask = nhashmask;
While state lookup does:
h = xfrm_dst_hash(net, daddr, saddr, tmpl->reqid, encap_family);
hlist_for_each_entry_rcu(x, net->xfrm.state_bydst + h, bydst) {
This is only safe if the state_bydst table observed by the lookup is
large enough for the state_hmask value it observed (or if the lookup
function gets serialized via the state spinlock again).
Fix this by prefetching state_hmask and the associated pointers.
The xfrm_state_hash_generation seqlock retry will ensure that the pointer
and the hmask will be consistent.
The existing helpers, like xfrm_dst_hash(), are now unsafe for the RCU
side; add lockdep assertions to document that they are only safe for the
insert side.
xfrm_state_lookup_byaddr() uses the spinlock rather than RCU. AFAICS this
is an oversight from back when state lookup was converted to RCU; this
lock should be replaced with RCU in a future patch. |
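A simplified sketch of the prefetching idea, assuming the usual seqcount
read-side pattern; the helper and field names follow the upstream xfrm code,
but this is not the exact patch:
	do {
		sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);

		/* snapshot hmask and the matching table pointer together */
		state_hmask = READ_ONCE(net->xfrm.state_hmask);
		state_bydst = rcu_dereference(net->xfrm.state_bydst);

		h = __xfrm_dst_hash(daddr, saddr, tmpl->reqid, encap_family, state_hmask);
		hlist_for_each_entry_rcu(x, state_bydst + h, bydst) {
			/* ... match and take a reference ... */
		}
		/* a concurrent rehash bumps the seqcount, forcing a retry,
		 * so a mismatched (pointer, hmask) pair is never trusted */
	} while (read_seqcount_retry(&net->xfrm.xfrm_state_hash_generation, sequence));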
In the Linux kernel, the following vulnerability has been resolved:
rtc: tps6594: Fix integer overflow on 32bit systems
The problem is this multiply in tps6594_rtc_set_offset()
tmp = offset * TICKS_PER_HOUR;
The "tmp" variable is an s64 but "offset" is a long in the
(-277774)-277774 range. On 32bit systems a long can hold numbers up to
approximately two billion. The number of TICKS_PER_HOUR is really large,
(32768 * 3600) or roughly a hundred million. When you start multiplying
by a hundred million it doesn't take long to overflow the two billion
mark.
Probably the safest way to fix this is to change the type of
TICKS_PER_HOUR to long long because it's such a large number. |
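A self-contained worked example of the arithmetic (a sketch, not the driver
code): with offset at its maximum of 277774, the product is about 3.3e13,
far beyond what a 32-bit long can hold, so the multiplication must be done
in 64 bits before the result is stored in tmp.
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	int32_t offset = 277774;			/* maximum allowed offset */
	int64_t ticks_per_hour = 32768LL * 3600;	/* 117964800 */

	/* 64-bit multiply: the value the driver actually needs */
	int64_t tmp = (int64_t)offset * ticks_per_hour;

	/* what survives if the multiply is done in 32 bits: only the low word */
	int32_t truncated = (int32_t)(uint32_t)tmp;

	printf("full 64-bit result : %lld\n", (long long)tmp);
	printf("32-bit truncation  : %d\n", truncated);
	return 0;
}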
In the Linux kernel, the following vulnerability has been resolved:
Revert "libfs: fix infinite directory reads for offset dir"
The current directory offset allocator (based on mtree_alloc_cyclic)
stores the next offset value to return in octx->next_offset. This
mechanism typically returns values that increase monotonically over
time. Eventually, though, the newly allocated offset value wraps
back to a low number (say, 2) which is smaller than other already-
allocated offset values.
Yu Kuai <yukuai3@huawei.com> reports that, after commit 64a7ce76fb90
("libfs: fix infinite directory reads for offset dir"), if a
directory's offset allocator wraps, existing entries are no longer
visible via readdir/getdents because offset_readdir() stops listing
entries once an entry's offset is larger than octx->next_offset.
These entries vanish persistently -- they can be looked up, but will
never again appear in readdir(3) output.
The reason for this is that the commit treats directory offsets as
monotonically increasing integer values rather than opaque cookies,
and introduces this comparison:
if (dentry2offset(dentry) >= last_index) {
On 64-bit platforms, the directory offset value upper bound is
2^63 - 1. Directory offsets will monotonically increase for millions
of years without wrapping.
On 32-bit platforms, however, LONG_MAX is 2^31 - 1. The allocator
can wrap after only a few weeks (at worst).
Revert commit 64a7ce76fb90 ("libfs: fix infinite directory reads for
offset dir") to prepare for a fix that can work properly on 32-bit
systems and might apply to recent LTS kernels where shmem employs
the simple_offset mechanism. |
In the Linux kernel, the following vulnerability has been resolved:
hrtimers: Handle CPU state correctly on hotplug
Consider a scenario where a CPU transitions from CPUHP_ONLINE to halfway
through a CPU hotunplug down to CPUHP_HRTIMERS_PREPARE, and then back to
CPUHP_ONLINE:
Since hrtimers_prepare_cpu() does not run, cpu_base.hres_active remains set
to 1 throughout. However, during a CPU unplug operation, the tick and the
clockevents are shut down at CPUHP_AP_TICK_DYING. On return to the online
state, CFS, for instance, incorrectly assumes that the hrtick is already
active, and the chance for the clockevent device to transition to oneshot
mode is also lost forever for that CPU, unless it goes back to a state lower
than CPUHP_HRTIMERS_PREPARE once.
This round-trip reveals another issue: cpu_base.online is not set to 1
after the transition, which triggers a WARN_ON_ONCE in enqueue_hrtimer().
Aside from that, the bulk of the per-CPU state is not reset either, which
means there are dangling pointers in the worst case.
Address this by adding a corresponding startup() callback, which resets the
stale per CPU state and sets the online flag.
[ tglx: Make the new callback unconditionally available, remove the online
modification in the prepare() callback and clear the remaining
state in the starting callback instead of the prepare callback ] |
In the Linux kernel, the following vulnerability has been resolved:
drm/amd/display: Initialize denominator defaults to 1
[WHAT & HOW]
Variables used as denominators, which may not be assigned other values,
should be initialized to non-zero to avoid DIVIDE_BY_ZERO, as reported
by Coverity.
(cherry picked from commit e2c4c6c10542ccfe4a0830bb6c9fd5b177b7bbb7) |
In the Linux kernel, the following vulnerability has been resolved:
irqchip/gic-v3-its: Don't enable interrupts in its_irq_set_vcpu_affinity()
The following call-chain leads to enabling interrupts in a nested interrupt
disabled section:
irq_set_vcpu_affinity()
irq_get_desc_lock()
raw_spin_lock_irqsave() <--- Disable interrupts
its_irq_set_vcpu_affinity()
guard(raw_spinlock_irq) <--- Enables interrupts when leaving the guard()
irq_put_desc_unlock() <--- Warns because interrupts are enabled
This was broken in commit b97e8a2f7130, which replaced the original
raw_spin_[un]lock() pair with guard(raw_spinlock_irq).
Fix the issue by using guard(raw_spinlock).
[ tglx: Massaged change log ] |
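A minimal sketch of the locking rule at play, assuming an illustrative lock
name (this is not the exact ITS code): the caller already disabled
interrupts via raw_spin_lock_irqsave(), so the callee must not touch the
interrupt state.
static int its_irq_set_vcpu_affinity(struct irq_data *d, void *vcpu_info)
{
	/* was: guard(raw_spinlock_irq)(&map_lock);
	 * the _irq variant does raw_spin_unlock_irq() on scope exit, which
	 * unconditionally re-enables interrupts inside the caller's
	 * irqsave section.  guard(raw_spinlock) only takes and releases
	 * the lock, leaving the interrupt state alone. */
	guard(raw_spinlock)(&map_lock);

	/* ... update the vLPI mapping ... */
	return 0;
}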
In the Linux kernel, the following vulnerability has been resolved:
virtio-blk: don't keep queue frozen during system suspend
Commit 4ce6e2db00de ("virtio-blk: Ensure no requests in virtqueues before
deleting vqs.") replaced queue quiesce with queue freeze in virtio-blk's
PM callbacks; the motivation is to drain in-flight IOs before suspending.
The block layer's queue freeze looks very handy, but it can also easily
cause deadlock: for example, any attempt to call into bio_queue_enter()
may deadlock if the queue is frozen in the current context. All kinds of
->suspend() callbacks are called in the suspend context, so keeping the
queue frozen for the whole suspend context isn't a good idea. Marek
reported a lockdep warning[1] caused by virtio-blk's queue freeze in
virtblk_freeze().
[1] https://lore.kernel.org/linux-block/ca16370e-d646-4eee-b9cc-87277c89c43c@samsung.com/
Given the motivation is to drain in-flight IOs, do that by calling freeze
& unfreeze, and meanwhile restore the previous behavior of keeping the
queue quiesced during suspend. |
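A hedged sketch of the resulting suspend-path shape (simplified; member
names such as vblk->disk->queue are assumptions for illustration, not
necessarily the exact patch):
	/* drain in-flight requests, but do not keep the queue frozen */
	blk_mq_freeze_queue(vblk->disk->queue);
	blk_mq_unfreeze_queue(vblk->disk->queue);

	/* keep new submissions blocked across suspend, as before */
	blk_mq_quiesce_queue(vblk->disk->queue);

	/* ... delete vqs, etc.; the resume side unquiesces the queue ... */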
In the Linux kernel, the following vulnerability has been resolved:
iio: adc: ti-ads1298: Add NULL check in ads1298_init
devm_kasprintf() can return a NULL pointer on failure. A check on the
return value of such a call in ads1298_init() is missing. Add it. |
In the Linux kernel, the following vulnerability has been resolved:
exfat: fix the new buffer was not zeroed before writing
Before writing, if a buffer_head is marked as new, its data must
be zeroed; otherwise uninitialized data in the page cache will
be written.
So this commit uses folio_zero_new_buffers() to zero the new
buffers before ->write_end(). |
In the Linux kernel, the following vulnerability has been resolved:
riscv: Fix sleeping in invalid context in die()
die() can be called from an exception handler, and therefore cannot sleep.
However, die() takes spinlock_t which can sleep with PREEMPT_RT enabled.
That causes the following warning:
BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 285, name: mutex
preempt_count: 110001, expected: 0
RCU nest depth: 0, expected: 0
CPU: 0 UID: 0 PID: 285 Comm: mutex Not tainted 6.12.0-rc7-00022-ge19049cf7d56-dirty #234
Hardware name: riscv-virtio,qemu (DT)
Call Trace:
dump_backtrace+0x1c/0x24
show_stack+0x2c/0x38
dump_stack_lvl+0x5a/0x72
dump_stack+0x14/0x1c
__might_resched+0x130/0x13a
rt_spin_lock+0x2a/0x5c
die+0x24/0x112
do_trap_insn_illegal+0xa0/0xea
_new_vmalloc_restore_context_a0+0xcc/0xd8
Oops - illegal instruction [#1]
Switch to use raw_spinlock_t, which does not sleep even with PREEMPT_RT
enabled. |
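A hedged sketch of the change (the lock name and surrounding code are
illustrative): raw_spinlock_t remains a real spinning lock under PREEMPT_RT,
so it is safe to take from the exception path.
static DEFINE_RAW_SPINLOCK(die_lock);	/* was: DEFINE_SPINLOCK(die_lock) */

void die(struct pt_regs *regs, const char *str)
{
	unsigned long flags;

	/* raw_spin_lock_irqsave() never sleeps, even with PREEMPT_RT,
	 * unlike spin_lock_irqsave(), which maps to a sleeping rtmutex */
	raw_spin_lock_irqsave(&die_lock, flags);
	/* ... print the oops and decide whether to panic ... */
	raw_spin_unlock_irqrestore(&die_lock, flags);
}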
In the Linux kernel, the following vulnerability has been resolved:
net/sctp: Prevent autoclose integer overflow in sctp_association_init()
While by default max_autoclose equals INT_MAX / HZ, one may set
net.sctp.max_autoclose to UINT_MAX. There is code in
sctp_association_init() that can consequently trigger an overflow. |
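One way to picture the problem and a clamped computation (a hedged sketch of
the idea, not necessarily the exact upstream change; field names follow the
sctp code):
	/* sp->autoclose comes from the sysctl and may be as large as
	 * UINT_MAX; converting it to jiffies by multiplying with HZ can
	 * then overflow.  Clamping against max_autoclose (INT_MAX / HZ by
	 * default) before the multiply keeps the product in range. */
	asoc->autoclose = min_t(unsigned long, sp->autoclose,
				net->sctp.max_autoclose);
	asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] = asoc->autoclose * HZ;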
In the Linux kernel, the following vulnerability has been resolved:
fgraph: Add READ_ONCE() when accessing fgraph_array[]
In __ftrace_return_to_handler(), a loop iterates over the fgraph_array[]
elements, which are fgraph_ops. The loop checks if an element is a
fgraph_stub to prevent using a fgraph_stub afterward.
However, if the compiler reloads fgraph_array[] after this check, it might
race with an update to fgraph_array[] that introduces a fgraph_stub. This
could result in the stub being processed, but the stub contains a null
"func_hash" field, leading to a NULL pointer dereference.
To ensure that the gops compared against the fgraph_stub matches the gops
processed later, add a READ_ONCE(). A similar patch appears in commit
63a8dfb ("function_graph: Add READ_ONCE() when accessing fgraph_array[]"). |
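A minimal sketch of the pattern (simplified from the loop described above):
take one snapshot of the array slot and use only that snapshot for both the
stub check and the later use.
	for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) {
		struct fgraph_ops *gops = READ_ONCE(fgraph_array[i]);

		if (gops == &fgraph_stub)
			continue;

		/* use only the snapshot; re-reading fgraph_array[i] here
		 * could observe a concurrent update to the stub, whose
		 * func_hash is NULL */
		/* ... call the gops handler / use gops->func_hash ... */
	}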
In the Linux kernel, the following vulnerability has been resolved:
gve: guard XSK operations on the existence of queues
This patch predicates the enabling and disabling of XSK pools on the
existence of queues. As it stands, if the interface is down, disabling
or enabling XSK pools would result in a crash, as the RX queue pointer
would be NULL. XSK pool registration will occur as part of the next
interface up.
Similarly, xsk_wakeup needs to be guarded against queues disappearing
while the function is executing, so a check against the
GVE_PRIV_FLAGS_NAPI_ENABLED flag is added to synchronize with the
disabling of the bit and the synchronize_net() in gve_turndown. |
In the Linux kernel, the following vulnerability has been resolved:
nfs: Fix oops in nfs_netfs_init_request() when copying to cache
When netfslib wants to copy some data that has just been read on behalf of
nfs, it creates a new write request and calls nfs_netfs_init_request() to
initialise it, but with a NULL file pointer. This causes
nfs_file_open_context() to oops - however, we don't actually need the nfs
context as we're only going to write to the cache.
Fix this by just returning if we aren't given a file pointer, and emitting
a warning if the request was for something other than copy-to-cache.
Further, fix nfs_netfs_free_request() so that it doesn't try to free the
context if the pointer is NULL. |
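A hedged sketch of the described fix (the origin constant below is an
assumption for illustration; the rest follows the nfs/netfs interfaces):
static int nfs_netfs_init_request(struct netfs_io_request *rreq, struct file *file)
{
	if (!file) {
		/* copy-to-cache writes have no struct file and do not need
		 * an nfs open context; anything else reaching here without
		 * a file is unexpected */
		WARN_ON_ONCE(rreq->origin != NETFS_PGPRIV2_COPY_TO_CACHE);
		return 0;
	}

	rreq->netfs_priv = get_nfs_open_context(nfs_file_open_context(file));
	/* ... */
	return 0;
}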