| CVE | Vendors | Products | Updated | CVSS v3.1 |
| In the Linux kernel, the following vulnerability has been resolved:
gpio: pca953x: fix pca953x_irq_bus_sync_unlock race
Ensure that `i2c_lock' is held when setting interrupt latch and mask in
pca953x_irq_bus_sync_unlock() in order to avoid races.
The other (non-probe) call site pca953x_gpio_set_multiple() ensures the
lock is held before calling pca953x_write_regs().
The problem occurred when a request raced against irq_bus_sync_unlock()
approximately once per thousand reboots on an i.MX8MP based system.
* Normal case
0-0022: write register AI|3a {03,02,00,00,01} Input latch P0
0-0022: write register AI|49 {fc,fd,ff,ff,fe} Interrupt mask P0
0-0022: write register AI|08 {ff,00,00,00,00} Output P3
0-0022: write register AI|12 {fc,00,00,00,00} Config P3
* Race case
0-0022: write register AI|08 {ff,00,00,00,00} Output P3
0-0022: write register AI|08 {03,02,00,00,01} *** Wrong register ***
0-0022: write register AI|12 {fc,00,00,00,00} Config P3
0-0022: write register AI|49 {fc,fd,ff,ff,fe} Interrupt mask P0 |
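A minimal sketch of the described locking change, assuming the usual shape of the pca953x driver (the register macros, MAX_LINE and the bitmap values are illustrative stand-ins, not verified driver symbols):

static void pca953x_irq_bus_sync_unlock(struct irq_data *d)
{
	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
	struct pca953x_chip *chip = gpiochip_get_data(gc);
	DECLARE_BITMAP(latch, MAX_LINE);
	DECLARE_BITMAP(mask, MAX_LINE);

	/* ... compute the new input-latch and interrupt-mask values ... */

	/* Hold the same lock pca953x_gpio_set_multiple() takes so the two
	 * multi-register writes cannot interleave with a concurrent request. */
	mutex_lock(&chip->i2c_lock);
	pca953x_write_regs(chip, PCAL953X_IN_LATCH, latch);
	pca953x_write_regs(chip, PCAL953X_INT_MASK, mask);
	mutex_unlock(&chip->i2c_lock);

	mutex_unlock(&chip->irq_lock);
}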
| In the Linux kernel, the following vulnerability has been resolved:
net, sunrpc: Remap EPERM in case of connection failure in xs_tcp_setup_socket
When using a BPF program on kernel_connect(), the call can return -EPERM. This
causes xs_tcp_setup_socket() to loop forever, filling up the syslog and causing
the kernel to potentially freeze up.
Neil suggested:
This will propagate -EPERM up into other layers which might not be ready
to handle it. It might be safer to map EPERM to an error we would be more
likely to expect from the network system - such as ECONNREFUSED or ENETDOWN.
Remapping to ECONNREFUSED seems reasonable. For BPF programs, setting a
different error code can be out of reach (see the handling in 4fbac77d2d09),
in particular on kernels which do not have f10d05966196 ("bpf: Make
BPF_PROG_RUN_ARRAY return -err instead of allow boolean"), so it is better to
simply remap for consistent behavior. UDP already handles EPERM in
xs_udp_send_request(). |
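A hedged sketch of the remap, expressed as a hypothetical helper rather than the exact upstream change to xs_tcp_setup_socket():

static int xs_remap_connect_error(int status)
{
	/* A cgroup/BPF connect hook can fail kernel_connect() with -EPERM.
	 * Map it to -ECONNREFUSED so the reconnect logic treats it like an
	 * ordinary connection failure and backs off instead of spinning. */
	if (status == -EPERM)
		return -ECONNREFUSED;
	return status;
}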
| In the Linux kernel, the following vulnerability has been resolved:
Revert "sched/fair: Make sure to try to detach at least one movable task"
This reverts commit b0defa7ae03ecf91b8bfd10ede430cff12fcbd06.
b0defa7ae03ec changed the load balancing logic to ignore env.max_loop if
all tasks examined to that point were pinned. The goal of the patch was
to make it more likely to be able to detach a task buried in a long list
of pinned tasks. However, this has the unfortunate side effect of
creating an O(n) iteration in detach_tasks(), as we now must fully
iterate every task on a cpu if all or most are pinned. Since this load
balance code is done with rq lock held, and often in softirq context, it
is very easy to trigger hard lockups. We observed such hard lockups with
a user who affined O(10k) threads to a single cpu.
When I discussed this with Vincent he initially suggested that we keep
the limit on the number of tasks to detach, but increase the number of
tasks we can search. However, after some back and forth on the mailing
list, he recommended we instead revert the original patch, as it seems
likely no one was actually getting hit by the original issue. |
| In the Linux kernel, the following vulnerability has been resolved:
USB: serial: mos7840: fix crash on resume
Since commit c49cfa917025 ("USB: serial: use generic method if no
alternative is provided in usb serial layer"), USB serial core calls the
generic resume implementation when the driver has not provided one.
This can trigger a crash on resume with mos7840 since support for
multiple read URBs was added back in 2011. Specifically, both port read
URBs are now submitted on resume for open ports, but the context pointer
of the second URB is left set to the core rather than the mos7840 port
structure.
Fix this by implementing dedicated suspend and resume functions for
mos7840.
Tested with Delock 87414 USB 2.0 to 4x serial adapter.
[ johan: analyse crash and rewrite commit message; set busy flag on
resume; drop bulk-in check; drop unnecessary usb_kill_urb() ] |
| In the Linux kernel, the following vulnerability has been resolved:
x86/bhi: Avoid warning in #DB handler due to BHI mitigation
When BHI mitigation is enabled, if SYSENTER is invoked with the TF flag set
then entry_SYSENTER_compat() uses CLEAR_BRANCH_HISTORY and calls the
clear_bhb_loop() before the TF flag is cleared. This causes the #DB handler
(exc_debug_kernel()) to issue a warning because single-step is used outside the
entry_SYSENTER_compat() function.
To address this issue, entry_SYSENTER_compat() should use CLEAR_BRANCH_HISTORY
after making sure the TF flag is cleared.
The problem can be reproduced with the following sequence:
$ cat sysenter_step.c
int main()
{ asm("pushf; pop %ax; bts $8,%ax; push %ax; popf; sysenter"); }
$ gcc -o sysenter_step sysenter_step.c
$ ./sysenter_step
Segmentation fault (core dumped)
The program is expected to crash, and the #DB handler will issue a warning.
Kernel log:
WARNING: CPU: 27 PID: 7000 at arch/x86/kernel/traps.c:1009 exc_debug_kernel+0xd2/0x160
...
RIP: 0010:exc_debug_kernel+0xd2/0x160
...
Call Trace:
<#DB>
? show_regs+0x68/0x80
? __warn+0x8c/0x140
? exc_debug_kernel+0xd2/0x160
? report_bug+0x175/0x1a0
? handle_bug+0x44/0x90
? exc_invalid_op+0x1c/0x70
? asm_exc_invalid_op+0x1f/0x30
? exc_debug_kernel+0xd2/0x160
exc_debug+0x43/0x50
asm_exc_debug+0x1e/0x40
RIP: 0010:clear_bhb_loop+0x0/0xb0
...
</#DB>
<TASK>
? entry_SYSENTER_compat_after_hwframe+0x6e/0x8d
</TASK>
[ bp: Massage commit message. ] |
| In the Linux kernel, the following vulnerability has been resolved:
firmware: cs_dsp: Return error if block header overflows file
Return an error from cs_dsp_power_up() if a block header is longer
than the amount of data left in the file.
The previous code in cs_dsp_load() and cs_dsp_load_coeff() would loop
while there was enough data left in the file for a valid region. This
protected against overrunning the end of the file data, but it didn't
abort the file processing with an error. |
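Illustrative form of the check (function name and error code are placeholders, not the exact cs_dsp symbols): fail the load as soon as the remaining file data cannot hold a full block header.

static int cs_dsp_check_block_header(size_t bytes_left, size_t header_len)
{
	/* If fewer bytes remain than a block header needs, the file is
	 * truncated or corrupt: return an error instead of silently
	 * stopping the parse loop. */
	if (bytes_left < header_len)
		return -EOVERFLOW;

	return 0;
}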
| In the Linux kernel, the following vulnerability has been resolved:
firmware: cs_dsp: Validate payload length before processing block
Move the payload length check in cs_dsp_load() and cs_dsp_coeff_load()
to be done before the block is processed.
The check that the length of a block payload does not exceed the number
of remaining bytes in the firmware file buffer was being done near the
end of the loop iteration. However, some code before that check used the
length field without validating it. |
| In the Linux kernel, the following vulnerability has been resolved:
drm/amdgpu: Using uninitialized value *size when calling amdgpu_vce_cs_reloc
Initialize the size before calling amdgpu_vce_cs_reloc(), e.g. for case 0x03000001.
V2: To really improve the handling we would actually
need to have a separate value of 0xffffffff. (Christian) |
| In the Linux kernel, the following vulnerability has been resolved:
wifi: mt76: replace skb_put with skb_put_zero
Avoid potentially reusing uninitialized data. |
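The difference in one line, wrapped in a hypothetical helper for illustration: skb_put() hands back uninitialized bytes, while skb_put_zero() clears them first.

static void *frame_push_hdr(struct sk_buff *skb, unsigned int len)
{
	/* Any field the driver forgets to fill now reads back as 0 instead
	 * of leaking stale heap contents into the transmitted frame. */
	return skb_put_zero(skb, len);	/* was: skb_put(skb, len) */
}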
| In the Linux kernel, the following vulnerability has been resolved:
scsi: mpi3mr: Sanitise num_phys
Information is stored in mr_sas_port->phy_mask; values larger than the size
of this field shouldn't be allowed. |
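A minimal sketch of the sanity check, assuming phy_mask is a fixed-width bitmap with one bit per phy (helper name and error code are illustrative); phy_mask_bits would be 8 * sizeof(mr_sas_port->phy_mask):

static int mpi3mr_check_num_phys(u32 num_phys, u32 phy_mask_bits)
{
	/* One bit per phy: a phy count wider than the mask field cannot be
	 * represented and must be rejected before it is used to set bits. */
	if (num_phys > phy_mask_bits)
		return -EINVAL;

	return 0;
}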
| In the Linux kernel, the following vulnerability has been resolved:
tcp_metrics: validate source addr length
I don't see anything checking that TCP_METRICS_ATTR_SADDR_IPV4
is at least 4 bytes long, and the policy doesn't have an entry
for this attribute at all (nor does it for IPv6, but v6 is
validated manually). |
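One plausible shape of the fix (hedged: the exact upstream policy entry may differ): give the attribute a policy entry so the netlink core enforces at least an IPv4 address worth of payload before the handler ever sees it.

static const struct nla_policy tcp_metrics_nl_policy[TCP_METRICS_ATTR_MAX + 1] = {
	[TCP_METRICS_ATTR_SADDR_IPV4]	= { .type = NLA_U32, },	/* rejects short payloads */
	/* IPv6 addresses keep their existing manual length validation. */
};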
| In the Linux kernel, the following vulnerability has been resolved:
nvmet: fix a possible leak when destroy a ctrl during qp establishment
In nvmet_sq_destroy we capture sq->ctrl early and if it is non-NULL we
know that a ctrl was allocated (in the admin connect request handler)
and we need to release pending AERs, clear ctrl->sqs and sq->ctrl
(for nvme-loop primarily), and drop the final reference on the ctrl.
However, a small window is possible where nvmet_sq_destroy starts (as
a result of the client giving up and disconnecting) concurrently with
the nvme admin connect cmd (which may be in an early stage), but *before*
the kill_and_confirm of sq->ref (i.e. the admin connect managed to get a
live sq reference). In this case, sq->ctrl was allocated, but only after
it had already been captured in a local variable in nvmet_sq_destroy,
which prevented the final reference drop on the ctrl.
Solve this by re-capturing sq->ctrl after all inflight requests have
completed, at which point the sq->ctrl reference is guaranteed to be final,
and move forward based on that.
This issue was observed in an environment with many hosts connecting
multiple ctrls simultaneously, creating a delay in allocating a ctrl
leading up to this race window. |
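A heavily simplified sketch of the resulting flow (the real nvmet_sq_destroy() does more work around these steps; field and helper names follow the commit text but should be treated as illustrative):

void nvmet_sq_destroy(struct nvmet_sq *sq)
{
	struct nvmet_ctrl *ctrl = sq->ctrl;	/* early snapshot, may be NULL */

	if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq)
		nvmet_async_events_failall(ctrl);

	/* Wait until every inflight request on this queue has completed. */
	percpu_ref_kill_and_confirm(&sq->ref, nvmet_confirm_sq);
	wait_for_completion(&sq->confirm_done);
	wait_for_completion(&sq->free_done);

	/* Re-capture: a racing admin connect may have installed sq->ctrl
	 * after the first read above; by now it can no longer change. */
	ctrl = sq->ctrl;
	if (ctrl) {
		nvmet_ctrl_put(ctrl);
		sq->ctrl = NULL;
	}
}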
| In the Linux kernel, the following vulnerability has been resolved:
mm: avoid overflows in dirty throttling logic
The dirty throttling logic is interspersed with assumptions that dirty
limits in PAGE_SIZE units fit into 32-bit (so that various multiplications
fit into 64-bits). If limits end up being larger, we will hit overflows,
possible divisions by 0, etc. Fix these problems by never allowing such
large dirty limits, as they have dubious practical value anyway. For the
dirty_bytes / dirty_background_bytes interfaces we can just refuse to set
such large limits. For dirty_ratio / dirty_background_ratio it isn't so
simple as the dirty limit is computed from the amount of available memory
which can change due to memory hotplug etc. So when converting dirty
limits from ratios to numbers of pages, we just don't allow the result to
exceed UINT_MAX.
This is a root-only triggerable problem which occurs when the operator
sets dirty limits to >16 TB. |
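A hypothetical helper showing the clamp (not the exact upstream code): when converting a ratio to a page count, cap the result so later page-unit arithmetic stays within the 32-bit assumption.

static unsigned long dirty_ratio_to_pages(unsigned long available_pages,
					  unsigned int ratio)
{
	u64 pages = div_u64((u64)available_pages * ratio, 100);

	/* Limits beyond UINT_MAX pages (~16 TB with 4 KiB pages) have no
	 * practical value and would overflow the dirty throttling math. */
	return min_t(u64, pages, UINT_MAX);
}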
| In the Linux kernel, the following vulnerability has been resolved:
scsi: qedf: Make qedf_execute_tmf() non-preemptible
Stop calling smp_processor_id() from preemptible code in
qedf_execute_tmf(). This results in BUG_ON() when running an RT kernel.
[ 659.343280] BUG: using smp_processor_id() in preemptible [00000000] code: sg_reset/3646
[ 659.343282] caller is qedf_execute_tmf+0x8b/0x360 [qedf] |
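One conventional way to keep such a section non-preemptible (hedged: the function below is an illustrative stand-in, not the upstream diff): pin the task to a CPU for as long as the CPU id is in use.

static void qedf_tmf_cpu_example(void)
{
	int cpu = get_cpu();	/* disables preemption, returns current CPU */

	/* Use 'cpu' (e.g. to pick a per-CPU queue or fill statistics) while
	 * preemption is off, so the task cannot migrate mid-operation. */
	pr_debug("qedf: TMF bookkeeping on CPU %d\n", cpu);

	put_cpu();		/* re-enables preemption */
}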
| In the Linux kernel, the following vulnerability has been resolved:
wifi: cfg80211: restrict NL80211_ATTR_TXQ_QUANTUM values
syzbot is able to trigger softlockups by setting NL80211_ATTR_TXQ_QUANTUM
to 2^31.
We had a similar issue in sch_fq, fixed with commit
d9e15a273306 ("pkt_sched: fq: do not accept silly TCA_FQ_QUANTUM")
watchdog: BUG: soft lockup - CPU#1 stuck for 26s! [kworker/1:0:24]
Modules linked in:
irq event stamp: 131135
hardirqs last enabled at (131134): [<ffff80008ae8778c>] __exit_to_kernel_mode arch/arm64/kernel/entry-common.c:85 [inline]
hardirqs last enabled at (131134): [<ffff80008ae8778c>] exit_to_kernel_mode+0xdc/0x10c arch/arm64/kernel/entry-common.c:95
hardirqs last disabled at (131135): [<ffff80008ae85378>] __el1_irq arch/arm64/kernel/entry-common.c:533 [inline]
hardirqs last disabled at (131135): [<ffff80008ae85378>] el1_interrupt+0x24/0x68 arch/arm64/kernel/entry-common.c:551
softirqs last enabled at (125892): [<ffff80008907e82c>] neigh_hh_init net/core/neighbour.c:1538 [inline]
softirqs last enabled at (125892): [<ffff80008907e82c>] neigh_resolve_output+0x268/0x658 net/core/neighbour.c:1553
softirqs last disabled at (125896): [<ffff80008904166c>] local_bh_disable+0x10/0x34 include/linux/bottom_half.h:19
CPU: 1 PID: 24 Comm: kworker/1:0 Not tainted 6.9.0-rc7-syzkaller-gfda5695d692c #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Workqueue: mld mld_ifc_work
pstate: 80400005 (Nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : __list_del include/linux/list.h:195 [inline]
pc : __list_del_entry include/linux/list.h:218 [inline]
pc : list_move_tail include/linux/list.h:310 [inline]
pc : fq_tin_dequeue include/net/fq_impl.h:112 [inline]
pc : ieee80211_tx_dequeue+0x6b8/0x3b4c net/mac80211/tx.c:3854
lr : __list_del_entry include/linux/list.h:218 [inline]
lr : list_move_tail include/linux/list.h:310 [inline]
lr : fq_tin_dequeue include/net/fq_impl.h:112 [inline]
lr : ieee80211_tx_dequeue+0x67c/0x3b4c net/mac80211/tx.c:3854
sp : ffff800093d36700
x29: ffff800093d36a60 x28: ffff800093d36960 x27: dfff800000000000
x26: ffff0000d800ad50 x25: ffff0000d800abe0 x24: ffff0000d800abf0
x23: ffff0000e0032468 x22: ffff0000e00324d4 x21: ffff0000d800abf0
x20: ffff0000d800abf8 x19: ffff0000d800abf0 x18: ffff800093d363c0
x17: 000000000000d476 x16: ffff8000805519dc x15: ffff7000127a6cc8
x14: 1ffff000127a6cc8 x13: 0000000000000004 x12: ffffffffffffffff
x11: ffff7000127a6cc8 x10: 0000000000ff0100 x9 : 0000000000000000
x8 : 0000000000000000 x7 : 0000000000000000 x6 : 0000000000000000
x5 : ffff80009287aa08 x4 : 0000000000000008 x3 : ffff80008034c7fc
x2 : ffff0000e0032468 x1 : 00000000da0e46b8 x0 : ffff0000e0032470
Call trace:
__list_del include/linux/list.h:195 [inline]
__list_del_entry include/linux/list.h:218 [inline]
list_move_tail include/linux/list.h:310 [inline]
fq_tin_dequeue include/net/fq_impl.h:112 [inline]
ieee80211_tx_dequeue+0x6b8/0x3b4c net/mac80211/tx.c:3854
wake_tx_push_queue net/mac80211/util.c:294 [inline]
ieee80211_handle_wake_tx_queue+0x118/0x274 net/mac80211/util.c:315
drv_wake_tx_queue net/mac80211/driver-ops.h:1350 [inline]
schedule_and_wake_txq net/mac80211/driver-ops.h:1357 [inline]
ieee80211_queue_skb+0x18e8/0x2244 net/mac80211/tx.c:1664
ieee80211_tx+0x260/0x400 net/mac80211/tx.c:1966
ieee80211_xmit+0x278/0x354 net/mac80211/tx.c:2062
__ieee80211_subif_start_xmit+0xab8/0x122c net/mac80211/tx.c:4338
ieee80211_subif_start_xmit+0xe0/0x438 net/mac80211/tx.c:4532
__netdev_start_xmit include/linux/netdevice.h:4903 [inline]
netdev_start_xmit include/linux/netdevice.h:4917 [inline]
xmit_one net/core/dev.c:3531 [inline]
dev_hard_start_xmit+0x27c/0x938 net/core/dev.c:3547
__dev_queue_xmit+0x1678/0x33fc net/core/dev.c:4341
dev_queue_xmit include/linux/netdevice.h:3091 [inline]
neigh_resolve_output+0x558/0x658 net/core/neighbour.c:1563
neigh_output include/net/neighbour.h:542 [inline]
ip6_fini
---truncated--- |
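One plausible way to bound the attribute (hedged: the precise upstream limit and mechanism may differ, and the array name is illustrative): let the netlink policy reject absurd quantum values before they reach the fair-queue scheduler.

static const struct nla_policy txq_quantum_policy_example[NL80211_ATTR_MAX + 1] = {
	/* A quantum of 2^31 bytes keeps fq_tin_dequeue() busy for ages with
	 * the queue lock held; cap it at something sane. */
	[NL80211_ATTR_TXQ_QUANTUM] = NLA_POLICY_RANGE(NLA_U32, 1, 1 << 16),
};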
| In the Linux kernel, the following vulnerability has been resolved:
net: ntb_netdev: Move ntb_netdev_rx_handler() to call netif_rx() from __netif_rx()
The following is emitted when using idxd (DSA) dmanegine as the data
mover for ntb_transport that ntb_netdev uses.
[74412.546922] BUG: using smp_processor_id() in preemptible [00000000] code: irq/52-idxd-por/14526
[74412.556784] caller is netif_rx_internal+0x42/0x130
[74412.562282] CPU: 6 PID: 14526 Comm: irq/52-idxd-por Not tainted 6.9.5 #5
[74412.569870] Hardware name: Intel Corporation ArcherCity/ArcherCity, BIOS EGSDCRB1.E9I.1752.P05.2402080856 02/08/2024
[74412.581699] Call Trace:
[74412.584514] <TASK>
[74412.586933] dump_stack_lvl+0x55/0x70
[74412.591129] check_preemption_disabled+0xc8/0xf0
[74412.596374] netif_rx_internal+0x42/0x130
[74412.600957] __netif_rx+0x20/0xd0
[74412.604743] ntb_netdev_rx_handler+0x66/0x150 [ntb_netdev]
[74412.610985] ntb_complete_rxc+0xed/0x140 [ntb_transport]
[74412.617010] ntb_rx_copy_callback+0x53/0x80 [ntb_transport]
[74412.623332] idxd_dma_complete_txd+0xe3/0x160 [idxd]
[74412.628963] idxd_wq_thread+0x1a6/0x2b0 [idxd]
[74412.634046] irq_thread_fn+0x21/0x60
[74412.638134] ? irq_thread+0xa8/0x290
[74412.642218] irq_thread+0x1a0/0x290
[74412.646212] ? __pfx_irq_thread_fn+0x10/0x10
[74412.651071] ? __pfx_irq_thread_dtor+0x10/0x10
[74412.656117] ? __pfx_irq_thread+0x10/0x10
[74412.660686] kthread+0x100/0x130
[74412.664384] ? __pfx_kthread+0x10/0x10
[74412.668639] ret_from_fork+0x31/0x50
[74412.672716] ? __pfx_kthread+0x10/0x10
[74412.676978] ret_from_fork_asm+0x1a/0x30
[74412.681457] </TASK>
The cause is that the idxd driver's interrupt completion handler uses a
threaded interrupt, and the threaded handler runs in neither hard nor soft
interrupt context. However, __netif_rx() can only be called from interrupt context.
Change the call to netif_rx() in order to allow completion via normal
context for dmaengine drivers that utilize threaded irq handling.
While commit baebdf48c360 ("net: dev: Makes sure netif_rx() can be invoked
in any context.") changed netif_rx() to __netif_rx(), that change should
have been a no-op here. However, the code preceding this fix should have
been using netif_rx_ni() or netif_rx_any_context(). |
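A simplified view of the call-site change (the real handler does more; the wrapper below is illustrative): netif_rx() disables bottom halves itself and is therefore safe from the threaded-irq (process) context used by the idxd completion, whereas __netif_rx() may only be called from interrupt context.

static void ntb_netdev_rx_example(struct net_device *ndev, struct sk_buff *skb)
{
	skb->protocol = eth_type_trans(skb, ndev);

	/* was: __netif_rx(skb), which is only legal from interrupt context */
	netif_rx(skb);
}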
| In the Linux kernel, the following vulnerability has been resolved:
Revert "mm/writeback: fix possible divide-by-zero in wb_dirty_limits(), again"
Patch series "mm: Avoid possible overflows in dirty throttling".
Dirty throttling logic assumes dirty limits in page units fit into
32-bits. This patch series makes sure this is true (see patch 2/2 for
more details).
This patch (of 2):
This reverts commit 9319b647902cbd5cc884ac08a8a6d54ce111fc78.
The commit is broken in several ways. Firstly, the removed (u64) cast
from the multiplication will introduce a multiplication overflow on 32-bit
archs if wb_thresh * bg_thresh >= 1<<32 (which is actually common - the
default settings with 4GB of RAM will trigger this). Secondly, the
div64_u64() is unnecessarily expensive on 32-bit archs. We have
div64_ul() in case we want to be safe & cheap. Thirdly, if dirty
thresholds are larger than 1<<32 pages, then dirty balancing is going to
blow up in many other spectacular ways anyway so trying to fix one
possible overflow is just moot. |
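A small illustration of the first point (the values are made up but realistic for a 4 GB machine): without the (u64) cast the product is computed in 32 bits on 32-bit architectures and wraps before being widened.

static u64 wb_thresh_product_example(void)
{
	unsigned long wb_thresh = 200000, bg_thresh = 150000;	/* page counts */

	u64 wrong = wb_thresh * bg_thresh;	/* 32-bit multiply on 32-bit archs: wraps */
	u64 right = (u64)wb_thresh * bg_thresh;	/* widened first: no overflow */

	/* For the later division, div64_ul() is the cheap helper when the
	 * divisor fits in an unsigned long; div64_u64() is the general,
	 * slower one. */
	return right - wrong;	/* nonzero on 32-bit, zero on 64-bit */
}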
| In the Linux kernel, the following vulnerability has been resolved:
drm/nouveau: fix null pointer dereference in nouveau_connector_get_modes
In nouveau_connector_get_modes(), the return value of drm_mode_duplicate()
is assigned to mode, which will lead to a possible NULL pointer
dereference on failure of drm_mode_duplicate(). Add a check to avoid the
NULL pointer dereference. |
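A minimal sketch of the pattern (the helper is hypothetical; the actual check sits in nouveau_connector_get_modes()): bail out when drm_mode_duplicate() fails instead of dereferencing NULL.

static int nouveau_add_duplicated_mode(struct drm_connector *connector,
				       const struct drm_display_mode *src)
{
	struct drm_display_mode *mode = drm_mode_duplicate(connector->dev, src);

	/* drm_mode_duplicate() allocates and can return NULL under memory
	 * pressure; report "no modes" instead of crashing. */
	if (!mode)
		return 0;

	drm_mode_probed_add(connector, mode);
	return 1;
}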
| In the Linux kernel, the following vulnerability has been resolved:
x86: stop playing stack games in profile_pc()
The 'profile_pc()' function is used for timer-based profiling, which
isn't really all that relevant any more to begin with, but it also ends
up making assumptions based on the stack layout that aren't necessarily
valid.
Basically, the code tries to account the time spent in spinlocks to the
caller rather than the spinlock, and while I support that as a concept,
it's not worth the code complexity or the KASAN warnings when no serious
profiling is done using timers anyway these days.
And the code really does depend on stack layout that is only true in the
simplest of cases. We've lost the comment at some point (I think when
the 32-bit and 64-bit code was unified), but it used to say:
Assume the lock function has either no stack frame or a copy
of eflags from PUSHF.
which explains why it just blindly loads a word or two straight off the
stack pointer and then takes a minimal look at the values to just check
if they might be eflags or the return pc:
Eflags always has bits 22 and up cleared unlike kernel addresses
but that basic stack layout assumption assumes that there isn't any lock
debugging etc going on that would complicate the code and cause a stack
frame.
It causes KASAN unhappiness reported for years by syzkaller [1] and
others [2].
With no real practical reason for this any more, just remove the code.
Just for historical interest, here's some background commits relating to
this code from 2006:
0cb91a229364 ("i386: Account spinlocks to the caller during profiling for !FP kernels")
31679f38d886 ("Simplify profile_pc on x86-64")
and a code unification from 2009:
ef4512882dbe ("x86: time_32/64.c unify profile_pc")
but the basics of this thing actually goes back to before the git tree. |
| In the Linux kernel, the following vulnerability has been resolved:
net/iucv: Avoid explicit cpumask var allocation on stack
For a CONFIG_CPUMASK_OFFSTACK=y kernel, explicit allocation of a cpumask
variable on the stack is not recommended since it can cause potential stack
overflow.
Instead, kernel code should always use the *cpumask_var API(s) to allocate a
cpumask variable in a config-neutral way, leaving the allocation strategy to
CONFIG_CPUMASK_OFFSTACK.
Use the *cpumask_var API(s) to address it. |
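A sketch of the recommended pattern (function name is illustrative): with CONFIG_CPUMASK_OFFSTACK=y, cpumask_var_t becomes a heap-allocated pointer, so the bitmap never lands on the stack no matter how large NR_CPUS is.

static int iucv_cpumask_example(void)
{
	cpumask_var_t mask;

	/* Heap-allocated when CONFIG_CPUMASK_OFFSTACK=y, plain on-stack
	 * bitmap otherwise; the caller does not need to care which. */
	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;

	cpumask_copy(mask, cpu_online_mask);
	/* ... iterate over or pass 'mask' around as needed ... */

	free_cpumask_var(mask);
	return 0;
}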