| CVE | Vendors | Products | Updated | CVSS v3.1 |
| In the Linux kernel, the following vulnerability has been resolved:
jfs: Prevent copying of nlink with value 0 from disk inode
syzbot reported a deadlock in diFree. [1]
When "ioctl$LOOP_SET_STATUS64" is called with an offset value of 4 that does
not match the mounted loop device, the mapping of the mounted loop device is
invalidated.
Then, while a directory is created and the iag inode is set up in
diReadSpecial(), the page holding the fixed disk inode (AIT) is read in raw
mode via read_metapage(). The metapage data it returns is corrupted, so
copy_from_dinode() assigns an nlink value of 0 to the iag inode, which
ultimately causes a deadlock when diFree() is entered.
To avoid this, check the nlink value of the dinode before populating the iag
inode.
[1]
WARNING: possible recursive locking detected
6.12.0-rc7-syzkaller-00212-g4a5df3796467 #0 Not tainted
--------------------------------------------
syz-executor301/5309 is trying to acquire lock:
ffff888044548920 (&(imap->im_aglock[index])){+.+.}-{3:3}, at: diFree+0x37c/0x2fb0 fs/jfs/jfs_imap.c:889
but task is already holding lock:
ffff888044548920 (&(imap->im_aglock[index])){+.+.}-{3:3}, at: diAlloc+0x1b6/0x1630
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&(imap->im_aglock[index]));
lock(&(imap->im_aglock[index]));
*** DEADLOCK ***
May be due to missing lock nesting notation
5 locks held by syz-executor301/5309:
#0: ffff8880422a4420 (sb_writers#9){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:515
#1: ffff88804755b390 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:850 [inline]
#1: ffff88804755b390 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: filename_create+0x260/0x540 fs/namei.c:4026
#2: ffff888044548920 (&(imap->im_aglock[index])){+.+.}-{3:3}, at: diAlloc+0x1b6/0x1630
#3: ffff888044548890 (&imap->im_freelock){+.+.}-{3:3}, at: diNewIAG fs/jfs/jfs_imap.c:2460 [inline]
#3: ffff888044548890 (&imap->im_freelock){+.+.}-{3:3}, at: diAllocExt fs/jfs/jfs_imap.c:1905 [inline]
#3: ffff888044548890 (&imap->im_freelock){+.+.}-{3:3}, at: diAllocAG+0x4b7/0x1e50 fs/jfs/jfs_imap.c:1669
#4: ffff88804755a618 (&jfs_ip->rdwrlock/1){++++}-{3:3}, at: diNewIAG fs/jfs/jfs_imap.c:2477 [inline]
#4: ffff88804755a618 (&jfs_ip->rdwrlock/1){++++}-{3:3}, at: diAllocExt fs/jfs/jfs_imap.c:1905 [inline]
#4: ffff88804755a618 (&jfs_ip->rdwrlock/1){++++}-{3:3}, at: diAllocAG+0x869/0x1e50 fs/jfs/jfs_imap.c:1669
stack backtrace:
CPU: 0 UID: 0 PID: 5309 Comm: syz-executor301 Not tainted 6.12.0-rc7-syzkaller-00212-g4a5df3796467 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
print_deadlock_bug+0x483/0x620 kernel/locking/lockdep.c:3037
check_deadlock kernel/locking/lockdep.c:3089 [inline]
validate_chain+0x15e2/0x5920 kernel/locking/lockdep.c:3891
__lock_acquire+0x1384/0x2050 kernel/locking/lockdep.c:5202
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5825
__mutex_lock_common kernel/locking/mutex.c:608 [inline]
__mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
diFree+0x37c/0x2fb0 fs/jfs/jfs_imap.c:889
jfs_evict_inode+0x32d/0x440 fs/jfs/inode.c:156
evict+0x4e8/0x9b0 fs/inode.c:725
diFreeSpecial fs/jfs/jfs_imap.c:552 [inline]
duplicateIXtree+0x3c6/0x550 fs/jfs/jfs_imap.c:3022
diNewIAG fs/jfs/jfs_imap.c:2597 [inline]
diAllocExt fs/jfs/jfs_imap.c:1905 [inline]
diAllocAG+0x17dc/0x1e50 fs/jfs/jfs_imap.c:1669
diAlloc+0x1d2/0x1630 fs/jfs/jfs_imap.c:1590
ialloc+0x8f/0x900 fs/jfs/jfs_inode.c:56
jfs_mkdir+0x1c5/0xba0 fs/jfs/namei.c:225
vfs_mkdir+0x2f9/0x4f0 fs/namei.c:4257
do_mkdirat+0x264/0x3a0 fs/namei.c:4280
__do_sys_mkdirat fs/namei.c:4295 [inline]
__se_sys_mkdirat fs/namei.c:4293 [inline]
__x64_sys_mkdirat+0x87/0xa0 fs/namei.c:4293
do_syscall_x64 arch/x86/en
---truncated--- |
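A hedged sketch of the jfs check described above: reject an on-disk inode whose link count is zero before its fields are copied into the in-core inode. The helper name is invented for illustration; the real change lives in copy_from_dinode() in fs/jfs/jfs_imap.c.

    /* Hypothetical helper; copy_from_dinode() would call it before copying
     * any fields from the on-disk inode. */
    static int jfs_dinode_sane(const struct dinode *dip)
    {
        /* A link count of 0 on disk means corrupted metadata; copying it
         * would later drive diFree() into the deadlock shown above. */
        if (le32_to_cpu(dip->di_nlink) == 0)
            return -EIO;
        return 0;
    }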
| In the Linux kernel, the following vulnerability has been resolved:
drm: zynqmp_dp: Fix a deadlock in zynqmp_dp_ignore_hpd_set()
Instead of attempting to lock the same mutex twice, lock and then unlock it.
This bug has been detected by the Clang thread-safety analyzer. |
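In sketch form, the zynqmp_dp fix above replaces a duplicated lock acquisition with a lock/unlock pair; field names are approximate, not the literal driver code.

    /* Before: the second mutex_lock() self-deadlocks on a lock already held. */
    mutex_lock(&dp->lock);
    dp->ignore_hpd = val;
    mutex_lock(&dp->lock);      /* bug: should have been mutex_unlock() */

    /* After: lock, update, unlock. */
    mutex_lock(&dp->lock);
    dp->ignore_hpd = val;
    mutex_unlock(&dp->lock);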
| In the Linux kernel, the following vulnerability has been resolved:
io_uring/rw: fix missing NOWAIT check for O_DIRECT start write
When io_uring starts a write, it'll call kiocb_start_write() to bump the
super block rwsem, preventing any freezes from happening while that
write is in-flight. The freeze side will grab that rwsem for writing,
excluding any new writers from happening and waiting for existing writes
to finish. But io_uring unconditionally uses kiocb_start_write(), which
will block if someone is currently attempting to freeze the mount point.
This causes a deadlock where freeze is waiting for previous writes to
complete, but the previous writes cannot complete, as the task that is
supposed to complete them is blocked waiting on starting a new write.
This results in the following stuck trace showing that dependency with
the write blocked starting a new write:
task:fio state:D stack:0 pid:886 tgid:886 ppid:876
Call trace:
__switch_to+0x1d8/0x348
__schedule+0x8e8/0x2248
schedule+0x110/0x3f0
percpu_rwsem_wait+0x1e8/0x3f8
__percpu_down_read+0xe8/0x500
io_write+0xbb8/0xff8
io_issue_sqe+0x10c/0x1020
io_submit_sqes+0x614/0x2110
__arm64_sys_io_uring_enter+0x524/0x1038
invoke_syscall+0x74/0x268
el0_svc_common.constprop.0+0x160/0x238
do_el0_svc+0x44/0x60
el0_svc+0x44/0xb0
el0t_64_sync_handler+0x118/0x128
el0t_64_sync+0x168/0x170
INFO: task fsfreeze:7364 blocked for more than 15 seconds.
Not tainted 6.12.0-rc5-00063-g76aaf945701c #7963
with the attempting freezer stuck trying to grab the rwsem:
task:fsfreeze state:D stack:0 pid:7364 tgid:7364 ppid:995
Call trace:
__switch_to+0x1d8/0x348
__schedule+0x8e8/0x2248
schedule+0x110/0x3f0
percpu_down_write+0x2b0/0x680
freeze_super+0x248/0x8a8
do_vfs_ioctl+0x149c/0x1b18
__arm64_sys_ioctl+0xd0/0x1a0
invoke_syscall+0x74/0x268
el0_svc_common.constprop.0+0x160/0x238
do_el0_svc+0x44/0x60
el0_svc+0x44/0xb0
el0t_64_sync_handler+0x118/0x128
el0t_64_sync+0x168/0x170
Fix this by having the io_uring side honor IOCB_NOWAIT, and only attempt a
blocking grab of the super block rwsem if it isn't set. For normal issue
where IOCB_NOWAIT would always be set, this returns -EAGAIN which will
have io_uring core issue a blocking attempt of the write. That will in
turn also get completions run, ensuring forward progress.
Since freezing requires CAP_SYS_ADMIN in the first place, this isn't
something that can be triggered by a regular user. |
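A rough sketch of the io_uring change described above: when IOCB_NOWAIT is set, only try-lock the super block writer count and let the caller return -EAGAIN instead of blocking behind a freeze. The helper name and the omission of lockdep bookkeeping are simplifications, not the exact upstream code.

    /* Illustrative helper: start a write without blocking when NOWAIT. */
    static bool io_start_write_sketch(struct kiocb *kiocb)
    {
        struct inode *inode = file_inode(kiocb->ki_filp);

        if (!(kiocb->ki_flags & IOCB_NOWAIT)) {
            kiocb_start_write(kiocb);   /* may sleep behind a freezer */
            return true;
        }
        /* Non-blocking attempt; on failure the caller returns -EAGAIN and
         * io_uring retries the write from a context that may block. */
        return sb_start_write_trylock(inode->i_sb);
    }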
| In the Linux kernel, the following vulnerability has been resolved:
dm cache: fix flushing uninitialized delayed_work on cache_ctr error
An unexpected WARN_ON from flush_work() may occur when cache creation
fails, caused by destroying the uninitialized delayed_work waker in the
error path of cache_create(). For example, the warning appears on a
superblock checksum error.
Reproduce steps:
dmsetup create cmeta --table "0 8192 linear /dev/sdc 0"
dmsetup create cdata --table "0 65536 linear /dev/sdc 8192"
dmsetup create corig --table "0 524288 linear /dev/sdc 262144"
dd if=/dev/urandom of=/dev/mapper/cmeta bs=4k count=1 oflag=direct
dmsetup create cache --table "0 524288 cache /dev/mapper/cmeta \
/dev/mapper/cdata /dev/mapper/corig 128 2 metadata2 writethrough smq 0"
Kernel logs:
(snip)
WARNING: CPU: 0 PID: 84 at kernel/workqueue.c:4178 __flush_work+0x5d4/0x890
Fix by pulling out the cancel_delayed_work_sync() from the constructor's
error path. This patch doesn't affect the use-after-free fix for
concurrent dm_resume and dm_destroy (commit 6a459d8edbdb ("dm cache: Fix
UAF in destroy()")) as cache_dtr is not changed. |
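The constraint behind the dm-cache fix above: cancel_delayed_work_sync() ends up in flush_work(), which WARNs on a delayed_work that was never initialized, so the cancel must only run on paths where INIT_DELAYED_WORK() has already happened. A hedged illustration; the guard flag is invented for the sketch and is not a field of the real struct cache.

    static void cache_teardown_sketch(struct cache *cache)
    {
        /* Skip the cancel if construction failed before the waker was set
         * up, otherwise flush_work() warns on uninitialized work. */
        if (cache->waker_initialized)           /* invented flag */
            cancel_delayed_work_sync(&cache->waker);
        /* remaining teardown elided */
    }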
| In the Linux kernel, the following vulnerability has been resolved:
ACPI: CPPC: Make rmw_lock a raw_spin_lock
The following BUG was triggered:
=============================
[ BUG: Invalid wait context ]
6.12.0-rc2-XXX #406 Not tainted
-----------------------------
kworker/1:1/62 is trying to lock:
ffffff8801593030 (&cpc_ptr->rmw_lock){+.+.}-{3:3}, at: cpc_write+0xcc/0x370
other info that might help us debug this:
context-{5:5}
2 locks held by kworker/1:1/62:
#0: ffffff897ef5ec98 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2c/0x50
#1: ffffff880154e238 (&sg_policy->update_lock){....}-{2:2}, at: sugov_update_shared+0x3c/0x280
stack backtrace:
CPU: 1 UID: 0 PID: 62 Comm: kworker/1:1 Not tainted 6.12.0-rc2-g9654bd3e8806 #406
Workqueue: 0x0 (events)
Call trace:
dump_backtrace+0xa4/0x130
show_stack+0x20/0x38
dump_stack_lvl+0x90/0xd0
dump_stack+0x18/0x28
__lock_acquire+0x480/0x1ad8
lock_acquire+0x114/0x310
_raw_spin_lock+0x50/0x70
cpc_write+0xcc/0x370
cppc_set_perf+0xa0/0x3a8
cppc_cpufreq_fast_switch+0x40/0xc0
cpufreq_driver_fast_switch+0x4c/0x218
sugov_update_shared+0x234/0x280
update_load_avg+0x6ec/0x7b8
dequeue_entities+0x108/0x830
dequeue_task_fair+0x58/0x408
__schedule+0x4f0/0x1070
schedule+0x54/0x130
worker_thread+0xc0/0x2e8
kthread+0x130/0x148
ret_from_fork+0x10/0x20
sugov_update_shared() locks a raw_spinlock while cpc_write() locks a
spinlock.
To have a correct wait-type order, update rmw_lock to a raw spinlock and
ensure that interrupts will be disabled on the CPU holding it.
[ rjw: Changelog edits ] |
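The ACPI CPPC change above, in sketch form: rmw_lock becomes a raw_spinlock_t and is taken with interrupts disabled, so it nests correctly under raw spinlocks such as sg_policy->update_lock. The snippet illustrates the locking pattern only, not the full cpc_write() body.

    raw_spinlock_t rmw_lock;    /* was: spinlock_t rmw_lock; */

    unsigned long flags;

    raw_spin_lock_irqsave(&cpc_ptr->rmw_lock, flags);
    /* read-modify-write of the shared register value */
    raw_spin_unlock_irqrestore(&cpc_ptr->rmw_lock, flags);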
| In the Linux kernel, the following vulnerability has been resolved:
fs/ntfs3: Fix possible deadlock in mi_read
Use a mutex lock with another lockdep subclass in ni_lock_dir(). |
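The one-line ntfs3 description above refers to lockdep's nesting annotation: when a directory's ni_lock must legitimately be held while taking another inode's ni_lock, the directory acquisition uses a distinct subclass via mutex_lock_nested() so lockdep does not report it as recursive locking. A generic, hedged sketch; the subclass constant is illustrative, not the driver's enum.

    #define NTFS_LOCK_CLASS_DIR 1   /* illustrative subclass value */

    static void ni_lock_dir_sketch(struct ntfs_inode *dir_ni)
    {
        /* Same lock class as a plain ni_lock(), different subclass. */
        mutex_lock_nested(&dir_ni->ni_lock, NTFS_LOCK_CLASS_DIR);
    }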
| In the Linux kernel, the following vulnerability has been resolved:
nilfs2: fix potential deadlock with newly created symlinks
Syzbot reported that page_symlink(), called by nilfs_symlink(), triggers
memory reclamation involving the filesystem layer, which can result in
circular lock dependencies among the reader/writer semaphore
nilfs->ns_segctor_sem, s_writers percpu_rwsem (intwrite) and the
fs_reclaim pseudo lock.
This is because after commit 21fc61c73c39 ("don't put symlink bodies in
pagecache into highmem"), the gfp flags of the page cache for symbolic
links are overwritten to GFP_KERNEL via inode_nohighmem().
This is not a problem for symlinks read from the backing device, because
the __GFP_FS flag is dropped after inode_nohighmem() is called. However,
when a new symlink is created with nilfs_symlink(), the gfp flags remain
overwritten to GFP_KERNEL. Then, memory allocation called from
page_symlink() etc. triggers memory reclamation including the FS layer,
which may call nilfs_evict_inode() or nilfs_dirty_inode(). And these can
cause a deadlock if they are called while nilfs->ns_segctor_sem is held.
Fix this issue by dropping the __GFP_FS flag from the page cache GFP flags
of newly created symlinks in the same way that nilfs_new_inode() and
__nilfs_read_inode() do, as a workaround until we adopt nofs allocation
scope consistently or improve the locking constraints. |
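A sketch of the nilfs2 workaround described above: after inode_nohighmem() has reset the mapping's GFP mask to GFP_KERNEL, clear __GFP_FS again for the new symlink's page cache, mirroring nilfs_new_inode() and __nilfs_read_inode(). The exact placement inside nilfs_symlink() is illustrative.

    /* Keep page cache allocations for this symlink from recursing into the
     * FS during reclaim while nilfs->ns_segctor_sem may be held. */
    mapping_set_gfp_mask(inode->i_mapping,
                         mapping_gfp_constraint(inode->i_mapping, ~__GFP_FS));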
| In the Linux kernel, the following vulnerability has been resolved:
posix-clock: Fix unbalanced locking in pc_clock_settime()
If get_clock_desc() succeeds, it calls fget() on the clockid's fd and takes
the clk->rwsem read lock, so the error path must release the lock to keep the
locking balanced, and fput the clockid's fd to keep the refcount balanced and
release the fd-related resources.
However, the offending commit left the error path with the lock still held,
resulting in unbalanced locking. Fix it by checking timespec64_valid_strict()
before get_clock_desc(), which is safe because "ts" is not changed after that
point.
[pabeni@redhat.com: fixed commit message typo] |
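A sketch of the reordering described above: validate the timespec before get_clock_desc() takes the fd reference and the rwsem, so the early -EINVAL return no longer leaks either. Structure and helper names follow kernel/time/posix-clock.c, but the body is simplified.

    static int pc_clock_settime_sketch(clockid_t id, const struct timespec64 *ts)
    {
        struct posix_clock_desc cd;
        int err;

        /* Check first: "ts" is not modified later, so failing here avoids
         * having to unwind the lock and fd reference taken below. */
        if (!timespec64_valid_strict(ts))
            return -EINVAL;

        err = get_clock_desc(id, &cd);
        if (err)
            return err;

        if (cd.clk->ops.clock_settime)
            err = cd.clk->ops.clock_settime(cd.clk, ts);
        else
            err = -EOPNOTSUPP;

        put_clock_desc(&cd);
        return err;
    }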
| In the Linux kernel, the following vulnerability has been resolved:
bpf: Use raw_spinlock_t in ringbuf
The function __bpf_ringbuf_reserve is invoked from a tracepoint, which
disables preemption. Using spinlock_t in this context can lead to a
"sleep in atomic" warning in the RT variant. This issue is illustrated
in the example below:
BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 556208, name: test_progs
preempt_count: 1, expected: 0
RCU nest depth: 1, expected: 1
INFO: lockdep is turned off.
Preemption disabled at:
[<ffffd33a5c88ea44>] migrate_enable+0xc0/0x39c
CPU: 7 PID: 556208 Comm: test_progs Tainted: G
Hardware name: Qualcomm SA8775P Ride (DT)
Call trace:
dump_backtrace+0xac/0x130
show_stack+0x1c/0x30
dump_stack_lvl+0xac/0xe8
dump_stack+0x18/0x30
__might_resched+0x3bc/0x4fc
rt_spin_lock+0x8c/0x1a4
__bpf_ringbuf_reserve+0xc4/0x254
bpf_ringbuf_reserve_dynptr+0x5c/0xdc
bpf_prog_ac3d15160d62622a_test_read_write+0x104/0x238
trace_call_bpf+0x238/0x774
perf_call_bpf_enter.isra.0+0x104/0x194
perf_syscall_enter+0x2f8/0x510
trace_sys_enter+0x39c/0x564
syscall_trace_enter+0x220/0x3c0
do_el0_svc+0x138/0x1dc
el0_svc+0x54/0x130
el0t_64_sync_handler+0x134/0x150
el0t_64_sync+0x17c/0x180
Switch the spinlock to raw_spinlock_t to avoid this error. |
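In sketch form, the bpf ringbuf change swaps the reserve-path lock for a raw_spinlock_t, which never sleeps on PREEMPT_RT even with preemption disabled; the NMI trylock handling of the real code is omitted here.

    /* struct bpf_ringbuf: spinlock_t spinlock  ->  raw_spinlock_t spinlock */

    unsigned long flags;

    raw_spin_lock_irqsave(&rb->spinlock, flags);
    /* producer position bookkeeping elided */
    raw_spin_unlock_irqrestore(&rb->spinlock, flags);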
| In the Linux kernel, the following vulnerability has been resolved:
scsi: ufs: core: Set SDEV_OFFLINE when UFS is shut down
There is a history of deadlocks when a reboot is performed early in boot.
UFS shutdown set SDEV_QUIESCE for all LUs' scsi_devices, and at that time the
audio driver was waiting in blk_mq_submit_bio(), holding a mutex while
reading its firmware binary. A deadlock then occurred because the audio
driver's shutdown was waiting for that mutex to be released by
blk_mq_submit_bio(). To solve this, set SDEV_OFFLINE for all LUs except the
WLUN, so that any I/O that comes down after a UFS shutdown returns an error.
[ 31.907781]I[0: swapper/0: 0] 1 130705007 1651079834 11289729804 0 D( 2) 3 ffffff882e208000 * init [device_shutdown]
[ 31.907793]I[0: swapper/0: 0] Mutex: 0xffffff8849a2b8b0: owner[0xffffff882e28cb00 kworker/6:0 :49]
[ 31.907806]I[0: swapper/0: 0] Call trace:
[ 31.907810]I[0: swapper/0: 0] __switch_to+0x174/0x338
[ 31.907819]I[0: swapper/0: 0] __schedule+0x5ec/0x9cc
[ 31.907826]I[0: swapper/0: 0] schedule+0x7c/0xe8
[ 31.907834]I[0: swapper/0: 0] schedule_preempt_disabled+0x24/0x40
[ 31.907842]I[0: swapper/0: 0] __mutex_lock+0x408/0xdac
[ 31.907849]I[0: swapper/0: 0] __mutex_lock_slowpath+0x14/0x24
[ 31.907858]I[0: swapper/0: 0] mutex_lock+0x40/0xec
[ 31.907866]I[0: swapper/0: 0] device_shutdown+0x108/0x280
[ 31.907875]I[0: swapper/0: 0] kernel_restart+0x4c/0x11c
[ 31.907883]I[0: swapper/0: 0] __arm64_sys_reboot+0x15c/0x280
[ 31.907890]I[0: swapper/0: 0] invoke_syscall+0x70/0x158
[ 31.907899]I[0: swapper/0: 0] el0_svc_common+0xb4/0xf4
[ 31.907909]I[0: swapper/0: 0] do_el0_svc+0x2c/0xb0
[ 31.907918]I[0: swapper/0: 0] el0_svc+0x34/0xe0
[ 31.907928]I[0: swapper/0: 0] el0t_64_sync_handler+0x68/0xb4
[ 31.907937]I[0: swapper/0: 0] el0t_64_sync+0x1a0/0x1a4
[ 31.908774]I[0: swapper/0: 0] 49 0 11960702 11236868007 0 D( 2) 6 ffffff882e28cb00 * kworker/6:0 [__bio_queue_enter]
[ 31.908783]I[0: swapper/0: 0] Call trace:
[ 31.908788]I[0: swapper/0: 0] __switch_to+0x174/0x338
[ 31.908796]I[0: swapper/0: 0] __schedule+0x5ec/0x9cc
[ 31.908803]I[0: swapper/0: 0] schedule+0x7c/0xe8
[ 31.908811]I[0: swapper/0: 0] __bio_queue_enter+0xb8/0x178
[ 31.908818]I[0: swapper/0: 0] blk_mq_submit_bio+0x194/0x67c
[ 31.908827]I[0: swapper/0: 0] __submit_bio+0xb8/0x19c |
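A hedged sketch of the UFS shutdown change described above: walk the SCSI devices and mark everything except the WLUN offline, so later bios fail immediately instead of queuing behind a quiesced device. The helper name and exact field names (e.g. ufs_device_wlun) are assumptions, not verified against the patch.

    static void ufshcd_mark_lus_offline_sketch(struct ufs_hba *hba)
    {
        struct scsi_device *sdev;

        shost_for_each_device(sdev, hba->host) {
            if (sdev == hba->ufs_device_wlun)
                continue;       /* keep the WLUN usable */
            /* Was SDEV_QUIESCE; OFFLINE makes new I/O fail with an error
             * instead of blocking in the block layer. */
            scsi_device_set_state(sdev, SDEV_OFFLINE);
        }
    }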
| In the Linux kernel, the following vulnerability has been resolved:
RDMA/mad: Improve handling of timed out WRs of mad agent
The current timeout handler of the mad agent acquires/releases the
mad_agent_priv lock for every timed-out WR. This causes heavy lock contention
when a large number of WRs must be handled inside the timeout handler, and it
leads to a soft lockup, with the trace below, in some use cases where the
rdma-cm path is used to establish connections between peer nodes.
Trace:
-----
BUG: soft lockup - CPU#4 stuck for 26s! [kworker/u128:3:19767]
CPU: 4 PID: 19767 Comm: kworker/u128:3 Kdump: loaded Tainted: G OE
------- --- 5.14.0-427.13.1.el9_4.x86_64 #1
Hardware name: Dell Inc. PowerEdge R740/01YM03, BIOS 2.4.8 11/26/2019
Workqueue: ib_mad1 timeout_sends [ib_core]
RIP: 0010:__do_softirq+0x78/0x2ac
RSP: 0018:ffffb253449e4f98 EFLAGS: 00000246
RAX: 00000000ffffffff RBX: 0000000000000000 RCX: 000000000000001f
RDX: 000000000000001d RSI: 000000003d1879ab RDI: fff363b66fd3a86b
RBP: ffffb253604cbcd8 R08: 0000009065635f3b R09: 0000000000000000
R10: 0000000000000040 R11: ffffb253449e4ff8 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000040
FS: 0000000000000000(0000) GS:ffff8caa1fc80000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fd9ec9db900 CR3: 0000000891934006 CR4: 00000000007706e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
<IRQ>
? show_trace_log_lvl+0x1c4/0x2df
? show_trace_log_lvl+0x1c4/0x2df
? __irq_exit_rcu+0xa1/0xc0
? watchdog_timer_fn+0x1b2/0x210
? __pfx_watchdog_timer_fn+0x10/0x10
? __hrtimer_run_queues+0x127/0x2c0
? hrtimer_interrupt+0xfc/0x210
? __sysvec_apic_timer_interrupt+0x5c/0x110
? sysvec_apic_timer_interrupt+0x37/0x90
? asm_sysvec_apic_timer_interrupt+0x16/0x20
? __do_softirq+0x78/0x2ac
? __do_softirq+0x60/0x2ac
__irq_exit_rcu+0xa1/0xc0
sysvec_call_function_single+0x72/0x90
</IRQ>
<TASK>
asm_sysvec_call_function_single+0x16/0x20
RIP: 0010:_raw_spin_unlock_irq+0x14/0x30
RSP: 0018:ffffb253604cbd88 EFLAGS: 00000247
RAX: 000000000001960d RBX: 0000000000000002 RCX: ffff8cad2a064800
RDX: 000000008020001b RSI: 0000000000000001 RDI: ffff8cad5d39f66c
RBP: ffff8cad5d39f600 R08: 0000000000000001 R09: 0000000000000000
R10: ffff8caa443e0c00 R11: ffffb253604cbcd8 R12: ffff8cacb8682538
R13: 0000000000000005 R14: ffffb253604cbd90 R15: ffff8cad5d39f66c
cm_process_send_error+0x122/0x1d0 [ib_cm]
timeout_sends+0x1dd/0x270 [ib_core]
process_one_work+0x1e2/0x3b0
? __pfx_worker_thread+0x10/0x10
worker_thread+0x50/0x3a0
? __pfx_worker_thread+0x10/0x10
kthread+0xdd/0x100
? __pfx_kthread+0x10/0x10
ret_from_fork+0x29/0x50
</TASK>
Simplify the timeout handler by building a local list of the timed-out WRs
and invoking the send handler after the list has been created. The new method
acquires/releases the lock only once to fetch the list, which helps reduce
lock contention when processing a large number of WRs. |
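A sketch of the restructuring described above: take the agent lock once to move the expired WRs onto a local list, then run the (potentially slow) handlers without the lock. handle_timed_out_wr() is an illustrative stand-in for the real send-handler invocation, and the real code only moves entries whose timeout has actually expired.

    static void timeout_sends_sketch(struct ib_mad_agent_private *mad_agent_priv)
    {
        struct ib_mad_send_wr_private *wr, *tmp;
        unsigned long flags;
        LIST_HEAD(local_list);

        /* One lock/unlock to collect the work, instead of one per WR. */
        spin_lock_irqsave(&mad_agent_priv->lock, flags);
        list_splice_init(&mad_agent_priv->wait_list, &local_list);
        spin_unlock_irqrestore(&mad_agent_priv->lock, flags);

        list_for_each_entry_safe(wr, tmp, &local_list, agent_list) {
            list_del(&wr->agent_list);
            handle_timed_out_wr(wr);    /* illustrative stand-in */
        }
    }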
| In the Linux kernel, the following vulnerability has been resolved:
io_uring: check if we need to reschedule during overflow flush
In terms of normal application usage, this list will always be empty.
And if an application does overflow a bit, it'll have a few entries.
However, nothing obviously prevents syzbot from running a test case
that generates a ton of overflow entries, and then flushing them can
take quite a while.
Check for needing to reschedule while flushing, and drop our locks and
do so if necessary. There's no state to maintain here as overflows
always prune from head-of-list, hence it's fine to drop and reacquire
the locks at the end of the loop. |
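A sketch of the pattern described above, with illustrative context: drop the completion lock, reschedule if needed, and reacquire, which is safe because overflow entries are always pruned from the head of the list.

    while (!list_empty(&ctx->cq_overflow_list)) {
        /* move one overflow entry into the CQ ring (elided) */

        if (need_resched()) {
            spin_unlock(&ctx->completion_lock);
            cond_resched();
            spin_lock(&ctx->completion_lock);
        }
    }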
| In the Linux kernel, the following vulnerability has been resolved:
Bluetooth: RFCOMM: FIX possible deadlock in rfcomm_sk_state_change
rfcomm_sk_state_change attempts to take the sock lock, so it must never be
called with that lock already held; however, rfcomm_sock_ioctl always
attempts to lock it, causing the following trace:
======================================================
WARNING: possible circular locking dependency detected
6.8.0-syzkaller-08951-gfe46a7dd189e #0 Not tainted
------------------------------------------------------
syz-executor386/5093 is trying to acquire lock:
ffff88807c396258 (sk_lock-AF_BLUETOOTH-BTPROTO_RFCOMM){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1671 [inline]
ffff88807c396258 (sk_lock-AF_BLUETOOTH-BTPROTO_RFCOMM){+.+.}-{0:0}, at: rfcomm_sk_state_change+0x5b/0x310 net/bluetooth/rfcomm/sock.c:73
but task is already holding lock:
ffff88807badfd28 (&d->lock){+.+.}-{3:3}, at: __rfcomm_dlc_close+0x226/0x6a0 net/bluetooth/rfcomm/core.c:491 |
| In the Linux kernel, the following vulnerability has been resolved:
ext4: fix i_data_sem unlock order in ext4_ind_migrate()
Fuzzing reports a possible deadlock in jbd2_log_wait_commit.
This issue is triggered when an EXT4_IOC_MIGRATE ioctl is set to require
synchronous updates because the file descriptor is opened with O_SYNC.
This can lead to the jbd2_journal_stop() function calling
jbd2_might_wait_for_commit(), potentially causing a deadlock if the
EXT4_IOC_MIGRATE call races with a write(2) system call.
This problem only arises when CONFIG_PROVE_LOCKING is enabled. In this
case, the jbd2_might_wait_for_commit macro locks jbd2_handle in the
jbd2_journal_stop function while i_data_sem is locked. This triggers
lockdep because the jbd2_journal_start function might also lock the same
jbd2_handle simultaneously.
Found by Linux Verification Center (linuxtesting.org) with syzkaller. |
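Per the subject line above, the fix is an unlock-order change in ext4_ind_migrate(): release i_data_sem before stopping the journal handle, so jbd2_journal_stop() is no longer entered with i_data_sem held. A hedged sketch of the reordering, not the literal diff:

    /* Old order: ext4_journal_stop(handle); then up_write(&...->i_data_sem); */
    up_write(&EXT4_I(inode)->i_data_sem);   /* drop i_data_sem first     */
    ext4_journal_stop(handle);              /* then wait for the journal */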
| In the Linux kernel, the following vulnerability has been resolved:
i2c: stm32f7: Do not prepare/unprepare clock during runtime suspend/resume
In case there is any sort of clock controller attached to this I2C bus
controller, for example Versaclock or even an AIC32x4 I2C codec, then
an I2C transfer triggered from the clock controller clk_ops .prepare
callback may trigger a deadlock on drivers/clk/clk.c prepare_lock mutex.
This is because the clock controller first grabs the prepare_lock mutex
and then performs the prepare operation, including its I2C access. The
I2C access resumes this I2C bus controller via .runtime_resume callback,
which calls clk_prepare_enable(), which attempts to grab the prepare_lock
mutex again and deadlocks.
Since the clock is already prepared in probe() and unprepared in
remove(), use simple clk_enable()/clk_disable() calls to enable and
disable the clock on runtime suspend and resume, to avoid hitting the
prepare_lock mutex. |
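A sketch of the runtime PM callbacks after the change described above: only gate the clock, never touching the prepare count, so the clk prepare_lock mutex is not taken from this path. Struct and field names are assumed from the driver, and error handling is trimmed.

    static int stm32f7_i2c_runtime_suspend_sketch(struct device *dev)
    {
        struct stm32f7_i2c_dev *i2c_dev = dev_get_drvdata(dev);

        clk_disable(i2c_dev->clk);          /* was: clk_disable_unprepare() */
        return 0;
    }

    static int stm32f7_i2c_runtime_resume_sketch(struct device *dev)
    {
        struct stm32f7_i2c_dev *i2c_dev = dev_get_drvdata(dev);

        /* The clock stays prepared from probe() to remove(), so there is
         * no clk_prepare() here and no prepare_lock recursion. */
        return clk_enable(i2c_dev->clk);    /* was: clk_prepare_enable() */
    }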
| In the Linux kernel, the following vulnerability has been resolved:
ocfs2: remove unreasonable unlock in ocfs2_read_blocks
Patch series "Misc fixes for ocfs2_read_blocks", v5.
This series contains 2 fixes for ocfs2_read_blocks(). The first patch fixes
the issue reported by syzbot, which detects a bad unlock balance in
ocfs2_read_blocks(). The second patch fixes an issue reported by Heming
Zhao when reviewing above fix.
This patch (of 2):
There was a lock release before exiting, so remove the unreasonable unlock. |
| In the Linux kernel, the following vulnerability has been resolved:
ppp: do not assume bh is held in ppp_channel_bridge_input()
The networking receive path is usually handled from a BH handler.
However, some protocols need to acquire the socket lock, and packets
might be stored in the socket backlog if the socket was owned by a user
process.
In this case, release_sock(), __release_sock(), and sk_backlog_rcv()
might call the sk->sk_backlog_rcv() handler in process context.
syzbot caught that ppp was not considering this case in
ppp_channel_bridge_input():
WARNING: inconsistent lock state
6.11.0-rc7-syzkaller-g5f5673607153 #0 Not tainted
--------------------------------
inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
ksoftirqd/1/24 [HC0[0]:SC1[1]:HE1:SE0] takes:
ffff0000db7f11e0 (&pch->downl){+.?.}-{2:2}, at: spin_lock include/linux/spinlock.h:351 [inline]
ffff0000db7f11e0 (&pch->downl){+.?.}-{2:2}, at: ppp_channel_bridge_input drivers/net/ppp/ppp_generic.c:2272 [inline]
ffff0000db7f11e0 (&pch->downl){+.?.}-{2:2}, at: ppp_input+0x16c/0x854 drivers/net/ppp/ppp_generic.c:2304
{SOFTIRQ-ON-W} state was registered at:
lock_acquire+0x240/0x728 kernel/locking/lockdep.c:5759
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x48/0x60 kernel/locking/spinlock.c:154
spin_lock include/linux/spinlock.h:351 [inline]
ppp_channel_bridge_input drivers/net/ppp/ppp_generic.c:2272 [inline]
ppp_input+0x16c/0x854 drivers/net/ppp/ppp_generic.c:2304
pppoe_rcv_core+0xfc/0x314 drivers/net/ppp/pppoe.c:379
sk_backlog_rcv include/net/sock.h:1111 [inline]
__release_sock+0x1a8/0x3d8 net/core/sock.c:3004
release_sock+0x68/0x1b8 net/core/sock.c:3558
pppoe_sendmsg+0xc8/0x5d8 drivers/net/ppp/pppoe.c:903
sock_sendmsg_nosec net/socket.c:730 [inline]
__sock_sendmsg net/socket.c:745 [inline]
__sys_sendto+0x374/0x4f4 net/socket.c:2204
__do_sys_sendto net/socket.c:2216 [inline]
__se_sys_sendto net/socket.c:2212 [inline]
__arm64_sys_sendto+0xd8/0xf8 net/socket.c:2212
__invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
el0_svc+0x54/0x168 arch/arm64/kernel/entry-common.c:712
el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:730
el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:598
irq event stamp: 282914
hardirqs last enabled at (282914): [<ffff80008b42e30c>] __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:151 [inline]
hardirqs last enabled at (282914): [<ffff80008b42e30c>] _raw_spin_unlock_irqrestore+0x38/0x98 kernel/locking/spinlock.c:194
hardirqs last disabled at (282913): [<ffff80008b42e13c>] __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:108 [inline]
hardirqs last disabled at (282913): [<ffff80008b42e13c>] _raw_spin_lock_irqsave+0x2c/0x7c kernel/locking/spinlock.c:162
softirqs last enabled at (282904): [<ffff8000801f8e88>] softirq_handle_end kernel/softirq.c:400 [inline]
softirqs last enabled at (282904): [<ffff8000801f8e88>] handle_softirqs+0xa3c/0xbfc kernel/softirq.c:582
softirqs last disabled at (282909): [<ffff8000801fbdf8>] run_ksoftirqd+0x70/0x158 kernel/softirq.c:928
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&pch->downl);
<Interrupt>
lock(&pch->downl);
*** DEADLOCK ***
1 lock held by ksoftirqd/1/24:
#0: ffff80008f74dfa0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x10/0x4c include/linux/rcupdate.h:325
stack backtrace:
CPU: 1 UID: 0 PID: 24 Comm: ksoftirqd/1 Not tainted 6.11.0-rc7-syzkaller-g5f5673607153 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024
Call trace:
dump_backtrace+0x1b8/0x1e4 arch/arm64/kernel/stacktrace.c:319
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:326
__dump_sta
---truncated--- |
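A hedged illustration of the kind of remedy the subject line implies: take the channel's downl lock with the BH-disabling variant, so the same code is safe both from softirq context and from the process-context backlog path in release_sock(). This shows the general technique, not necessarily the literal upstream diff.

    /* ppp_channel_bridge_input()-style locking that does not assume BH is
     * already disabled by the caller. */
    spin_lock_bh(&pchb->downl);
    /* forward the skb to the bridged channel (elided) */
    spin_unlock_bh(&pchb->downl);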
| In the Linux kernel, the following vulnerability has been resolved:
RDMA/hns: Fix spin_unlock_irqrestore() called with IRQs enabled
Fix misuse of spin_lock_irq()/spin_unlock_irq() while
spin_lock_irqsave()/spin_unlock_irqrestore() was already held.
This was discovered through lock debugging, and the corresponding
log is as follows:
raw_local_irq_restore() called with IRQs enabled
WARNING: CPU: 96 PID: 2074 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x30/0x40
...
Call trace:
warn_bogus_irq_restore+0x30/0x40
_raw_spin_unlock_irqrestore+0x84/0xc8
add_qp_to_list+0x11c/0x148 [hns_roce_hw_v2]
hns_roce_create_qp_common.constprop.0+0x240/0x780 [hns_roce_hw_v2]
hns_roce_create_qp+0x98/0x160 [hns_roce_hw_v2]
create_qp+0x138/0x258
ib_create_qp_kernel+0x50/0xe8
create_mad_qp+0xa8/0x128
ib_mad_port_open+0x218/0x448
ib_mad_init_device+0x70/0x1f8
add_client_context+0xfc/0x220
enable_device_and_get+0xd0/0x140
ib_register_device.part.0+0xf4/0x1c8
ib_register_device+0x34/0x50
hns_roce_register_device+0x174/0x3d0 [hns_roce_hw_v2]
hns_roce_init+0xfc/0x2c0 [hns_roce_hw_v2]
__hns_roce_hw_v2_init_instance+0x7c/0x1d0 [hns_roce_hw_v2]
hns_roce_hw_v2_init_instance+0x9c/0x180 [hns_roce_hw_v2] |
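The misuse described above, in sketch form: inside a region already covered by spin_lock_irqsave(), the inner lock must not use the _irq variants, because their unlock re-enables IRQs while the outer lock still expects them off (hence the bogus irq_restore warning). The inner lock name is illustrative.

    unsigned long flags;

    spin_lock_irqsave(&hr_dev->qp_list_lock, flags);  /* outer, saves IRQ state */

    spin_lock(&send_cq->lock);      /* was spin_lock_irq(); its matching
                                     * spin_unlock_irq() re-enabled IRQs
                                     * while 'flags' was still saved */
    spin_unlock(&send_cq->lock);

    spin_unlock_irqrestore(&hr_dev->qp_list_lock, flags);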
| In the Linux kernel, the following vulnerability has been resolved:
rtmutex: Drop rt_mutex::wait_lock before scheduling
rt_mutex_handle_deadlock() is called with rt_mutex::wait_lock held. In the
good case it returns with the lock held and in the deadlock case it emits a
warning and goes into an endless scheduling loop with the lock held, which
triggers the 'scheduling in atomic' warning.
Unlock rt_mutex::wait_lock in the deadlock case before issuing the warning
and dropping into the schedule-forever loop.
[ tglx: Moved unlock before the WARN(), removed the pointless comment,
massaged changelog, added Fixes tag ] |
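A simplified sketch of the rtmutex fix above: drop wait_lock before the warning and the schedule-forever loop, so the loop no longer runs with a raw spinlock held. The shape follows rt_mutex_handle_deadlock() but is trimmed.

    static void rt_mutex_handle_deadlock_sketch(int res, int detect_deadlock,
                                                struct rt_mutex_base *lock)
    {
        if (res != -EDEADLOCK || detect_deadlock)
            return;

        /* Unlock first: the loop below never returns, and sleeping with
         * wait_lock held is what triggered 'scheduling in atomic'. */
        raw_spin_unlock_irq(&lock->wait_lock);

        WARN(1, "rtmutex deadlock detected\n");
        while (1) {
            set_current_state(TASK_INTERRUPTIBLE);
            schedule();
        }
    }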
| In the Linux kernel, the following vulnerability has been resolved:
can: mcp251x: fix deadlock if an interrupt occurs during mcp251x_open
The mcp251x_hw_wake() function is called with the mcp_lock mutex held and
disables the interrupt handler so that no interrupts can be processed while
waking the device. If an interrupt has already occurred, then waiting for the
interrupt handler to complete will deadlock because the handler will be
trying to acquire the same mutex.
CPU0 CPU1
---- ----
mcp251x_open()
mutex_lock(&priv->mcp_lock)
request_threaded_irq()
<interrupt>
mcp251x_can_ist()
mutex_lock(&priv->mcp_lock)
mcp251x_hw_wake()
disable_irq() <-- deadlock
Use disable_irq_nosync() instead because the interrupt handler does
everything while holding the mutex so it doesn't matter if it's still
running. |
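A sketch of the mcp251x change above: request that the IRQ be disabled without synchronizing against a running handler, since the handler does all of its work under the same mcp_lock that the caller already holds.

    /* Called with priv->mcp_lock held. */
    static void mcp251x_hw_wake_sketch(struct spi_device *spi)
    {
        /* disable_irq() would wait for the threaded handler, which may be
         * blocked on priv->mcp_lock -> deadlock. The _nosync variant only
         * marks the IRQ disabled and returns immediately. */
        disable_irq_nosync(spi->irq);

        /* wake the device, then re-enable the interrupt as before */
    }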