| CVE | Vendors | Products | Updated | CVSS v3.1 |
| In the Linux kernel, the following vulnerability has been resolved:
ALSA: usb-audio: Prevent excessive number of frames
In this case, the user constructed the parameters with maxpacksize 40
for rate 22050 / pps 1000, and packsize[0] = 22, packsize[1] = 23. The
buffer size for each data URB is maxpacksize * packets, which in this
example is 40 * 6 = 240 bytes. When the user performs a write operation
to send audio data into the ALSA PCM playback stream, the calculated
number of frames is packsize[0] * packets, whose data (the 264-byte
write reported by syzbot [1]) exceeds the allocated URB buffer size,
triggering the out-of-bounds (OOB) write.
Add a check on the per-URB frame count when calculating the number of
frames, to prevent [1].
[1]
BUG: KASAN: slab-out-of-bounds in copy_to_urb+0x261/0x460 sound/usb/pcm.c:1487
Write of size 264 at addr ffff88804337e800 by task syz.0.17/5506
Call Trace:
copy_to_urb+0x261/0x460 sound/usb/pcm.c:1487
prepare_playback_urb+0x953/0x13d0 sound/usb/pcm.c:1611
prepare_outbound_urb+0x377/0xc50 sound/usb/endpoint.c:333 |
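A minimal userspace sketch of the arithmetic above and of the kind of clamp the fix describes; the 2-byte frame size is an assumption used only to make the example concrete, not a value taken from the patch:
#include <stdio.h>
int main(void)
{
	/* Numbers from the report above. */
	unsigned int maxpacksize = 40;	/* bytes per packet */
	unsigned int packets = 6;	/* packets per data URB */
	unsigned int packsize0 = 22;	/* frames in packet 0 */
	unsigned int frame_bytes = 2;	/* ASSUMPTION: bytes per frame, for illustration */
	unsigned int urb_bytes = maxpacksize * packets;		/* 240 */
	unsigned int want_bytes = packsize0 * packets * frame_bytes;
	printf("URB buffer %u bytes, attempted copy %u bytes\n",
	       urb_bytes, want_bytes);
	/* Illustrative clamp: never copy more than the URB buffer holds. */
	if (want_bytes > urb_bytes)
		want_bytes = urb_bytes;
	printf("clamped copy %u bytes\n", want_bytes);
	return 0;
}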
| In the Linux kernel, the following vulnerability has been resolved:
smb/server: call ksmbd_session_rpc_close() on error path in create_smb2_pipe()
When ksmbd_iov_pin_rsp() fails, we should call ksmbd_session_rpc_close(). |
| In the Linux kernel, the following vulnerability has been resolved:
macvlan: fix error recovery in macvlan_common_newlink()
valis provided a nice repro to crash the kernel:
ip link add p1 type veth peer p2
ip link set address 00:00:00:00:00:20 dev p1
ip link set up dev p1
ip link set up dev p2
ip link add mv0 link p2 type macvlan mode source
ip link add invalid% link p2 type macvlan mode source macaddr add 00:00:00:00:00:20
ping -c1 -I p1 1.2.3.4
He also gave a very detailed analysis:
<quote valis>
The issue is triggered when a new macvlan link is created with
MACVLAN_MODE_SOURCE mode and MACVLAN_MACADDR_ADD (or
MACVLAN_MACADDR_SET) parameter, lower device already has a macvlan
port and register_netdevice() called from macvlan_common_newlink()
fails (e.g. because of the invalid link name).
In this case macvlan_hash_add_source is called from
macvlan_change_sources() / macvlan_common_newlink():
This adds a reference to vlan to the port's vlan_source_hash using
macvlan_source_entry.
vlan is a pointer to the priv data of the link that is being created.
When register_netdevice() fails, the error is returned from
macvlan_newlink() to rtnl_newlink_create():
if (ops->newlink)
err = ops->newlink(dev, &params, extack);
else
err = register_netdevice(dev);
if (err < 0) {
free_netdev(dev);
goto out;
}
and free_netdev() is called, causing a kvfree() on the struct
net_device that is still referenced in the source entry attached to
the lower device's macvlan port.
Now all packets sent on the macvlan port with a matching source mac
address will trigger a use-after-free in macvlan_forward_source().
</quote valis>
With all that, my fix is to make sure we call macvlan_flush_sources()
regardless of the @create value whenever the "goto destroy_macvlan_port;"
path is taken.
Many thanks to valis for following up on this issue. |
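A sketch of the error-path shape the fix describes; the identifiers (port, vlan, lowerdev, create, macvlan_port_get_rtnl()) follow the driver, but this is an illustration of the described change, not the upstream diff:
destroy_macvlan_port:
	/* Source-mode hash entries added earlier in this call may still
	 * point at this vlan. Flush them regardless of whether this call
	 * created the port, so nothing references the net_device that the
	 * core is about to free_netdev(). */
	if (macvlan_port_get_rtnl(lowerdev)) {
		macvlan_flush_sources(port, vlan);
		if (create)
			macvlan_port_destroy(port->dev);
	}
	return err;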
| In the Linux kernel, the following vulnerability has been resolved:
linkwatch: use __dev_put() in callers to prevent UAF
After linkwatch_do_dev() calls __dev_put() to release the linkwatch
reference, the device refcount may drop to 1. At this point,
netdev_run_todo() can proceed (since linkwatch_sync_dev() sees an
empty list and returns without blocking), wait for the refcount to
become 1 via netdev_wait_allrefs_any(), and then free the device
via kobject_put().
This creates a use-after-free when __linkwatch_run_queue() tries to
call netdev_unlock_ops() on the already-freed device.
Note that adding netdev_lock_ops()/netdev_unlock_ops() pair in
netdev_run_todo() before kobject_put() would not work, because
netdev_lock_ops() is conditional - it only locks when
netdev_need_ops_lock() returns true. If the device doesn't require
ops_lock, linkwatch won't hold any lock, and netdev_run_todo()
acquiring the lock won't provide synchronization.
Fix this by moving __dev_put() from linkwatch_do_dev() to its
callers. The device reference logically pairs with de-listing the
device, so it's reasonable for the caller that did the de-listing
to release it. This allows placing __dev_put() after all device
accesses are complete, preventing UAF.
The bug can be reproduced by adding mdelay(2000) after
linkwatch_do_dev() in __linkwatch_run_queue(), then running:
ip tuntap add mode tun name tun_test
ip link set tun_test up
ip link set tun_test carrier off
ip link set tun_test carrier on
sleep 0.5
ip tuntap del mode tun name tun_test
KASAN report:
==================================================================
BUG: KASAN: use-after-free in netdev_need_ops_lock include/net/netdev_lock.h:33 [inline]
BUG: KASAN: use-after-free in netdev_unlock_ops include/net/netdev_lock.h:47 [inline]
BUG: KASAN: use-after-free in __linkwatch_run_queue+0x865/0x8a0 net/core/link_watch.c:245
Read of size 8 at addr ffff88804de5c008 by task kworker/u32:10/8123
CPU: 0 UID: 0 PID: 8123 Comm: kworker/u32:10 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Workqueue: events_unbound linkwatch_event
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
print_address_description mm/kasan/report.c:378 [inline]
print_report+0x156/0x4c9 mm/kasan/report.c:482
kasan_report+0xdf/0x1a0 mm/kasan/report.c:595
netdev_need_ops_lock include/net/netdev_lock.h:33 [inline]
netdev_unlock_ops include/net/netdev_lock.h:47 [inline]
__linkwatch_run_queue+0x865/0x8a0 net/core/link_watch.c:245
linkwatch_event+0x8f/0xc0 net/core/link_watch.c:304
process_one_work+0x9c2/0x1840 kernel/workqueue.c:3257
process_scheduled_works kernel/workqueue.c:3340 [inline]
worker_thread+0x5da/0xe40 kernel/workqueue.c:3421
kthread+0x3b3/0x730 kernel/kthread.c:463
ret_from_fork+0x754/0xaf0 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
</TASK>
================================================================== |
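A sketch of the caller-side ordering after the fix, as it would look in __linkwatch_run_queue(); illustrative rather than the exact upstream diff:
	/* The caller already de-listed dev, so it owns the linkwatch
	 * reference that used to be dropped inside linkwatch_do_dev(). */
	netdev_lock_ops(dev);
	linkwatch_do_dev(dev);		/* no longer calls __dev_put() itself */
	netdev_unlock_ops(dev);		/* last access to dev */
	__dev_put(dev);			/* only now may netdev_run_todo() free dev */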
| In the Linux kernel, the following vulnerability has been resolved:
procfs: avoid fetching build ID while holding VMA lock
Fix PROCMAP_QUERY to fetch optional build ID only after dropping mmap_lock
or per-VMA lock, whichever was used to lock VMA under question, to avoid
deadlock reported by syzbot:
-> #1 (&mm->mmap_lock){++++}-{4:4}:
__might_fault+0xed/0x170
_copy_to_iter+0x118/0x1720
copy_page_to_iter+0x12d/0x1e0
filemap_read+0x720/0x10a0
blkdev_read_iter+0x2b5/0x4e0
vfs_read+0x7f4/0xae0
ksys_read+0x12a/0x250
do_syscall_64+0xcb/0xf80
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (&sb->s_type->i_mutex_key#8){++++}-{4:4}:
__lock_acquire+0x1509/0x26d0
lock_acquire+0x185/0x340
down_read+0x98/0x490
blkdev_read_iter+0x2a7/0x4e0
__kernel_read+0x39a/0xa90
freader_fetch+0x1d5/0xa80
__build_id_parse.isra.0+0xea/0x6a0
do_procmap_query+0xd75/0x1050
procfs_procmap_ioctl+0x7a/0xb0
__x64_sys_ioctl+0x18e/0x210
do_syscall_64+0xcb/0xf80
entry_SYSCALL_64_after_hwframe+0x77/0x7f
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
rlock(&mm->mmap_lock);
lock(&sb->s_type->i_mutex_key#8);
lock(&mm->mmap_lock);
rlock(&sb->s_type->i_mutex_key#8);
*** DEADLOCK ***
This seems to be exacerbated (as we haven't seen these syzbot reports
before that) by the recent:
777a8560fd29 ("lib/buildid: use __kernel_read() for sleepable context")
To make this safe, we need to grab a file refcount while the VMA is still
locked, but other than that everything is pretty straightforward. The internal
build_id_parse() API assumes a VMA is passed, but it only needs the underlying
file reference, so just add another variant, build_id_parse_file(), that
expects the file to be passed directly.
[akpm@linux-foundation.org: fix up kerneldoc] |
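A sketch of the ordering described above; get_file()/fput() are real kernel helpers, while want_build_id and unlock_vma_or_mm() are placeholders for the query-flag check and for whichever of the two locks was taken, and the build_id_parse_file() arguments simply mirror build_id_parse():
	struct file *vma_file = NULL;
	/* Still holding the mmap_lock or the per-VMA lock: only pin the
	 * backing file here, do not read from it yet. */
	if (want_build_id && vma->vm_file)
		vma_file = get_file(vma->vm_file);
	unlock_vma_or_mm();	/* placeholder: drop whichever lock was taken */
	/* Sleeping reads are safe now that no VMA/mm lock is held. */
	if (vma_file) {
		err = build_id_parse_file(vma_file, build_id, &build_id_sz);
		fput(vma_file);
	}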
| In the Linux kernel, the following vulnerability has been resolved:
net: cpsw: Execute ndo_set_rx_mode callback in a work queue
Commit 1767bb2d47b7 ("ipv6: mcast: Don't hold RTNL for
IPV6_ADD_MEMBERSHIP and MCAST_JOIN_GROUP.") removed the RTNL lock for
IPV6_ADD_MEMBERSHIP and MCAST_JOIN_GROUP operations. However, this
change triggered the following call trace on my BeagleBone Black board:
WARNING: net/8021q/vlan_core.c:236 at vlan_for_each+0x120/0x124, CPU#0: rpcbind/481
RTNL: assertion failed at net/8021q/vlan_core.c (236)
Modules linked in:
CPU: 0 UID: 997 PID: 481 Comm: rpcbind Not tainted 6.19.0-rc7-next-20260130-yocto-standard+ #35 PREEMPT
Hardware name: Generic AM33XX (Flattened Device Tree)
Call trace:
unwind_backtrace from show_stack+0x28/0x2c
show_stack from dump_stack_lvl+0x30/0x38
dump_stack_lvl from __warn+0xb8/0x11c
__warn from warn_slowpath_fmt+0x130/0x194
warn_slowpath_fmt from vlan_for_each+0x120/0x124
vlan_for_each from cpsw_add_mc_addr+0x54/0x98
cpsw_add_mc_addr from __hw_addr_ref_sync_dev+0xc4/0xec
__hw_addr_ref_sync_dev from __dev_mc_add+0x78/0x88
__dev_mc_add from igmp6_group_added+0x84/0xec
igmp6_group_added from __ipv6_dev_mc_inc+0x1fc/0x2f0
__ipv6_dev_mc_inc from __ipv6_sock_mc_join+0x124/0x1b4
__ipv6_sock_mc_join from do_ipv6_setsockopt+0x84c/0x1168
do_ipv6_setsockopt from ipv6_setsockopt+0x88/0xc8
ipv6_setsockopt from do_sock_setsockopt+0xe8/0x19c
do_sock_setsockopt from __sys_setsockopt+0x84/0xac
__sys_setsockopt from ret_fast_syscall+0x0/0x54
This trace occurs because vlan_for_each() is called within
cpsw_ndo_set_rx_mode(), which expects the RTNL lock to be held.
Since modifying vlan_for_each() to operate without the RTNL lock is not
straightforward, and because ndo_set_rx_mode() is invoked both with and
without the RTNL lock across different code paths, simply adding
rtnl_lock() in cpsw_ndo_set_rx_mode() is not a viable solution.
To resolve this issue, we opt to execute the actual processing within
a work queue, following the approach used by the icssg-prueth driver.
Please note: To reproduce this issue, I manually reverted the changes to
am335x-bone-common.dtsi from commit c477358e66a3 ("ARM: dts: am335x-bone:
switch to new cpsw switch drv") in order to revert to the legacy cpsw
driver. |
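A sketch of the deferral described above; rx_mode_work, cpsw_rx_mode_work() and __cpsw_set_rx_mode() are illustrative names, and the rtnl_lock() in the worker reflects that vlan_for_each() asserts RTNL, not necessarily what the driver ends up doing:
static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
{
	struct cpsw_priv *priv = netdev_priv(ndev);
	/* May be called with or without RTNL held; just kick the worker. */
	queue_work(system_long_wq, &priv->rx_mode_work);
}
static void cpsw_rx_mode_work(struct work_struct *work)
{
	struct cpsw_priv *priv =
		container_of(work, struct cpsw_priv, rx_mode_work);
	rtnl_lock();
	/* Original rx-mode processing, including the vlan_for_each() walk. */
	__cpsw_set_rx_mode(priv->ndev);
	rtnl_unlock();
}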
| In the Linux kernel, the following vulnerability has been resolved:
smb/client: fix memory leak in smb2_open_file()
Reproducer:
1. server: directories are exported read-only
2. client: mount -t cifs //${server_ip}/export /mnt
3. client: dd if=/dev/zero of=/mnt/file bs=512 count=1000 oflag=direct
4. client: umount /mnt
5. client: sleep 1
6. client: modprobe -r cifs
The error message is as follows:
=============================================================================
BUG cifs_small_rq (Not tainted): Objects remaining on __kmem_cache_shutdown()
-----------------------------------------------------------------------------
Object 0x00000000d47521be @offset=14336
...
WARNING: mm/slub.c:1251 at __kmem_cache_shutdown+0x34e/0x440, CPU#0: modprobe/1577
...
Call Trace:
<TASK>
kmem_cache_destroy+0x94/0x190
cifs_destroy_request_bufs+0x3e/0x50 [cifs]
cleanup_module+0x4e/0x540 [cifs]
__se_sys_delete_module+0x278/0x400
__x64_sys_delete_module+0x5f/0x70
x64_sys_call+0x2299/0x2ff0
do_syscall_64+0x89/0x350
entry_SYSCALL_64_after_hwframe+0x76/0x7e
...
kmem_cache_destroy cifs_small_rq: Slab cache still has objects when called from cifs_destroy_request_bufs+0x3e/0x50 [cifs]
WARNING: mm/slab_common.c:532 at kmem_cache_destroy+0x16b/0x190, CPU#0: modprobe/1577 |
| In the Linux kernel, the following vulnerability has been resolved:
mm, shmem: prevent infinite loop on truncate race
When truncating a large swap entry, shmem_free_swap() returns 0 when the
entry's index doesn't match the given index due to lookup alignment. The
failure fallback path checks if the entry crosses the end border and
aborts when it happens, so truncate won't erase an unexpected entry or
range. But one scenario was ignored.
When `index` points to the middle of a large swap entry, and the large
swap entry doesn't go across the end border, find_get_entries() will
return that large swap entry as the first item in the batch with
`indices[0]` equal to `index`. The entry's base index will be smaller
than `indices[0]`, so shmem_free_swap() will fail and return 0 due to the
"base < index" check. The code will then call shmem_confirm_swap(), get
the order, check if it crosses the END boundary (which it doesn't), and
retry with the same index.
The next iteration will find the same entry again at the same index with
same indices, leading to an infinite loop.
Fix this by retrying with a round-down index, and abort if the index is
smaller than the truncate range. |
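A sketch of the retry described above, with index, start and order taken from the description rather than from the exact shmem_undo_range() code:
	/* shmem_free_swap() refused because `index` points into the middle
	 * of a large swap entry: retry from the entry's aligned base. */
	unsigned long base = round_down(index, 1UL << order);
	if (base < start)
		break;		/* entry starts before the truncate range: abort */
	index = base;		/* next iteration frees from the entry's base */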
| In the Linux kernel, the following vulnerability has been resolved:
dpaa2-switch: add bounds check for if_id in IRQ handler
The IRQ handler extracts if_id from the upper 16 bits of the hardware
status register and uses it to index into ethsw->ports[] without
validation. Since if_id can be any 16-bit value (0-65535) but the ports
array is only allocated with sw_attr.num_ifs elements, this can potentially
lead to an out-of-bounds read.
Add a bounds check before accessing the array, consistent with the
existing validation in dpaa2_switch_rx(). |
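A sketch of the added check, mirroring the description above; the handler shape and the error handling are illustrative:
	if_id = (status & 0xFFFF0000) >> 16;	/* upper 16 bits of the status */
	if (if_id >= ethsw->sw_attr.num_ifs) {
		dev_err(ethsw->dev, "unexpected if_id %u in IRQ status\n", if_id);
		return IRQ_HANDLED;	/* ignore the bogus interface id */
	}
	port_priv = ethsw->ports[if_id];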
| In the Linux kernel, the following vulnerability has been resolved:
net: usb: r8152: fix resume reset deadlock
rtl8152 can trigger a device reset during resume, which
can potentially result in a deadlock:
**** DPM device timeout after 10 seconds; 15 seconds until panic ****
Call Trace:
<TASK>
schedule+0x483/0x1370
schedule_preempt_disabled+0x15/0x30
__mutex_lock_common+0x1fd/0x470
__rtl8152_set_mac_address+0x80/0x1f0
dev_set_mac_address+0x7f/0x150
rtl8152_post_reset+0x72/0x150
usb_reset_device+0x1d0/0x220
rtl8152_resume+0x99/0xc0
usb_resume_interface+0x3e/0xc0
usb_resume_both+0x104/0x150
usb_resume+0x22/0x110
The problem is that the rtl8152 resume path calls reset while holding
the tp->control mutex, and the reset path re-enters rtl8152 and
attempts to acquire the same tp->control lock again.
Reset an INACCESSIBLE device outside of the tp->control mutex scope to
avoid the recursive mutex_lock() deadlock. |
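A sketch of the ordering the fix describes (decide under tp->control, reset after dropping it); the flag test and the usb_queue_reset_device() call are assumptions for illustration, not the upstream diff:
	bool reset_needed = false;
	mutex_lock(&tp->control);
	/* resume work that genuinely needs tp->control goes here */
	if (test_bit(RTL8152_INACCESSIBLE, &tp->flags))
		reset_needed = true;
	mutex_unlock(&tp->control);
	/* The reset path re-enters the driver (pre/post_reset retake
	 * tp->control), so it must run outside the mutex. */
	if (reset_needed)
		usb_queue_reset_device(tp->intf);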
| In the Linux kernel, the following vulnerability has been resolved:
ceph: fix oops due to invalid pointer for kfree() in parse_longname()
This fixes a kernel oops when reading ceph snapshot directories (.snap),
for example by simply running `ls /mnt/my_ceph/.snap`.
The variable str is guarded by __free(kfree), but is advanced by one to
skip the initial '_' in snapshot names. Thus, kfree() is called with an
invalid pointer. This patch removes the need to advance the pointer, so
kfree() is called with the correct pointer.
Steps to reproduce:
1. Create snapshots on a cephfs volume (I've 63 snaps in my testcase)
2. Add cephfs mount to fstab
$ echo "samba-fileserver@.files=/volumes/datapool/stuff/3461082b-ecc9-4e82-8549-3fd2590d3fb6 /mnt/test/stuff ceph acl,noatime,_netdev 0 0" >> /etc/fstab
3. Reboot the system
$ systemctl reboot
4. Check if it's really mounted
$ mount | grep stuff
5. List snapshots (expected 63 snapshots on my system)
$ ls /mnt/test/stuff/.snap
Now ls hangs forever and the kernel log shows the oops. |
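A minimal illustration of the pattern involved (not the ceph parse_longname() code itself): a pointer registered with __free(kfree) must keep the allocator's original value, so the leading '_' is skipped through a separate cursor:
#include <linux/cleanup.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>
static int parse_name_example(const char *name, gfp_t gfp)
{
	char *str __free(kfree) = kstrdup(name, gfp);
	const char *p;
	if (!str)
		return -ENOMEM;
	p = str;
	if (*p == '_')
		p++;	/* advance the cursor, never str itself */
	/* parse via p; the automatic kfree(str) still receives the original
	 * pointer returned by kstrdup() */
	return 0;
}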
| An array indexing vulnerability was found in the netfilter subsystem of the Linux kernel. A missing macro could lead to a miscalculation of the `h->nets` array offset, providing attackers with a primitive to arbitrarily increment or decrement a memory buffer out of bounds. This issue may allow a local user to crash the system or potentially escalate their privileges on the system. |
| A flaw was found in the USB Host Controller Driver framework in the Linux kernel. The usb_giveback_urb function has a logic flaw in its implementation: due to an incorrect condition on a goto statement, the function fails to return when given a specific malformed descriptor, falling into an endless loop and resulting in a denial of service. |
| In the Linux kernel, the following vulnerability has been resolved:
spi: cadence-quadspi: Implement refcount to handle unbind during busy
The driver supports indirect read and indirect write operations under the
assumption that no forced device removal (unbind) takes place. However,
forced removal is still available to the root superuser, and unbinding
the driver while an operation is in flight crashes the kernel. Make the
driver handle this for indirect read and indirect write by implementing a
refcount that tracks the devices attached to the controller, and have the
removal path wait gracefully until the attached devices' operations have
completed before proceeding with the removal. |
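A sketch of the refcount-and-wait shape the change describes; the structure, field and helper names are illustrative, not the driver's actual ones:
#include <linux/refcount.h>
#include <linux/wait.h>
struct cqspi_ctrl_example {
	refcount_t inflight;		/* set to 1 at probe time */
	wait_queue_head_t remove_wq;	/* init_waitqueue_head() at probe */
};
/* Start of an indirect read/write: fails once removal has begun. */
static bool cqspi_op_start(struct cqspi_ctrl_example *c)
{
	return refcount_inc_not_zero(&c->inflight);
}
static void cqspi_op_end(struct cqspi_ctrl_example *c)
{
	if (refcount_dec_and_test(&c->inflight))
		wake_up(&c->remove_wq);
}
/* Called from .remove(): drop the probe reference, then wait for any
 * in-flight indirect operations to drain before tearing down. */
static void cqspi_remove_wait(struct cqspi_ctrl_example *c)
{
	if (!refcount_dec_and_test(&c->inflight))
		wait_event(c->remove_wq, refcount_read(&c->inflight) == 0);
}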
| In the Linux kernel, the following vulnerability has been resolved:
netfilter: nft_set_pipapo: prevent overflow in lookup table allocation
When calculating the lookup table size, ensure the following
multiplication does not overflow:
- desc->field_len[] maximum value is U8_MAX multiplied by
NFT_PIPAPO_GROUPS_PER_BYTE(f), which can be 2 in the worst case.
- NFT_PIPAPO_BUCKETS(f->bb) is 2^8 in the worst case.
- sizeof(unsigned long), from sizeof(*f->lt), lt in
struct nft_pipapo_field.
Then, use check_mul_overflow() when multiplying by the bucket size, and
check_add_overflow() when adding the alignment headroom for AVX2 (if
needed). Finally, add an lt_size_check_overflow() helper and use it to
consolidate this.
While at it, replace a leftover allocation that uses GFP_KERNEL with
GFP_KERNEL_ACCOUNT for consistency, in pipapo_resize(). |
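A sketch of the consolidated helper described above; lt_size_check_overflow() is the name the patch adds, while the exact signature shown here is an assumption:
static int lt_size_check_overflow(unsigned int groups, int bb,
				  size_t bsize, size_t *lt_size)
{
	/* groups * buckets * sizeof(long) stays small given the bounds
	 * listed above; the multiplication by bsize is what may overflow. */
	if (check_mul_overflow(groups * NFT_PIPAPO_BUCKETS(bb) * sizeof(long),
			       bsize, lt_size))
		return -ENOMEM;
#ifdef NFT_PIPAPO_ALIGN
	if (check_add_overflow(*lt_size, NFT_PIPAPO_ALIGN_HEADROOM, lt_size))
		return -ENOMEM;
#endif
	return 0;
}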
| Inadequate encryption strength in .NET, .NET Framework, Visual Studio allows an authorized attacker to disclose information over a network. |
| Improper link resolution before file access ('link following') in .NET allows an authorized attacker to elevate privileges locally. |
| In the Linux kernel, the following vulnerability has been resolved:
netfilter: nf_tables: fix inverted genmask check in nft_map_catchall_activate()
nft_map_catchall_activate() has an inverted element activity check
compared to its non-catchall counterpart nft_mapelem_activate() and
compared to what is logically required.
nft_map_catchall_activate() is called from the abort path to re-activate
catchall map elements that were deactivated during a failed transaction.
It should skip elements that are already active (they don't need
re-activation) and process elements that are inactive (they need to be
restored). Instead, the current code does the opposite: it skips inactive
elements and processes active ones.
Compare the non-catchall activate callback, which is correct:
nft_mapelem_activate():
if (nft_set_elem_active(ext, iter->genmask))
return 0; /* skip active, process inactive */
With the buggy catchall version:
nft_map_catchall_activate():
if (!nft_set_elem_active(ext, genmask))
continue; /* skip inactive, process active */
The consequence is that when a DELSET operation is aborted,
nft_setelem_data_activate() is never called for the catchall element.
For NFT_GOTO verdict elements, this means nft_data_hold() is never
called to restore the chain->use reference count. Each abort cycle
permanently decrements chain->use. Once chain->use reaches zero,
DELCHAIN succeeds and frees the chain while catchall verdict elements
still reference it, resulting in a use-after-free.
This is exploitable for local privilege escalation from an unprivileged
user via user namespaces + nftables on distributions that enable
CONFIG_USER_NS and CONFIG_NF_TABLES.
Fix by removing the negation so the check matches nft_mapelem_activate():
skip active elements, process inactive ones. |
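With the negation removed, the catchall check matches the non-catchall one quoted above:
nft_map_catchall_activate():
	if (nft_set_elem_active(ext, genmask))
		continue; /* skip active, restore inactive */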
| In the Linux kernel, the following vulnerability has been resolved:
nvmet-tcp: add bounds checks in nvmet_tcp_build_pdu_iovec
nvmet_tcp_build_pdu_iovec() could walk past cmd->req.sg when a PDU
length or offset exceeds sg_cnt and then use bogus sg->length/offset
values, leading to _copy_to_iter() GPF/KASAN. Guard sg_idx, remaining
entries, and sg->length/offset before building the bvec. |
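A sketch of the guards described above while building the bio_vec array from the scatterlist; the loop shape and the sg_offset handling are illustrative rather than the exact upstream code:
	while (length && sg && cmd->sg_idx < cmd->req.sg_cnt) {
		u32 iov_len;
		if (sg_offset >= sg->length)
			break;		/* bogus offset: stop, do not walk past */
		iov_len = min_t(u32, length, sg->length - sg_offset);
		bvec_set_page(iov, sg_page(sg), iov_len,
			      sg->offset + sg_offset);
		iov++;
		length -= iov_len;
		sg = sg_next(sg);
		cmd->sg_idx++;
		sg_offset = 0;
	}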
| .NET and Visual Studio Remote Code Execution Vulnerability |