| In the Linux kernel, the following vulnerability has been resolved:
usb: phy: fsl-usb: Fix use-after-free in delayed work during device removal
The delayed work item otg_event is initialized in fsl_otg_conf() and
scheduled under two conditions:
1. When a host controller binds to the OTG controller.
2. When the USB ID pin state changes (cable insertion/removal).
A race condition occurs when the device is removed via fsl_otg_remove():
the fsl_otg instance may be freed while the delayed work is still pending
or executing. This leads to use-after-free when the work function
fsl_otg_event() accesses the already freed memory.
The problematic scenario:
(detach thread)             | (delayed work)
fsl_otg_remove()            |
  kfree(fsl_otg_dev) //FREE | fsl_otg_event()
                            |   og = container_of(...) //USE
                            |   og-> //USE
Fix this by calling disable_delayed_work_sync() in fsl_otg_remove()
before deallocating the fsl_otg structure. This ensures the delayed work
is properly canceled and completes execution prior to memory deallocation.
This bug was identified through static analysis. |
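A minimal sketch of the pattern, with hypothetical names (my_otg, my_otg_remove)
standing in for the fsl-usb internals; disable_delayed_work_sync() only exists
on recent kernels, and older ones would use cancel_delayed_work_sync(), which
also waits for the work but does not block re-queueing:

    #include <linux/slab.h>
    #include <linux/workqueue.h>

    /* Hypothetical state mirroring the fsl_otg layout. */
    struct my_otg {
            struct delayed_work otg_event;
            int fsm_state;
    };

    static struct my_otg *my_otg_dev;

    static void my_otg_event(struct work_struct *work)
    {
            struct my_otg *og = container_of(work, struct my_otg, otg_event.work);

            og->fsm_state = 1;      /* use-after-free if this races with remove */
    }

    static void my_otg_remove(void)
    {
            /*
             * Cancel pending work, wait for a running instance to finish, and
             * keep the ID-pin interrupt from re-queueing it, all before the
             * memory the work function dereferences is freed.
             */
            disable_delayed_work_sync(&my_otg_dev->otg_event);
            kfree(my_otg_dev);
            my_otg_dev = NULL;
    }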
| In the Linux kernel, the following vulnerability has been resolved:
btrfs: don't log conflicting inode if it's a dir moved in the current transaction
We can't log a conflicting inode if it's a directory and it was moved
from one parent directory to another parent directory in the current
transaction, as this can result in an attempt to have a directory with
two hard links during log replay, one for the old parent directory and
another for the new parent directory.
The following scenario triggers that issue:
1) We have directories "dir1" and "dir2" created in a past transaction.
Directory "dir1" has inode A as its parent directory;
2) We move "dir1" to some other directory;
3) We create a file with the name "dir1" in directory inode A;
4) We fsync the new file. This results in logging the inode of the new file
and the inode for the directory "dir1" that was previously moved in the
current transaction. So the log tree has the INODE_REF item for the
new location of "dir1";
5) We move the new file to some other directory. This results in updating
the log tree to include the new INODE_REF for the new location of the
file and removes the INODE_REF for the old location. This happens
during the rename when we call btrfs_log_new_name();
6) We fsync the file, and that persists the log tree changes done in the
previous step (btrfs_log_new_name() only updates the log tree in
memory);
7) We have a power failure;
8) Next time the fs is mounted, log replay happens and when processing
the inode for directory "dir1" we find a new INODE_REF and add that
link, but we don't remove the old link of the inode since we have
not logged the old parent directory of the directory inode "dir1".
As a result, after log replay finishes, when we trigger writeback of the
subvolume tree's extent buffers, the tree checker will detect that we have
a directory with a hard link count of 2, and we get a mount failure.
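A hedged userspace sketch of steps 1-7 (the mount point and names are made up;
the essential sequence is the fsync after the create, the second rename, and
the fsync again, followed by a power failure):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
            int fd;

            /* Step 1: create the directories in a past transaction. */
            mkdir("/mnt/A", 0755);
            mkdir("/mnt/A/dir1", 0755);
            mkdir("/mnt/dir2", 0755);
            sync();                                 /* commit the transaction */

            rename("/mnt/A/dir1", "/mnt/dir2/dir1");    /* step 2 */
            fd = open("/mnt/A/dir1", O_CREAT | O_WRONLY, 0644); /* step 3 */
            fsync(fd);                          /* step 4: logs the new INODE_REF */
            rename("/mnt/A/dir1", "/mnt/dir2/foo");  /* step 5: btrfs_log_new_name() */
            fsync(fd);                          /* step 6: persists the log tree */
            close(fd);
            /* Step 7: power failure here; the next mount fails in log replay. */
            return 0;
    }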
The errors and stack traces reported in dmesg/syslog are like this:
[ 3845.729764] BTRFS info (device dm-0): start tree-log replay
[ 3845.730304] page: refcount:3 mapcount:0 mapping:000000005c8a3027 index:0x1d00 pfn:0x11510c
[ 3845.731236] memcg:ffff9264c02f4e00
[ 3845.731751] aops:btree_aops [btrfs] ino:1
[ 3845.732300] flags: 0x17fffc00000400a(uptodate|private|writeback|node=0|zone=2|lastcpupid=0x1ffff)
[ 3845.733346] raw: 017fffc00000400a 0000000000000000 dead000000000122 ffff9264d978aea8
[ 3845.734265] raw: 0000000000001d00 ffff92650e6d4738 00000003ffffffff ffff9264c02f4e00
[ 3845.735305] page dumped because: eb page dump
[ 3845.735981] BTRFS critical (device dm-0): corrupt leaf: root=5 block=30408704 slot=6 ino=257, invalid nlink: has 2 expect no more than 1 for dir
[ 3845.737786] BTRFS info (device dm-0): leaf 30408704 gen 10 total ptrs 17 free space 14881 owner 5
[ 3845.737789] BTRFS info (device dm-0): refs 4 lock_owner 0 current 30701
[ 3845.737792] item 0 key (256 INODE_ITEM 0) itemoff 16123 itemsize 160
[ 3845.737794] inode generation 3 transid 9 size 16 nbytes 16384
[ 3845.737795] block group 0 mode 40755 links 1 uid 0 gid 0
[ 3845.737797] rdev 0 sequence 2 flags 0x0
[ 3845.737798] atime 1764259517.0
[ 3845.737800] ctime 1764259517.572889464
[ 3845.737801] mtime 1764259517.572889464
[ 3845.737802] otime 1764259517.0
[ 3845.737803] item 1 key (256 INODE_REF 256) itemoff 16111 itemsize 12
[ 3845.737805] index 0 name_len 2
[ 3845.737807] item 2 key (256 DIR_ITEM 2363071922) itemoff 16077 itemsize 34
[ 3845.737808] location key (257 1 0) type 2
[ 3845.737810] transid 9 data_len 0 name_len 4
[ 3845.737811] item 3 key (256 DIR_ITEM 2676584006) itemoff 16043 itemsize 34
[ 3845.737813] location key (258 1 0) type 2
[ 3845.737814] transid 9 data_len 0 name_len 4
[ 3845.737815] item 4 key (256 DIR_INDEX 2) itemoff 16009 itemsize 34
[ 3845.737816] location key (257 1 0) type 2
[
---truncated--- |
| In the Linux kernel, the following vulnerability has been resolved:
net/handshake: duplicate handshake cancellations leak socket
When a handshake request is cancelled it is removed from the
handshake_net->hn_requests list, but it is still present in the
handshake_rhashtbl until it is destroyed.
If a second cancellation request arrives for the same handshake request,
then remove_pending() will return false... and assuming
HANDSHAKE_F_REQ_COMPLETED isn't set in req->hr_flags, we'll continue
processing through the out_true label, where we put another reference on
the sock and a refcount underflow occurs.
This can happen for example if a handshake times out - particularly if
the SUNRPC client sends the AUTH_TLS probe to the server but doesn't
follow it up with the ClientHello due to a problem with tlshd. When the
timeout is hit on the server, the server will send a FIN, which triggers
a cancellation request via xs_reset_transport(). When the timeout is
hit on the client, another cancellation request happens via
xs_tls_handshake_sync().
Add a test_and_set_bit(HANDSHAKE_F_REQ_COMPLETED) in the pending cancel
path so duplicate cancels can be detected. |
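A hedged sketch of the dedup pattern the fix describes, with hypothetical names
in place of the net/handshake internals:

    #include <linux/bitops.h>
    #include <net/sock.h>

    #define MY_REQ_COMPLETED 0      /* plays the role of HANDSHAKE_F_REQ_COMPLETED */

    struct my_req {
            unsigned long hr_flags;
            struct sock *hr_sk;
    };

    /* Returns true only for the first cancellation of @req. */
    static bool my_req_cancel(struct my_req *req)
    {
            /*
             * test_and_set_bit() is atomic, so exactly one caller sees the bit
             * clear; a duplicate cancel bails out before it can drop the sock
             * reference a second time and underflow the refcount.
             */
            if (test_and_set_bit(MY_REQ_COMPLETED, &req->hr_flags))
                    return false;

            sock_put(req->hr_sk);   /* drop the reference exactly once */
            return true;
    }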
| In the Linux kernel, the following vulnerability has been resolved:
f2fs: fix to avoid updating compression context during writeback
Bai, Shuangpeng <sjb7183@psu.edu> reported a bug as below:
Oops: divide error: 0000 [#1] SMP KASAN PTI
CPU: 0 UID: 0 PID: 11441 Comm: syz.0.46 Not tainted 6.17.0 #1 PREEMPT(full)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
RIP: 0010:f2fs_all_cluster_page_ready+0x106/0x550 fs/f2fs/compress.c:857
Call Trace:
<TASK>
f2fs_write_cache_pages fs/f2fs/data.c:3078 [inline]
__f2fs_write_data_pages fs/f2fs/data.c:3290 [inline]
f2fs_write_data_pages+0x1c19/0x3600 fs/f2fs/data.c:3317
do_writepages+0x38e/0x640 mm/page-writeback.c:2634
filemap_fdatawrite_wbc mm/filemap.c:386 [inline]
__filemap_fdatawrite_range mm/filemap.c:419 [inline]
file_write_and_wait_range+0x2ba/0x3e0 mm/filemap.c:794
f2fs_do_sync_file+0x6e6/0x1b00 fs/f2fs/file.c:294
generic_write_sync include/linux/fs.h:3043 [inline]
f2fs_file_write_iter+0x76e/0x2700 fs/f2fs/file.c:5259
new_sync_write fs/read_write.c:593 [inline]
vfs_write+0x7e9/0xe00 fs/read_write.c:686
ksys_write+0x19d/0x2d0 fs/read_write.c:738
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xf7/0x470 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
The bug was triggered w/ below race condition:
fsync                                    setattr               ioctl
- f2fs_do_sync_file
 - file_write_and_wait_range
  - f2fs_write_cache_pages
  : inode is non-compressed
  : cc.cluster_size =
    F2FS_I(inode)->i_cluster_size = 0
  - tag_pages_for_writeback
                                         - f2fs_setattr
                                          - truncate_setsize
                                           - f2fs_truncate
                                                               - f2fs_fileattr_set
                                                                - f2fs_setflags_common
                                                                 - set_compress_context
                                                                 : F2FS_I(inode)->i_cluster_size = 4
                                                                 : set_inode_flag(inode, FI_COMPRESSED_FILE)
  - f2fs_compressed_file
  : return true
  - f2fs_all_cluster_page_ready
  : "pgidx % cc->cluster_size" triggers a divide-by-zero
Let's change as below to fix this issue (a sketch of the guard follows this
entry):
- introduce a new atomic variable .writeback in struct f2fs_inode_info
to track the number of threads calling f2fs_write_cache_pages().
- use the .i_sem lock to protect .writeback updates.
- check .writeback before updating the compression context in
f2fs_setflags_common() to avoid racing w/ ->writepages. |
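A hedged sketch of that guard; the .writeback counter and the helper names
follow the description above rather than the exact patch, and the f2fs types
come from fs/f2fs/f2fs.h:

    /* Writeback side: announce entry/exit of f2fs_write_cache_pages(). */
    static void my_writeback_enter(struct inode *inode)
    {
            struct f2fs_inode_info *fi = F2FS_I(inode);

            down_write(&fi->i_sem);
            atomic_inc(&fi->writeback);
            up_write(&fi->i_sem);
    }

    static void my_writeback_exit(struct inode *inode)
    {
            atomic_dec(&F2FS_I(inode)->writeback);
    }

    /* ioctl side: refuse to flip the compression context mid-writeback. */
    static int my_set_compress_context(struct inode *inode)
    {
            struct f2fs_inode_info *fi = F2FS_I(inode);
            int err = 0;

            down_write(&fi->i_sem);
            if (atomic_read(&fi->writeback))
                    err = -EBUSY;   /* ->writepages in flight; keep cluster_size stable */
            else
                    set_compress_context(inode);
            up_write(&fi->i_sem);
            return err;
    }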
| In the Linux kernel, the following vulnerability has been resolved:
bnxt_en: Fix XDP_TX path
For XDP_TX action in bnxt_rx_xdp(), clearing of the event flags is not
correct. __bnxt_poll_work() -> bnxt_rx_pkt() -> bnxt_rx_xdp() may be
looping within NAPI and some event flags may be set in earlier
iterations. In particular, if BNXT_TX_EVENT is set earlier indicating
some XDP_TX packets are ready and pending, it will be cleared if it is
XDP_TX action again. Normally, we will set BNXT_TX_EVENT again when we
successfully call __bnxt_xmit_xdp(). But if the TX ring has no more
room, the flag will not be set. This will cause the TX producer to be
ahead but the driver will not hit the TX doorbell.
For multi-buf XDP_TX, there is no need to clear the event flags and set
BNXT_AGG_EVENT. The BNXT_AGG_EVENT flag should have been set earlier in
bnxt_rx_pkt().
The visible symptom of this is that the RX ring associated with the
TX XDP ring eventually becomes empty and all packets are dropped,
because this condition causes the driver to stop refilling the RX ring,
seeing that the TX ring has forever-pending XDP_TX packets.
The fix is to only clear BNXT_RX_EVENT when we have successfully
called __bnxt_xmit_xdp(). |
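A hedged sketch of the corrected event accounting (flag names from the
description above; the surrounding driver plumbing is elided):

    #include <linux/bits.h>
    #include <linux/types.h>

    #define BNXT_RX_EVENT   BIT(0)
    #define BNXT_TX_EVENT   BIT(1)
    #define BNXT_AGG_EVENT  BIT(2)

    /*
     * Illustrative XDP_TX accounting: events accumulate across NAPI
     * iterations, so clear only BNXT_RX_EVENT, and only once the
     * transmit has actually been queued.
     */
    static void my_xdp_tx_event(u32 *event, bool xmit_ok)
    {
            if (!xmit_ok)
                    return; /* no TX ring room: keep any earlier BNXT_TX_EVENT */

            *event &= ~BNXT_RX_EVENT;       /* never wipe TX/AGG from prior packets */
            *event |= BNXT_TX_EVENT;        /* doorbell is hit at the end of the poll */
    }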
| OS Command Injection Remote Code Execution Vulnerability in the API of Progress LoadMaster allows an authenticated attacker with “User Administration” permissions to execute arbitrary commands on the LoadMaster appliance by exploiting unsanitized API input parameters. |
| OS Command Injection Remote Code Execution Vulnerability in the API of Progress LoadMaster allows an authenticated attacker with “User Administration” permissions to execute arbitrary commands on the LoadMaster appliance by exploiting unsanitized API input parameters. |
| Zohocorp ManageEngine ADSelfService Plus versions before 6519 are vulnerable to Authentication Bypass due to improper filter configurations. |
| Improper Input Validation (CWE-20) in Kibana's Email Connector can allow an attacker to cause an Excessive Allocation (CAPEC-130) through a specially crafted email address parameter. This requires an attacker to have authenticated access with view-level privileges sufficient to execute connector actions. The application attempts to process the specially crafted email address, resulting in complete service unavailability for all users until a manual restart is performed. |
| Allocation of Resources Without Limits or Throttling (CWE-770) in Kibana Fleet can lead to Excessive Allocation (CAPEC-130) via a specially crafted request. This causes the application to perform redundant processing operations that continuously consume system resources until service degradation or complete unavailability occurs. |
| NSecsoft 'NSecKrnl' is a Windows driver that allows a local, authenticated attacker to terminate processes owned by other users, including SYSTEM and Protected Processes, by issuing crafted IOCTL requests to the driver. |
| A vulnerability affecting HPE Networking Instant On Access Points has been identified where a device processing a specially crafted packet could enter a non-responsive state, in some cases requiring a hard reset to re-establish services. A malicious actor could leverage this vulnerability to conduct a Denial-of-Service attack on a target network. |
| An authenticated arbitrary file write vulnerability exists in the web-based management interface of mobility conductors running either AOS-10 or AOS-8 operating systems. Successful exploitation could allow an authenticated malicious actor to create or modify arbitrary files and execute arbitrary commands as a privileged user on the underlying operating system. |
| An improper input handling vulnerability exists in the web-based management interface of mobility conductors running either AOS-10 or AOS-8 operating systems. Successful exploitation could allow an authenticated malicious actor with valid credentials to trigger unintended behavior on the affected system. |
| In Eptura Archibus 2024.03.01.109, the "Run script" and "Server File" components of the "Database Update Wizard" are vulnerable to directory traversal. |
| tarteaucitron.js is a compliant and accessible cookie banner. Prior to 1.29.0, a Regular Expression Denial of Service (ReDoS) vulnerability was identified in tarteaucitron.js in the handling of the issuu_id parameter. This vulnerability is fixed in 1.29.0. |
| Software installed and run as a non-privileged user may issue improper GPU system calls that cause mismanagement of reference counting on an internal resource, resulting in a potential use-after-free. |
| Jervis is a library for Job DSL plugin scripts and shared Jenkins pipeline libraries. Prior to 2.2, Jervis uses deterministic AES IV derivation from a passphrase. This vulnerability is fixed in 2.2. |
| A stack overflow vulnerability exists in the AOS-10 web-based management interface of a Mobility Gateway. Successful exploitation could allow an authenticated malicious actor to execute arbitrary code as a privileged user on the underlying operating system. |
| Jervis is a library for Job DSL plugin scripts and shared Jenkins pipeline libraries. Prior to 2.2, the salt is derived from sha256Sum(passphrase). Two encryption operations with the same password will have the same derived key. This vulnerability is fixed in 2.2. |
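For contrast, a minimal OpenSSL sketch (not Jervis's actual code) of deriving
the key with a per-operation random salt, so two encryptions with the same
passphrase no longer share a key; the same reasoning applies to the
deterministic-IV issue in the earlier Jervis entry:

    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/rand.h>

    /*
     * Derive a 256-bit key from a passphrase using a fresh random salt.
     * The salt is not secret and is stored next to the ciphertext; its
     * randomness is what makes repeated derivations non-deterministic.
     */
    static int derive_key(const char *pass, unsigned char salt[16],
                          unsigned char key[32])
    {
            if (RAND_bytes(salt, 16) != 1)
                    return -1;      /* CSPRNG failure */
            if (PKCS5_PBKDF2_HMAC(pass, (int)strlen(pass), salt, 16, 600000,
                                  EVP_sha256(), 32, key) != 1)
                    return -1;
            return 0;
    }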