A Cross-Site Scripting (XSS) vulnerability exists in Bhabishya-123 E-commerce 1.0, specifically within the index endpoint. Unsanitized input in the /index parameter is directly reflected back into the response HTML, allowing attackers to execute arbitrary JavaScript in the browser of a user who visits a malicious link or submits a crafted request. |
Cross-Site Scripting (XSS) vulnerability exists in TastyIgniter 3.7.7, affecting the /admin/media_manager component. Attackers can upload a malicious SVG file containing JavaScript code. When an administrator previews the file, the code executes in their browser context, allowing the attacker to perform unauthorized actions such as modifying the admin account credentials. |
Apache Syncope offers the ability to extend or customize its base behavior on every deployment by allowing custom implementations of a few Java interfaces to be provided; such implementations can be supplied either as Java or Groovy classes, with the latter being particularly attractive because the machinery supports runtime reload.
Such a feature has been available for a while, but recently it was discovered that a malicious administrator can inject Groovy code that can be executed remotely by a running Apache Syncope Core instance.
Users are recommended to upgrade to version 3.0.14 / 4.0.2, which fix this issue by forcing the Groovy code to run in a sandbox. |
A lack of rate limiting in the One-Time Password (OTP) verification endpoint of SigningHub v8.6.8 allows attackers to bypass verification via a bruteforce attack. |
A lack of rate limiting in the component /Home/UploadStreamDocument of SigningHub v8.6.8 allows attackers to cause a Denial of Service (DoS) via uploading an excessive number of files. |
Incorrect access control in SigningHub v8.6.8 allows attackers to arbitrarily add user accounts without any rate limiting. This can lead to resource exhaustion and a Denial of Service (DoS) when an excessively large number of user accounts is created. |
In Samsung Mobile Processor and Wearable Processor Exynos 980, 1280, 1330, 1380, 1480, 2400, 1580, W920, W930, and W1000, there is an improper access control vulnerability related to a log file. |
In the Linux kernel, the following vulnerability has been resolved:
media: iris: Fix memory leak by freeing untracked persist buffer
One internal buffer, which is allocated only once per session, was not
being freed during session close because it was not tracked as part of
the internal buffer list, which resulted in a memory leak.
Add the necessary logic to explicitly free the untracked internal buffer
during session close to ensure all allocated memory is released
properly. |
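A minimal sketch of the kind of fix described above; the structure and field names here are assumptions, not the actual iris driver definitions:
```
#include <linux/slab.h>
#include <linux/list.h>

/*
 * Sketch only: struct and field names are assumptions, not the real iris
 * driver API. The point is that a buffer allocated once per session but
 * never added to the tracked internal-buffer list must be freed
 * explicitly when the session is closed.
 */
struct iris_session_sketch {
	struct list_head internal_buffers;	/* tracked and freed as before */
	void *persist_buf;			/* allocated once, not tracked */
};

static void iris_session_close_sketch(struct iris_session_sketch *sess)
{
	/* ... free every buffer on sess->internal_buffers as before ... */

	/* Explicitly release the untracked persist buffer as well. */
	kfree(sess->persist_buf);
	sess->persist_buf = NULL;
}
```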
In the Linux kernel, the following vulnerability has been resolved:
media: uvcvideo: Mark invalid entities with id UVC_INVALID_ENTITY_ID
Per UVC 1.1+ specification 3.7.2, units and terminals must have a non-zero
unique ID.
```
Each Unit and Terminal within the video function is assigned a unique
identification number, the Unit ID (UID) or Terminal ID (TID), contained in
the bUnitID or bTerminalID field of the descriptor. The value 0x00 is
reserved for undefined ID,
```
If we add a new entity with id 0 or a duplicated ID, it will be marked
as UVC_INVALID_ENTITY_ID.
In a previous attempt, commit 3dd075fe8ebb ("media: uvcvideo: Require
entities to have a non-zero unique ID"), we ignored all the invalid units;
this broke a lot of non-compliant cameras. Hopefully we will be luckier
this time (a rough sketch of the marking approach follows the trace below).
This also prevents some syzkaller reproducers from triggering warnings due
to a chain of entities referring to themselves. In one particular case, an
Output Unit is connected to an Input Unit, both with the same ID of 1. But
when looking up for the source ID of the Output Unit, that same entity is
found instead of the input entity, which leads to such warnings.
In another case, a backward chain was considered finished as the source ID
was 0. Later on, that entity was found, but its pads were not valid.
Here is a sample stack trace for one of those cases.
[ 20.650953] usb 1-1: new high-speed USB device number 2 using dummy_hcd
[ 20.830206] usb 1-1: Using ep0 maxpacket: 8
[ 20.833501] usb 1-1: config 0 descriptor??
[ 21.038518] usb 1-1: string descriptor 0 read error: -71
[ 21.038893] usb 1-1: Found UVC 0.00 device <unnamed> (2833:0201)
[ 21.039299] uvcvideo 1-1:0.0: Entity type for entity Output 1 was not initialized!
[ 21.041583] uvcvideo 1-1:0.0: Entity type for entity Input 1 was not initialized!
[ 21.042218] ------------[ cut here ]------------
[ 21.042536] WARNING: CPU: 0 PID: 9 at drivers/media/mc/mc-entity.c:1147 media_create_pad_link+0x2c4/0x2e0
[ 21.043195] Modules linked in:
[ 21.043535] CPU: 0 UID: 0 PID: 9 Comm: kworker/0:1 Not tainted 6.11.0-rc7-00030-g3480e43aeccf #444
[ 21.044101] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.15.0-1 04/01/2014
[ 21.044639] Workqueue: usb_hub_wq hub_event
[ 21.045100] RIP: 0010:media_create_pad_link+0x2c4/0x2e0
[ 21.045508] Code: fe e8 20 01 00 00 b8 f4 ff ff ff 48 83 c4 30 5b 41 5c 41 5d 41 5e 41 5f 5d c3 cc cc cc cc 0f 0b eb e9 0f 0b eb 0a 0f 0b eb 06 <0f> 0b eb 02 0f 0b b8 ea ff ff ff eb d4 66 2e 0f 1f 84 00 00 00 00
[ 21.046801] RSP: 0018:ffffc9000004b318 EFLAGS: 00010246
[ 21.047227] RAX: ffff888004e5d458 RBX: 0000000000000000 RCX: ffffffff818fccf1
[ 21.047719] RDX: 000000000000007b RSI: 0000000000000000 RDI: ffff888004313290
[ 21.048241] RBP: ffff888004313290 R08: 0001ffffffffffff R09: 0000000000000000
[ 21.048701] R10: 0000000000000013 R11: 0001888004313290 R12: 0000000000000003
[ 21.049138] R13: ffff888004313080 R14: ffff888004313080 R15: 0000000000000000
[ 21.049648] FS: 0000000000000000(0000) GS:ffff88803ec00000(0000) knlGS:0000000000000000
[ 21.050271] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 21.050688] CR2: 0000592cc27635b0 CR3: 000000000431c000 CR4: 0000000000750ef0
[ 21.051136] PKRU: 55555554
[ 21.051331] Call Trace:
[ 21.051480] <TASK>
[ 21.051611] ? __warn+0xc4/0x210
[ 21.051861] ? media_create_pad_link+0x2c4/0x2e0
[ 21.052252] ? report_bug+0x11b/0x1a0
[ 21.052540] ? trace_hardirqs_on+0x31/0x40
[ 21.052901] ? handle_bug+0x3d/0x70
[ 21.053197] ? exc_invalid_op+0x1a/0x50
[ 21.053511] ? asm_exc_invalid_op+0x1a/0x20
[ 21.053924] ? media_create_pad_link+0x91/0x2e0
[ 21.054364] ? media_create_pad_link+0x2c4/0x2e0
[ 21.054834] ? media_create_pad_link+0x91/0x2e0
[ 21.055131] ? _raw_spin_unlock+0x1e/0x40
[ 21.055441] ? __v4l2_device_register_subdev+0x202/0x210
[ 21.055837] uvc_mc_register_entities+0x358/0x400
[ 21.056144] uvc_register_chains+0x1
---truncated--- |
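A rough sketch of the marking approach described in this entry; the sentinel value and the duplicate-ID lookup are treated as assumptions, and the driver's real definitions may differ:
```
/*
 * Sketch only: the sentinel value and the duplicate-ID lookup are
 * assumptions. The idea is to keep descriptor parsing going but mark
 * bogus entities so that later chain walks can skip them instead of
 * following an entity that refers to itself.
 */
#define UVC_INVALID_ENTITY_ID_SKETCH 0xffff	/* assumed sentinel */

static void uvc_sanitize_entity_id_sketch(struct uvc_device *dev,
					  struct uvc_entity *entity)
{
	/* Per UVC 1.1+ section 3.7.2, ID 0 is reserved for "undefined". */
	if (entity->id == 0 || uvc_entity_by_id(dev, entity->id))
		entity->id = UVC_INVALID_ENTITY_ID_SKETCH;
}
```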
In the Linux kernel, the following vulnerability has been resolved:
media: stm32-csi: Fix dereference before NULL check
In 'stm32_csi_start', 'csidev->s_subdev' is dereferenced directly while
assigning a value to 'src_pad'. However, the same pointer is checked
against NULL later on, indicating that it can be NULL.
Move the dereference after the NULL check. |
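A minimal sketch of that reordering, with simplified field names and error handling that are assumptions rather than the actual stm32-csi code:
```
/*
 * Sketch only: the point is the ordering, not the exact stm32-csi code.
 * Before the fix, s_subdev was dereferenced to obtain src_pad and only
 * checked for NULL afterwards; here the check comes first.
 */
static int stm32_csi_start_sketch(struct stm32_csi_dev *csidev)
{
	struct media_pad *src_pad;

	if (!csidev->s_subdev)
		return -ENODEV;

	/* Safe: the NULL check above now precedes the dereference. */
	src_pad = &csidev->s_subdev->entity.pads[0];

	/* ... continue stream start using src_pad ... */
	return 0;
}
```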
Better Auth is an authentication library for TypeScript. An open redirect vulnerability has been identified in the verify email endpoint of all versions of Better Auth prior to v1.1.6, potentially allowing attackers to redirect users to malicious websites. This issue affects users relying on email verification links generated by the library. The verify email callback endpoint accepts a `callbackURL` parameter. Unlike other verification methods, email verification only uses JWT to verify and redirect without proper validation of the target domain. The origin checker is bypassed in this scenario because it only checks for `POST` requests. An attacker can manipulate this parameter to redirect users to arbitrary URLs controlled by the attacker. Version 1.1.6 contains a patch for the issue. |
In the Linux kernel, the following vulnerability has been resolved:
ASoC: qcom: audioreach: fix potential null pointer dereference
It is possible that the topology parsing function
audioreach_widget_load_module_common() could return NULL or an error
pointer. Add missing NULL check so that we do not dereference it. |
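A minimal sketch of the added check; the surrounding function signature and the error code returned on NULL are assumptions:
```
/*
 * Sketch only: the caller's signature and the error code returned on
 * NULL are assumptions. The point is that the return value may be
 * either NULL or an ERR_PTR(), and both must be handled before use.
 */
static int audioreach_widget_load_sketch(struct snd_soc_component *component,
					 struct snd_soc_dapm_widget *w,
					 struct snd_soc_tplg_dapm_widget *tplg_w)
{
	struct audioreach_module *mod;

	mod = audioreach_widget_load_module_common(component, w, tplg_w);
	if (IS_ERR(mod))
		return PTR_ERR(mod);
	if (!mod)			/* the added NULL check */
		return -EINVAL;

	/* ... safe to dereference mod from here on ... */
	return 0;
}
```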
In the Linux kernel, the following vulnerability has been resolved:
net/smc: fix warning in smc_rx_splice() when calling get_page()
smc_lo_register_dmb() allocates DMB buffers with kzalloc(), which are
later passed to get_page() in smc_rx_splice(). Since kmalloc memory is
not page-backed, this triggers WARN_ON_ONCE() in get_page() and prevents
holding a refcount on the buffer. This can lead to use-after-free if
the memory is released before splice_to_pipe() completes.
Use folio_alloc() instead, ensuring DMBs are page-backed and safe for
get_page(); a sketch of this allocation change follows the trace below.
WARNING: CPU: 18 PID: 12152 at ./include/linux/mm.h:1330 smc_rx_splice+0xaf8/0xe20 [smc]
CPU: 18 UID: 0 PID: 12152 Comm: smcapp Kdump: loaded Not tainted 6.17.0-rc3-11705-g9cf4672ecfee #10 NONE
Hardware name: IBM 3931 A01 704 (z/VM 7.4.0)
Krnl PSW : 0704e00180000000 000793161032696c (smc_rx_splice+0xafc/0xe20 [smc])
R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
Krnl GPRS: 0000000000000000 001cee80007d3001 00077400000000f8 0000000000000005
0000000000000001 001cee80007d3006 0007740000001000 001c000000000000
000000009b0c99e0 0000000000001000 001c0000000000f8 001c000000000000
000003ffcc6f7c88 0007740003e98000 0007931600000005 000792969b2ff7b8
Krnl Code: 0007931610326960: af000000 mc 0,0
0007931610326964: a7f4ff43 brc 15,00079316103267ea
#0007931610326968: af000000 mc 0,0
>000793161032696c: a7f4ff3f brc 15,00079316103267ea
0007931610326970: e320f1000004 lg %r2,256(%r15)
0007931610326976: c0e53fd1b5f5 brasl %r14,000793168fd5d560
000793161032697c: a7f4fbb5 brc 15,00079316103260e6
0007931610326980: b904002b lgr %r2,%r11
Call Trace:
smc_rx_splice+0xafc/0xe20 [smc]
smc_rx_splice+0x756/0xe20 [smc])
smc_rx_recvmsg+0xa74/0xe00 [smc]
smc_splice_read+0x1ce/0x3b0 [smc]
sock_splice_read+0xa2/0xf0
do_splice_read+0x198/0x240
splice_file_to_pipe+0x7e/0x110
do_splice+0x59e/0xde0
__do_splice+0x11a/0x2d0
__s390x_sys_splice+0x140/0x1f0
__do_syscall+0x122/0x280
system_call+0x6e/0x90
Last Breaking-Event-Address:
smc_rx_splice+0x960/0xe20 [smc]
---[ end trace 0000000000000000 ]--- |
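A minimal sketch of the allocation change described before the trace; the helper name is an assumption and the real smc_lo_register_dmb() does more bookkeeping:
```
#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Sketch only: a simplified allocator illustrating the change. Replacing
 * kzalloc() with folio_alloc() makes the DMB page-backed, so get_page()
 * in smc_rx_splice() can legitimately take a reference on it.
 */
static void *smc_lo_alloc_dmb_sketch(size_t len)
{
	struct folio *folio;

	folio = folio_alloc(GFP_KERNEL | __GFP_ZERO, get_order(len));
	if (!folio)
		return NULL;

	return folio_address(folio);	/* page-backed, safe for get_page() */
}
```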
In the Linux kernel, the following vulnerability has been resolved:
drm/gma500: Fix null dereference in hdmi teardown
pci_set_drvdata sets the value of pdev->driver_data to NULL,
after which oaktrail_hdmi_i2c_exit obtains the driver data from the
same device and dereferences it to extract the i2c_dev. To prevent
this NULL dereference, swap these calls.
Found by Linux Verification Center (linuxtesting.org) with Svacer. |
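A minimal sketch of the reordered teardown described above; the surrounding function is simplified, and only the call order matters:
```
/*
 * Sketch only: the surrounding teardown is elided. The exit helper must
 * read the driver data before it is cleared, so pci_set_drvdata(pdev,
 * NULL) has to come after it, not before.
 */
static void oaktrail_hdmi_teardown_sketch(struct pci_dev *pdev)
{
	oaktrail_hdmi_i2c_exit(pdev);	/* still sees valid driver data */
	pci_set_drvdata(pdev, NULL);	/* cleared only afterwards */
}
```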
In the Linux kernel, the following vulnerability has been resolved:
afs: Fix potential null pointer dereference in afs_put_server
afs_put_server() accessed server->debug_id before the NULL check, which
could lead to a null pointer dereference. Move the debug_id assignment,
ensuring we never dereference a NULL server pointer. |
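A minimal sketch of the reordering described above; the function's other parameters and the actual put/trace logic are elided and should be treated as assumptions:
```
#include <linux/printk.h>

/*
 * Sketch only: parameters and the real put/trace logic are elided. The
 * local debug_id is assigned only after the NULL check, so a NULL server
 * pointer is never dereferenced.
 */
static void afs_put_server_sketch(struct afs_server *server)
{
	unsigned int debug_id;

	if (!server)
		return;

	debug_id = server->debug_id;	/* moved after the NULL check */
	pr_debug("afs: putting server %u\n", debug_id);	/* stand-in for the real trace */

	/* ... drop the reference ... */
}
```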
In the Linux kernel, the following vulnerability has been resolved:
fs/proc/task_mmu: check p->vec_buf for NULL
When the PAGEMAP_SCAN ioctl is invoked with vec_len = 0 and the scan
reaches pagemap_scan_backout_range(), the kernel panics with a
null-ptr-deref:
[ 44.936808] Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN NOPTI
[ 44.937797] KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
[ 44.938391] CPU: 1 UID: 0 PID: 2480 Comm: reproducer Not tainted 6.17.0-rc6 #22 PREEMPT(none)
[ 44.939062] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
[ 44.939935] RIP: 0010:pagemap_scan_thp_entry.isra.0+0x741/0xa80
<snip registers, unreliable trace>
[ 44.946828] Call Trace:
[ 44.947030] <TASK>
[ 44.949219] pagemap_scan_pmd_entry+0xec/0xfa0
[ 44.952593] walk_pmd_range.isra.0+0x302/0x910
[ 44.954069] walk_pud_range.isra.0+0x419/0x790
[ 44.954427] walk_p4d_range+0x41e/0x620
[ 44.954743] walk_pgd_range+0x31e/0x630
[ 44.955057] __walk_page_range+0x160/0x670
[ 44.956883] walk_page_range_mm+0x408/0x980
[ 44.958677] walk_page_range+0x66/0x90
[ 44.958984] do_pagemap_scan+0x28d/0x9c0
[ 44.961833] do_pagemap_cmd+0x59/0x80
[ 44.962484] __x64_sys_ioctl+0x18d/0x210
[ 44.962804] do_syscall_64+0x5b/0x290
[ 44.963111] entry_SYSCALL_64_after_hwframe+0x76/0x7e
vec_len = 0 in pagemap_scan_init_bounce_buffer() means no buffers are
allocated and p->vec_buf remains set to NULL.
This breaks an assumption made later in pagemap_scan_backout_range(), that
page_region is always allocated for p->vec_buf_index.
Fix it by explicitly checking p->vec_buf for NULL before dereferencing.
Other sites that might run into same deref-issue are already (directly or
transitively) protected by checking p->vec_buf.
Note:
According to the PAGEMAP_SCAN man page, vec_len = 0 appears to be valid
when no output is requested and the caller is only interested in the side
effects, hence it passes the check in pagemap_scan_get_args().
This issue was found by syzkaller. |
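A minimal sketch of the added guard described above; the signature is simplified and the actual backout logic is elided:
```
/*
 * Sketch only: simplified from the description above. With vec_len == 0
 * no bounce buffer is allocated, so p->vec_buf stays NULL and must not
 * be indexed.
 */
static void pagemap_scan_backout_range_sketch(struct pagemap_scan_private *p,
					      unsigned long addr,
					      unsigned long end)
{
	if (!p->vec_buf)	/* the added NULL check: nothing was reported */
		return;

	/* ... adjust p->vec_buf[p->vec_buf_index] to back out [addr, end) ... */
}
```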
In the Linux kernel, the following vulnerability has been resolved:
kmsan: fix out-of-bounds access to shadow memory
Running sha224_kunit on a KMSAN-enabled kernel results in a crash in
kmsan_internal_set_shadow_origin():
BUG: unable to handle page fault for address: ffffbc3840291000
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 1810067 P4D 1810067 PUD 192d067 PMD 3c17067 PTE 0
Oops: 0000 [#1] SMP NOPTI
CPU: 0 UID: 0 PID: 81 Comm: kunit_try_catch Tainted: G N 6.17.0-rc3 #10 PREEMPT(voluntary)
Tainted: [N]=TEST
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.17.0-0-gb52ca86e094d-prebuilt.qemu.org 04/01/2014
RIP: 0010:kmsan_internal_set_shadow_origin+0x91/0x100
[...]
Call Trace:
<TASK>
__msan_memset+0xee/0x1a0
sha224_final+0x9e/0x350
test_hash_buffer_overruns+0x46f/0x5f0
? kmsan_get_shadow_origin_ptr+0x46/0xa0
? __pfx_test_hash_buffer_overruns+0x10/0x10
kunit_try_run_case+0x198/0xa00
This occurs when memset() is called on a buffer that is not 4-byte aligned
and extends to the end of a guard page, i.e. the next page is unmapped.
The bug is that the loop at the end of kmsan_internal_set_shadow_origin()
accesses the wrong shadow memory bytes when the address is not 4-byte
aligned. Since each 4 bytes are associated with an origin, it rounds the
address and size so that it can access all the origins that contain the
buffer. However, when it checks the corresponding shadow bytes for a
particular origin, it incorrectly uses the original unrounded shadow
address. This results in reads from shadow memory beyond the end of the
buffer's shadow memory, which crashes when that memory is not mapped.
To fix this, correctly align the shadow address before accessing the 4
shadow bytes corresponding to each origin. |
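A minimal sketch of the alignment step described above; the helper name and the surrounding loop are assumptions, not the actual KMSAN internals:
```
#include <linux/align.h>
#include <linux/types.h>

/*
 * Sketch only: each origin slot covers 4 shadow bytes, so the shadow
 * pointer used to inspect those bytes must itself be rounded down to a
 * 4-byte boundary; otherwise the last iteration can read shadow bytes
 * past the end of the buffer's mapped shadow region.
 */
static inline u32 *shadow_for_origin_slot_sketch(void *shadow, size_t slot)
{
	uintptr_t addr = (uintptr_t)shadow + slot * 4;

	return (u32 *)ALIGN_DOWN(addr, 4);
}
```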
In the Linux kernel, the following vulnerability has been resolved:
netfs: fix reference leak
Commit 20d72b00ca81 ("netfs: Fix the request's work item to not
require a ref") modified netfs_alloc_request() to initialize the
reference counter to 2 instead of 1. The rationale was that the
request's "work" would release the second reference after completion
(via netfs_{read,write}_collection_worker()). That works most of the
time if all goes well.
However, it leaks this additional reference if the request is released
before the I/O operation has been submitted: the error code path only
decrements the reference counter once and the work item will never be
queued because there will never be a completion.
This has caused outages of our whole server cluster today because
tasks were blocked in netfs_wait_for_outstanding_io(), leading to
deadlocks in Ceph (another bug that I will address soon in another
patch). This was caused by a netfs_pgpriv2_begin_copy_to_cache() call
which failed in fscache_begin_write_operation(). The leaked
netfs_io_request was never completed, leaving `netfs_inode.io_count`
with a positive value forever.
All of this is super-fragile code. It is hard to see which code paths
will lead to an eventual completion and which will not:
- Some functions like netfs_create_write_req() allocate a request, but
will never submit any I/O.
- netfs_unbuffered_read_iter_locked() calls netfs_unbuffered_read()
and then netfs_put_request(); however, netfs_unbuffered_read() can
also fail early before submitting the I/O request, therefore another
netfs_put_request() call must be added there.
A rule of thumb is that functions that return a `netfs_io_request` do
not submit I/O, and all of their callers must be checked.
For my taste, the whole netfs code needs an overhaul to make reference
counting easier to understand and less fragile & obscure. But to fix
this bug here and now and produce a patch that is adequate for a
stable backport, I tried a minimal approach that quickly frees the
request object upon early failure.
I decided against adding a second netfs_put_request() each time
because that would cause code duplication which obscures the code
further. Instead, I added the function netfs_put_failed_request()
which frees such a failed request synchronously under the assumption
that the reference count is exactly 2 (as initially set by
netfs_alloc_request() and never touched), verified by a
WARN_ON_ONCE(). It then deinitializes the request object (without
going through the "cleanup_work" indirection) and frees the allocation
(with RCU protection to protect against concurrent access by
netfs_requests_seq_start()).
All code paths that fail early have been changed to call
netfs_put_failed_request() instead of netfs_put_request().
Additionally, I have added a netfs_put_request() call to
netfs_unbuffered_read() as explained above because the
netfs_put_failed_request() approach does not work there. |
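A rough sketch of what such a netfs_put_failed_request() helper could look like; the internal field names (ref, rcu) and the deinitialisation details are assumptions, not the real netfs implementation:
```
/*
 * Sketch only: field names and internals are assumptions. The helper is
 * only valid on the early-failure path, where no I/O was submitted and
 * the completion worker will never run, so the refcount must still be
 * exactly 2 as set at allocation.
 */
static void netfs_put_failed_request_sketch(struct netfs_io_request *rreq)
{
	WARN_ON_ONCE(refcount_read(&rreq->ref) != 2);

	/* ... deinitialise the request in place, without going through
	 * the "cleanup_work" indirection ... */

	/* Free under RCU so concurrent netfs_requests_seq_start() readers
	 * are not left with a dangling pointer (assumes an rcu_head field). */
	kfree_rcu(rreq, rcu);
}
```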
In the Linux kernel, the following vulnerability has been resolved:
mm/hugetlb: fix folio is still mapped when deleted
Migration may race with fallocate() punching a hole.
remove_inode_single_folio will unmap the folio if the folio is still
mapped. However, it's called without the folio lock. If the folio is
being migrated and the mapped pte has been converted to a migration
entry, folio_mapped() returns false and the folio won't be unmapped.
Due to the extra refcount held by remove_inode_single_folio, migration
fails, restores the migration entry to a normal pte, and the folio is
mapped again. As a result, we trigger a BUG in filemap_unaccount_folio.
The log is as follows:
BUG: Bad page cache in process hugetlb pfn:156c00
page: refcount:515 mapcount:0 mapping:0000000099fef6e1 index:0x0 pfn:0x156c00
head: order:9 mapcount:1 entire_mapcount:1 nr_pages_mapped:0 pincount:0
aops:hugetlbfs_aops ino:dcc dentry name(?):"my_hugepage_file"
flags: 0x17ffffc00000c1(locked|waiters|head|node=0|zone=2|lastcpupid=0x1fffff)
page_type: f4(hugetlb)
page dumped because: still mapped when deleted
CPU: 1 UID: 0 PID: 395 Comm: hugetlb Not tainted 6.17.0-rc5-00044-g7aac71907bde-dirty #484 NONE
Hardware name: QEMU Ubuntu 24.04 PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Call Trace:
<TASK>
dump_stack_lvl+0x4f/0x70
filemap_unaccount_folio+0xc4/0x1c0
__filemap_remove_folio+0x38/0x1c0
filemap_remove_folio+0x41/0xd0
remove_inode_hugepages+0x142/0x250
hugetlbfs_fallocate+0x471/0x5a0
vfs_fallocate+0x149/0x380
Hold the folio lock before checking whether the folio is mapped, to avoid
racing with migration. |
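A minimal sketch of the locking order described above; the hugetlbfs-specific parameters and the actual unmap/remove calls are elided and should be treated as assumptions:
```
/*
 * Sketch only: the real remove_inode_single_folio() takes more context
 * and does the actual unmap and page-cache removal. The point is that
 * folio_mapped() is only meaningful against concurrent migration while
 * the folio lock is held.
 */
static void remove_inode_single_folio_sketch(struct folio *folio)
{
	folio_lock(folio);

	/* With the lock held, migration cannot restore the mapping
	 * between this check and the removal below. */
	if (folio_mapped(folio)) {
		/* ... unmap the folio from the hugetlbfs mapping ... */
	}

	/* ... remove the folio from the page cache ... */

	folio_unlock(folio);
}
```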
In the Linux kernel, the following vulnerability has been resolved:
spi: cadence-quadspi: Implement refcount to handle unbind during busy
The driver supports indirect read and indirect write operations under the
assumption that no forced device removal (unbind) occurs. However, forced
removal via unbind is still available to the root superuser, and unbinding
the driver during an operation causes a kernel crash. This change ensures
the driver can handle such an unbind during indirect read and indirect
write by implementing a refcount that tracks the devices attached to the
controller and by waiting gracefully until the attached devices'
operations have completed before proceeding with removal. |
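A generic sketch of the refcount-plus-completion pattern described above; the structure and helper names are assumptions, not the actual cadence-quadspi code:
```
#include <linux/completion.h>
#include <linux/refcount.h>

/*
 * Sketch only: a generic pattern matching the description, not the real
 * driver. Probe holds one base reference; every in-flight indirect
 * read/write takes another. remove() drops the base reference and, if
 * operations are still in flight, waits until the last one completes.
 */
struct cqspi_sketch {
	refcount_t inflight;	/* base ref + one per in-flight operation */
	struct completion idle;	/* signalled when the last reference drops */
};

static void cqspi_sketch_init(struct cqspi_sketch *cqspi)
{
	refcount_set(&cqspi->inflight, 1);	/* base reference from probe */
	init_completion(&cqspi->idle);
}

static void cqspi_sketch_op_start(struct cqspi_sketch *cqspi)
{
	refcount_inc(&cqspi->inflight);
}

static void cqspi_sketch_op_end(struct cqspi_sketch *cqspi)
{
	if (refcount_dec_and_test(&cqspi->inflight))
		complete(&cqspi->idle);
}

static void cqspi_sketch_remove(struct cqspi_sketch *cqspi)
{
	/* Drop the base reference; wait if indirect I/O is still running. */
	if (!refcount_dec_and_test(&cqspi->inflight))
		wait_for_completion(&cqspi->idle);

	/* ... safe to tear down the controller now ... */
}
```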