An Out of Bounds Write occurs when the native library attempts PDF rendering, which can be exploited to achieve memory corruption and potentially arbitrary code execution. |
A double-free condition occurs during the cleanup of temporary image files, which can be exploited to achieve memory corruption and potentially arbitrary code execution. |
Dragonfly is an open source P2P-based file distribution and image acceleration system. Prior to 2.1.0, the /api/v1/jobs and /preheats endpoints in the Manager web UI are accessible without authentication. Any user with network access to the Manager can create, delete, and modify jobs, and create preheat jobs. An unauthenticated adversary with network access to the Manager web UI can use the /api/v1/jobs endpoint to create hundreds of useless jobs, putting the Manager into a denial-of-service state in which it stops accepting requests from valid administrators. This vulnerability is fixed in 2.1.0. |
KNIME Business Hub is affected by the Ingress-nginx CVE-2025-1974 (a.k.a. IngressNightmare) vulnerability, which affects the ingress-nginx component. In the worst case, a complete takeover of the Kubernetes cluster is possible. Since the affected component is only reachable from within the cluster, i.e. it requires an authenticated user, the severity in the context of KNIME Business Hub is slightly lower.
Besides applying the publicly known workarounds, we strongly recommend updating to one of the following versions of KNIME Business Hub:
* 1.13.3 or above
* 1.12.4 or above
* 1.11.4 or above
* 1.10.4 or above |
KNIME Business Hub is affected by several cross-site scripting vulnerabilities in its web pages. If a user clicks on a malicious link or opens a malicious web page, arbitrary JavaScript may be executed with this user's permissions. This can lead to information loss and/or modification of existing data.
The issues are caused by a bug (https://github.com/Baroshem/nuxt-security/issues/610) in the widely used nuxt-security module.
There are no viable workarounds; therefore, we strongly recommend updating to one of the following versions of KNIME Business Hub:
* 1.13.3 or later
* 1.12.4 or later |
Potentially sensitive information in jobs on KNIME Business Hub prior to 1.16.0 was visible to all members of the user's team. Starting with KNIME Business Hub 1.16.0, only metadata of jobs is shown to team members. Only the creator of a job can see all information, including input and output data (if present). |
An open redirect vulnerability existed in KNIME Business Hub prior to version 1.16.0. An unauthenticated remote attacker could craft a link to a legitimate KNIME Business Hub installation which, when opened by the user, redirects the user to a page of the attacker's choice. This might open the possibility for phishing or other similar attacks. The problem has been fixed in KNIME Business Hub 1.16.0. |
A hard-coded, non-random password for the object store (minio) of KNIME Business Hub in all versions except the ones listed below allows an unauthenticated remote attacker in possession of the password to read and manipulate swapped jobs or read and manipulate input and output data of active jobs. It is also possible to cause a denial of service of most functionality of KNIME Business Hub by writing large amounts of data to the object store directly.
There are no viable workarounds; therefore, we strongly recommend updating to one of the following versions of KNIME Business Hub:
* 1.13.2 or later
* 1.12.3 or later
* 1.11.3 or later
* 1.10.3 or later |
Frappe Learning is a learning system that helps users structure their content. In versions 2.34.1 and below, Frappe Learning does not adequately sanitize content uploaded in the profile bio. Malicious SVG files can be used to execute arbitrary scripts in the context of other users. |
NVIDIA Triton Inference Server contains a vulnerability in the DALI backend where an attacker may cause an improper input validation issue. A successful exploit of this vulnerability may lead to code execution. |
A denial-of-service attack is possible through the execution functionality of KNIME Business Hub 1.10.0 and 1.10.1. It allows an authenticated attacker with job execution privileges to execute a job that causes internal messages to pile up until there are no more resources available for processing new messages. This leads to an outage of most functionality of KNIME Business Hub. Recovery from the situation is only possible by manual administrator intervention. Please contact our support for instructions in case you have run into this situation.
Updating to KNIME Business Hub 1.10.2 or later solves the problem. |
In the Linux kernel, the following vulnerability has been resolved:
media: vivid: fix compose size exceed boundary
syzkaller found a bug:
BUG: unable to handle page fault for address: ffffc9000a3b1000
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 100000067 P4D 100000067 PUD 10015f067 PMD 1121ca067 PTE 0
Oops: 0002 [#1] PREEMPT SMP
CPU: 0 PID: 23489 Comm: vivid-000-vid-c Not tainted 6.1.0-rc1+ #512
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
RIP: 0010:memcpy_erms+0x6/0x10
[...]
Call Trace:
<TASK>
? tpg_fill_plane_buffer+0x856/0x15b0
vivid_fillbuff+0x8ac/0x1110
vivid_thread_vid_cap_tick+0x361/0xc90
vivid_thread_vid_cap+0x21a/0x3a0
kthread+0x143/0x180
ret_from_fork+0x1f/0x30
</TASK>
This is because we forget to check the boundary after adjusting compose->height
in the V4L2_SEL_TGT_CROP case. Add v4l2_rect_map_inside() to fix this problem
for this case. |
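As a rough illustration of the fix described in the vivid entry above, the sketch below mirrors what v4l2_rect_map_inside() does: it shrinks and shifts a rectangle so that it lies entirely inside a boundary rectangle, which keeps an oversized compose rectangle from driving tpg_fill_plane_buffer() past the capture buffer. The struct and helper below are stand-ins, not the kernel's types.

#include <stdio.h>

/* Toy equivalents of struct v4l2_rect and v4l2_rect_map_inside(); the
 * clamping logic follows the same idea as the kernel helper. */
struct rect { int left, top, width, height; };

static void rect_map_inside(struct rect *r, const struct rect *b)
{
	/* Shrink to fit inside the boundary. */
	if (r->width > b->width)
		r->width = b->width;
	if (r->height > b->height)
		r->height = b->height;
	/* Shift back inside the boundary. */
	if (r->left < b->left)
		r->left = b->left;
	if (r->top < b->top)
		r->top = b->top;
	if (r->left + r->width > b->left + b->width)
		r->left = b->left + b->width - r->width;
	if (r->top + r->height > b->top + b->height)
		r->top = b->top + b->height - r->height;
}

int main(void)
{
	struct rect frame = { 0, 0, 640, 480 };
	struct rect compose = { 0, 0, 640, 4096 };	/* oversized after the crop adjustment */

	rect_map_inside(&compose, &frame);
	printf("compose: %dx%d at (%d,%d)\n",
	       compose.width, compose.height, compose.left, compose.top);
	return 0;
}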
In the Linux kernel, the following vulnerability has been resolved:
drm/xe: Don't overmap identity VRAM mapping
Overmapping the identity VRAM mapping is triggering hardware bugs on
certain platforms. Use 2M pages for the last unaligned (to 1G) VRAM
chunk.
v2:
- Always use 2M pages for last chunk (Fei Yang)
- break loop when 2M pages are used
- Add assert for usable_size being 2M aligned
v3:
- Fix checkpatch |
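A hedged sketch of the page-size policy this drm/xe entry describes: cover the identity VRAM range with 1G pages while a full 1G remains, then map the final chunk (not aligned to 1G) with 2M pages and stop, rather than rounding the last mapping up past the end of VRAM. The constants and the VRAM size below are illustrative, not the driver's code.

#include <stdint.h>
#include <stdio.h>

#define SZ_2M (2ULL << 20)
#define SZ_1G (1ULL << 30)

int main(void)
{
	/* Usable VRAM size; assumed 2M-aligned, as the commit's v3 assert requires. */
	uint64_t vram_size = (3ULL << 30) + (6ULL << 20);
	uint64_t offset = 0;

	while (offset < vram_size) {
		uint64_t remaining = vram_size - offset;

		if (remaining >= SZ_1G) {
			printf("map 0x%llx + 1G page\n", (unsigned long long)offset);
			offset += SZ_1G;
		} else {
			/* Last (sub-1G) chunk: always use 2M pages, then stop. */
			printf("map 0x%llx + 0x%llx using 2M pages\n",
			       (unsigned long long)offset,
			       (unsigned long long)remaining);
			break;
		}
	}
	return 0;
}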
In the Linux kernel, the following vulnerability has been resolved:
cachefiles: Set the max subreq size for cache writes to MAX_RW_COUNT
Set the maximum size of a subrequest that writes to cachefiles to be
MAX_RW_COUNT so that we don't overrun the maximum write we can make to the
backing filesystem. |
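A minimal userspace sketch of the cap described in the cachefiles entry above, assuming the kernel's definition of MAX_RW_COUNT as INT_MAX rounded down to a page boundary; the helper simply clamps a requested subrequest length to that limit and is not the cachefiles code itself.

#include <limits.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define MAX_RW_COUNT (INT_MAX & ~(PAGE_SIZE - 1))

/* Clamp a cache-write subrequest length so a single write to the
 * backing filesystem never exceeds MAX_RW_COUNT. */
static unsigned long cap_subreq_len(unsigned long requested)
{
	return requested > MAX_RW_COUNT ? MAX_RW_COUNT : requested;
}

int main(void)
{
	unsigned long want = 3UL << 30;	/* a 3 GiB subrequest */
	printf("capped to %lu bytes\n", cap_subreq_len(want));
	return 0;
}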
In the Linux kernel, the following vulnerability has been resolved:
bpf: Remove tst_run from lwt_seg6local_prog_ops.
syzbot reported that the lwt_seg6 related BPF ops can be invoked via
bpf_test_run() without entering input_action_end_bpf() first.
Martin KaFai Lau said that self test for BPF_PROG_TYPE_LWT_SEG6LOCAL
probably didn't work since it was introduced in commit 04d4b274e2a
("ipv6: sr: Add seg6local action End.BPF"). The reason is that the
per-CPU variable seg6_bpf_srh_states::srh is never assigned in the self
test case but each BPF function expects it.
Remove test_run for BPF_PROG_TYPE_LWT_SEG6LOCAL. |
A vulnerability was identified in kidaze CourseSelectionSystem up to commit 42cd892b40a18d50bd4ed1905fa89f939173a464. An unknown function in the file /Profilers/PProfile/COUNT3s3.php is affected. Manipulation of the argument csem leads to SQL injection. The attack can be carried out remotely. The exploit is publicly available and might be used. This product follows a rolling-release approach for continuous delivery, so version details for affected or updated releases are not provided. |
ImageMagick is free and open-source software used for editing and manipulating digital images. In versions prior to 7.1.2-0, infinite lines occur when writing during a specific XMP file conversion command. Version 7.1.2-0 fixes the issue. |
In the Linux kernel, the following vulnerability has been resolved:
mm/swapfile: skip HugeTLB pages for unuse_vma
I got a bad pud error and lost a 1GB HugeTLB when calling swapoff. The
problem can be reproduced by the following steps:
1. Allocate an anonymous 1GB HugeTLB and some other anonymous memory.
2. Swapout the above anonymous memory.
3. Run swapoff and we will get a bad pud error in the kernel message:
mm/pgtable-generic.c:42: bad pud 00000000743d215d(84000001400000e7)
Using ftrace, we can tell that pud_clear_bad is called by pud_none_or_clear_bad
in unuse_pud_range(). Therefore the HugeTLB pages will never be freed because we
lost them from the page table. We can skip HugeTLB pages in unuse_vma to fix it. |
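A toy sketch of the change this swapfile entry describes: when swapoff walks each VMA to bring swapped pages back in, HugeTLB-backed VMAs are skipped instead of being walked with the regular page-table walker, which misreads their huge entries as a bad pud. vm_area_struct and is_vm_hugetlb_page() are replaced by stand-ins here.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Stand-ins for the kernel's vm_area_struct and is_vm_hugetlb_page(). */
struct vma_stub {
	const char *name;
	bool is_hugetlb;
};

static void unuse_vma_sketch(const struct vma_stub *vma)
{
	if (vma->is_hugetlb) {
		/* The fix: do not walk huge page tables with the regular walker. */
		printf("skipping hugetlb vma %s\n", vma->name);
		return;
	}
	printf("walking vma %s for swapped-out pages\n", vma->name);
}

int main(void)
{
	struct vma_stub vmas[] = {
		{ "anon-4k", false },
		{ "hugetlb-1G", true },
	};

	for (size_t i = 0; i < sizeof(vmas) / sizeof(vmas[0]); i++)
		unuse_vma_sketch(&vmas[i]);
	return 0;
}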
In realme BackupRestore app v15.1.12_2810c08_250314, improper URI scheme handling in com.coloros.pc.PcToolMainActivity allows local attackers to cause a crash and potential XSS via crafted ADB intents. |
In the Linux kernel, the following vulnerability has been resolved:
f2fs: Require FMODE_WRITE for atomic write ioctls
The F2FS ioctls for starting and committing atomic writes check for
inode_owner_or_capable(), but this does not give LSMs like SELinux or
Landlock an opportunity to deny the write access - if the caller's FSUID
matches the inode's UID, inode_owner_or_capable() immediately returns true.
There are scenarios where LSMs want to deny a process the ability to write
particular files, even files that the FSUID of the process owns; but this
can currently be partially bypassed using atomic write ioctls in two ways:
- F2FS_IOC_START_ATOMIC_REPLACE + F2FS_IOC_COMMIT_ATOMIC_WRITE can
truncate an inode to size 0
- F2FS_IOC_START_ATOMIC_WRITE + F2FS_IOC_ABORT_ATOMIC_WRITE can revert
changes another process concurrently made to a file
Fix it by requiring FMODE_WRITE for these operations, just like for
F2FS_IOC_MOVE_RANGE. Since any legitimate caller should only be using these
ioctls when intending to write into the file, that seems unlikely to break
anything. |
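As a rough illustration of the hardening described above, the sketch below rejects the atomic-write ioctl on a file descriptor that was not opened for writing, so access decisions made at open() time cannot be sidestepped; FMODE_WRITE and the handler here are stand-ins for the kernel's definitions.

#include <errno.h>
#include <stdio.h>

#define FMODE_WRITE 0x2		/* stand-in for the kernel flag */

struct file_stub { unsigned int f_mode; };

/* Reject atomic-write requests unless the fd was opened for writing,
 * mirroring the check the commit message says was added (as already
 * done for F2FS_IOC_MOVE_RANGE). */
static int start_atomic_write_sketch(const struct file_stub *filp)
{
	if (!(filp->f_mode & FMODE_WRITE))
		return -EBADF;
	/* ... proceed with the atomic-write setup ... */
	return 0;
}

int main(void)
{
	struct file_stub rdonly = { 0 };
	struct file_stub rdwr = { FMODE_WRITE };

	printf("read-only fd  -> %d\n", start_atomic_write_sketch(&rdonly));
	printf("read-write fd -> %d\n", start_atomic_write_sketch(&rdwr));
	return 0;
}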