Search Results (67718 CVEs found)

CVE Vendors Products Updated CVSS v3.1
CVE-2024-34477 1 Fogproject 1 Fogproject 2025-09-26 7.8 High
configureNFS in lib/common/functions.sh in FOG through 1.5.10 allows local users to gain privileges by mounting a crafted NFS share (because of no_root_squash and insecure). To exploit the vulnerability, an attacker must mount the NFS share, add an executable file as root, and set the SUID bit on that file.
CVE-2024-37313 1 Nextcloud 2 Nextcloud Server, Server 2025-09-26 7.3 High
Nextcloud Server is a self-hosted personal cloud system. Under some circumstances it was possible to bypass the second factor of 2FA after successfully providing the user credentials. It is recommended that Nextcloud Server is upgraded to 26.0.13, 27.1.8 or 28.0.4 and Nextcloud Enterprise Server is upgraded to 21.0.9.17, 22.2.10.22, 23.0.12.17, 24.0.12.13, 25.0.13.8, 26.0.13, 27.1.8 or 28.0.4.
CVE-2025-59408 1 Flock Safety 1 Bravo Edge Ai Compute Device 2025-09-26 7.3 High
Flock Safety Bravo Edge AI Compute Device BRAVO_00.00_local_20241017 ships with Secure Boot disabled. This allows an attacker to flash modified firmware with no cryptographic protections.
CVE-2025-59404 1 Flock Safety 1 Bravo Edge Ai Compute Device 2025-09-26 7.5 High
Flock Safety Bravo Edge AI Compute Device BRAVO_00.00_local_20241017 ships with its bootloader unlocked. This permits bypass of Android Verified Boot (AVB) and allows direct modification of partitions.
CVE-2025-57632 2025-09-26 7.5 High
libsmb2 6.2+ is vulnerable to a buffer overflow. When processing SMB2 chained PDUs (NextCommand), libsmb2 repeatedly calls smb2_add_iovector() to append to a fixed-size iovec array without checking the upper bound of v->niov (SMB2_MAX_VECTORS=256). An attacker can craft responses with many chained PDUs to overflow v->niov and perform heap out-of-bounds writes, causing memory corruption, crashes, and potentially arbitrary code execution. The SMB2_OPLOCK_BREAK path bypasses message ID validation.
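
A minimal userspace sketch of the missing upper-bound check; MAX_VECTORS, struct io_vectors and add_iovector_checked() are illustrative stand-ins, not the actual libsmb2 API:

  #include <stdio.h>
  #include <sys/uio.h>

  #define MAX_VECTORS 256                /* analogous to SMB2_MAX_VECTORS */

  struct io_vectors {
      struct iovec iov[MAX_VECTORS];
      int niov;
  };

  /* Append a buffer, refusing to write past the fixed-size array. */
  static int add_iovector_checked(struct io_vectors *v, void *buf, size_t len)
  {
      if (v->niov >= MAX_VECTORS) {
          fprintf(stderr, "too many chained PDUs, rejecting\n");
          return -1;
      }
      v->iov[v->niov].iov_base = buf;
      v->iov[v->niov].iov_len  = len;
      v->niov++;
      return 0;
  }

  int main(void)
  {
      struct io_vectors v = { .niov = 0 };
      char payload[16] = { 0 };

      /* Simulate a response carrying far more chained PDUs than the array holds. */
      for (int i = 0; i < 1000; i++) {
          if (add_iovector_checked(&v, payload, sizeof(payload)) < 0)
              break;                     /* without the check, niov runs past 256 */
      }
      printf("accepted %d vectors\n", v.niov);
      return 0;
  }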
CVE-2024-56699 1 Linux 1 Linux Kernel 2025-09-26 7.8 High
In the Linux kernel, the following vulnerability has been resolved: s390/pci: Fix potential double remove of hotplug slot In commit 6ee600bfbe0f ("s390/pci: remove hotplug slot when releasing the device") the zpci_exit_slot() was moved from zpci_device_reserved() to zpci_release_device() with the intention of keeping the hotplug slot around until the device is actually removed. Now zpci_release_device() is only called once all references are dropped. Since the zPCI subsystem only drops its reference once the device is in the reserved state, it follows that zpci_release_device() must only deal with devices in the reserved state. Despite that, it contains code to tear down from both the configured and standby states. For the standby case this already includes the removal of the hotplug slot, so it would cause a double removal if a device were ever removed in either the configured or standby state. Instead of causing a potential double removal in a case that should never happen, explicitly WARN_ON() if a device in a non-reserved state is released, and get rid of the dead-code cases.
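
A brief sketch of the resulting release logic; the enum and function names are hypothetical, not the actual zPCI code:

  #include <stdio.h>

  enum zdev_state { STATE_CONFIGURED, STATE_STANDBY, STATE_RESERVED };

  static void release_device(enum zdev_state state)
  {
      if (state != STATE_RESERVED) {
          /* should never happen: warn instead of keeping dead tear-down code */
          fprintf(stderr, "WARN: device released in state %d\n", state);
          return;
      }
      /* Final cleanup only; the hotplug slot was already removed on the way
         to the reserved state, so it is not removed a second time here. */
      printf("releasing reserved device\n");
  }

  int main(void)
  {
      release_device(STATE_RESERVED);
      release_device(STATE_STANDBY);     /* unexpected: triggers the warning */
      return 0;
  }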
CVE-2024-57849 1 Linux 1 Linux Kernel 2025-09-26 7.8 High
In the Linux kernel, the following vulnerability has been resolved: s390/cpum_sf: Handle CPU hotplug remove during sampling

CPU hotplug remove handling triggers the following function call sequence:

  CPUHP_AP_PERF_S390_SF_ONLINE --> s390_pmu_sf_offline_cpu()
  ...
  CPUHP_AP_PERF_ONLINE --> perf_event_exit_cpu()

The s390 CPUMF sampling CPU hotplug handler invokes:

  s390_pmu_sf_offline_cpu()
  +--> cpusf_pmu_setup()
       +--> setup_pmc_cpu()
            +--> deallocate_buffers()

This function de-allocates all sampling data buffers (SDBs) allocated for that CPU at event initialization. It also clears the PMU_F_RESERVED bit. The CPU is gone and cannot be sampled. With the event still being active on the removed CPU, the CPU event hotplug support in the kernel performance subsystem triggers the following function calls on the removed CPU:

  perf_event_exit_cpu()
  +--> perf_event_exit_cpu_context()
       +--> __perf_event_exit_context()
            +--> __perf_remove_from_context()
                 +--> event_sched_out()
                      +--> cpumsf_pmu_del()
                           +--> cpumsf_pmu_stop()
                                +--> hw_perf_event_update()

to stop and remove the event. During removal of the event, the sampling device driver tries to read out the remaining samples from the sample data buffers (SDBs). But they have already been freed (and may have been re-assigned). This may lead to a use-after-free situation, in which case the samples are most likely invalid. In the best case the memory has not been reassigned and still contains valid data. Remedy this situation and check whether the CPU is still in the reserved state (bit PMU_F_RESERVED set). In that case the SDBs have not been released and contain valid data. This is always the case when the event is removed (and no CPU hotplug off occurred). If the PMU_F_RESERVED bit is not set, the SDB buffers are gone.
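
A simplified sketch of the flag check described above, with hypothetical names rather than the real cpum_sf structures:

  #include <stdio.h>
  #include <stdlib.h>

  #define PMU_F_RESERVED 0x1             /* assumed flag bit for illustration */

  struct cpu_hw_sf {
      unsigned int flags;
      int *sdb;                          /* stand-in for the sampling data buffers */
  };

  static void drain_samples(struct cpu_hw_sf *cpuhw)
  {
      if (!(cpuhw->flags & PMU_F_RESERVED)) {
          /* CPU was hot-unplugged: buffers already freed, do not read them */
          return;
      }
      printf("reading remaining samples: %d\n", cpuhw->sdb[0]);
  }

  static void cpu_hotplug_off(struct cpu_hw_sf *cpuhw)
  {
      free(cpuhw->sdb);                  /* buffers deallocated on hotplug remove */
      cpuhw->sdb = NULL;
      cpuhw->flags &= ~PMU_F_RESERVED;   /* mark the buffers as gone */
  }

  int main(void)
  {
      struct cpu_hw_sf cpuhw = { .flags = PMU_F_RESERVED,
                                 .sdb = malloc(sizeof(int)) };
      cpuhw.sdb[0] = 42;

      cpu_hotplug_off(&cpuhw);           /* simulate the CPU going away mid-sampling */
      drain_samples(&cpuhw);             /* safe: the flag check prevents the stale read */
      return 0;
  }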
CVE-2024-57876 2 Linux, Redhat 3 Linux Kernel, Enterprise Linux, Rhel Eus 2025-09-26 7.0 High
In the Linux kernel, the following vulnerability has been resolved: drm/dp_mst: Fix resetting msg rx state after topology removal If the MST topology is removed during the reception of an MST down reply or MST up request sideband message, the drm_dp_mst_topology_mgr::up_req_recv/down_rep_recv states could be reset from one thread via drm_dp_mst_topology_mgr_set_mst(false), racing with the reading/parsing of the message from another thread via drm_dp_mst_handle_down_rep() or drm_dp_mst_handle_up_req(). The race is possible since the reader/parser doesn't hold any lock while accessing the reception state. This in turn can lead to a memory corruption in the reader/parser as described by commit bd2fccac61b4 ("drm/dp_mst: Fix MST sideband message body length check"). Fix the above by resetting the message reception state if needed before reading/parsing a message. Another solution would be to hold the drm_dp_mst_topology_mgr::lock for the whole duration of the message reception/parsing in drm_dp_mst_handle_down_rep() and drm_dp_mst_handle_up_req(), however this would require a bigger change. Since the fix is also needed for stable, opting for the simpler solution in this patch.
CVE-2025-57446 1 O-ran-sc 1 Ric-plt-submgr 2025-09-26 7.5 High
An issue in O-RAN Near Realtime RIC ric-plt-submgr in the J-Release environment allows remote attackers to cause a denial of service (DoS) via a crafted request to the Subscription Manager API component.
CVE-2025-55560 1 Pytorch 1 Pytorch 2025-09-26 7.5 High
An issue in PyTorch v2.7.0 can lead to a denial of service (DoS) when a PyTorch model contains torch.Tensor.to_sparse() and torch.Tensor.to_dense() and is compiled by Inductor.
CVE-2025-48707 1 Stormshield 1 Network Security 2025-09-26 7.5 High
An issue was discovered in Stormshield Network Security (SNS) before 5.0.1. TPM authentication information could, in some high-availability (HA) use cases, be shared among administrators, resulting in unintended secret sharing.
CVE-2025-21647 1 Linux 1 Linux Kernel 2025-09-26 7.1 High
In the Linux kernel, the following vulnerability has been resolved: sched: sch_cake: add bounds checks to host bulk flow fairness counts Even though we fixed a logic error in the commit cited below, syzbot still managed to trigger an underflow of the per-host bulk flow counters, leading to an out of bounds memory access. To avoid any such logic errors causing out of bounds memory accesses, this commit factors out all accesses to the per-host bulk flow counters to a series of helpers that perform bounds-checking before any increments and decrements. This also has the benefit of improving readability by moving the conditional checks for the flow mode into these helpers, instead of having them spread out throughout the code (which was the cause of the original logic error). As part of this change, the flow quantum calculation is consolidated into a helper function, which means that the dithering applied to the host load scaling is now applied both in the DRR rotation and when a sparse flow's quantum is first initiated. The only user-visible effect of this is that the maximum packet size that can be sent while a flow stays sparse will now vary with +/- one byte in some cases. This should not make a noticeable difference in practice, and thus it's not worth complicating the code to preserve the old behaviour.
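
A minimal userspace sketch of the bounds-checked helper pattern described above; struct host, MAX_BULK and the helper names are illustrative, not the actual sch_cake code:

  #include <stdio.h>

  #define MAX_BULK 65535                 /* hypothetical upper bound */

  struct host {
      unsigned int srchost_bulk_flow_count;
  };

  static void host_bulk_flow_inc(struct host *h)
  {
      if (h->srchost_bulk_flow_count < MAX_BULK)
          h->srchost_bulk_flow_count++;
  }

  static void host_bulk_flow_dec(struct host *h)
  {
      if (h->srchost_bulk_flow_count > 0)    /* guards against the underflow */
          h->srchost_bulk_flow_count--;
  }

  int main(void)
  {
      struct host h = { .srchost_bulk_flow_count = 0 };

      /* A stray extra decrement no longer wraps the counter and is no longer
         usable as an out-of-bounds index elsewhere. */
      host_bulk_flow_dec(&h);
      host_bulk_flow_inc(&h);
      printf("count = %u\n", h.srchost_bulk_flow_count);
      return 0;
  }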
CVE-2023-46426 1 Gpac 1 Gpac 2025-09-26 8.8 High
Heap-based buffer overflow vulnerability in gpac version 2.3-DEV-rev588-g7edc40fee-master allows remote attackers to execute arbitrary code and cause a denial of service (DoS) via the gf_fwrite component in utils/os_file.c.
CVE-2024-28318 1 Gpac 1 Gpac 2025-09-26 7.1 High
gpac 2.3-DEV-rev921-g422b78ecf-master was discovered to contain an out-of-bounds write vulnerability via swf_get_string at scene_manager/swf_parse.c:325.
CVE-2024-31823 1 Ecommerce-codeigniter-bootstrap Project 1 Ecommerce-codeigniter-bootstrap 2025-09-26 8.8 High
An issue in Ecommerce-CodeIgniter-Bootstrap commit d22b54e8915f167a135046ceb857caaf8479c4da allows a remote attacker to execute arbitrary code via the removeSecondaryImage method of the Publish.php component.
CVE-2025-60017 1 Unitree 4 B2, G1, Go2 and 1 more 2025-09-26 8.2 High
Unitree Go2, G1, H1, and B2 devices through 2025-09-20 allow root OS command injection via the hostapd_restart.sh wifi_ssid or wifi_pass parameter (within restart_wifi_ap and restart_wifi_sta).
CVE-2024-57945 1 Linux 1 Linux Kernel 2025-09-26 7.1 High
In the Linux kernel, the following vulnerability has been resolved: riscv: mm: Fix the out of bound issue of vmemmap address In the sparse vmemmap model, the virtual address of vmemmap is calculated as: ((struct page *)VMEMMAP_START - (phys_ram_base >> PAGE_SHIFT)). And the struct page's va can be calculated with an offset: (vmemmap + (pfn)). However, when initializing struct pages, the kernel actually starts from the first page of the same section that phys_ram_base belongs to. If the first page's pfn is not (phys_ram_base >> PAGE_SHIFT), then we get a va below VMEMMAP_START when calculating the va for its struct page. For example, if phys_ram_base starts from 0x82000000 with pfn 0x82000, the first page in the same section is actually pfn 0x80000. During init_unavailable_range(), we will initialize the struct page for pfn 0x80000 with virtual address ((struct page *)VMEMMAP_START - 0x2000), which is below VMEMMAP_START as well as PCI_IO_END. This commit fixes this bug by introducing a new variable 'vmemmap_start_pfn', which is aligned with the memory section size, and using it to calculate the vmemmap address instead of phys_ram_base.
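
A userspace sketch of the address arithmetic; VMEMMAP_START, the section size and sizeof(struct page) are assumed values chosen only to reproduce the pfn 0x82000 / 0x80000 example above:

  #include <stdio.h>
  #include <stdint.h>

  #define PAGE_SHIFT     12
  #define SECTION_SHIFT  27                      /* 128 MiB sections (assumed) */
  #define VMEMMAP_START  0xffffffc600000000ULL   /* hypothetical base address */
  #define PAGE_STRUCT_SZ 64                      /* assumed sizeof(struct page) */

  int main(void)
  {
      uint64_t phys_ram_base = 0x82000000ULL;              /* example from the text */
      uint64_t ram_pfn       = phys_ram_base >> PAGE_SHIFT;             /* 0x82000 */
      uint64_t section_start = (phys_ram_base >> SECTION_SHIFT) << SECTION_SHIFT;
      uint64_t first_pfn     = section_start >> PAGE_SHIFT;             /* 0x80000 */

      /* Old scheme: struct page address offset from the pfn of phys_ram_base. */
      int64_t  old_index = (int64_t)first_pfn - (int64_t)ram_pfn;       /* -0x2000 */
      uint64_t old_va    = VMEMMAP_START + (uint64_t)(old_index * PAGE_STRUCT_SZ);

      /* Fixed scheme: offset from the section-aligned vmemmap_start_pfn. */
      uint64_t vmemmap_start_pfn = first_pfn;
      uint64_t new_va = VMEMMAP_START +
                        (first_pfn - vmemmap_start_pfn) * PAGE_STRUCT_SZ;

      printf("old va 0x%016llx (%s VMEMMAP_START)\n",
             (unsigned long long)old_va,
             old_va < VMEMMAP_START ? "below" : "at or above");
      printf("new va 0x%016llx\n", (unsigned long long)new_va);
      return 0;
  }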
CVE-2024-57929 2 Linux, Redhat 2 Linux Kernel, Enterprise Linux 2025-09-26 7.1 High
In the Linux kernel, the following vulnerability has been resolved: dm array: fix releasing a faulty array block twice in dm_array_cursor_end

When dm_bm_read_lock() fails due to locking or checksum errors, it releases the faulty block implicitly while leaving an invalid output pointer behind. The caller of dm_bm_read_lock() should not operate on this invalid dm_block pointer, or it will lead to undefined results. For example, the dm_array_cursor incorrectly caches the invalid pointer on reading a faulty array block, causing a double release in dm_array_cursor_end(), then hitting the BUG_ON in dm-bufio cache_put().

Reproduce steps:

1. Initialize a cache device:

  dmsetup create cmeta --table "0 8192 linear /dev/sdc 0"
  dmsetup create cdata --table "0 65536 linear /dev/sdc 8192"
  dmsetup create corig --table "0 524288 linear /dev/sdc $262144"
  dd if=/dev/zero of=/dev/mapper/cmeta bs=4k count=1
  dmsetup create cache --table "0 524288 cache /dev/mapper/cmeta \
    /dev/mapper/cdata /dev/mapper/corig 128 2 metadata2 writethrough smq 0"

2. Wipe the second array block offline:

  dmsetup remove cache cmeta cdata corig
  mapping_root=$(dd if=/dev/sdc bs=1c count=8 skip=192 \
    2>/dev/null | hexdump -e '1/8 "%u\n"')
  ablock=$(dd if=/dev/sdc bs=1c count=8 skip=$((4096*mapping_root+2056)) \
    2>/dev/null | hexdump -e '1/8 "%u\n"')
  dd if=/dev/zero of=/dev/sdc bs=4k count=1 seek=$ablock

3. Try to reopen the cache device:

  dmsetup create cmeta --table "0 8192 linear /dev/sdc 0"
  dmsetup create cdata --table "0 65536 linear /dev/sdc 8192"
  dmsetup create corig --table "0 524288 linear /dev/sdc $262144"
  dmsetup create cache --table "0 524288 cache /dev/mapper/cmeta \
    /dev/mapper/cdata /dev/mapper/corig 128 2 metadata2 writethrough smq 0"

Kernel logs (snip):

  device-mapper: array: array_block_check failed: blocknr 0 != wanted 10
  device-mapper: block manager: array validator check failed for block 10
  device-mapper: array: get_ablock failed
  device-mapper: cache metadata: dm_array_cursor_next for mapping failed
  ------------[ cut here ]------------
  kernel BUG at drivers/md/dm-bufio.c:638!

Fix by setting the cached block pointer to NULL on errors. In addition to the reproducer described above, this fix can be verified using the "array_cursor/damaged" test in dm-unit: dm-unit run /pdata/array_cursor/damaged --kernel-dir <KERNEL_DIR>
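
A compact userspace sketch of the fix described above (block_put, cursor_read_block and cursor_end are hypothetical names, not the dm-array API); clearing the cached pointer on the error path is what prevents the second release:

  #include <stdio.h>
  #include <stdlib.h>

  struct block {
      int refcount;
  };

  struct cursor {
      struct block *cached;              /* last block handed out by the read path */
  };

  static void block_put(struct block *b)
  {
      if (b == NULL)
          return;
      if (b->refcount == 0) {
          fprintf(stderr, "BUG: double release\n");
          abort();
      }
      b->refcount--;
  }

  /* Simulated read that fails (e.g. a checksum error) and releases the block
     itself; the key point is that the caller's cached pointer is cleared. */
  static int cursor_read_block(struct cursor *c, struct block *b, int fail)
  {
      c->cached = b;
      if (fail) {
          block_put(b);                  /* implicit release inside the read path */
          c->cached = NULL;              /* the fix: do not keep the stale pointer */
          return -1;
      }
      return 0;
  }

  static void cursor_end(struct cursor *c)
  {
      block_put(c->cached);              /* would be a double put without the fix */
      c->cached = NULL;
  }

  int main(void)
  {
      struct block b = { .refcount = 1 };
      struct cursor c = { .cached = NULL };

      if (cursor_read_block(&c, &b, /* fail = */ 1) < 0)
          printf("read failed, cached pointer cleared\n");
      cursor_end(&c);                    /* safe: cached is NULL */
      return 0;
  }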
CVE-2024-57928 1 Linux 1 Linux Kernel 2025-09-26 7.1 High
In the Linux kernel, the following vulnerability has been resolved: netfs: Fix enomem handling in buffered reads If netfs_read_to_pagecache() gets an error from either ->prepare_read() or from netfs_prepare_read_iterator(), it needs to decrement ->nr_outstanding, cancel the subrequest and break out of the issuing loop. Currently, it only does this for two of the cases, but there are two more that aren't handled. Fix this by moving the handling to a common place and jumping to it from all four places. This is in preference to inserting a wrapper around netfs_prepare_read_iterator() as proposed by Dmitry Antipov[1].
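
A small C sketch of the pattern the fix describes, with hypothetical names (prepare_read, cancel_subrequest, issue_reads): every failure site inside the issuing loop jumps to one shared label that undoes the outstanding count and cancels the subrequest:

  #include <stdio.h>

  struct request {
      int nr_outstanding;
  };

  static int prepare_read(int i)         /* hypothetical step that may fail */
  {
      return (i == 3) ? -12 /* -ENOMEM */ : 0;
  }

  static void cancel_subrequest(int i)
  {
      printf("subrequest %d cancelled\n", i);
  }

  static int issue_reads(struct request *rreq, int n)
  {
      int error = 0;

      for (int i = 0; i < n; i++) {
          rreq->nr_outstanding++;

          error = prepare_read(i);
          if (error < 0)
              goto prep_failed;          /* all failure sites share this path */

          printf("subrequest %d issued\n", i);
          continue;

      prep_failed:
          rreq->nr_outstanding--;        /* common handling, written once */
          cancel_subrequest(i);
          break;                         /* leave the issuing loop */
      }
      return error;
  }

  int main(void)
  {
      struct request rreq = { .nr_outstanding = 0 };
      int ret = issue_reads(&rreq, 5);
      printf("ret=%d outstanding=%d\n", ret, rreq.nr_outstanding);
      return 0;
  }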
CVE-2024-56439 1 Huawei 1 Harmonyos 2025-09-26 7.5 High
Access control vulnerability in the identity authentication module. Impact: Successful exploitation of this vulnerability may affect service confidentiality.