Hi,
I just upgraded the system on my lx2k and was expecting problems. While the upgrade itself went fine, it won't boot on newer kernels.
The stock kernel on Debian 12 (6.1.0-37-arm64, Debian 12.11) with the iommu workaround is still fine (I am so glad I didn't remove it; that kernel is now on apt-mark hold).
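Pinning the known-good kernel can be sketched like this (the version string below is the example from above; substitute your own `uname -r` output):

```shell
# Sketch: pin a known-good kernel image so 'apt upgrade' keeps it around.
# Replace the example version with your own `uname -r` output.
KVER="6.1.0-37-arm64"
PKG="linux-image-${KVER}"
echo "pinning ${PKG}"
# sudo apt-mark hold "${PKG}"     # the actual pin
# apt-mark showhold               # lists currently held packages
```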
But the standard kernel from Debian 13, 6.12.38+deb13-arm64, won't boot at all.
It fails silently, with no output or error message.
Are there any plans for upstream support? It's a pity that the stable release won't boot on this great lx2k board.
So for now I will keep using the older kernel on the newer system. But hey, it can't stay like that forever, right?
Edit: Normally I would expect better support in a newer version, not a regression. Did the required quirks change? Has anybody tried 6.12+?
sudrien
September 6, 2025, 6:32pm
2
Just did the Trixie upgrade myself and got the same issue. The system is apparently failing to load the ramdisk.
Editing the GRUB entry and removing “arm-smmu.disable_bypass=0 iommu.passthrough=1” (or removing it from /etc/default/grub) had no effect.
As I am still using a firmware SD card flashed in 2021, I'm guessing it is time to update.
EDIT: Yeah, that did nothing noticeable.
I removed linux-image-6.12.43+deb13-arm64 and the linux-image meta package to stop the upgrade attempts for now.
linux-image-6.1.0-39-arm64 still works, as annoying as it is to be forced to install Debian 12 before upgrading to 13.
UEFI images are built automatically, but there have been no changes for 4 years already:
Parent build repository for generating UEFI firmware for the LX2160a
I planned to try a newer EDK2 submodule and see if it builds.
Edit: Yup, it did build, and even booted:
GitHub - AreYouLoco/lx2160a_uefi: Parent build repository for generating UEFI firmware for the LX2160a
Here is my fork. The only changes are switching to upstream 2025 EDK2 (instead of the EDK2 forked by SolidRun in 2021) and using a newer Debian release as the build image. The rest is the same. You may give it a go. I guess this alone won't resolve the newer kernel not booting, but at least it includes some 4 years of EDK2 development.
Edit2:
Also did a painful rebase: arm-trusted-firmware bumped from v2.5-lx2160acex7 to v2.13.0. Everything boots, no issues observed. It also contains the changes from the previous link.
OK, I did some upgrades of the UEFI, and that allowed me to boot into 6.12.43+deb13-arm64.
I will post the dmesg output (there are new errors) and a write-up of how I did it, but at least it booted.
dmesg_6.12.txt (56.7 KB)
User-space stuff from dmesg skipped
@sudrien Could you please try with this combination of kernel parameters:
GRUB_CMDLINE_LINUX="arm-smmu.disable_bypass=0 iommu.passthrough=1 iommu=force clocksource=arch_sys_counter numa=off efi=runtime threadirqs"
I'm not sure whether it was adding iommu=force or my firmware version bumps that helped; I did try without iommu.passthrough and the other quirk, and it didn't boot (kernel exceptions).
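A hedged sketch of applying such a cmdline; it edits a scratch copy so nothing on the system is touched (point it at the real /etc/default/grub and run update-grub on the board):

```shell
# Sketch: set GRUB_CMDLINE_LINUX in a copy of /etc/default/grub.
# Demonstrated on a scratch file; use the real file on the actual board.
GRUB_FILE=$(mktemp)
printf 'GRUB_CMDLINE_LINUX=""\n' > "$GRUB_FILE"
CMDLINE='arm-smmu.disable_bypass=0 iommu.passthrough=1 iommu=force clocksource=arch_sys_counter numa=off efi=runtime threadirqs'
sed -i "s|^GRUB_CMDLINE_LINUX=.*|GRUB_CMDLINE_LINUX=\"${CMDLINE}\"|" "$GRUB_FILE"
grep GRUB_CMDLINE_LINUX "$GRUB_FILE"
# sudo update-grub   # regenerate grub.cfg after editing the real file
```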
So the only issues I have seen so far in the log*:
[Tue Sep 9 15:51:38 2025] cma: __cma_alloc: reserved: alloc failed, req-size: 4 pages, ret: -12
[Tue Sep 9 15:51:38 2025] cma: number of available pages: => 0 free of 16384 total pages
[Tue Sep 9 15:51:38 2025] cma: __cma_alloc: reserved: alloc failed, req-size: 16 pages, ret: -12
[Tue Sep 9 15:51:38 2025] cma: number of available pages: => 0 free of 16384 total pages
(this pair of messages repeats ten times in total over about one second)
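Those messages mean the CMA pool (16384 pages, i.e. 64 MB here) is fully drained. A quick way to check the pool, sketched against a sample /proc/meminfo excerpt (read the real file on the board):

```shell
# Sketch: inspect CMA pool usage; on the failing allocations CmaFree is 0.
# Uses a sample excerpt so it runs anywhere; use /proc/meminfo for real.
cat > sample_meminfo.txt <<'EOF'
CmaTotal:          65536 kB
CmaFree:               0 kB
EOF
awk '/^Cma/ {print $1, $2, $3}' sample_meminfo.txt
FREE=$(awk '/^CmaFree/ {print $2}' sample_meminfo.txt)
echo "CMA free: ${FREE} kB"
```

If the pool is genuinely too small for whatever drivers are draining it, the kernel's documented `cma=` boot parameter (e.g. `cma=256M`) can enlarge it; whether that helps on this board is untested.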
A possible solution already exists:
committed 10:11PM - 13 May 22 UTC
This reverts commit a4efc174b382fcdb which introduced a regression issue
that when there're multiple processes allocating dma memory in parallel by
calling dma_alloc_coherent(), it may fail sometimes as follows:
Error log:
cma: cma_alloc: linux,cma: alloc failed, req-size: 148 pages, ret: -16
cma: number of available pages:
3@125+20@172+12@236+4@380+32@736+17@2287+23@2473+20@36076+99@40477+108@40852+44@41108+20@41196+108@41364+108@41620+
108@42900+108@43156+483@44061+1763@45341+1440@47712+20@49324+20@49388+5076@49452+2304@55040+35@58141+20@58220+20@58284+
7188@58348+84@66220+7276@66452+227@74525+6371@75549=> 33161 free of 81920 total pages
When issue happened, we saw there were still 33161 pages (129M) free CMA
memory and a lot available free slots for 148 pages in CMA bitmap that we
want to allocate.
When dumping memory info, we found that there was also ~342M normal
memory, but only 1352K CMA memory left in buddy system while a lot of
pageblocks were isolated.
Memory info log:
Normal free:351096kB min:30000kB low:37500kB high:45000kB reserved_highatomic:0KB
active_anon:98060kB inactive_anon:98948kB active_file:60864kB inactive_file:31776kB
unevictable:0kB writepending:0kB present:1048576kB managed:1018328kB mlocked:0kB
bounce:0kB free_pcp:220kB local_pcp:192kB free_cma:1352kB lowmem_reserve[]: 0 0 0
Normal: 78*4kB (UECI) 1772*8kB (UMECI) 1335*16kB (UMECI) 360*32kB (UMECI) 65*64kB (UMCI)
36*128kB (UMECI) 16*256kB (UMCI) 6*512kB (EI) 8*1024kB (UEI) 4*2048kB (MI) 8*4096kB (EI)
8*8192kB (UI) 3*16384kB (EI) 8*32768kB (M) = 489288kB
The root cause of this issue is that since commit a4efc174b382 ("mm/cma.c:
remove redundant cma_mutex lock"), CMA supports concurrent memory
allocation. It's possible that the memory range process A trying to alloc
has already been isolated by the allocation of process B during memory
migration.
The problem here is that the memory range isolated during one allocation
by start_isolate_page_range() could be much bigger than the real size we
want to alloc due to the range is aligned to MAX_ORDER_NR_PAGES.
Taking an ARMv7 platform with 1G memory as an example, when
MAX_ORDER_NR_PAGES is big (e.g. 32M with max_order 14) and CMA memory is
relatively small (e.g. 128M), there're only 4 MAX_ORDER slot, then it's
very easy that all CMA memory may have already been isolated by other
processes when one trying to allocate memory using dma_alloc_coherent().
Since current CMA code will only scan one time of whole available CMA
memory, then dma_alloc_coherent() may easy fail due to contention with
other processes.
This patch simply falls back to the original method that using cma_mutex
to make alloc_contig_range() run sequentially to avoid the issue.
Link: https://lkml.kernel.org/r/20220509094551.3596244-1-aisheng.dong@nxp.com
Link: https://lore.kernel.org/all/20220315144521.3810298-2-aisheng.dong@nxp.com/
Fixes: a4efc174b382 ("mm/cma.c: remove redundant cma_mutex lock")
Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Lecopzer Chen <lecopzer.chen@mediatek.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org> [5.11+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
*But this could be due to the UEFI changes I made. @sudrien, could you verify whether it shows up on your system as well?
I'm not sure what helped in the end. Try iommu=force first, to see if it is enough on its own.
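One way to confirm which parameters the running kernel actually booted with is to check /proc/cmdline; a small sketch (using a sample string so it runs anywhere):

```shell
# Sketch: check which of the suggested parameters the running kernel got.
# Sample string shown here; set CMDLINE="$(cat /proc/cmdline)" on the board.
CMDLINE="${CMDLINE:-root=/dev/sda2 arm-smmu.disable_bypass=0 iommu.passthrough=1 iommu=force}"
for p in arm-smmu.disable_bypass=0 iommu.passthrough=1 iommu=force threadirqs; do
    case " $CMDLINE " in
        *" $p "*) echo "present: $p" ;;
        *)        echo "missing: $p" ;;
    esac
done
```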
Apologies for the slow response. The full list got me a boot with 6.12.48+deb13-arm64;
“arm-smmu.disable_bypass=0 iommu.passthrough=1 iommu=force” alone did not work.
I would say “not stable” doesn't really capture it. I tried rebooting into the new kernel many times in a row, and sometimes it simply won't load the initramfs, even with the new set of parameters. So it's a bit of a lottery in the end.
Looks like a timing issue somewhere.
Edit:
From a cold boot I was unable to boot straight into 6.12+, but when I boot 6.1 first and then soft-reboot into 6.12, it works.
@sudrien, what's your experience?