Issue with PCIe5 Detection on LX2160A Platform

Dear SolidRun Support Team,

I hope this message finds you well.

I am working with a custom board based on the LX2160A platform, and we are encountering an issue with PCIe5 detection. Specifically, when connecting an SSD to the PCIe5 slot, it does not seem to be detected properly. However, when the same SSD is connected to the PCIe3 slot, it is detected as expected and works fine.

Below are the relevant details:

  • The PCIe3 slot uses an adapter card for Key K, which is compatible with the SSD for debugging purposes.
  • The PCIe5 slot is used without an adapter card and is configured as Key M.
  • When scanning PCIe devices in U-Boot, PCIe3 correctly detects the SSD as a mass storage controller, but for PCIe5 U-Boot reports: “Does not fit any class”

To clarify, the PCIe3 slot is used only for debugging purposes, as we intend to work with the SSD through PCIe5. Initially, we were unable to detect the PCIe5 slot, so we tested the PCIe3 slot to verify whether the SSD was functional, which worked as expected.

Could you please advise on the following:

  1. Are there any known issues with PCIe5 detection on the LX2160A platform?
  2. Are there any specific settings in the device tree or U-Boot that could be affecting the PCIe5 slot’s ability to detect the SSD?

Below is the relevant log from Linux showing the issue with PCIe5:
[ 867.212999] OF: /soc/pcie@3800000: no iommu-map translation for id 0x100 on (null)
[ 867.213037] nvme 0002:01:00.0: assign IRQ: got 396
[ 872.366007] nvme nvme1: pci function 0002:01:00.0
[ 872.366098] nvme 0002:01:00.0: enabling device (0000 -> 0002)
[ 872.366157] nvme 0002:01:00.0: enabling bus mastering
[ 872.366665] OF: /soc/pcie@3800000: no msi-map translation for id 0x100 on /interrupt-controller@6000000/gic-its@6020000
[ 872.366850] nvme 0002:01:00.0: saving config space at offset 0x0 (reading 0x22691d79)
[ 872.366867] nvme 0002:01:00.0: saving config space at offset 0x4 (reading 0x100406)
[ 872.366885] nvme 0002:01:00.0: saving config space at offset 0x8 (reading 0x1080203)
[ 872.366901] nvme 0002:01:00.0: saving config space at offset 0xc (reading 0x0)
[ 872.366918] nvme 0002:01:00.0: saving config space at offset 0x10 (reading 0x40000004)
[ 872.366934] nvme 0002:01:00.0: saving config space at offset 0x14 (reading 0x0)
[ 872.366951] nvme 0002:01:00.0: saving config space at offset 0x18 (reading 0x0)
[ 872.366967] nvme 0002:01:00.0: saving config space at offset 0x1c (reading 0x0)
[ 872.366983] nvme 0002:01:00.0: saving config space at offset 0x20 (reading 0x0)
[ 872.366999] nvme 0002:01:00.0: saving config space at offset 0x24 (reading 0x0)
[ 872.367015] nvme 0002:01:00.0: saving config space at offset 0x28 (reading 0x0)
[ 872.367032] nvme 0002:01:00.0: saving config space at offset 0x2c (reading 0x22691d79)
[ 872.367048] nvme 0002:01:00.0: saving config space at offset 0x30 (reading 0x0)
[ 872.367065] nvme 0002:01:00.0: saving config space at offset 0x34 (reading 0x40)
[ 872.367081] nvme 0002:01:00.0: saving config space at offset 0x38 (reading 0x0)
[ 872.367097] nvme 0002:01:00.0: saving config space at offset 0x3c (reading 0x18c)
[ 872.367667] pcieport 0003:00:00.0: scanning [bus 01-ff] behind bridge, pass 0
[ 872.369181] pcieport 0003:00:00.0: scanning [bus 01-ff] behind bridge, pass 1
[ 933.037822] nvme nvme1: I/O 0 QID 0 timeout, completion polled
[ 994.477823] nvme nvme1: I/O 4 QID 0 timeout, completion polled
[ 1055.917814] nvme nvme1: I/O 1 QID 0 timeout, disable controller
[ 1056.025834] nvme nvme1: failed to set APST feature (-4)
[ 1056.031082] nvme nvme1: Removing after probe failure status: -4
[ 1056.050076] nvme nvme1: failed to set APST feature (-19)
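As a side note on the “no iommu-map translation” and “no msi-map translation” warnings above: these typically mean the pcie@3800000 controller node in the device tree has no (or incomplete) iommu-map / msi-map entries for that requester ID. Purely as an illustrative sketch, such properties look like the fragment below; the node label and the SMMU/ITS phandles and stream-ID values are placeholders, not validated numbers for any specific LX2160A board:

```dts
/* Hypothetical sketch only -- &pcie3, &smmu, &its and all ID values
 * here are placeholders, not correct values for a particular board. */
&pcie3 {
	status = "okay";
	/* <requester-id-base  iommu-phandle  stream-id-base  length> */
	iommu-map = <0x000 &smmu 0x000 1>,
		    <0x100 &smmu 0x100 1>;
	/* <requester-id-base  msi-controller-phandle  msi-base  length> */
	msi-map = <0x000 &its 0x000 0x100>;
};
```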

Below is the relevant log from U-Boot showing the issue with PCIe5:

=> pci enum
=> pci 5
Scanning PCI devices on bus 5
BusDevFun  VendorId  DeviceId  Device Class            Sub-Class
05.00.00   0x17cb    0x0308    Does not fit any class  0x00
=> pci 3
Scanning PCI devices on bus 3
BusDevFun  VendorId  DeviceId  Device Class            Sub-Class
03.00.00   0x1d79    0x2269    Mass storage controller 0x08
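For reference, the class values U-Boot prints come straight from the PCI config-space header: offset 0x0 holds the vendor/device IDs and offset 0x8 holds class/subclass/prog-if/revision. A class code of 0x00 is “unclassified”, which is what U-Boot renders as “Does not fit any class”. A small sketch decoding the dwords the kernel log above saved for the working device on PCIe3:

```python
# Decode PCI config-space header dwords as printed in the kernel log above.

def decode_ids(dword0):
    """Offset 0x0: low 16 bits = vendor ID, high 16 bits = device ID."""
    return dword0 & 0xFFFF, (dword0 >> 16) & 0xFFFF

def decode_class(dword2):
    """Offset 0x8: [31:24] class, [23:16] subclass, [15:8] prog-if, [7:0] revision."""
    return ((dword2 >> 24) & 0xFF, (dword2 >> 16) & 0xFF,
            (dword2 >> 8) & 0xFF, dword2 & 0xFF)

vendor, device = decode_ids(0x22691D79)   # offset 0x0 from the log
cls, sub, prog_if, rev = decode_class(0x01080203)  # offset 0x8 from the log
print(f"vendor=0x{vendor:04x} device=0x{device:04x}")
print(f"class=0x{cls:02x} subclass=0x{sub:02x} prog-if=0x{prog_if:02x}")
# class 0x01 / subclass 0x08 is mass storage / NVM Express, matching
# what U-Boot reports for bus 3.
```

This matches the U-Boot output for bus 3 (0x1d79 / 0x2269, mass storage, sub-class 0x08); the device answering on bus 5 reports different IDs and class 0x00 entirely.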

We would greatly appreciate your assistance in resolving this issue so we can proceed with using the PCIe5 slot for the SSD.

Thank you for your support!

There are no known detection issues with PCIe5. Both PCIe3 and PCIe5 are of similar port type. What adapter are you using to connect the NVMe drive to the full-sized PCIe slot on the carrier? Is it possible it has an active PCIe switch on it, or some other bridge device?

Dear Jnettlet,

Thank you for your response.

To answer your question, we are using an M.2 Key E to M.2 Key A+M adapter to connect the NVMe SSD to the full-sized PCIe slot on the carrier. The adapter we are using is the one linked below:

Sintech NGFF M.2 Key E to M.2 Key A+M Adapter

Just to clarify: are you connecting this to our HoneyComb carrier for the LX2160A, or to your own carrier? Or maybe you are using our LX2162A-based CX-Lite platform?

We are not using the HoneyComb carrier. We are using our own custom carrier board based on the LX2160A platform for this setup.

I would recommend you open a proper ticket through support@solid-run.com. We do offer carrier board technical reviews for our customers.

Ok Jnettlet, thank you for your response.