Large number of DPIOs needed for DPDK test

I am trying to reproduce tests I did on an NXP RDB (rev1 with older software) on the HoneyComb.
I’ve been able to reproduce the issue I’m seeing on the HoneyComb with a simple test case.
I used the SolidRun git repo with one change (as mentioned in the Quick Start Guide): I appended "isolcpus=1-15 iommu.passthrough=1" to the kernel command line in extlinux.conf.
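
For reference, the append line in extlinux.conf ends up looking roughly like this (the console= and root= values below are just placeholders for whatever the SolidRun image already ships with; only the last two parameters are the addition):

append console=ttyAMA0,115200 root=/dev/mmcblk1p1 rw isolcpus=1-15 iommu.passthrough=1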

When I run the list of commands at the end of this post, I see the following when I launch the testpmd application:
fslmc: No software portal resource left
fslmc: Error in software portal allocation
dpaa2_net: Failure in affining portal
fslmc: No software portal resource left

This appears to be caused by the number of DPIOs: with 5 the test works, with 4 it does not.
I don’t understand why 5 DPIOs would be needed in this case. The NXP RDB works fine with 4 DPIOs.
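
In case it helps to reproduce: the DPIOs that actually land in the DPDK container can be listed with restool (dprc.2 is the container that dynamic_dpl.sh creates on my board; adjust the name to whatever the script prints):

restool dprc show dprc.2
restool dprc show dprc.2 | grep -c dpio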

List of commands:

export DPCI_COUNT=0
export DPIO_COUNT=4 # experimentally I found that 5 works but 4 does not
export DPBP_COUNT=1
export DPCON_COUNT=3
export DPSECI_COUNT=0
export MAX_QOS=1
export MAX_QUEUES=1
export MAX_CGS=1
export MAX_TCS=1
export DPCON_PRIORITIES=1

export DPMAC=dpmac.10
export DPNI=dpni.1
export CTRL_ETH1=eth1
export CTRL_ETH2=eth2
export TEST_PMD=/usr/bin/dpdk-testpmd

export BEFORE_MAC=00:00:00:00:04:02
export MY_MAC=00:00:00:00:04:03
export OTH_MAC=00:00:00:00:05:03

cd /usr/bin
./dynamic_dpl.sh dpni -b ${BEFORE_MAC}
ls-addsw -o=DPSW_OPT_CTRL_IF_DIS ${DPNI} ${DPMAC}
ip link add name br type bridge
ip link set dev br up
ls /sys/bus/fsl-mc/drivers/fsl_mc_dprc/dprc.1/dpsw.0/net/

ip link set dev ${CTRL_ETH1} down
ip link set dev ${CTRL_ETH1} address 00:00:00:00:00:01 up
ip link set dev ${CTRL_ETH1} master br

ip link set dev ${CTRL_ETH2} down
ip link set dev ${CTRL_ETH2} address 00:00:00:00:00:02 up
ip link set dev ${CTRL_ETH2} master br

export DPRC=dprc.2
restool dpni info ${DPNI}
${TEST_PMD} -c 0x3 -n 1 -- --eth-peer=0,${OTH_MAC} --txd=1500 --txpkts=1500 --tx-first --auto-start --forward-mode=mac --stats-period=10
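
One theory I have not ruled out is that the portal requirement scales with the lcores handed to testpmd: -c 0x3 puts two cores in the coremask, and the "Failure in affining portal" message suggests a per-core portal is being requested. If that is the case, a single-core run would be the obvious cross-check (same command with a smaller coremask; I have not verified that this actually lowers the required DPIO count):

${TEST_PMD} -c 0x1 -n 1 -- --eth-peer=0,${OTH_MAC} --txd=1500 --txpkts=1500 --tx-first --auto-start --forward-mode=mac --stats-period=10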

Most likely this is due to the memory available for the network packet processor / management complex. However, it could be something as simple as the firmware (mc-bin) running on the management complex. Are you running the same BSP versions on both the RDB and the HC?

For the NXP RDB: MC=10.29.0, restool=la1224rdb-early-access-bsp0.6
For the HoneyComb: MC=10.31.1, restool=v2.3 (commit lf-5.10.52-2.1.0)
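
If you want to compare on your side: the MC firmware version is printed in the kernel log when the fsl-mc bus probes, and restool reports its own version, so something like the following should show both (the exact dmesg wording may differ between kernel versions):

restool --version
dmesg | grep -i 'management complex'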

Unfortunately our NXP RDB is a rev 1 board, which makes it difficult to upgrade since support for the rev 1 boards was dropped as of LSDK 20.12. I could revert the HoneyComb to 20.04, or further back, to see if there is a point at which this test works. On the NXP RDB I have had several MC/restool issues before, where I needed to upgrade to the latest version to fix them.

One other thing - I’ve noticed that restool reports an error when dynamic_dpl.sh is used, which suggests that something isn’t quite right with the current versions. When I run “./dynamic_dpl.sh dpmac.10” (without overriding any of the script’s environment variables), I see:

restool dpni info dpni.1
dpni version: 8.1
dpni id: 1
plugged state: plugged
endpoint state: 0
endpoint: dpmac.10, link is down
link status: 0 - down
mac address: 00:00:00:00:00:10
max frame length: 1536
dpni_attr.options value is: 0x80000310
Unrecognized options found… <<< problem…

I suspect this has something to do with the high-performance buffers (used by default). If I don’t use the high-performance buffers, the unrecognized option goes away.
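
A crude way to see which DPNI_OPT_* names a given restool build knows about (and therefore which bit of 0x80000310 it cannot decode) is to dump the strings from the binary and diff that list against a newer restool; this is just a shortcut, not anything official:

strings $(which restool) | grep DPNI_OPT_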

Are you using dynamic_dpl.sh from NXP’s dpdk-extras repository? That repo hasn’t been updated in quite a long time. My guess is that there is now an incompatibility between those scripts and the restool / mc-bin firmware versions.

I will try to carve out some time and test your example cases.

Sounds right - I’m using the script from dpdk/nxp/dpaa2/dynamic_dpl.sh. Thanks for looking into this.