Are you following NXP’s instructions for converting the model via ONNX to work with the NPU on the iMX8MP?
Also, NXP has patches for TensorFlow in their Yocto recipes; I believe you could be running into issues there as well.
- For model conversion I use the DeGirum/ultralytics_yolov8 repository; my final model is yolov8n_full_integer_quant.tflite. When I run CPU inference on the CuBox-M, the converted model works fine, which as far as I understand means the conversion was successful (correct me here if I am wrong). NPU inference results, however, are messy. After upgrading my image from hardknott to kirkstone I can at least see some results (check the attached images). The export call I use is sketched right after this list.
- So you suggest finding the latest patches and applying them? When I tried to include the latest tensorflow-lite and tensorflow-lite-vx-delegate recipes in my hardknott build, it failed (I have very little experience with the Yocto build system).
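Roughly, the export I run on the host looks like this. It is only a sketch, assuming the DeGirum fork keeps the stock Ultralytics export API; the model path and image size are placeholders, and the calibration dataset is whatever the fork configures by default.

```python
# Sketch of the PyTorch -> TFLite int8 export run on the host.
# Internally this goes PyTorch -> ONNX -> onnx2tf -> TFLite and
# produces yolov8n_full_integer_quant.tflite next to the model.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # placeholder; I use the DeGirum ReLU6 variant

# Full-integer quantized TFLite export; calibration images come from
# the dataset the fork is configured with (not shown here).
model.export(format="tflite", int8=True, imgsz=640)
```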
NPU inference (attached image):
Additional info:
Model conversion versions:
onnx 1.17.0
onnx2tf 1.25.0
onnxruntime 1.21.1
tensorflow 2.19.0
tensorflow_cpu 2.19.0
Kirkstone image Python package versions:
onnxruntime 1.10.0
tflite-runtime 2.9.1
tensorflow-lite-vx-delegate 2.9.1
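For reference, this is roughly how I compare CPU and NPU (VX delegate) inference on the board. It is a minimal sketch: /usr/lib/libvx_delegate.so is the usual delegate location on these i.MX8MP images, the image handling is simplified, and the quantize/dequantize steps only apply if the model has integer I/O.

```python
# Minimal CPU-vs-NPU comparison sketch for the quantized YOLOv8 model.
# The VX delegate path and the preprocessing assumptions are mine, not
# taken from any official example.
import numpy as np
import tflite_runtime.interpreter as tflite

MODEL = "yolov8n_full_integer_quant.tflite"

def run(img: np.ndarray, use_npu: bool) -> np.ndarray:
    delegates = [tflite.load_delegate("/usr/lib/libvx_delegate.so")] if use_npu else []
    interp = tflite.Interpreter(model_path=MODEL, experimental_delegates=delegates)
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    out = interp.get_output_details()[0]

    data = img[None, ...]                      # add batch dimension
    if inp["dtype"] != np.float32:             # integer input: quantize
        scale, zero = inp["quantization"]
        data = (data / scale + zero).astype(inp["dtype"])
    interp.set_tensor(inp["index"], data)
    interp.invoke()

    raw = interp.get_tensor(out["index"])
    if out["dtype"] != np.float32:             # integer output: dequantize
        scale, zero = out["quantization"]
        raw = (raw.astype(np.float32) - zero) * scale
    return raw

# img: letterboxed 640x640x3 float32 image in [0, 1]
# cpu, npu = run(img, False), run(img, True)
# np.abs(cpu - npu).max() gives a quick idea how far the NPU output drifts.
```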
@jnettlet It might be a long shot, but I have to ask: is there a simple way to add the latest tensorflow-lite (2.18) related recipes (as well as tensorflow-lite-vx-delegate) to a Kirkstone CuBox-M build?
I believe the large gap in package versions is the main cause of the bad NPU inference.
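As a quick sanity check of what is actually installed on the target, I run something like the snippet below (just a sketch; the package names are taken from the list above and the delegate path is an assumption).

```python
# Report the runtime versions installed on the target and check that
# the VX delegate library is present (path is an assumption).
import os
from importlib.metadata import PackageNotFoundError, version

for pkg in ("tflite-runtime", "onnxruntime"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")

print("libvx_delegate.so present:", os.path.exists("/usr/lib/libvx_delegate.so"))
```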
That is a good question. I will have to take a look at it. Would you be willing to switch to a newer Yocto version that includes those packages?
OK, switching to Yocto Styhead 6.12.3 (or another version) is not a problem for the project. As far as I understand, the latest tensorflow-lite-vx-delegate recipes are in that Yocto release.
Right now, the main concerns are:
- Reliable inference on the CuBox-M NPU;
- Reliable development with the SDK for the created image.
I am currently working with Scarthgap. I haven’t tested Styhead yet. Do you have a specific video I can test against?
@jnettlet Scarthgap is fine for me.
I can provide you with the yolov8n_car_relu6.pt model for car detection from the DeGirum YOLOv8 release (DeGirum/ultralytics_yolov8/releases/tag/v1.0.0); it has some optimizations for NPU inference. I can also send a sample image, but basically any image with a car will be enough to test the model.
Got it. Thanks.
@jnettlet Hi, is there any news regarding a Scarthgap image for the CuBox-M with updated TensorFlow Lite and delegate packages?
Also, is it possible to update only the TensorFlow packages in a Kirkstone build? Something like copy-pasting the directory with the latest TensorFlow recipes from Scarthgap into Kirkstone. I am just looking for any possible solution to this problem.
Hello @jnettlet, are there any updates on the Scarthgap image?
@jnettlet Good day, Jon
I don’t want to disturb you with frequent questions, but are there any updates on my issue?