Compiling kernel modules for the ClearFog CX LX2

Hello,
How do we compile kernel modules (a camera device driver, in our case) for the ClearFog CX LX2? /lib/modules/$(uname -r)/build is empty after installing a pre-built image. Are the kernel headers and build files distributed anywhere?
I tried to build an image and headers myself by following this process:
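For reference, this is how I checked that the headers are missing; Kbuild builds external modules against the build directory for the running kernel, so an empty directory there means module builds cannot work:

```shell
# Kbuild builds out-of-tree modules against /lib/modules/$(uname -r)/build;
# if that directory has no Makefile, usable headers are not installed.
BUILD_DIR="/lib/modules/$(uname -r)/build"
if [ -f "$BUILD_DIR/Makefile" ]; then
    echo "kernel build tree present: $BUILD_DIR"
else
    echo "kernel build tree missing: $BUILD_DIR"
fi
```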

  1. Built an image with Docker on an x86-64 machine, based on commit bc46e34, by following the steps here: GitHub - SolidRun/lx2160a_build: Scripts to build basic images for LX2160A COM express type 7 modules
  2. Flashed the ClearFog board with the created image file.
  3. Copied the lx2160a_build/build/linux directory to the ClearFog board.
  4. Unfortunately, copying the contents of the linux directory into /lib/modules/$(uname -r)/build and then compiling the kernel module did not work. The kernel build intentionally compiles build/linux/scripts (and a few other helper tools) with the host compiler, since those tools run on the build machine, so the copies in my tree were x86-64 binaries that cannot run on the board.
  5. Ran make scripts -j16 LOCALVERSION="-00007-g9c7b74fdbb19" and make modules -j16 LOCALVERSION="-00007-g9c7b74fdbb19" inside the linux directory, on the ClearFog, to rebuild the helper binaries and modules. I had to specify LOCALVERSION, otherwise the kernel’s release string would simply be 5.4.47-generic (if I recall correctly) rather than the system’s actual uname -r output of 5.4.47-00007-g9c7b74fdbb19, and kernel module compilation fails when they are mismatched. I don’t know why this is necessary, as I can’t see where the Docker container sets any version string.
  6. Copied the linux directory’s contents into /lib/modules/$(uname -r)/build and built the kernel module. It built successfully and the module loaded. Unfortunately, the camera API produced a non-descriptive error when I attempted to connect to the camera using this driver.

I can think of a few issues with my process:

  • In step 5, compiling on the ClearFog used the system’s default GCC, whereas the kernel itself was compiled with the specific Linaro GCC defined in the Docker container. ABI issues, perhaps?
  • In step 5, some configuration may have been lost when rebuilding on the board. I don’t understand where the Docker build appends the -00007-g9c7b74fdbb19 suffix to the kernel release, or why I need to specify it manually when building on the ClearFog. Other settings may have been lost too.
  • The driver compiled fine, and the problem lies with the camera itself, or with the way the camera interfaces with this particular ARM board.
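On the version-string bullet: the -00007-g9c7b74fdbb19 suffix looks like it is generated by scripts/setlocalversion from git metadata (CONFIG_LOCALVERSION_AUTO), which would be lost once the tree is copied to the board without its .git directory. A sketch of deriving the value on the target instead of hard-coding it (the base version 5.4.47 is an assumption; `make -s kernelversion` prints it from the source tree), plus a check of which compiler built the running kernel:

```shell
# /proc/version records the compiler the running kernel was built with;
# compare it against the gcc used to rebuild scripts/ on the board.
cat /proc/version 2>/dev/null || true

# Derive LOCALVERSION by stripping the base version off the release string.
KREL=5.4.47-00007-g9c7b74fdbb19   # on the board: KREL=$(uname -r)
BASE=5.4.47                       # assumed; see `make -s kernelversion`
LOCAL=${KREL#"$BASE"}
echo "LOCALVERSION=$LOCAL"        # -> LOCALVERSION=-00007-g9c7b74fdbb19
```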

Is there a simpler way to build the kernel headers, or can they be provided prebuilt?

If you are building your own image, then I would recommend just tweaking the kernel config file to include the kernel modules you need for your hardware.
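For an in-tree driver that is a one-line config change. For example, if the camera were a sensor the mainline kernel already supports (the OV5640 here is only an illustration), the defconfig used by the build scripts would just need:

```
CONFIG_VIDEO_OV5640=m
```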

It’s an out-of-tree module. Would it make sense for me to add the driver’s source code to linux/drivers in the container, tweaking the Makefiles and Kconfig files so it’s part of the build?
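In case it helps, the wiring I have in mind would look roughly like this (the driver name mycam and the media/i2c location are placeholders for illustration):

```
# drivers/media/i2c/Kconfig (hypothetical entry):
config VIDEO_MYCAM
	tristate "MyCam camera sensor support"
	depends on I2C && VIDEO_V4L2
	help
	  Driver for the MyCam camera sensor.

# drivers/media/i2c/Makefile:
obj-$(CONFIG_VIDEO_MYCAM) += mycam.o
```

The module would then be selected with CONFIG_VIDEO_MYCAM=m in the defconfig the build scripts use.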

That is probably easiest. Newer distributions should support the hardware in kernels 5.14 and newer with device tree, if you want to try a packaged kernel.

It’s probably easiest to cross-compile the module on the host:

$ cd <module_dir>
$ export PATH=$ROOTDIR/build/toolchain/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu/bin:$PATH
$ export CROSS_COMPILE=aarch64-linux-gnu-
$ export ARCH=arm64
$ make -C <path_to_kernel_src> M=$PWD
<copy .ko to the target>

That will also work for a one-off module.
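One sanity check worth doing before loading the cross-built module on the target (mycam.ko is an example name): the module's vermagic must match `uname -r` on the board, otherwise insmod rejects it with "Invalid module format".

```shell
# Print the module's vermagic (run on the build host) and compare it
# against `uname -r` on the target board.
KO=mycam.ko    # example module name
if [ -f "$KO" ]; then
    modinfo -F vermagic "$KO"
fi
```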