Forum Discussion

fpga_guyx
1 month ago
Solved

yolov3_tiny_tf run_inference_stream problem

I have successfully completed the Arria 10 SoC demo project (resnet-50-tf) on the Arria 10 SoC devkit (tool versions: Intel FPGA AI Suite 2025.1 and OpenVINO 2024.6). I used the precompiled Arria 10 WIC image.

Arria 10 SoC devkit:

https://www.altera.com/products/devkit/a1jui0000049utgmam/arria-10-sx-soc-development-kit

SoC Demo project:

https://www.intel.com/content/www/us/en/docs/programmable/848957/2025-1/soc-design-example-prerequisites.html

Then, I compiled the yolo_v3_tiny_tf model with no folding and devices FPGA, CPU to obtain the .bin file. When I run ./run_inference_stream.sh, I get this error:

root@arria10:~/app# ./run_inference_stream.sh
Runtime version check is enabled.
[ INFO ] Architecture used to compile the imported model: A10_Performance
Using licensed IP
Read hash from bitstream ROM...
Read build version string from bitstream ROM...
Read arch name string from bitstream ROM...
Runtime arch check is enabled. Check started...
Runtime arch check passed.
Runtime build version check is enabled. Check started...
Runtime build version check passed.
Exception from src/inference/src/cpp/core.cpp:184:
Exception from src/inference/src/dev/plugin.cpp:73:
Exception from src/inference/src/dev/plugin.cpp:73:
Exception from src/plugins/intel_cpu/src/utils/serialize.cpp:145:
[CPU] Could not deserialize by device xml header.

How can I solve this problem? Thank you.
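For reference, the compile step for this flow typically looks something like the sketch below. The flag names, plugin string, and architecture file path are assumptions based on the AI Suite documentation, not a verified command line; confirm them with `dla_compiler --help` for your version.

```shell
# Sketch of the yolo_v3_tiny_tf compile step (flag names are assumptions
# from the AI Suite documentation; verify with `dla_compiler --help`).
dla_compiler \
  --march "$COREDLA_ROOT/example_architectures/A10_Performance.arch" \
  --network-file ./yolo_v3_tiny_tf.xml \
  --foutput-format=open_vino_hetero \
  --fplugin "HETERO:FPGA,CPU" \
  --o ./yolo_v3_tiny_tf.bin
```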

Note:

root@arria10:~/app# ls
build_os.txt libopenvino_auto_batch_plugin.so
build_version.txt libopenvino_auto_plugin.so
categories.txt libopenvino_c.so
dla_benchmark libopenvino_c.so.2024.6.0
hetero_plugin libopenvino_c.so.2460
image_streaming_app libopenvino_ir_frontend.so
libcoreDLAHeteroPlugin.so libopenvino_ir_frontend.so.2024.6.0
libcoreDlaRuntimePlugin.so libopenvino_ir_frontend.so.2460
libformat_reader.so libopenvino_jax_frontend.so
libhps_platform_mmd.so libopenvino_jax_frontend.so.2024.6.0
libopencv_core.so.4.8.0 libopenvino_jax_frontend.so.2460
libopencv_core.so.408 libopenvino_pytorch_frontend.so
libopencv_highgui.so.4.8.0 libopenvino_pytorch_frontend.so.2024.6.0
libopencv_highgui.so.408 libopenvino_pytorch_frontend.so.2460
libopencv_imgcodecs.so.4.8.0 libopenvino_template_extension.so
libopencv_imgcodecs.so.408 libopenvino_tensorflow_lite_frontend.so
libopencv_imgproc.so.4.8.0 libopenvino_tensorflow_lite_frontend.so.2024.6.0
libopencv_imgproc.so.408 libopenvino_tensorflow_lite_frontend.so.2460
libopencv_videoio.so.4.8.0 plugins.xml
libopencv_videoio.so.408 results.txt
libopenvino.so run_image_stream.sh
libopenvino.so.2024.6.0 run_inference_stream.sh
libopenvino.so.2460 streaming_inference_app
libopenvino_arm_cpu_plugin.so


9 Replies

  • JohnT_Altera

    Hi,

    Have you modified run_image_stream.sh? If so, can you share what changes you made? Which FPGA design or bitstream are you using?

    Thanks

    • fpga_guyx

      Actually, I did not make any changes to run_image_stream.sh or stream.sh.

      I used the WIC file below from the example project.

      $COREDLA_ROOT/demo/ed4/a10_soc_s2m/sd-card/coredla-image-arria10.wic

      https://www.intel.com/content/www/us/en/docs/programmable/848957/2025-1/writing-the-sd-card-image-wic-to-an-sd-card.html

      • JohnT_Altera

        Hi,

        Have you run the model in emulation to confirm that it can run fully on the FPGA? The reason I ask is that the provided example assumes the model runs entirely on the FPGA.

        Based on the log you provided, some of the layers cannot be executed on the FPGA and need to fall back to the CPU, but the application code does not support that.

        I recommend compiling the graph and checking the output to see whether the model is fully supported: https://www.intel.com/content/www/us/en/docs/programmable/863373/2025-3/compiling-a-graph.html

  • fpga_guyx

    I solved the error ([CPU] Could not deserialize by device xml header) by modifying the model XML file.

    Run dla_compiler to get the names of the unsupported layers, delete those layers from the XML file, and compile again to obtain the IR data. The error then disappears.
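A minimal sketch of that XML edit, assuming the usual OpenVINO IR layout with `<layers>` and `<edges>` sections (the helper name is hypothetical, and real IR files may need further cleanup of ports and precisions beyond what is shown):

```python
# Hypothetical helper: strip named layers from an OpenVINO IR .xml file.
# Removing a layer also requires dropping the edges that reference it,
# or the edited IR may fail to deserialize.
import xml.etree.ElementTree as ET

def remove_layers(ir_xml: str, unsupported: set) -> str:
    root = ET.fromstring(ir_xml)
    layers = root.find("layers")
    edges = root.find("edges")
    removed_ids = set()
    # Delete every layer whose name is in the unsupported set.
    for layer in list(layers):
        if layer.get("name") in unsupported:
            removed_ids.add(layer.get("id"))
            layers.remove(layer)
    # Delete edges that point to or from a removed layer.
    for edge in list(edges):
        if edge.get("from-layer") in removed_ids or edge.get("to-layer") in removed_ids:
            edges.remove(edge)
    return ET.tostring(root, encoding="unicode")
```

After editing the IR this way, re-run dla_compiler on the modified XML as described above.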