yolov3_tiny_tf run_inference_stream problem
I have successfully completed the Arria 10 SoC demo project resnet-50-tf on the Arria 10 SoC devkit (my tool versions are Intel FPGA AI Suite 2025.1 and OpenVINO 2024.6). I used the precompiled Arria 10 WIC image.
Arria 10 SoC devkit:
https://www.altera.com/products/devkit/a1jui0000049utgmam/arria-10-sx-soc-development-kit
SoC Demo project:
Then I compiled the yolo_v3_tiny_tf model with no folding, targeting the FPGA and CPU devices, to obtain the .bin file.
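For reference, the compile step was roughly the command below. This is only a sketch: the architecture file path, the IR file name, and the --ffolding-option spelling for "no folding" are my assumptions for this release, so please check dla_compiler --help for the exact flags.

# Sketch of the compile step (A10_Performance matches the arch reported in the log below).
# --ffolding-option=0 is my assumed spelling of the "no folding" switch -- verify it.
dla_compiler \
  --march "$COREDLA_ROOT/example_architectures/A10_Performance.arch" \
  --network-file yolo_v3_tiny_tf.xml \
  --foutput-format=open_vino_hetero \
  --fplugin "HETERO:FPGA,CPU" \
  --ffolding-option=0 \
  --o yolo_v3_tiny_tf.bin

When I run ./run_inference_stream.sh, I get this error: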
root@arria10:~/app# ./run_inference_stream.sh
Runtime version check is enabled.
[ INFO ] Architecture used to compile the imported model: A10_Performance
Using licensed IP
Read hash from bitstream ROM...
Read build version string from bitstream ROM...
Read arch name string from bitstream ROM...
Runtime arch check is enabled. Check started...
Runtime arch check passed.
Runtime build version check is enabled. Check started...
Runtime build version check passed.
Exception from src/inference/src/cpp/core.cpp:184:
Exception from src/inference/src/dev/plugin.cpp:73:
Exception from src/inference/src/dev/plugin.cpp:73:
Exception from src/plugins/intel_cpu/src/utils/serialize.cpp:145:
[CPU] Could not deserialize by device xml header.
How can I solve this problem? Thank you.
Note:
root@arria10:~/app# ls
build_os.txt libopenvino_auto_batch_plugin.so
build_version.txt libopenvino_auto_plugin.so
categories.txt libopenvino_c.so
dla_benchmark libopenvino_c.so.2024.6.0
hetero_plugin libopenvino_c.so.2460
image_streaming_app libopenvino_ir_frontend.so
libcoreDLAHeteroPlugin.so libopenvino_ir_frontend.so.2024.6.0
libcoreDlaRuntimePlugin.so libopenvino_ir_frontend.so.2460
libformat_reader.so libopenvino_jax_frontend.so
libhps_platform_mmd.so libopenvino_jax_frontend.so.2024.6.0
libopencv_core.so.4.8.0 libopenvino_jax_frontend.so.2460
libopencv_core.so.408 libopenvino_pytorch_frontend.so
libopencv_highgui.so.4.8.0 libopenvino_pytorch_frontend.so.2024.6.0
libopencv_highgui.so.408 libopenvino_pytorch_frontend.so.2460
libopencv_imgcodecs.so.4.8.0 libopenvino_template_extension.so
libopencv_imgcodecs.so.408 libopenvino_tensorflow_lite_frontend.so
libopencv_imgproc.so.4.8.0 libopenvino_tensorflow_lite_frontend.so.2024.6.0
libopencv_imgproc.so.408 libopenvino_tensorflow_lite_frontend.so.2460
libopencv_videoio.so.4.8.0 plugins.xml
libopencv_videoio.so.408 results.txt
libopenvino.so run_image_stream.sh
libopenvino.so.2024.6.0 run_inference_stream.sh
libopenvino.so.2460 streaming_inference_app
libopenvino_arm_cpu_plugin.so
I have solved the error ([CPU] Could not deserialize by device xml header) by modifying the model XML file: run dla_compiler and note the names of the unsupported layers it reports, delete those layers from the XML file, and compile again to obtain the IR data. The error then disappears.
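In case it helps others: assuming the standard OpenVINO IR format, where every <layer> element carries a type attribute (the file name below is just illustrative), a quick way to list the layer types in the IR and cross-check them against the compiler's unsupported-layer report is:

# Count layer types in the IR; the leading space in the pattern avoids matching element_type=...
grep -o ' type="[^"]*"' yolo_v3_tiny_tf.xml | sort | uniq -c | sort -rn

Also note that the <edges> section of the IR references layers by id, so any <edge> entries that point at a deleted layer have to be removed as well, otherwise the XML will no longer load.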