Intel FPGA AI Suite input data type
Hello,
I'm using Intel FPGA AI Suite 2023.2 on an Ubuntu 20.04 host computer and trying to run inference on a custom CNN on an Intel Arria 10 SoC FPGA.
I have followed the Intel FPGA AI Suite SoC Design Example Guide and I'm able to compile the Intel FPGA AI Suite IP and run the M2M and S2M examples.
I have a question regarding the input layer data type. In the example resnet-50-tf network, the input layer appears to be FP32 [1,3,224,224] in the .dot file obtained after compiling the graph (see screenshot below).
However, when running the dla_benchmark example I noticed that a U8 input data type was detected. Reading the compiled graph's .bin file, from which the dla_benchmark application extracts this information, confirms that the input data type is U8 (see screenshot below).
The graph was compiled according to the Intel FPGA AI Suite SoC Design Example Guide using:
omz_converter --name resnet-50-tf \
--download_dir $COREDLA_WORK/demo/models/ \
--output_dir $COREDLA_WORK/demo/models/
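As a sanity check, the input precision declared by the converted IR itself can be printed with the OpenVINO Python API. This is only a minimal sketch; the path below assumes omz_converter's usual public/<model>/FP32/ output layout under --output_dir, so adjust it if your layout differs.

# Minimal sketch: print the input precision and shape declared in the converted
# IR using the OpenVINO Python API. The path assumes omz_converter's usual
# public/<model>/FP32/ layout under --output_dir; adjust it if yours differs.
import os
from openvino.runtime import Core

xml_path = os.path.expandvars(
    "$COREDLA_WORK/demo/models/public/resnet-50-tf/FP32/resnet-50-tf.xml"
)

core = Core()
model = core.read_model(xml_path)
for inp in model.inputs:
    # Expected to match the FP32 [1,3,224,224] input seen in the .dot file.
    print(inp.get_any_name(), inp.get_element_type(), inp.get_partial_shape())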
In addition, I compiled my custom NN with dla_compiler and got the same result: the input layer is FP32 in the .dot and U8 in the .bin (a quick way to check the .dot is sketched after the command listing below).
dla_compiler \
--march "{path_archPath}" \
--network-file "{path_xmlPath}" \
--o "{path_binPath}" \
--foutput-format=open_vino_hetero \
--fplugin "HETERO:FPGA,CPU" \
--fanalyze-performance \
--fdump-performance-report \
--fanalyze-area \
--fdump-area-report
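A quick way to list every precision token that appears in the .dot dump is something like the sketch below; it only assumes the precisions are written as plain-text strings such as FP32/U8 in the node labels, as they appear in the screenshot above.

# Sketch: count the plain-text precision tokens in the compiler's .dot dump.
# Assumes the precisions appear as literal strings (FP32, FP16, U8, I8) in the
# node labels. Usage: python count_precisions.py path/to/graph.dot
import collections
import re
import sys

def count_precisions(dot_path):
    with open(dot_path, encoding="utf-8", errors="replace") as f:
        text = f.read()
    return collections.Counter(re.findall(r"\b(FP32|FP16|U8|I8)\b", text))

if __name__ == "__main__":
    print(count_precisions(sys.argv[1]))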
Compilation was performed using the A10_Performance.arch example architecture in both cases.
In addition, the input precision should be reported in the input_transform_dump report, but it is not.
Two questions:
- Why is the input data type changed?
- Is it possible to compile the graph with an FP32 input data type?
Thank you in advance