mkont1
New Contributor
6 years ago

benchmark and classification_sample apps hang on starting inference when running with -d HETERO:FPGA,CPU.
The PAC is installed in an Artesyn MC1600 chassis with an Intel(R) Xeon(R) CPU D-1567 @ 2.10GHz running CentOS 7.5.
Output of fpgainfo fme:
Board Management Controller, microcontroller FW version 26889
Last Power Down Cause: POK_CORE
Last Reset Cause: None
//****** FME ******//
Object Id : 0xEF00000
PCIe s:b:d:f : 0000:06:00:0
Device Id : 0x09C4
Socket Id : 0x00
Ports Num : 01
Bitstream Id : 0x123000200000185
Bitstream Version : 0x30201
Pr Interface Id : 69528db6-eb31-577a-8c36-68f9faa081f6

Prior to running the inference, this bitstream was programmed:

aocl program acl0 /opt/intel/openvino/bitstreams/a10_dcp_bitstreams/2019R1_RC_FP11_ResNet_SqueezeNet_VGG.aocx

classification_sample and benchmark apps run without issue with the target device set to CPU. Both applications hang when attempting to run on the FPGA (with -d HETERO:FPGA,CPU). Inference on the FPGA usually completes successfully with a single iteration (-ni 1) but consistently hangs with a higher number of iterations.
# ./classification_sample -d HETERO:FPGA,CPU -ni 10 -i /opt/intel/openvino/deployment_tools/demo/car.png -m /root/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.xml
[ INFO ] InferenceEngine:
API version ............ 1.6
Build .................. custom_releases/2019/R1.1_28dfbfdd28954c4dfd2f94403dd8dfc1f411038b
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] /opt/intel/openvino/deployment_tools/demo/car.png
[ INFO ] Loading plugin
API version ............ 1.6
Build .................. heteroPlugin
Description ....... heteroPlugin
[ INFO ] Loading network files:
/root/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.xml
/root/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.bin
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (787, 259) to (227, 227)
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (10 iterations)

# ./benchmark_app -d HETERO:FPGA,CPU -i /opt/intel/openvino/deployment_tools/demo/car.png -m /root/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.xml
[ INFO ] InferenceEngine:
API version ............ 1.6
Build .................. custom_releases/2019/R1.1_28dfbfdd28954c4dfd2f94403dd8dfc1f411038b
[Step 1/8] Parsing and validation of input args
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] /opt/intel/openvino/deployment_tools/demo/car.png
Progress: [....................] 100.00% done
[Step 2/8] Loading plugin
[ INFO ]
API version ............ 1.6
Build .................. heteroPlugin
Description ....... heteroPlugin
Progress: [....................] 100.00% done
[Step 3/8] Read IR network
[ INFO ] Loading network files
[ INFO ] Network batch size: 1, precision: FP32
Progress: [....................] 100.00% done
[Step 4/8] Configure input & output of the model
[ INFO ] Preparing output blobs
Progress: [....................] 100.00% done
[Step 5/8] Loading model to the plugin
Progress: [....................] 100.00% done
[Step 6/8] Create infer requests and fill input blobs with images
[ INFO ] Infer Request 0 created
[ INFO ] Network Input dimensions (NCHW): 1 3 227 227
[ INFO ] Prepare image /opt/intel/openvino/deployment_tools/demo/car.png
[ WARNING ] Image is resized from (787, 259) to (227, 227)
[ INFO ] Infer Request 1 created
[ INFO ] Network Input dimensions (NCHW): 1 3 227 227
[ INFO ] Prepare image /opt/intel/openvino/deployment_tools/demo/car.png
[ WARNING ] Image is resized from (787, 259) to (227, 227)
Progress: [....................] 100.00% done
[Step 7/8]
Start inference asynchronously (120000.00 ms duration, 2 inference requests in parallel)
Progress: [ ] 0.00% done
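For reference, the full sequence I follow before running either app looks roughly like this. This is a sketch of my setup, not a verified recipe: the setupvars.sh path is the default OpenVINO install location, and acl0 is the device name reported on my board; adjust both for your environment.

```shell
#!/bin/sh
# Sketch of the repro environment (assumes default OpenVINO 2019 R1.1 install
# paths and a single PAC enumerated as acl0 -- adjust for your system).

# Set up the OpenVINO environment variables.
source /opt/intel/openvino/bin/setupvars.sh

# Program the FP11 bitstream used for SqueezeNet onto the PAC.
aocl program acl0 \
    /opt/intel/openvino/bitstreams/a10_dcp_bitstreams/2019R1_RC_FP11_ResNet_SqueezeNet_VGG.aocx

# Works: CPU-only inference, 10 iterations.
./classification_sample -d CPU -ni 10 \
    -i /opt/intel/openvino/deployment_tools/demo/car.png \
    -m /root/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.xml

# Hangs: same model on HETERO:FPGA,CPU with more than one iteration.
./classification_sample -d HETERO:FPGA,CPU -ni 10 \
    -i /opt/intel/openvino/deployment_tools/demo/car.png \
    -m /root/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.xml
```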