N6000-PL MAX10 Build
Hi, I am currently building the N6000 MAX10 BMC provided in this guide. When running the script mentioned in the guide, "./build.sh", the script throws errors. I am not able to clear this error, and the file it refers to is not present in the directory. Any ideas on what to do next would be deeply appreciated. Thank you, best regards.

Intel N6001 FIM not compiling due to script failing
Hey @khtan, I have finished testing the N6000 card and wanted to test the N6001 design on the N6000 card without enabling the E810 controller. I therefore started building and compiling the N6001 design, but the script fails:

Traceback (most recent call last):
  File "/usr/bin/afu_json_mgr", line 5, in <module>
    from packager.tools.afu_json_mgr import main
  File "/usr/lib/python3/dist-packages/packager/tools/afu_json_mgr.py", line 35, in <module>
    from packager.utils.afu import AFU
  File "/usr/lib/python3/dist-packages/packager/utils/afu.py", line 37, in <module>
    from jsonschema import validators
ModuleNotFoundError: No module named 'jsonschema'
Error: "afu_json_mgr json-info --afu-json=/home/admin/test/intel-ofs-fim/ofs-common/scripts/common/syn/pim/dummy_afu/dummy_afu.json --verilog-hdr=../hw/afu_json_info.vh" failed
Copying build from /home/admin/test/intel-ofs-fim/work_n6001/syn/syn_top/afu_with_pim/pim_template/hw/lib/build...
Configuring Quartus build directory: afu/build
Error running /home/admin/test/intel-ofs-fim/ofs-common/scripts/common/syn/build_fim.sh
Exit code: 1

The log file is attached below. Any ideas as to what can be done are deeply appreciated. Thank you, best regards.
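The ModuleNotFoundError indicates that the Python interpreter behind /usr/bin/afu_json_mgr cannot import the jsonschema package. A minimal diagnostic sketch, assuming the OFS build scripts invoke the system python3:

# Hedged diagnostic sketch (assumption: the build scripts use the same
# system python3 interpreter that runs /usr/bin/afu_json_mgr).
import importlib.util
import sys

print("interpreter:", sys.executable)
spec = importlib.util.find_spec("jsonschema")
print("jsonschema importable:", spec is not None)
if spec is None:
    # Installing the package for this interpreter, for example with
    # "python3 -m pip install jsonschema", typically clears this error.
    print("jsonschema is missing for this interpreter")

If the package is missing, installing it for that interpreter and re-running build_fim.sh is a reasonable first step.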
Intel FPGA AI Suite Inference Engine
Hello, I'm using Intel FPGA AI Suite 2023.2 on an Ubuntu 20.04 host computer and trying to run inference of a custom CNN on an Intel Arria 10 SoC FPGA. I have followed the Intel FPGA AI Suite SoC Design Example Guide and I'm able to compile the Intel FPGA AI Suite IP and run the M2M and S2M examples. I have also compiled the graph for my custom NN and I'm trying to run it with the Intel FPGA AI Suite IP, but it is not clear to me how to do it. I'm trying to use the provided dla_benchmark app, but, for example, the input data of my NN (it was trained and the graph compiled this way) must be float, whereas the input data of the IP must be int8 if I'm not mistaken (a hedged preprocessing sketch is shown below). Another problem concerns the ground truth files: I have one ground truth file per input file, because each ground truth is a 225-element array. Is there any additional information or guide for running a custom model with the Intel FPGA AI Suite? Thank you in advance.
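A minimal preprocessing sketch for the float-vs-int8 mismatch, assuming the compiled graph expects 8-bit image-style data in the 0-255 range; the exact scale and offset depend on how the graph was compiled, so the values and file name below are placeholders, not the suite's documented behavior:

# Hedged sketch: map float input data to uint8 before feeding it to dla_benchmark.
# Assumption (not from the original post): the float data is normalized to [0.0, 1.0];
# adjust the scale/offset to match your own training pipeline.
import numpy as np

float_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder data
u8_input = np.clip(np.round(float_input * 255.0), 0, 255).astype(np.uint8)
u8_input.tofile("input_u8.bin")  # hypothetical output file name, for illustration only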
Intel FPGA AI suite accuracy drop
Hello, I'm using Intel FPGA AI Suite 2023.2 on an Ubuntu 20.04 host computer and trying to run inference of a custom CNN on an Intel Arria 10 SoC FPGA. The CNN was trained with TensorFlow and its accuracy is 98.89% across the test dataset. After converting the model to an IR model with the OpenVINO model optimizer, the accuracy remains the same:

mo --saved_model_dir "{path_savedModelPath}" --input_shape "{lst_inputShape}" --model_name "{str_modelName}" --output_dir "{path_irTargetPath}" --use_new_frontend

However, after running the model on the Intel FPGA AI Suite IP, the accuracy drops to 74.64% across the same test dataset. The architecture used is A10_FP16_Generic.arch, which has "arch_precision"=FP16. I have also tested with A10_FP16_Performance.arch and A10_Performance.arch.

dla_compiler --march "{path_archPath}" --network-file "{path_xmlPath}" --o "{path_binPath}" --foutput-format=open_vino_hetero --fplugin "HETERO:FPGA,CPU" --fanalyze-performance --fdump-performance-report --fanalyze-area --fdump-area-report

I tried to optimize the model with the "compress_to_fp16" OpenVINO model optimizer option, but when compiling with dla_compiler I get this error:

"Layer (Name: Transpose_517_compressed, Type: Constant) is not supported: Error occurred. ../compiler/aot_plugin/src/dla_executable_network.cpp:134 Graph is not supported on FPGA plugin due to existance of layer (Name: Transpose_517_compressed, Type: Constant) in topology. Most likely you need to use heterogeneous plugin instead of FPGA plugin directly."

As you can see, the hetero plugin option is set to FPGA and CPU. It was also tested with Intel FPGA AI Suite 2023.3 and OpenVINO 2022.3.1, with the same error message. The accuracy in software with this IR model compressed to FP16 is 98.91%, so on the FPGA the accuracy should be almost the same, yet there is a 24% accuracy drop. Both IR model files are attached. What could be the root cause of this accuracy drop, and what can I do to improve the accuracy? (A CPU-baseline comparison sketch is shown below.)
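One way to narrow down where the drop is introduced is to score the same test set with the converted IR on the CPU using plain OpenVINO: if the CPU numbers match the original 98.89%, the degradation comes from the FPGA datapath or the FP16 architecture rather than from the IR conversion. A minimal sketch, assuming OpenVINO 2022.3 and placeholder file names and test data:

# Hedged sketch: CPU-only accuracy baseline for the converted IR.
# "model.xml" and the sample list below are placeholders, not the poster's files.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")          # hypothetical IR path
compiled = core.compile_model(model, "CPU")   # CPU reference, no FPGA involved

samples = [(np.random.rand(1, 3, 224, 224).astype(np.float32), 0)]  # placeholder test set
correct = 0
for data, label in samples:
    result = compiled([data])[compiled.output(0)]
    correct += int(np.argmax(result) == label)
print("CPU accuracy:", correct / len(samples))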
Intel FPGA AI Suite Inference Engine
Is there any official documentation on the DLA runtime or inference engine for managing the DLA from the ARM side? I need to develop a custom application for running inference, but so far I've only found the dla_benchmark (main.cpp) and streaming_inference_app.cpp example files. There should be some documentation covering the SDK. The only related documentation I have found is the Intel FPGA AI Suite PCIe-based design example: https://www.intel.com/content/www/us/en/docs/programmable/768977/2024-3/fpga-runtime-plugin.html From what I understand, the general inference workflow involves the following steps (a hedged sketch follows the list):
1. Identify the hardware architecture
2. Deploy the model
3. Prepare the input data
4. Send inference requests to the DLA
5. Retrieve the output data
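A rough outline of those steps in code, based on the OpenVINO runtime API that the dla_benchmark example is built on. This is a hedged sketch, not an official SDK example: the IR file name, the input shape/type, and the availability of the "FPGA" device through the CoreDLA runtime plugin on the ARM side are assumptions to verify against your own design example setup.

# Hedged sketch of the five workflow steps using the OpenVINO runtime API.
import numpy as np
from openvino.runtime import Core

core = Core()                                               # runtime handle; the DLA architecture is fixed by the bitstream
model = core.read_model("compiled_graph.xml")               # hypothetical IR name (step 2: deploy the model)
compiled = core.compile_model(model, "HETERO:FPGA,CPU")     # FPGA-capable layers go to the DLA, the rest to the CPU
infer_request = compiled.create_infer_request()

input_data = np.zeros((1, 3, 224, 224), dtype=np.uint8)     # step 3: prepare input (shape and dtype are placeholders)
infer_request.infer({0: input_data})                        # step 4: send the inference request
output = infer_request.get_output_tensor(0).data            # step 5: retrieve the output data
print(output.shape)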
Intel FPGA AI suite input data type
Hello, I'm using Intel FPGA AI Suite 2023.2 on an Ubuntu 20.04 host computer and trying to run inference of a custom CNN on an Intel Arria 10 SoC FPGA. I have followed the Intel FPGA AI Suite SoC Design Example Guide and I'm able to compile the Intel FPGA AI Suite IP and run the M2M and S2M examples. I have a question regarding the input layer data type. In the example resnet-50-tf NN, the input layer appears to be FP32 [1,3,224,224] in the .dot file obtained after compiling the graph (see screenshot below). However, when running the dla_benchmark example I noticed that a U8 input data type was detected. After reading the .bin file of the compiled graph, from which this information is extracted by the dla_benchmark application, it can be seen that the input data type is U8 (see screenshot below). The graph was compiled according to the Intel FPGA AI Suite SoC Design Example Guide using:

omz_converter --name resnet-50-tf \
  --download_dir $COREDLA_WORK/demo/models/ \
  --output_dir $COREDLA_WORK/demo/models/

In addition, I compiled my custom NN with dla_compiler and I get the same result: the input layer is FP32 in the .dot and U8 in the .bin.

dla_compiler --march "{path_archPath}" --network-file "{path_xmlPath}" --o "{path_binPath}" --foutput-format=open_vino_hetero --fplugin "HETERO:FPGA,CPU" --fanalyze-performance --fdump-performance-report --fanalyze-area --fdump-area-report

Compilation was performed with the A10_Performance.arch example architecture in both cases. Also, the input precision should be reported in the input_transform_dump report, but it is not. Two questions:
- Why is the input data type changed?
- Is it possible to compile the graph with input data type FP32?
Thank you in advance. (A small sketch for checking the IR's declared input precision is shown below.)
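To confirm what precision the OpenVINO IR itself declares before dla_compiler transforms it, the input element type can be read directly from the IR. A small sketch, assuming OpenVINO 2022.3 and a placeholder path; note that the .bin produced by dla_compiler is a separate format and is not inspected here:

# Hedged sketch: print the input precision and shape declared in the OpenVINO IR.
from openvino.runtime import Core

core = Core()
model = core.read_model("resnet-50-tf.xml")   # hypothetical IR path
for inp in model.inputs:
    print(inp.get_any_name(), inp.get_element_type(), inp.get_partial_shape())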
Error (23035) Tcl error running dla_build_example_design.py script
Hello, I get this error when I run the "dla_build_example_design.py" example script with the Intel FPGA AI Suite. It is run according to the "Intel FPGA AI Suite SoC Design Example User Guide", and it should run even when the license is not valid. According to the documentation, if no license is available, bitstreams should still be compiled, but with a limited number of inferences. However, I am not able to compile it.

"
Info: Command: quartus_cpf -c --hps -o bitstream_compression=on output_files/top.sof output_files/top.rbf
Error (213009): File name "output_files/top.sof" does not exist or can't be read
Error: Quartus Prime Convert_programming_file was unsuccessful. 1 error, 0 warnings
Error: Peak virtual memory: 721 megabytes
Error: Processing ended: Mon Sep 11 12:38:19 2023
Error: Elapsed time: 00:00:00
Error: System process ID: 517860
Error (23035): Tcl error: ERROR: Error(s) found while running an executable. See report file(s) for error message(s). Message log indicates which executable was run last.
while executing
"execute_module -tool cpf -args "-c --hps -o bitstream_compression=on output_files/top.sof output_files/top.rbf""
(file "generate_sof.tcl" line 5)
------------------------------------------------
ERROR: Error(s) found while running an executable. See report file(s) for error message(s). Message log indicates which executable was run last.
while executing
"execute_module -tool cpf -args "-c --hps -o bitstream_compression=on output_files/top.sof output_files/top.rbf""
(file "generate_sof.tcl" line 5)
------------------------------------------------
Error (23031): Evaluation of Tcl script generate_sof.tcl unsuccessful
Error: Quartus Prime Shell was unsuccessful. 8 errors, 745 warnings
Error: Peak virtual memory: 1037 megabytes
Error: Processing ended: Mon Sep 11 12:38:19 2023
Error: Elapsed time: 00:21:00
Error: System process ID: 513901
Error: A license needed by one or more of the IP components in the design was not found. Contact Intel if you wish to obtain a license for the Intel FPGA AI Suite.
Command Failed
quartus_sh -t generate_sof.tcl
"

--
OpenVINO version: 2022.3
Intel FPGA AI Suite version: 2023.2
Device: Intel Arria 10 SoC FPGA
OS: Ubuntu 20.04
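The quartus_cpf failure is only the downstream symptom: output_files/top.sof was never generated because of the license error reported at the end of the log, so the .sof-to-.rbf conversion has nothing to convert. A small pre-flight check sketch, with assumed paths, before re-running the script:

# Hedged sketch: check for the missing .sof and the license environment variable
# before re-running dla_build_example_design.py. The project directory is a placeholder.
import os

project_dir = "path/to/example_design"        # placeholder; point at the design build directory
sof = os.path.join(project_dir, "output_files", "top.sof")
print("top.sof present:", os.path.isfile(sof))                               # missing when the compile aborted
print("LM_LICENSE_FILE:", os.environ.get("LM_LICENSE_FILE", "<not set>"))    # Quartus license search path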
Hello, I’ve installed an Intel N6000/1-PL SmartNIC on a Lenovo SR650v2 server with the following stack:
N6000 SKU1
CentOS Stream release 8
OPAE v2.1.1
kernel 5.15.92-dfl
Server BIOS settings: the card was tested on two slots (1 and 7) with PCIe bifurcation set to x8x8, and fan speed set to maximum.

The server BIOS reports the following warning: "PCIe error recovery has occurred in slot number 1. The adapter may not work correctly." And dmesg contains:

[22638.864360] intel-m10bmc-sec-update n6000bmc-sec-update.3.auto: SDM trigger failure: 4
[22638.877250] dfl-pci 0000:c5:00.1: enabling device (0140 -> 0142)
[22638.877568] dfl-pci 0000:c5:00.1: PCIE AER unavailable -5.
[22638.890287] dfl-pci 0000:c5:00.2: enabling device (0140 -> 0142)
[22638.890607] dfl-pci 0000:c5:00.2: PCIE AER unavailable -5.
[22638.904091] dfl-pci 0000:c5:00.3: enabling device (0140 -> 0142)
[22638.904377] dfl-pci 0000:c5:00.3: PCIE AER unavailable -5.
[22638.916944] dfl-pci 0000:c5:00.4: enabling device (0140 -> 0142)
[22638.917231] dfl-pci 0000:c5:00.4: PCIE AER unavailable -5.

Trying to deploy an image results in the error included below. Otherwise, PCIe inventory and the fpgainfo command seem to work fine, as shown below. Any help would be appreciated. Is this a hardware problem, an on-card BMC problem, or a software problem?

fpgasupdate --log-level debug ofs_top_page1_pacsign_user1.bin 0000:C5:00.0
[2024-01-29 05:07:27.46] [DEBUG ] fw file: ofs_top_page1_pacsign_user1.bin
[2024-01-29 05:07:27.46] [DEBUG ] addr: 0000:C5:00.0
[2024-01-29 05:07:27.46] [DEBUG ] hash256: b'e026976389252b8a746943f351e8f149e5f0415f620cd1e0618229eb79e01bb8'
[2024-01-29 05:07:27.46] [DEBUG ] hash384: b'bb04ea12557ce23f2cb75685669d794fb6a06bf7b590430aa8bfdb4c765c6e15ecdb38200e1599aa8a7e52a2958e20db'
[2024-01-29 05:07:27.46] [DEBUG ] file type: Static Region (Update)
[2024-01-29 05:07:27.47] [DEBUG ] found device at 0000:c5:00.3 -tree is [pci_address(0000:c2:04.0), pci_id(0x8086, 0x347c)] (pcieport) [pci_address(0000:c5:00.3), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.1), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.4), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.2), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.0), pci_id(0x8086, 0xbcce)] (dfl-pci)
[2024-01-29 05:07:27.47] [DEBUG ] found device at 0000:c5:00.1 -tree is [pci_address(0000:c2:04.0), pci_id(0x8086, 0x347c)] (pcieport) [pci_address(0000:c5:00.3), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.1), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.4), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.2), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.0), pci_id(0x8086, 0xbcce)] (dfl-pci)
[2024-01-29 05:07:27.47] [DEBUG ] found device at 0000:c5:00.0 -tree is [pci_address(0000:c2:04.0), pci_id(0x8086, 0x347c)] (pcieport) [pci_address(0000:c5:00.3), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.1), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.4), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.2), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.0), pci_id(0x8086, 0xbcce)] (dfl-pci)
[2024-01-29 05:07:27.47] [DEBUG ] found device at 0000:c5:00.4 -tree is [pci_address(0000:c2:04.0), pci_id(0x8086, 0x347c)] (pcieport) [pci_address(0000:c5:00.3), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.1), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.4), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.2), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.0), pci_id(0x8086, 0xbcce)] (dfl-pci)
[2024-01-29 05:07:27.47] [DEBUG ] found device at 0000:c5:00.2 -tree is [pci_address(0000:c2:04.0), pci_id(0x8086, 0x347c)] (pcieport) [pci_address(0000:c5:00.3), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.1), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.4), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.2), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.0), pci_id(0x8086, 0xbcce)] (dfl-pci)
[2024-01-29 05:07:27.47] [DEBUG ] found device at 0000:c5:00.0 -tree is [pci_address(0000:c2:04.0), pci_id(0x8086, 0x347c)] (pcieport) [pci_address(0000:c5:00.3), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.1), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.4), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.2), pci_id(0x8086, 0xbcce)] (dfl-pci) [pci_address(0000:c5:00.0), pci_id(0x8086, 0xbcce)] (dfl-pci)
[2024-01-29 05:07:27.48] [DEBUG ] could not find: "/sys/class/fpga_region/region0/dfl-fme.0/dfl*.*/*spi*/spi_master/spi*/spi*"
[2024-01-29 05:07:27.48] [DEBUG ] could not find: "/sys/class/fpga_region/region0/dfl-fme.0/dfl*.*/spi_master/spi*/spi*"
[2024-01-29 05:07:27.48] [DEBUG ] could not find: "/sys/class/fpga_region/region0/dfl-fme.0/spi*/spi_master/spi*/spi*"
[2024-01-29 05:07:27.48] [DEBUG ] could not find: "/sys/class/fpga_region/region0/dfl-fme.0/dfl_dev.4/n6000bmc-sec-update.3.auto/*fpga_sec_mgr*/*fpga_sec*"
[2024-01-29 05:07:27.48] [DEBUG ] could not find: "/sys/class/fpga_region/region0/dfl-fme.0/dfl_dev.4/n6000bmc-sec-update.3.auto/fpga_image_load/fpga_image*"
Traceback (most recent call last):
  File "/usr/bin/fpgasupdate", line 33, in <module>
    sys.exit(load_entry_point('opae.admin===1.4.1-', 'console_scripts', 'fpgasupdate')())
  File "/usr/lib/python3.6/site-packages/opae/admin/tools/fpgasupdate.py", line 789, in main
    if pac.upload_dev.find_one(os.path.join('update', 'filename')):
AttributeError: 'NoneType' object has no attribute 'find_one'

lspci -vt
| +-02.0-[c3-c4]--+-00.0 Intel Corporation Ethernet Controller E810-C for backplane
| | +-00.1 Intel Corporation Ethernet Controller E810-C for backplane
| | +-00.2 Intel Corporation Ethernet Controller E810-C for backplane
| | +-00.3 Intel Corporation Ethernet Controller E810-C for backplane
| | +-00.4 Intel Corporation Ethernet Controller E810-C for backplane
| | +-00.5 Intel Corporation Ethernet Controller E810-C for backplane
| | +-00.6 Intel Corporation Ethernet Controller E810-C for backplane
| | \-00.7 Intel Corporation Ethernet Controller E810-C for backplane
| \-04.0-[c5]--+-00.0 Intel Corporation Device bcce
| +-00.1 Intel Corporation Device bcce
| +-00.2 Intel Corporation Device bcce
| +-00.3 Intel Corporation Device bcce
| \-00.4 Intel Corporation Device bcce

fpgainfo fme
Intel Acceleration Development Platform N6001
Board Management Controller NIOS FW version: 3.14.0
Board Management Controller Build version: 3.14.0
//****** FME ******//
Object Id : 0xEF00000
PCIe s:b:d.f : 0000:C5:00.0
Vendor Id : 0x8086
Device Id : 0xBCCE
SubVendor Id : 0x8086
SubDevice Id : 0x1771
Socket Id : 0x00
Ports Num : 01
Bitstream Id : 0x5010202FAB46E6A
Bitstream Version : 5.0.1
Pr Interface Id : 00bc56cf-9e1f-5bf0-8011-48736ec862c9
Boot Page : user1
Factory Image Info : 801148736ec862c900bc56cf9e1f5bf0
User1 Image Info : 801148736ec862c900bc56cf9e1f5bf0
User2 Image Info : 801148736ec862c900bc56cf9e1f5bf0
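The debug output shows fpgasupdate probing several sysfs locations for the secure-update device and finding none, which is why pac.upload_dev ends up as None and the tool crashes with the AttributeError. A small sketch to check the same locations by hand; the glob patterns are taken from the "could not find" lines in the log above:

# Hedged sketch: look for the secure-update interface that fpgasupdate expects.
# If none of these match, the n6000bmc-sec-update driver has not exposed an
# update device (consistent with the "SDM trigger failure" in dmesg).
import glob

patterns = [
    "/sys/class/fpga_region/region0/dfl-fme.0/dfl*.*/*spi*/spi_master/spi*/spi*",
    "/sys/class/fpga_region/region0/dfl-fme.0/dfl_dev.4/n6000bmc-sec-update.3.auto/*fpga_sec_mgr*/*fpga_sec*",
    "/sys/class/fpga_region/region0/dfl-fme.0/dfl_dev.4/n6000bmc-sec-update.3.auto/fpga_image_load/fpga_image*",
]
for pattern in patterns:
    print(pattern, "->", glob.glob(pattern) or "no match")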
HardSigmoid and Intel FPGA AI suite
Hello, I have a neural network trained with PyTorch. I exported it in ONNX format and successfully obtained the IR model using the OpenVINO toolkit. The model includes a HardSiLU (or H-Swish) activation layer, which is defined as x·hardsigmoid(x). I compiled the graph using the Intel FPGA AI Suite. The x·hardsigmoid(x) operation is correctly identified, but the problem is that, even though H-Sigmoid and H-Swish layers are supported by the Intel FPGA AI Suite according to the Intel FPGA AI Suite IP Reference Manual, the HardSigmoid operation appears to be executed on the CPU instead of the FPGA. Is it possible that the layer has not been properly recognized by the Intel FPGA AI Suite? Why is it being executed on the CPU rather than the FPGA? (A small device-mapping check is sketched below.)

--
OpenVINO version: 2022.3
Intel FPGA AI Suite version: 2023.2
Device: Intel Arria 10 SoC FPGA
OS: Ubuntu 20.04
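One way to see which operations the HETERO plugin assigns to the FPGA versus the CPU is query_model, which reports a target device per operation. A hedged sketch, assuming OpenVINO 2022.3; the IR path and the availability of the "FPGA" device through the AI Suite runtime plugin are assumptions:

# Hedged sketch: list the device each operation is assigned to under HETERO:FPGA,CPU,
# and highlight the hard-sigmoid / h-swish ops in question.
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")          # hypothetical IR path
supported = core.query_model(model, "HETERO:FPGA,CPU")
for op_name, device in supported.items():
    if "hardsigmoid" in op_name.lower() or "hswish" in op_name.lower():
        print(op_name, "->", device)

If the HardSigmoid node is reported on CPU here, the graph compiler did not map it to the FPGA despite the IP Reference Manual listing it as supported, which would be worth raising with the exact IR attached.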