Forum Discussion

RubenPadial
Contributor
1 year ago

Intel FPGA AI Suite Inference Engine

Is there any official documentation on the DLA runtime or inference engine for managing the DLA from the ARM side? I need to develop a custom application for running inference, but so far I've only found the dla_benchmark (main.cpp) and streaming_inference_app.cpp example files. There should be some documentation covering the SDK. The only related documentation I have found is the Intel FPGA AI Suite PCIe-based design example: https://www.intel.com/content/www/us/en/docs/programmable/768977/2024-3/fpga-runtime-plugin.html

From what I understand, the general inference workflow involves the following steps (see my rough sketch after the list):

  1. Identify the hardware architecture
  2. Deploy the model
  3. Prepare the input data
  4. Send inference requests to the DLA
  5. Retrieve the output data
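
Based on the dla_benchmark source, those steps appear to map onto the standard OpenVINO C++ API, with the FPGA AI Suite runtime exposed as an OpenVINO device plugin. Below is a minimal, untested sketch of what I have in mind, assuming the OpenVINO 2.0 API and the HETERO:FPGA,CPU device string that dla_benchmark passes; the plugin library name and model paths are placeholders, not something I found in any documentation:

```cpp
#include <openvino/openvino.hpp>
#include <algorithm>
#include <iostream>

int main() {
    ov::Core core;

    // 1. Identify the hardware architecture: the FPGA AI Suite runtime
    //    plugin must be visible to OpenVINO. The library name below is a
    //    placeholder; the real name/path comes from the AI Suite install.
    // core.register_plugin("libcoreDLARuntimePlugin.so", "FPGA");

    // 2. Deploy the model: load the IR compiled for the target DLA
    //    architecture ("model.xml"/"model.bin" are placeholder paths).
    auto model = core.read_model("model.xml", "model.bin");
    auto compiled = core.compile_model(model, "HETERO:FPGA,CPU");

    // 3. Prepare the input data (dummy zero-filled tensor here).
    ov::InferRequest request = compiled.create_infer_request();
    ov::Tensor input = request.get_input_tensor();
    float* data = input.data<float>();
    std::fill(data, data + input.get_size(), 0.0f);

    // 4. Send the inference request to the DLA (synchronous call;
    //    start_async()/wait() would be the asynchronous variant).
    request.infer();

    // 5. Retrieve the output data.
    ov::Tensor output = request.get_output_tensor();
    std::cout << "output elements: " << output.get_size() << std::endl;
    return 0;
}
```

As I understand it, HETERO lets layers unsupported by the FPGA plugin fall back to the CPU, which would explain why the examples use HETERO:FPGA,CPU rather than FPGA alone. Is this the intended API surface, or is there a lower-level DLA runtime interface documented somewhere?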
