Forum Discussion
RubenPadial
Contributor
10 months ago

Hello @JohnT_Intel,
Is there any official documentation on the DLA runtime or inference engine for managing the DLA from the ARM side? I need to develop a custom application for running inference, but so far I have only found the example files dla_benchmark (main.cpp) and streaming_inference_app.cpp. I would expect there to be documentation covering the SDK itself.
From what I understand, the general inference workflow involves the following steps:
- Identify the hardware architecture
- Deploy the model
- Prepare the input data
- Send inference requests to the DLA
- Retrieve the output data
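
For reference, here is how I currently understand those steps would map onto the OpenVINO 2.0 C++ runtime API, which the dla_benchmark example appears to be built on. This is only a sketch of my assumptions, not verified against any DLA documentation: in particular the device string "HETERO:FPGA,CPU" and the model path "model.xml" are placeholders I am guessing at.

```cpp
// Hypothetical sketch of the five-step inference workflow using the
// OpenVINO 2.0 C++ runtime API (as used by dla_benchmark/main.cpp).
// Device name and file paths are assumptions, not confirmed for the DLA.
#include <openvino/openvino.hpp>
#include <algorithm>
#include <iostream>

int main() {
    ov::Core core;

    // 1. Identify the hardware: list the devices the runtime can see.
    for (const auto& dev : core.get_available_devices())
        std::cout << "Device: " << dev << "\n";

    // 2. Deploy the model: read the IR and compile it for the target.
    //    "HETERO:FPGA,CPU" is an assumed device string; with HETERO,
    //    unsupported layers would fall back to the ARM CPU.
    auto model = core.read_model("model.xml");
    auto compiled = core.compile_model(model, "HETERO:FPGA,CPU");

    // 3. Prepare the input data: fill the input tensor.
    ov::InferRequest request = compiled.create_infer_request();
    ov::Tensor input = request.get_input_tensor();
    std::fill_n(input.data<float>(), input.get_size(), 0.0f);

    // 4. Send the inference request (synchronous here;
    //    start_async()/wait() is the asynchronous variant).
    request.infer();

    // 5. Retrieve the output data.
    ov::Tensor output = request.get_output_tensor();
    std::cout << "Output elements: " << output.get_size() << "\n";
    return 0;
}
```

If this is roughly the intended flow, a pointer to the official API reference for the DLA plugin (device naming, supported properties, async usage) would answer most of my questions.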