raghad
New Member
3 hours ago

Can Intel's AI Reference Kit LLM pipelines run on the OpenVINO runtime inside FPGA AI Suite 26.1.1?
I run OpenVINO + FPGA AI Suite 26.1.1 in two setups:
- PCIe: OpenVINO on x86 Linux host → FPGA card
- SoC: OpenVINO on Arm Linux (HPS) → FPGA AI Suite IP over AXI
Intel's AI Reference Kits include ready-made LLM inference pipelines built on OpenVINO.
https://www.intel.com/content/www/us/en/developer/topic-technology/edge-5g/open-potential.html
I want to take one of these pipelines and run it using the OpenVINO runtime that ships inside FPGA AI Suite, so the FPGA handles the inference instead of the CPU.
- Is the bundled OpenVINO runtime + FPGA plugin / spatial compiler in 26.1.1 compatible with these Reference Kit LLM pipelines?
- If it does not work out of the box, what modifications would be needed?
Thanks,