Forum Discussion

raghad
New Member
3 hours ago

Is Spatial IP ready for LLM / transformer inference?

I am using FPGA AI Suite 2026.1.1 (with the new spatial compiler). Most of the examples I see in the FPGA AI Suite handbook are classical CNN / vision flows (ResNet-style) targeting PCIe, hostless JTAG, and SoC designs.

Can transformer / LLM inference (attention layers, variable sequence lengths, large KV-cache activations, etc.) be targeted today with dla_compiler + Spatial IP? Or is Spatial IP still aimed primarily at CNN-like graphs, with custom RTL expected for transformer workloads?

If it is supported, are there any LLM examples, guides, recommended flows, or known limitations to be aware of?

Thanks,

1 Reply

  • Currently, FPGA AI Suite 2026.1.1 (including the new spatial compiler) does not support LLM / transformer inference.