Forum Discussion

Mickleman
Occasional Contributor
5 years ago

Achieving parallel execution of loop on FPGA

I am having difficulty persuading the compiler to execute an outer loop in parallel in an FPGA kernel. I have constructed a simple example to illustrate the issue.

Here is a simple array initialisation loop with a loop-carried dependency:

const int ITEM_LENGTH = 1000;
uint16_t a[ITEM_LENGTH];

for (int i = 0; i < ITEM_LENGTH; i++)
    if ( i == 0 )
        a[i] = 0;   // initial value
    else
        a[i] = a[i-1] + i;

The compiler correctly schedules this with an II of 6 (at 240 MHz on an Arria 10).

To improve throughput, 10 of these are processed together by adding an outer loop:

const int ITEM_LENGTH = 1000;
const int GROUP_SIZE = 10;
uint16_t a[GROUP_SIZE][ITEM_LENGTH];

for (int j = 0; j < GROUP_SIZE; j++)
    for (int i = 0; i < ITEM_LENGTH; i++)
        if ( i == 0 )
            a[j][i] = j;
        else
            a[j][i] = a[j][i-1] + i;


The compiler spots that the loops can be inverted to give an II of 1 on the inner loop, improving throughput by a factor of 6.

Now I want to process two of these in parallel, so an outer loop is introduced. There are no dependencies between the two iterations of this loop, so I expect them to be executed in parallel by duplicating the logic.

const int ITEM_LENGTH = 1000;
const int GROUP_SIZE = 10;
uint16_t a[2][GROUP_SIZE][ITEM_LENGTH];

for (int block = 0; block < 2; block++)
    for (int j = 0; j < GROUP_SIZE; j++)
        for (int i = 0; i < ITEM_LENGTH; i++)
            if ( i == 0 )
                a[block][j][i] = j;
            else
                a[block][j][i] = a[block][j][i-1] + i;

But this causes the inner loop to revert to an II of 6, and the outer loop is reported as 'Serial exe: Memory dependency', citing all four combinations of the two assignment statements as the problem.

An attempt to declare explicitly that there are no dependencies inverts the two innermost loops manually and adds an ivdep:

const int ITEM_LENGTH = 1000;
const int GROUP_SIZE = 10;
uint16_t a[2][GROUP_SIZE][ITEM_LENGTH];

for (int block = 0; block < 2; block++)
    [[intel::ivdep]]
    for (int k = 0; k < GROUP_SIZE * ITEM_LENGTH; k++)
    {
        int i = k / GROUP_SIZE;
        int j = k - i * GROUP_SIZE;

        if ( i == 0 )
            a[block][j][i] = j;
        else
            a[block][j][i] = a[block][j][i-1] + i;
    }

The inner loop is now scheduled with an II of 1 (trusting the ivdep), but the outer loop still does not execute in parallel, for exactly the same reason as before.

So, given that there really is no actual dependency between iterations of the outer loop, how do I persuade the compiler of this so that the loop iterations execute in parallel?

11 Replies

  • MGRAV
    New Contributor

    Hi,

    Did you try "#pragma unroll 2"?

    I am working on something similar, and I would say that something like this should almost work:

    const int ITEM_LENGTH = 1000;
    const int GROUP_SIZE = 10;
    uint16_t a[2][GROUP_SIZE][ITEM_LENGTH];
    
    #pragma unroll 2
    [[intel::ivdep]]
    for (int block = 0; block < 2; block++)
    
      [[intel::ivdep]]
      for (int j = 0; j < GROUP_SIZE; j++)
        for (int i = 0; i < ITEM_LENGTH; i++)
          if ( i == 0 )
            a[block][j][i] = j;
          else
            a[block][j][i] = a[block][j][i-1] + i;
    
    
    

    good luck

    • HRZ
      Frequent Contributor

      The solution mentioned by @MGRAV is the right way to run loop iterations in "parallel". However, the difference you are experiencing here is likely caused by the resource used to implement a[], rather than by the loop-carried dependency.

      If a[] is implemented using registers (which can be quite inefficient if ITEM_LENGTH is large), then you should achieve an II of 1 in all of the cases you mentioned above. You can force implementation in registers using the respective attribute mentioned in Intel's documentation and see what happens.

      If, however, a[] is implemented using Block RAMs, then you will end up with a load/store dependency, since it is not possible to perform a single-cycle read/write from/to Block RAMs; that is where the II of 6 comes from (the latency of one Block RAM read plus one write). There is nothing you can do to improve the II in this case, since the load/store dependency is real, and any attempt to force the compiler to ignore it using ivdep will result in incorrect output.

      P.S. You should probably move "if ( i == 0 ) a[i] = j; " outside of the "i" loop and start "i" from 1. This will allow you to completely eliminate the branch from the code and save some area.

      • Mickleman
        Occasional Contributor

        Many thanks for replying HRZ.

        Your insights are very helpful. However, this is a very abstracted example of the problem. In the actual design the main inner loop is complex, with a critical dependency of over 80 clocks which cannot easily be reduced. The data itself is too large to be implemented in registers.

        The long dependency is resolved (as in the example) by operating on many data vectors concurrently and inverting the consequent pair of nested loops. The ivdep is needed because, although there is a data dependency on the array as a whole, the loop inversion ensures that the load and store of any one element of the array are separated by at least the natural II (GROUP_SIZE = 10 is greater than the II of 6 in the example).

        I am now trying to run several of these inverted loops in parallel on distinct data sets. In the example this is represented by the third array dimension, indexed by block. The first iteration of the loop only accesses array elements with first index 0 (a[0][j][i]); the second iteration only accesses elements with first index 1 (a[1][j][i]).

        Following MGRAV's suggestion of the unroll pragma: unfortunately it does not result in parallel execution, because the compiler insists there is a memory dependency preventing it. I will confirm that this is actually the case with a real run as soon as the FPGA runtime nodes are available again, and will post the results here.

        Any further insights would be most welcome.

    • Mickleman
      Occasional Contributor

      Many thanks for replying MGRAV.

      I have tried the unroll pragma, but as far as I can see it does not execute in parallel because of the memory dependency on the array, even though this is not a real dependency: each iteration accesses a distinct, separate part of the array. I have only been able to check this in the Graph Viewer, as none of the FPGA runtime nodes is working at the moment.