Forum Discussion
The number of work-items that can be in flight simultaneously depends on the pipeline depth; even though you see only three units in the report, the total length of the pipeline should be on the order of 50-200 stages, which allows the same number of work-items to be pipelined at the same time. Note that if you want work-item parallelism, you should use SIMD. By default, work-items are only pipelined in NDRange kernels.
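For example, a SIMD-vectorized NDRange kernel looks roughly like this (a minimal sketch using the Intel FPGA SDK for OpenCL kernel attributes; the vector-add body and the factor of 8 are just for illustration):

```c
// num_simd_work_items requires a fixed work-group size, and the SIMD
// factor must evenly divide it (8 divides 64 here).
__attribute__((reqd_work_group_size(64, 1, 1)))
__attribute__((num_simd_work_items(8)))   // replicate the datapath 8-wide
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *c)
{
    int gid = get_global_id(0);
    c[gid] = a[gid] + b[gid];  // 8 work-items execute this per cycle
}
```

With this, work-items are still pipelined as usual, but eight of them enter the pipeline per clock cycle instead of one.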
- hiratz, 6 years ago
Occasional Contributor
Thank you, HRZ.
Actually, I did not compile this example code. I just read the description of how hardware pipeline stages are generated for a given kernel in Intel's "Best Practices Guide". The guide provides many similar but simple examples to help people understand how pipeline parallelism can be achieved.
I'm still curious why the compiler turns the single statement "c[gid] = a[gid]+b[gid];" into a pipeline on the order of 50-200 stages deep. The guide does not seem to mention such implicit stages. Could you provide more details?
- HRZ, 6 years ago
Frequent Contributor
The latency of most operations on the FPGA is higher than one cycle, to allow a reasonable operating frequency. For the particular case of external memory accesses, the latency is on the order of a few hundred cycles. Generally, the compiler generates a pipeline deep enough to absorb the majority of the external memory stalls while accommodating all the necessary operations at the target operating frequency (240 MHz by default). If you check the "System viewer" tab of the HTML report, you can find the latency of each block in your code and calculate the total pipeline depth by adding up all the latency values.
- hiratz, 6 years ago
Occasional Contributor
I see. Nice explanation! I just looked at the "System viewer" tab of the HTML report, and it indeed shows the latency of each block in my code. Good info!
Thanks again!
- hiratz, 6 years ago
Occasional Contributor
One more question: the purpose of unrolling a loop is to increase the depth of the pipeline (for a single work-item), not to turn the unrolled iterations into a SIMD circuit (real parallel execution), right? If so, then in the NDRange version, since the loop cannot be pipelined the way it is in the single work-item version, putting "#pragma unroll" before a loop brings no benefit (while adding some extra area), right?

(Note: by "the loop cannot be pipelined as ..." above, I mean its iterations cannot be pipelined. Instead, the loop is viewed as a whole and forms the pipeline together with the other code; that is, the whole loop becomes a single stage. In this case, there would be no difference between unrolling and not unrolling the loop. This is just my understanding.)
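To be concrete, this is the kind of loop I mean (just a sketch I made up, not from the guide; the kernel and names are illustrative):

```c
// Sketch: #pragma unroll on a loop with a compile-time trip count.
// Since the bound (16) is known, the compiler can replicate the loop
// body 16 times in hardware.
__kernel void dot16(__global const float *a,
                    __global const float *b,
                    __global float *out)
{
    float sum = 0.0f;
    #pragma unroll
    for (int i = 0; i < 16; i++)
        sum += a[i] * b[i];
    *out = sum;
}
```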
BTW, I'm curious how the compiler can still unroll a loop whose bound is a run-time value, for example "while(i < n) {i++; do sth.}" (assume n is not changed in the loop body). If n is very large, there will not be enough area for the compiler to unroll the loop. (Please correct me if I am understanding this incorrectly.)
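As I understand it, in that case one can at least request a partial unroll with an explicit factor, something like this (again just my own sketch, with an illustrative body):

```c
// Sketch: partial unrolling of a loop with a run-time trip count.
// The factor 4 is arbitrary; since n is unknown at compile time, the
// compiler would also have to generate the extra exit checks.
__kernel void accum(__global const float *data,
                    __global float *out, int n)
{
    float sum = 0.0f;
    #pragma unroll 4
    for (int i = 0; i < n; i++)
        sum += data[i];
    *out = sum;
}
```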
Thanks!