Forum Discussion
Altera_Forum
Honored Contributor
7 years ago

--- Quote Start ---
I guess the problem lies with the way to create a shared buffer. I put the producer and consumer kernels in different command queues for concurrent execution and use clCreateBuffer(..., CL_MEM_READ_WRITE, ...) to create a shared buffer. The programming guide mentions that clCreateBuffer(..., CL_MEM_READ_WRITE, ...) allocates memory in non-shared DDR memory banks and that shared memory should be allocated using clCreateBuffer(..., CL_MEM_ALLOC_HOST_PTR, ...). However, when I use clCreateBuffer(..., CL_MEM_ALLOC_HOST_PTR, ...), the two kernels cannot execute concurrently: the consumer kernel waits until the producer kernel finishes.
--- Quote End ---

You are mixing two different concepts. The type of "shared memory" that the guide recommends allocating with CL_MEM_ALLOC_HOST_PTR is for FPGA SoCs, which share the same physical memory between the ARM processor and the FPGA; it does not apply to PCI-E-attached FPGA boards, and it is not related to whether a global buffer is shared between two or more kernels.

Regarding your problem: OpenCL does NOT guarantee global memory consistency until kernel execution has finished. Hence, by default, sharing a global buffer between two concurrent kernels, with one writing to it and the other reading from it, can (and likely will) lead to undefined behavior. Altera's guide claims that using mem_fence(CLK_CHANNEL_MEM_FENCE | CLK_GLOBAL_MEM_FENCE) allows two such kernels to synchronize a shared global memory buffer through tokens passed via a channel, which is what you are trying to do here; however, I have seen multiple people on the forum try this and report that it doesn't work. In fact, the programming guide has an example of this in "Section 5.4.5.6 Use Models of Intel FPGA SDK for OpenCL Channels Implementation", but I wouldn't be surprised if that doesn't work either.
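For reference, a minimal sketch of the pattern the guide describes (global-buffer hand-off synchronized by a token over a channel). The kernel and channel names here are illustrative, not taken from the guide, and per the caveat above this pattern is not guaranteed to work on all SDK versions:

```c
// OpenCL C device code, assuming the Intel FPGA channels extension.
#pragma OPENCL EXTENSION cl_intel_channels : enable

channel int token_chan;

__kernel void producer(__global int *restrict shared_buf, int n) {
    for (int i = 0; i < n; i++)
        shared_buf[i] = i;
    // Fence so the global writes are committed before the token is sent
    mem_fence(CLK_CHANNEL_MEM_FENCE | CLK_GLOBAL_MEM_FENCE);
    write_channel_intel(token_chan, 1);  // signal: data is ready
}

__kernel void consumer(__global const int *restrict shared_buf,
                       __global int *restrict out, int n) {
    int token = read_channel_intel(token_chan);  // block until producer signals
    // Fence so the reads below are ordered after the token arrives
    mem_fence(CLK_CHANNEL_MEM_FENCE | CLK_GLOBAL_MEM_FENCE);
    (void)token;
    for (int i = 0; i < n; i++)
        out[i] = shared_buf[i];
}
```

Note that both kernels must still be enqueued on separate command queues so they can run concurrently; on older SDK versions the extension and intrinsics were named cl_altera_channels / write_channel_altera instead.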
Maybe you should consider using a Single Work-item implementation, as is done in Altera's example; that might work.