Forum Discussion
JohnT_Altera
Regular Contributor
9 months ago
Hi Ruben,
If that is the case, then I suspect the FPGA AI Suite is unable to run because it is still occupied with the previous inference. It cannot start a further inference until the previous task has fully completed; only then can it move on to a new inference.
RubenPadial
Contributor
9 months ago
Hello @JohnT_Intel,
Yes, that's what I supposed. How should it be handled?
The inferRequest->startAsync(), inferRequest->wait() and inferRequestsQueue->waitAll() calls are used, and the output is retrieved correctly, so the inference does complete. What I don't know is what happens to the request afterwards, or how to handle, wait for, or release the request once inference is finished.
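For reference, below is a minimal sketch of the asynchronous request lifecycle using the standard OpenVINO 2.0 C++ API (ov::InferRequest). The method names differ slightly from the benchmark_app-style wrapper used in the FPGA AI Suite samples (startAsync()/waitAll() versus start_async()/wait()), and the model path and device name are placeholders, not taken from this thread. The point it illustrates is that once wait() returns, the request object is idle again and can simply be reused for the next inference; there is no separate "stop" or teardown step per request.

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;

        // Placeholder model path and device name.
        auto model = core.read_model("model.xml");
        ov::CompiledModel compiled = core.compile_model(model, "CPU");

        // One request object; it can be reused for many inferences.
        ov::InferRequest request = compiled.create_infer_request();

        for (int i = 0; i < 2; ++i) {
            // Fill the input tensor before each run.
            ov::Tensor input = request.get_input_tensor();
            // ... write new input data into input.data<float>() ...

            request.start_async();   // enqueue the inference
            request.wait();          // block until this inference completes

            ov::Tensor output = request.get_output_tensor();
            // ... read results from output.data<float>() ...

            // After wait() returns, the request is idle and is reused on
            // the next loop iteration by calling start_async() again.
        }
        return 0;
    }

Under that assumption, if a second inference refuses to start, the usual cause is that start_async() was called again on a request that has not yet been waited on, or that a different (busy) request from the queue was picked, rather than anything that needs to be explicitly stopped.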