Forum Discussion
Altera_Forum
Honored Contributor
8 years ago

--- Quote Start ---
Hi, I use floating-point calculation in a convolutional neural network (OpenCL). The results of the calculation on the FPGA and the CPU are similar, but the low-order bytes differ. How can I fix this problem? How can I get a result on the FPGA identical to the result on the CPU?
--- Quote End ---

Unless the FP add/sub/mul/div algorithms are EXACTLY the same in each implementation (e.g., how rounding is handled), you can reasonably expect a few bits of difference in the mantissa of the final result. This is an age-old problem in software using floating point.

You should compare the expected and received values for equality within some error bound, and not expect them to be EXACTLY equal. I.e., use:

if (abs(a - b) < max_error) ...

as opposed to:

if (a == b) ...

max_error should be set to some small tolerance value, like 1e-6, or whatever makes sense for your application. If you go down the path of looking for EXACTLY EQUAL, you will be chasing your tail for years.