Haha, yeah no problem. I've had a few hurdles with these.
I believe these values are accurate.
It'll largely depend on your application and its tolerance for error, but generally I see about 7 digits of accuracy (roughly -140 dB) before floating-point error starts to show up when using a single-precision float.
For double precision, I usually see around 15-16 digits of accuracy (roughly -310 dB) before floating-point error begins to occur.
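Those figures can be sanity-checked from machine epsilon (a quick sketch in Python with NumPy; the dB values are just 20·log10 of epsilon):

```python
import math
import numpy as np

# Machine epsilon is the relative spacing of representable values, which
# bounds the relative rounding error of a single correctly-rounded op.
for name, dtype in [("float32", np.float32), ("float64", np.float64)]:
    eps = float(np.finfo(dtype).eps)
    digits = -math.log10(eps)   # decimal digits of precision
    db = 20 * math.log10(eps)   # the same limit expressed in dB
    print(f"{name}: eps={eps:.3g}, ~{digits:.1f} digits, ~{db:.0f} dB")
```

This prints roughly 6.9 digits / -138 dB for float32 and 15.7 digits / -313 dB for float64, which lines up with the estimates above.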
Since your values are around e+78 and the errors start occurring around e+62, you're getting roughly 16 digits of agreement. I'd presume this is using double precision and that you're getting about as close as double precision can describe.
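To make that concrete, here's a rough back-of-the-envelope sketch (the magnitudes are made-up stand-ins for your e+78 numbers) estimating matching digits from the relative error:

```python
import math

# Hypothetical stand-ins: values of magnitude ~1e78 whose
# disagreement starts around 1e62.
value_magnitude = 1.2e78
error_magnitude = 1.0e62

relative_error = error_magnitude / value_magnitude
matching_digits = -math.log10(relative_error)
print(matching_digits)  # ~16, about the limit of double precision
```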
Here the error would be negligible, I would think. The error introduced would then be due to the computations being run differently (i.e., the OpenCL path and the library path compile down to slightly different binaries, so operations happen in a different order or round at different points).
The interesting thing to me, though, is that I'd normally see this when comparing FP error across different architectures (CPU/GPU/FPGA). Since these are both computed on the FPGA, I'd imagine there should be a way to get them to come out exactly the same.
Since they're running on the same FPGA, it would have to come down to the FP operations being done differently in OpenCL vs. the HDL code, which may be due to how each conforms to the IEEE-754 floating-point standard. If I had to guess, I'd suspect differences in how they handle rounding or denormals (e.g., flush-to-zero vs. gradual underflow).
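For what it's worth, ordering/rounding differences alone are enough to change the bits even when both sides are fully IEEE-754 compliant. A minimal illustration with doubles (the same thing happens when one toolchain reassociates or fuses operations and the other doesn't):

```python
# Each individual add is correctly rounded per IEEE-754, but rounding
# happens at different points, so the two results differ in the last bits.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left == right)       # False
print(abs(left - right))   # an ulp-scale discrepancy
```

If the OpenCL compiler and the HDL pipeline schedule the adds/multiplies differently, you'd see exactly this kind of last-digit drift.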
I'm not entirely sure, though; that's just been my experience without diving too deep. That's my two cents.