Problems simulating the Intel FPGA floating-point IP functions
I built a simulation for the floating-point IPs, but in ModelSim the outputs are always zero.
I have tried several arithmetic functions: add, sub, accumulate, and divide.
However, I get the correct results when simulating the max and min functions.
Does anyone know what is wrong with these arithmetic floating-point IP cores?
Example IP instantiation:
add_sub_fp u0 (
.clk (clk), // input, width = 1, clk.clk
.areset (reset), // input, width = 1, areset.reset
.en (en_feat), // input, width = 1, en.en
.a (data_a), // input, width = 32, a.a
.b (data_b), // input, width = 32, b.b
.q (sum_out), // output, width = 32, q.q
.opSel (sel) // input, width = 1, opSel.opSel
);
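For reference, this is roughly how I drive the instance in my testbench (a minimal sketch; the port names are taken from the instantiation above, but the reset polarity, the `opSel` encoding, and the pipeline latency are assumptions, so check them against the values reported by the IP parameter editor):

```verilog
`timescale 1ns/1ps
module tb_add_sub_fp;
  reg         clk = 0;
  reg         reset = 1;
  reg         en_feat = 0;
  reg  [31:0] data_a, data_b;
  reg         sel = 0;          // 0 = add (assumed encoding)
  wire [31:0] sum_out;

  add_sub_fp u0 (
    .clk    (clk),
    .areset (reset),
    .en     (en_feat),
    .a      (data_a),
    .b      (data_b),
    .q      (sum_out),
    .opSel  (sel)
  );

  always #5 clk = ~clk;         // 100 MHz clock

  initial begin
    // Hold reset for a few cycles, then release it (assumes active-high areset).
    repeat (4) @(posedge clk);
    reset   = 0;
    en_feat = 1;                // keep the enable asserted while data flows
    data_a  = 32'h3F800000;     // 1.0 in IEEE-754 single precision
    data_b  = 32'h40000000;     // 2.0
    // Wait longer than the IP's pipeline latency before sampling q.
    repeat (20) @(posedge clk);
    $display("q = %h", sum_out);
    $finish;
  end
endmodule
```

If `q` stays zero even after waiting well past the advertised latency, the usual suspects are the `areset` polarity, the `en` signal being deasserted mid-pipeline, or the simulation model libraries not being compiled into the ModelSim work library.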
Since the Quartus updates, the previous version of the floating-point IP is no longer provided. This IP core is the only option for implementing floating-point functions on Altera FPGAs.