Fixed point is easy to deal with. Like integer arithmetic, a 16-bit × 16-bit multiply gives a 32-bit result, but now you have a 1.15 number: a 1.15 × 1.15 multiply gives a 2.30 result (2 magnitude bits, 30 fraction bits). Remembering these rules makes using the ieee.numeric_std package workable, because you can track which bits are magnitude and which are fractional.
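To see the Q-format rule in action, here is a minimal software sketch (Python, not VHDL; the helper names are mine, not from any library) showing that a Q1.15 × Q1.15 product is a Q2.30 value:

```python
def to_q15(x):
    """Quantise a real number in [-1, 1) to a Q1.15 integer."""
    return int(round(x * 2**15))

def q15_mul(a, b):
    """Multiply two Q1.15 values; the raw integer product is Q2.30."""
    return a * b  # 16 bit x 16 bit -> 32 bit: 2 magnitude + 30 fraction bits

a = to_q15(0.5)    # 0x4000
b = to_q15(-0.25)
c = q15_mul(a, b)  # Q2.30 result
print(c / 2**30)   # -0.125
```

Dividing by 2**30 (the fraction-bit count of the result) recovers the real value, which is the whole point of tracking the format through the multiply.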
So for the following numbers, with range -1 to +1:

signal A, B : signed(15 downto 0); -- 1.15 numbers
signal C : signed(31 downto 0); -- 2.30 result
signal OP : signed(15 downto 0); -- 1.15 output (16 bits)

C <= A * B; -- 2 magnitude bits, 30 fraction bits
OP <= C(30 downto 15); -- in range -1 to nearly +1
(This is only possible if -1 × -1 is not a possible input. If it is, you need to use the range (31 downto 16), and your result will be a 2.14 number, so you lose 1 bit of accuracy. A 1.15 number only represents -1 to nearly +1, and cannot represent +1 itself, so an extra magnitude bit is required.)
When multiplying any two signed numbers, the two MSBs will always be identical sign bits unless you multiply the most negative number by itself. So, assuming max-negative × max-negative cannot occur, you can drop the MSB and gain an extra bit of accuracy.
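The slice and the corner case above can be checked in software. This sketch (Python; the helper is an assumption for illustration) emulates OP := C(30 downto 15) and shows it going wrong exactly when -1 × -1 sneaks in:

```python
def slice_30_to_15(c):
    """Emulate C(30 downto 15): pull out a 16-bit field, reinterpret as signed."""
    field = (c >> 15) & 0xFFFF
    return field - 0x10000 if field & 0x8000 else field

half = 0x4000                    # +0.5 in Q1.15
print(slice_30_to_15(half * half) / 2**15)          # 0.25 -- correct

most_neg = -0x8000               # -1.0 in Q1.15
print(slice_30_to_15(most_neg * most_neg) / 2**15)  # -1.0 -- wrong sign!
```

In the second case the product is exactly +1.0, which needs both magnitude bits of the 2.30 result; dropping bit 31 leaves only the set bit 30, which the 1.15 output reinterprets as the sign bit, giving -1.0.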
But now, life is a lot easier.
All you need to do is use the new fixed-point package from the IEEE (the fphdl packages, which define the sfixed type):
http://www.vhdl.org/fphdl/vhdl.html
From there you can do stuff like this:
signal A, B : sfixed(1 downto -14);
signal C : sfixed(3 downto -28);
signal D : sfixed(3 downto -12);
process(clk)
begin
  if rising_edge(clk) then
    C <= A * B;
  end if;
end process;
D <= C(D'range);
Division is always going to be a bummer. But if you remember that A/B is really just A × (1/B), and you can generate 1/B in software (or precompute it), all your division problems go away.
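A quick software sketch of the reciprocal trick (Python; function names and the Q-format choice are mine): precompute 1/B as a fixed-point constant, then division becomes a single multiply.

```python
def recip_q15(b_real):
    """Precompute 1/B as a Q1.15 integer (valid while |1/B| < 1)."""
    return int(round((1.0 / b_real) * 2**15))

def fixed_div(a_q15, b_real):
    """A / B computed as A * (1/B): one multiply, no divider hardware."""
    r = recip_q15(b_real)   # 1/B in Q1.15
    prod = a_q15 * r        # fraction bits add: 15 + 15 = 30
    return prod / 2**30     # back to a real number for checking

print(fixed_div(int(0.75 * 2**15), 3.0))  # ~0.25, within Q1.15 rounding error
```

Note the small rounding error from quantising 1/3 to 15 fraction bits; in hardware you would size the reciprocal's format to the accuracy you need, and 1/B must fit the chosen format (here |1/B| < 1).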