--- Quote Start ---
I've made a little progress. I've written a small function that multiplies two 16-bit unsigned integers, producing a 32-bit unsigned result. The function then drops the 16 LSBs, which gives the scaled answer. This works fine for unsigned inputs whose 'real' value is <= 1. I don't know how general this is for values > 1... probably not. For signed values I suppose just a little extra logic is needed to test the sign bits and derive the appropriate sign for the result.
--- Quote End ---
I am totally bewildered, as I see various ideas for these computations, from the use of functions to floating point, etc.
For most general applications you need to think fixed-point; this is "almost" always adequate:
1) Scale your real values to the nearest integers, represented at the available resolution. So a [-1.00000, +1.00000] range must be scaled and rounded until it fits well in your data width.
2) Then do the computation in integer arithmetic (choosing signed or unsigned as appropriate).
3) Chop off the result as required, possibly with rounding. You may truncate at the very end or at an intermediate stage, depending on resource/speed issues. You may also need some work to truncate any unused MSBs.
Kurtar: you are almost there, but why use unsigned functions when you can just instantiate signed hardware multipliers directly, without functions? A signed multiplier handles the sign bits itself, so no extra sign logic is needed.
Mathematically, the scaling in step 1 above is cancelled out by the output LSB truncation in step 3. For example, if you scale by 2^n and then truncate off n bits, you get in effect the actual value, as if you had used fractional inputs:
R => round(R * 2^n) -- scaled
R => R / 2^n -- truncated (divided)
edit:
as an example, assume your input value, e.g. a coefficient, is 0.705. Then you scale it:
0.705 * 2^8 = 180.48, rounded to 180
Then you use 180 as an input to a multiplier with input A:
result = 180 * A
Then you truncate the result by 8 bits:
result = 180 * A / 256
So in effect you have multiplied by 0.705 (more precisely, by 180/256 = 0.703125, which is the small price of the rounding in step 1).
The use of the float library is fairly recent and I don't know of any engineers using it. I don't advise beginners to go that way before learning fixed-point.