Here is an example of an integer FFT, accelerated with the Nios II C2H compiler, that uses a pre-scale factor of 15 (perhaps this is the example you are referring to):
http://www.altera.com/literature/tt/tt_nios2_c2h_accelerating_tutorial.pdf
http://www.altera.com/literature/tt/c2h_fft_cyclone_ii.zip
http://www.altera.com/literature/tt/c2h_fft_stratix_ii.zip

So in a nutshell this example shows fixed-point math. The idea of a pre-scale operation is to increase the data resolution so that operations that degrade precision (like division, >>, or multiplication of small numbers) have as little effect as possible on the accuracy of the final result.
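For a concrete picture of why the pre-scale helps, here is a minimal C sketch (not code from the Altera tutorial; the scale factor of 1024 and the variable names are just assumptions for illustration). Integer division throws away the fractional part, so scaling the numerator up first keeps that precision in the intermediate result:

    #include <stdio.h>

    #define PRESCALE 1024   /* assumed power-of-two pre-scale factor */

    int main(void)
    {
        int a = 3, b = 7;

        int naive  = a / b;                 /* 0   -- all fractional precision lost      */
        int scaled = (a * PRESCALE) / b;    /* 438 -- 3/7 represented in units of 1/1024 */

        /* Carry 'scaled' through the rest of the math, then post-scale at the end. */
        printf("naive = %d, scaled = %d (~%f)\n",
               naive, scaled, (double)scaled / PRESCALE);
        return 0;
    }

The scaled value can be carried through further additions and multiplications, and the factor of 1024 is only shifted back out once at the end, which is what keeps the rounding error from accumulating.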
As Tom pointed out, there is no difference between using a denominator of 100 or 1024. That said, dividing (or multiplying) by 1024 is much better, since it reduces to a shift operation (hence the >> in your FFT code).

Whenever you divide binary numbers, there is no uncertainty when the denominator is a power of 2 and the numerator carries a power of 2 greater than or equal to the power of 2 in the denominator. So by pre-scaling the numbers by a power of 2, performing the calculation, then post-scaling by the same power of 2, you minimize the error in your calculation (often down to 0% if you reserve enough bits). Remember when I said "greater or equal power than the denominator": the whole idea behind a pre-scale factor is that you remove that pre-scale from the result after the calculation is done (i.e. the numerator carries a power of two that is the same as, or larger than, the one in the denominator).
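To illustrate the shift-based scaling, here is a small sketch assuming a Q10-style format (an implicit factor of 2^10 = 1024 on every value); the function name and the test values are hypothetical, not taken from the FFT code:

    #include <stdio.h>

    #define SHIFT 10   /* pre-scale = 2^10 = 1024 */

    /* Multiply two pre-scaled numbers: the product carries 2^20,
     * so one right shift brings it back to a single factor of 2^10. */
    static int q10_mul(int x, int y)
    {
        return (x * y) >> SHIFT;
    }

    int main(void)
    {
        int half    = 1 << (SHIFT - 1);   /* 0.5  scaled by 1024 = 512 */
        int quarter = 1 << (SHIFT - 2);   /* 0.25 scaled by 1024 = 256 */

        int product = q10_mul(half, quarter);   /* 0.125 * 1024 = 128, exact */

        printf("0.5 * 0.25 = %d/1024 = %f\n", product, (double)product / 1024);
        return 0;
    }

Because the pre-scale and the post-scale are the same power of two, the >> removes the scaling exactly and the result carries no rounding error; replacing 1024 with 100 would force a true division and, in general, a truncated result.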
To learn more I recommend searching the web for information on "fixed point math". You should be able to find many examples that can also explain this concept.