Forum Discussion
Altera_Forum
Honored Contributor
10 years ago
Thanks again okbz. One example area is climate simulation, where an application is typically packaged with dozens of independent functions that each simulate some piece of climate physics, such as precipitation. On GPU, you can implement each function as a GPU kernel, and depending on the application's input data and configuration, a subset of the available physics functions is used at run time. There is also a large number of such physics functions. I'm not a climate scientist, but this is what I see in some of the well-known simulation codes in this field.
Furthermore, many large simulation codes with a long history have such a collection of routines in order to cover a wide range of simulation conditions. Even if most of them are in fact used, I'd imagine they are rarely used at the same time, so at any point in the execution only a few of the routines would be exercised, occupying only a fraction of the logic. I'm new to FPGAs, but I think this was not a big issue previously, since getting a large number of functions to run on FPGAs at all was itself a huge challenge without OpenCL.
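The pattern described above (a registry of independent physics routines, of which the run configuration enables only a subset) can be sketched roughly as follows. This is a minimal illustration, not code from any real climate model; the routine names and config keys are made up for the example.

```python
# Hypothetical sketch: a library of independent physics routines.
# The run configuration selects which subset actually executes each step,
# while the rest sit idle -- analogous to unused kernels occupying
# FPGA logic without ever being exercised.

def precipitation(state):
    # Toy stand-in for a precipitation scheme.
    state["rain"] = state.get("humidity", 0.0) * 0.1
    return state

def radiation(state):
    # Toy stand-in for a radiation scheme.
    state["temp"] = state.get("temp", 0.0) + 1.0
    return state

def convection(state):
    # Toy stand-in for a convection scheme (not enabled below).
    state["temp"] = state.get("temp", 0.0) - 0.5
    return state

PHYSICS = {
    "precipitation": precipitation,
    "radiation": radiation,
    "convection": convection,
}

def step(state, config):
    # Run only the routines the configuration enables.
    for name in config["enabled"]:
        state = PHYSICS[name](state)
    return state

# Subset chosen at run time from the input configuration.
config = {"enabled": ["radiation", "precipitation"]}
state = step({"humidity": 0.8, "temp": 20.0}, config)
print(state)
```

On a GPU each entry in the registry would be its own kernel, and on an FPGA each would consume its own share of logic whether or not it is enabled for a given run, which is what makes the "only a few routines exercised at a time" observation matter.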