In all cases where storage is used to cushion the data flow, the average data rate on either side of the storage must tend to equality over time; otherwise your FIFOs will sooner or later overflow or underflow. This is plain conservation of flow: what goes in must, on average, come out.
The idea is just to cushion the flow, i.e. absorb the burstiness into the FIFO level, just like a water tank that receives an irregular supply of water from the mains but can serve the household at any time, never running empty and never overflowing on their heads.
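As a rough illustration of both points, here is a minimal simulation (my own sketch, with made-up burst parameters) of a bursty producer feeding a steady consumer through a FIFO. When the average rates match, the occupancy stays bounded; when the input rate exceeds the drain rate, the backlog just keeps climbing:

```python
from collections import deque
import random

def simulate(ticks, burst_prob, burst_size, drain_rate, seed=0):
    """Bursty producer feeding a steady consumer through a FIFO.
    Returns the peak FIFO occupancy seen during the run."""
    random.seed(seed)
    fifo = deque()
    peak = 0
    for _ in range(ticks):
        # Producer: delivers a whole burst occasionally, nothing otherwise.
        if random.random() < burst_prob:
            fifo.extend(range(burst_size))
        # Consumer: drains at a steady rate every tick.
        for _ in range(min(drain_rate, len(fifo))):
            fifo.popleft()
        peak = max(peak, len(fifo))
    return peak

# Average input rate = burst_prob * burst_size = 0.1 * 20 = 2 items/tick.
# Matched rates (drain_rate = 2): occupancy stays bounded.
balanced = simulate(10_000, burst_prob=0.1, burst_size=20, drain_rate=2)
# Input faster than drain (drain_rate = 1): the level climbs without limit.
overrun = simulate(10_000, burst_prob=0.1, burst_size=20, drain_rate=1)
print(balanced, overrun)
```

The `balanced` peak is the cushioning the tank actually needs; the `overrun` case shows that no finite depth saves you when the averages do not match.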
The depth of buffering is implementation dependent, and there are no rules of thumb either. You will need an input model, or the actual input, and then watch the FIFO occupancy. Moreover, if you process entire packets or blocks of data, you must consider that these are not broken up inside the FIFOs, which must therefore accommodate each one whole as it arrives. You may also get figures from colleagues who deal with the same degree of burstiness.
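That sizing exercise can be done offline: replay the input model and record the peak occupancy. A sketch, with a hypothetical packet trace (per-tick lists of packet sizes in words) standing in for your real capture:

```python
def required_depth(packet_arrivals, drain_per_tick):
    """Replay an input model (per-tick lists of packet sizes) and report
    the peak FIFO occupancy in words. Packets are written whole, so the
    FIFO must absorb each one in its entirety as it arrives."""
    fifo_level = 0
    peak = 0
    for packets in packet_arrivals:
        for size in packets:
            fifo_level += size                 # whole packet lands at once
            peak = max(peak, fifo_level)
        fifo_level = max(0, fifo_level - drain_per_tick)
    return peak

# Hypothetical capture: mostly idle, one tick delivers three 64-word packets.
trace = [[], [64], [], [64, 64, 64], [], [], [64]]
print(required_depth(trace, drain_per_tick=32))  # → 192
```

The answer here is driven entirely by the worst burst in the trace, which is exactly why a representative input model (or the real input) matters more than any rule of thumb.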
One other issue about data flow here: at start-up, or at other stressful moments, your FIFOs may accidentally get stuffed close to their fill level, and then you are likely to stay there with little cushioning left. You will need to clear them in these cases to make full use of their depth.
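One way to handle that is a flush hook checked after start-up (the class and threshold below are my own illustration, not a standard recipe):

```python
from collections import deque

class CushionFifo:
    """Bounded FIFO with a reset hook: if start-up junk leaves it sitting
    near full, flush it so the full depth is available for cushioning."""
    def __init__(self, depth):
        self.depth = depth
        self.q = deque()

    def push(self, item):
        if len(self.q) < self.depth:
            self.q.append(item)
            return True
        return False                     # overflow: caller must drop or stall

    def pop(self):
        return self.q.popleft() if self.q else None

    def flush_if_stuffed(self, threshold=0.9):
        """Discard everything if occupancy is above the threshold,
        e.g. right after start-up or another stressful moment."""
        if len(self.q) >= self.depth * threshold:
            self.q.clear()
            return True
        return False

f = CushionFifo(depth=16)
for i in range(15):                      # start-up accident: nearly full
    f.push(i)
flushed = f.flush_if_stuffed()           # recover the full cushioning depth
print(flushed, len(f.q))
```

Whether you can afford to throw the contents away depends on the protocol, of course; if the data is not disposable you have to drain it instead, but the occupancy check is the same.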