Forum Discussion
Altera_Forum
Honored Contributor
13 years ago

Shuckc is correct about the bursting properties of the bridge -- if you create a burst transaction on the Avalon side of the bus, the bridge will pack all the data it can (subject to other PCIe rules) into one TLP for maximum throughput.
However, I think the question you're asking is more basic. There are two modes you can use the PCIe IP in: completer-only or requester/completer. If you're going to use a DMA in the FPGA, you need requester/completer mode. The DMA will write data to the "txs" (transmit slave) port on the IP (this port is in addition to one or more BARs, which are Avalon master ports). Then you'll use an address translation table to convert Avalon addresses to 64-bit PCIe addresses.

For the txs port, the translation table can contain up to 512 entries of a given page size -- so you could have 512 x 1MB pages, or 512 x 2MB pages, or fewer, larger entries (4 x 256MB pages), or fewer, smaller ones (2 x 4KB pages), etc. These can be static or dynamically programmed (through the CRA port). Have a look at the IP Compiler for PCI Express User Guide, in the "PCI Express Avalon-MM Bridge" chapter, under "Address Translation". Basically, on the Avalon side the MSbits of your master address (going to the txs slave) are translated, based on the table, into the (greater number of) MSbits of the 64-bit PCIe address space. This is separate from the address translation table for BARs, which operates quite differently and won't get you the throughput you're looking for.

So if your txs port was set up with a translation page size of 1KB (to make the math easy) and you had 2 entries, and you set the txs port to start at Avalon address 0x0, you'd have an address range of 0x0 to 0x7FF for the txs port. Your translation table would then look like:

0x000-0x3FF -> some 64-bit address range that's 1KB in size
0x400-0x7FF -> another (possibly totally different) 64-bit address range that's 1KB in size