Forum Discussion
Altera_Forum
Honored Contributor
13 years ago

--- Quote Start --- The data rate is high, DDR2 SDRAM is necessary to storage large temporary data to ensure real-time --- Quote End ---

Then you have no choice but to use DMA. You cannot efficiently transfer data using the host CPU.

--- Quote Start --- it is designed by my supervisor, not me. --- Quote End ---

And is your supervisor aware of how to design PCI and PCIe systems, and how to write Linux device drivers? Is he aware of the differences in address maps? Has he been reading these forum messages? (Please ask him to.)

--- Quote Start --- My task is to make both DDR2 and on-chip memory work, not to avoid the problem by getting rid of DDR2 SDRAM or on-chip memory. --- Quote End ---

I never indicated that you had to get rid of it. However, it is unlikely that the host CPU can ever see all of the memory on your board via a BAR. As I commented earlier, I have a 64-bit HP EliteBook with 16GB of RAM, and if I use a PCIe BAR of 256MB, that machine will not boot. So this issue is not solved simply by using a 64-bit host CPU; it is only solved by avoiding it altogether, by using DMA between the Qsys address map and the PCIe address map.

--- Quote Start --- Could you tell me where is the link of your example ? " in the Qsys PCIe example I sent a link to, the PCIe bridge can be configured with a 1MB outgoing translation window." --- Quote End ---

It's part of the PCIe core configuration. The core is configured with two 1MB outgoing translation windows, e.g., see p8. These windows can be used to access any location in the PCIe address map. Using the Qsys DMA controller, you can then access two 1MB regions of host memory. This is not as flexible as a Qsys-to-PCIe bridge with a DMA controller embedded inside it; however, since that component does not exist, you need to figure out whether you can live with the existing solution.

Cheers, Dave