Forum Discussion

Altera_Forum (Honored Contributor)
13 years ago

DMA controller memory verification failed

Hello,

I have built a simple system that uses a DMA controller to initiate burst transfers from SDRAM to the Nios II. The system consists of on-chip memory, SDRAM, SRAM, a Nios II core, and a DMA controller on a DE2-115 Cyclone IV board.

Without the DMA controller I am able to run a hello-world program on the Nios II (I tried reading from and writing to the SDRAM too).

But after adding the DMA controller, I get "memory verification failed" at the base address of the SDRAM. I am supplying it with a 100 MHz clock (the system clock) and using these settings:

http://i49.tinypic.com/ayau8k.png

6 Replies

  • Altera_Forum (Honored Contributor)

    I'm a little surprised that you were able to set a maximum burst count higher than the FIFO depth. Also, is there any particular reason why you have the burst count cranked all the way up to 512 beats? I don't know what kind of SDRAM you are using, but I can assure you *no* memory out there needs such a large burst length to be efficient.

  • Altera_Forum (Honored Contributor)

    I have an image stored in SDRAM as 512-word chunks, so I wanted to read it chunk by chunk to maximize performance.

    I will decrease the burst size and post the results.
  • Altera_Forum (Honored Contributor)

    I reduced the burst size to 32 and the FIFO depth to 64, but I am still getting a verification error, this time on the SDRAM.

  • Altera_Forum (Honored Contributor)

    Try a FIFO depth that is at least 4x the burst size. I don't know of any issue with the DMA that requires it, but with all the DMAs I have designed, that seems to be the minimum buffer depth you would want for efficiency when bursting is involved. I also seem to recall some weird issue where the transfer length had to be a multiple of the burst size; if that is still the case, perhaps you are running into it.

    The problem with using large burst sizes is that you starve all the other masters in your system that access the same memory. If you need that sort of functionality, I recommend increasing the arbitration share instead (assuming you are using Qsys; in SOPC Builder the arbitration share is ignored for bursting masters).

    The other problem is that most DMAs wait until enough data is buffered before posting a burst, so increasing the burst size can actually hurt your throughput, since the write master has to wait for more data to be read first. Most memory controllers use burst lengths of 1, 2, or 4 on the slave side, so cranking the burst count up even higher than what the memory controller uses just results in additional hardware to adapt the large bursts into smaller ones, with no real gain in throughput.
  • Altera_Forum (Honored Contributor)

    Thanks a lot,

    I got past this point, but now I never receive the DMA done signal after initiating a transfer, so the program hangs forever.
  • Altera_Forum (Honored Contributor)

    If you are not seeing the interrupt fire, I recommend switching to polling just to make sure the transfer is being set up correctly. Polling is less error prone than interrupt-based interaction, and I typically recommend starting with polling and switching to interrupts once you are confident the hardware and system are behaving correctly.

    Also, for problems like this I often simulate my system with a minimal amount of hardware and software running so that I can watch how the DMA behaves. You'll often find odd mistakes, like passing the address of a pointer instead of the location where the data lives, that can send the hardware off into the weeds. Debugging that sort of thing in simulation is much easier than with a software debugger, since you just watch what the masters do and it is often clear what happened.