Forum Discussion
This project is a port of an existing, mature design onto modern parts (Arria 10 replacing Cyclone III, LPDDR3 replacing DDR2, etc.).
As such, there is no Qsys-style structure; the LPDDR3 controller is used bare, as seen in the instantiation file. The memory simulation model is from Micron, under NDA; the actual part being simulated is the MT52L256M32D1PF. All other models in the FPGA portion of the design are from Intel.
However, the emif_usr_clk signal is created inside the Intel controller, and it is inexplicably (and always) generated with a period of 5.024 ns when given a reference clock of 5.000 ns (with an observed bus rate of exactly 800 MHz). One would expect such asynchronous clocks to be a problem.
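To put numbers on that mismatch, here is a small sketch (using only the periods quoted above) of how far apart the two clocks are and how quickly they slip against each other:

```python
# Sketch: quantify the drift between the 5.000 ns reference clock and the
# 5.024 ns emif_usr_clk observed in simulation (numbers from the post).
ref_period_ns = 5.000   # reference clock period
usr_period_ns = 5.024   # observed emif_usr_clk period

mismatch_ppm = (usr_period_ns - ref_period_ns) / ref_period_ns * 1e6

# The clocks slip by the period difference every usr-clk cycle, so one
# full reference period of slip accumulates after this many cycles:
cycles_to_full_slip = ref_period_ns / (usr_period_ns - ref_period_ns)

print(f"mismatch: {mismatch_ppm:.0f} ppm")
print(f"one full period of slip every ~{cycles_to_full_slip:.0f} usr-clk cycles")
```

That is a 4800 ppm offset, with a full period of slip accumulating roughly every 208 usr-clk cycles, which is far too large to dismiss as jitter if anything downstream assumes the clocks are related.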
The observed data failure: after a long sequence of sequential reads within a single bank, with occasional row steps, the controller eventually drops data_valid and repeats the next-to-last 256 bits (32-bit transfers at quarter rate). It then reasserts data_valid and returns a block whose first 32 bits are "xxxxxxxx". When fed into my DSP process, this breaks everything in the simulation.
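A minimal scoreboard sketch for catching both symptoms automatically in a testbench log; the beat-capture format (one hex string per user-clock beat) is an assumption, not the actual testbench interface:

```python
# Hedged sketch: flag the two failure symptoms described above in a
# stream of captured read beats -- a repeated 256-bit beat, and a beat
# containing undefined ('x') digits.
def check_beats(beats):
    """Return a list of (index, reason) anomalies in a read-beat stream."""
    anomalies = []
    prev = None
    for i, word in enumerate(beats):
        if 'x' in word.lower():
            anomalies.append((i, "x-propagation"))
        if prev is not None and word == prev:
            anomalies.append((i, "repeated beat"))
        prev = word
    return anomalies

# Example: beat 2 repeats beat 1; beat 3 has an undefined first 32 bits.
beats = ["00" * 32, "11" * 32, "11" * 32, "xxxxxxxx" + "22" * 28]
print(check_beats(beats))  # [(2, 'repeated beat'), (3, 'x-propagation')]
```

Note the repeated-beat check assumes back-to-back identical data is anomalous, which holds for incrementing test patterns but not for arbitrary payloads.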
The controller is handling refresh and precharge; issuing a predictive precharge myself did not change the behavior. The effective workaround is to never request more than 8 long-words in a row. The minimum necessary hold-off has not been determined.
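The workaround can be sketched as a simple burst splitter in front of the read-command path; issue_read() is a hypothetical callback standing in for however commands reach the controller, and 8 is the empirical limit from above (the true minimum hold-off is unknown):

```python
# Sketch of the workaround: split long sequential reads into chunks of
# at most 8 long-words before issuing them to the controller.
MAX_BURST = 8  # empirical limit; minimum hold-off not yet determined

def chunked_reads(start_addr, total_words, issue_read):
    """Issue (address, length) read commands, capping each at MAX_BURST."""
    addr = start_addr
    remaining = total_words
    while remaining > 0:
        burst = min(MAX_BURST, remaining)
        issue_read(addr, burst)
        addr += burst
        remaining -= burst

# Example: a 20-word read becomes bursts of 8, 8, and 4.
issued = []
chunked_reads(0x1000, 20, lambda a, n: issued.append((a, n)))
print(issued)  # [(4096, 8), (4104, 8), (4112, 4)]
```

Whether splitting alone is sufficient, or whether idle cycles must also be inserted between the chunks, depends on the undetermined hold-off requirement.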
The FPGA closes timing according to the Intel tools. That isn't surprising, since the design ran fine in a Cyclone III, although the memory clock rate has now been doubled.
I am not using the sample design, but the attached sample design was generated, via the parameter editor, from the same EMIF IP block that I am using.