Hello, thank you for the link.
I've had the chance to look at this before, but I was confused by it.
I guess now is the best chance to ask about it.
Just for reference for the pictures - https://www.intel.com/content/www/us/en/docs/programmable/683091/22-3/typical-read-and-write-transfers.html
First, I wanted to ask what the vertical squiggly lines mean.
Second, in the context above, who is the host and who is the agent?
In this picture it seems my top-level custom logic is the M/Master/Host while the SDRAM controller is the S/Slave/Agent.
However, based on the reading and writing example, I think the host is the controller while the agent is the actual external SDRAM.
Next are some assumptions. Please correct any assumptions I have wrong:
-Steps 1-4 are READING.
-Steps 5-6 are WRITING.
-The order of reading or writing first does not matter.
-The memory must have held something initially or been written to before, which is why it is able to READ first.
For READING:
Clock cycle 1: the host sends data/address/read..........the agent asserts waitrequest.
Clock cycle 2: the host sees waitrequest = 1, so it keeps the data/address/read latched.
Clock cycle 3: the host still latches data/address/read.......the agent deasserts waitrequest.
Clock cycle 4: the host sees waitrequest = 0, and the data/address/read is finally sampled.
Overall, waitrequest = 1 for only 2 clock cycles.
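To check my mental model of the reading sequence above, here is a toy Python sketch (not real HDL, and the function name and cycle counts are my own assumptions, not from the Intel doc) of a host holding its outputs until waitrequest goes low:

```python
# Toy model of a waitrequest handshake as I understand it (my assumption,
# not real HDL): the host holds address/read stable every cycle that the
# agent asserts waitrequest, and the transfer completes on the first
# cycle where waitrequest is low.

def run_read(agent_stall_cycles):
    """Simulate one read transfer; the agent stalls for agent_stall_cycles cycles.

    Returns the cycle number on which the transfer completes."""
    cycle = 0
    waitrequest = True
    while waitrequest:
        cycle += 1
        # Host keeps driving the same address/read signals this cycle.
        waitrequest = cycle <= agent_stall_cycles  # agent deasserts after its stall

    return cycle

# With waitrequest held for 2 cycles (as I read the figure),
# the transfer completes on cycle 3.
print(run_read(2))  # → 3
```

If this model is right, the "2 clock cycles" above is just how long the agent happened to stall in this particular figure.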
On the other hand, waitrequest for writing stays asserted much longer, even though the steps seem to be the same as for reading.
Step 6 says waitrequest deasserts after a rising edge of the clk, but which rising edge it chooses seems random.
If we look at the reading part, waitrequest becomes 0 immediately after the host spends one clock cycle registering it as 1.
But in the writing part it takes much longer.
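My current guess (please correct me if this is wrong) is that the agent, not the host, decides how many cycles waitrequest stays high, so a write can simply stall longer than a read even though the host-side handshake is identical. A small sketch of that assumption, with made-up stall numbers:

```python
# My-assumption sketch: the stall length is the agent's choice, so the
# "random-looking" rising edge in step 6 is just whenever the agent is
# ready. The host side does the same thing for reads and writes: hold
# the command until the first cycle waitrequest is low.

def run_write(agent_stall_cycles):
    """Simulate one write transfer; returns the completing cycle number."""
    cycle = 1
    while cycle <= agent_stall_cycles:  # waitrequest asserted this cycle
        # Host keeps driving address/writedata/write unchanged.
        cycle += 1
    # First cycle with waitrequest low: the agent accepts the write here.
    return cycle

# A read with a 2-cycle stall vs a write with a 5-cycle stall
# (5 is a number I made up, not from the figure):
print(run_write(2))  # → 3
print(run_write(5))  # → 6
```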
I apologize if I am making too many assumptions and overcomplicating something simple.
Please let me know any concerns or comments.
I appreciate everyone as usual.
-htolentino