Forum Discussion
Altera_Forum
Honored Contributor
15 years ago

The access time of a tightly coupled memory is equivalent to the cache access time when a cache hit occurs. Many call tightly coupled memories "scratch pads" since they have low-latency access times like a cache and are recommended when you want to work on data 'locally'. Since not all cache accesses hit, a tightly coupled memory will achieve higher performance, but how much higher depends on the algorithm and its memory access patterns.
Caches help when your accesses have 'temporal' and 'spatial' locality. Temporal locality means you access the same memory location frequently, so having the data cached saves the CPU cycles spent fetching from and storing to main memory multiple times.

Spatial locality only comes into play when you set the cache line size to be greater than 4 bytes/line (the native word size of Nios II). The Nios II instruction cache is fixed at 32 bytes/line, but the data cache can be configured for 4/16/32 bytes per line. With a 16 or 32 bytes/line data cache, a cache miss loads not only the requested word into the cache line but also the other words that map to the same line. So if you were accessing a 32-bit array sequentially and a particular access resulted in a cache miss, then not only will that array element get loaded, but the elements before/after it will get loaded as well. Which elements get loaded depends on how they line up in memory in terms of the address. In short, spatial locality means that if you frequently access data in the same general location in memory, caches will help minimize main memory accesses, assuming the cache line size is greater than the native word size of the processor. Here are more details about direct mapped caches; when I refer to "lines" I'm talking about the "index" portion of the address: http://www.laynetworks.com/direct%20mapped%20cache.htm