Topic 1.2: Cache memory
Cache Memory
How it works
The CPU looks for data first in the cache; if the data is not found there, it looks in main memory.
The larger the cache, the larger the number of gates involved in addressing the cache.
As the microprocessor processes data, it looks first in the cache memory; if it finds the data there (from a previous read), it does not have to do the more time-consuming read from the larger main memory.
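A minimal sketch of this lookup order, assuming a dictionary-backed cache and main memory (the names read_word, cache and main_memory are illustrative only, not part of the notes):

    # Check the fast cache first; fall back to the slower main memory on a
    # miss, and keep a copy of the value for future reads.
    main_memory = {addr: addr * 2 for addr in range(1024)}  # stand-in for RAM
    cache = {}                                               # small, fast store

    def read_word(addr):
        if addr in cache:            # cache hit: fast path
            return cache[addr]
        value = main_memory[addr]    # cache miss: slower read from main memory
        cache[addr] = value          # fill the cache for later reads
        return value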
Introduction
Reading data from cache memory is faster than reading from main memory, but cache costs much more per bit and is therefore best suited for small amounts of data.
Mapping techniques
Associative mapping
allows a word from main memory to be stored in any line of the cache, so the cache must store the full address (tag) alongside the data of each word
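A rough sketch of the idea, assuming a tiny fully associative cache of four lines (the names assoc_store and assoc_lookup are illustrative):

    lines = [None] * 4                      # each entry: (tag, data) or None

    def assoc_store(tag, data):
        for i, entry in enumerate(lines):
            if entry is None:               # any free line will do
                lines[i] = (tag, data)
                return

    def assoc_lookup(tag):
        for entry in lines:                 # tag must be compared with every line
            if entry is not None and entry[0] == tag:
                return entry[1]             # hit
        return None                         # miss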
Direct mapping
content of a location in main memory can be stored at one and only one specific location in the cache
Disadvantage
if a program happens to reference words repeatedly from two different blocks that map into the same line, the blocks will be continuously swapped in and out of the cache, and the hit ratio will be low
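A small sketch of the address-to-line calculation, assuming an 8-line cache (NUM_LINES and direct_map are illustrative names):

    NUM_LINES = 8                            # assumed cache size, for illustration

    def direct_map(block_number):
        line = block_number % NUM_LINES      # the one and only line this block can use
        tag = block_number // NUM_LINES      # identifies which block occupies the line
        return line, tag

    # Blocks 3 and 11 both map to line 3 (3 % 8 == 11 % 8), so alternating
    # references to them keep evicting each other, giving the low hit ratio
    # described above.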
Set Associative mapping
A compromise between direct and associative mapping: each main memory block maps to exactly one set, but can be loaded into any line within that set, which overcomes the conflict problem of direct mapping.
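A sketch of a set-associative lookup, assuming 4 sets of 2 lines each (NUM_SETS, WAYS and set_assoc_lookup are illustrative names):

    NUM_SETS = 4
    WAYS = 2
    sets = [[None] * WAYS for _ in range(NUM_SETS)]   # each entry: (tag, data) or None

    def set_assoc_lookup(block_number):
        index = block_number % NUM_SETS               # block maps to exactly one set...
        tag = block_number // NUM_SETS
        for entry in sets[index]:                     # ...but may sit in any line of that set
            if entry is not None and entry[0] == tag:
                return entry[1]                       # hit
        return None                                   # miss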
Memory Hierarchy
As one goes down the hierarchy, the following occur:
decreasing cost per bit
increasing capacity
increasing access time
decreasing frequency of access of the memory by the processor
The memory hierarchy distinguishes each level by response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by their controlling technology.