CHAPTER 1.2 [THE COMPUTER SYSTEM]
CACHE MEMORY
Introduction
Memory refers to the physical devices used to store programs or data on a temporary or permanent basis for use in a computer.
Reading data from memory is faster than reading it from other types of storage (such as a hard disk), but memory costs much more per bit and is therefore best suited for small amounts of data.
Memory is typically used for the temporary storage of small amounts of data.
No single technology is optimal in satisfying the memory requirements of a computer system.
As a consequence, the typical computer system is equipped with a hierarchy of memory subsystems, some internal to the system and some external.
External Memory
Accessible by the processor via an I/O Module
Internal Memory
Directly accessible by the processor
Memory Hierarchy
The term memory hierarchy is used in computer architecture when discussing performance issues in system design.
A 'memory hierarchy' in computer storage distinguishes each level in the 'hierarchy' by response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by the controlling technology.
Key characteristics of memory
Capacity
Access time
Cost
As one goes down the hierarchy, the following occur
- Decreasing cost per bit
- Increasing capacity
- Increasing access time
- Decreasing frequency of access by the processor
Cache Memory
Cache is internal memory.
Cache contains a copy of portions of main memory
The larger the cache, the larger the number of gates involved in addressing the cache.
Cache memory is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM.
As the microprocessor processes data, it looks first in the cache memory; if it finds the data there (from a previous reading of data), it does not have to perform the more time-consuming read from the larger main memory.
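The cache-first read path described above can be sketched as follows. This is a minimal illustration: the cache here is just a small table of address/value pairs with a naive replacement rule, and all names and sizes are illustrative assumptions, not any real processor's design.

```python
# Stand-in for main memory contents (illustrative).
MAIN_MEMORY = {addr: addr * 2 for addr in range(1024)}

cache = {}        # copies of portions of main memory
memory_reads = 0  # counts slow main-memory accesses

def read_word(addr):
    """Try the cache first; fall back to main memory on a miss."""
    global memory_reads
    if addr in cache:          # hit: fast path
        return cache[addr]
    memory_reads += 1          # miss: slow path
    value = MAIN_MEMORY[addr]
    cache[addr] = value        # keep a copy for next time
    return value

read_word(100)       # miss: goes to main memory
read_word(100)       # hit: served from the cache
print(memory_reads)  # 1
```

The second access to the same address is served entirely from the cache, which is the point of keeping a copy of recently read data.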
Mapping Functions
Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines.
The choice of mapping function determines how the cache is organized.
Associative Mapping
Overcomes the disadvantage of direct mapping by permitting each main memory block to be loaded into any line of the cache.
There is flexibility as to which block to replace when a new block is read into the cache.
In Fully Associative Mapped Cache, each memory location can be in any cache block ( a block is allowed to go any where in cache ).
There is no restriction on mapping from memory to cache.
Associative search of tags is expensive, so this type of mapping is feasible only for caches with a small number of lines.
Disadvantage
The complex circuitry required
Examine the tag of all cache lines in parallel
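A fully associative lookup can be sketched as below. Hardware examines all tags in parallel with one comparator per line; the loop here is the software analogue of that search. The cache size is an illustrative assumption.

```python
NUM_LINES = 4
lines = [None] * NUM_LINES  # each entry holds the tag of the block it stores

def load_block(block_addr, line_no):
    """A block is allowed to go anywhere in the cache."""
    lines[line_no] = block_addr

def lookup(block_addr):
    """Compare the requested block against every line's tag."""
    for tag in lines:
        if tag == block_addr:
            return True   # hit
    return False          # miss

load_block(37, 2)   # block 37 placed in an arbitrary line
print(lookup(37))   # True
print(lookup(38))   # False
```

The loop over every line is exactly why the circuitry is complex: a hardware cache needs one tag comparator per line to do this search in a single step.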
Set Associative Mapping
Set Associative Mapped Cache combines the ideas of Direct Mapped and Associative cache.
Set-associative mapping allows two or more main memory blocks that share the same index address to be present in the cache at the same time.
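The combination can be sketched as below: the block address selects a set (as in direct mapping), then the tag is compared against every line within that set (as in associative mapping). The 2-way, 4-set geometry is an illustrative assumption.

```python
NUM_SETS = 4
WAYS = 2  # lines per set
sets = [[None] * WAYS for _ in range(NUM_SETS)]

def place(block_addr):
    """Store a block's tag in any free way of its set."""
    s = block_addr % NUM_SETS        # set index (direct-mapped part)
    tag = block_addr // NUM_SETS
    for way in range(WAYS):
        if sets[s][way] is None:
            sets[s][way] = tag
            return
    sets[s][0] = tag                 # naive replacement when the set is full

def lookup(block_addr):
    """Search only the tags within the selected set (associative part)."""
    s = block_addr % NUM_SETS
    tag = block_addr // NUM_SETS
    return tag in sets[s]

place(5)   # maps to set 1, tag 1
place(13)  # maps to set 1, tag 3: same index, but both can stay cached
print(lookup(5), lookup(13))  # True True
```

Blocks 5 and 13 would evict each other in a direct-mapped cache of 4 lines; here both reside in set 1 at once, which is the benefit the notes describe.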
Direct Mapping
The content of a location in main memory can be stored at one, and only one, specific location in the cache (it is mapped to exactly one location in the cache).
If the CPU requests the contents of a specific memory location, there is only one line in the cache that could possibly contain that information.
Maps each block of main memory into only one possible cache line.
Main disadvantage
There is a fixed cache location for any given block
Thus, if a program happens to reference words repeatedly from two different blocks that map into the same line, the blocks will be continually swapped in and out of the cache, and the hit ratio will be low.
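The thrashing behaviour described above can be sketched as below, using the usual `line = block mod NUM_LINES` placement; the cache size and the two conflicting block numbers are illustrative assumptions.

```python
NUM_LINES = 4
lines = [None] * NUM_LINES  # the tag currently held by each line
misses = 0

def access(block_addr):
    """Direct-mapped access: each block has exactly one possible line."""
    global misses
    line = block_addr % NUM_LINES   # the one fixed location for this block
    tag = block_addr // NUM_LINES
    if lines[line] != tag:
        misses += 1                 # miss: fetch the block and replace
        lines[line] = tag

# Blocks 2 and 6 both map to line 2, so alternating references
# evict each other every time.
for _ in range(3):
    access(2)
    access(6)
print(misses)  # 6: every access misses, so the hit ratio is 0
```

The same six accesses in an associative or set-associative cache would miss only twice (once per block), which is why those mappings exist despite their extra circuitry.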