Cache Memory
Introduction
• Memory consists of physical devices used to store programs or data on a temporary or permanent basis for use in a computer.
• Memory is typically used for temporary storage of small amounts of data.
• Reading data from memory is faster than reading it from other types of storage (such as a hard disk), but it costs much more and is therefore best suited to small amounts of data.
• No single technology is optimal for satisfying all the memory requirements of a computer system.
• As a consequence, the typical computer system is equipped with a hierarchy of memory subsystems, some internal to the system and some external.
Memory Hierarchy
• The term memory hierarchy is used in computer architecture when discussing performance issues in computer architectural design.
• A 'memory hierarchy' in computer storage distinguishes each level in the 'hierarchy' by response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by the controlling technology.
As one goes down the hierarchy, the following occur:
• Decreasing cost per bit
• Increasing capacity
• Increasing access time
• Decreasing frequency of access to the memory by the processor
Cache Memory
How it works
• The CPU looks for data first in the cache; if the data is not found there, it looks in main memory.
• The larger the cache, the larger the number of gates involved in addressing the cache.
• Cache memory is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM.
• As the microprocessor processes data, it looks first in the cache memory; if it finds the data there (from a previous reading of data), it does not have to do the more time-consuming read from the larger main memory.
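The lookup order described above can be sketched as a minimal simulation. This is an illustrative sketch, not an implementation from the source: a dictionary stands in for the cache and a list stands in for main memory, and all names and sizes are assumptions.

```python
main_memory = list(range(100, 200))   # hypothetical data: word at address i is 100 + i
cache = {}                            # address -> word, initially empty

def read(address):
    """Look in the cache first; on a miss, fetch from main memory and cache it."""
    if address in cache:
        return cache[address], "hit"      # fast path: found from a previous read
    word = main_memory[address]           # slower path: main-memory access
    cache[address] = word                 # keep a copy for future reads
    return word, "miss"
```

A first read of an address misses and fills the cache; a repeated read of the same address hits.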
Mapping Function
• Because there are fewer cache lines than main
memory blocks, an algorithm is needed for mapping main memory blocks into cache
lines.
• The choice of mapping function determines how the cache is organized.
Mapping techniques
Direct mapping
• The content of a location in main memory can be stored at one and only one, specific location in the cache (it is mapped to exactly
one location in the cache).
• If the CPU requests the contents of a specific memory location, there is only one block in the cache that could possibly contain that information.
• Maps each block of main memory into only
one possible cache line.
• Simple technique.
• Inexpensive to implement – simple algorithm
• Main disadvantage:
There is a fixed cache location for any given block.
Thus, if a program happens to reference words repeatedly from two different blocks that map into the same line, the blocks will be continually swapped in the cache and the hit ratio will be low.
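The fixed placement rule above reduces to a modulo operation. A minimal sketch, assuming a hypothetical 8-line cache (the size and function names are not from the source):

```python
NUM_LINES = 8          # assumed number of cache lines for the example

def direct_map(block_number):
    """Direct mapping: each main-memory block maps to exactly one cache line."""
    line = block_number % NUM_LINES    # the one fixed location for this block
    tag = block_number // NUM_LINES    # identifies which block currently occupies the line
    return line, tag
```

Blocks 3 and 11 both map to line 3 (with different tags), so alternating references to them would swap the line continually, illustrating the low-hit-ratio disadvantage.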
Associative mapping
• Overcomes the disadvantage of direct
mapping by permitting each main memory block to be loaded into any line of the cache.
• There is flexibility as to which block to replace
when a new block is read into the cache.
• In Fully Associative Mapped Cache, each
memory location can be in any cache block (a block is allowed to go anywhere in cache).
• There is no restriction on mapping from
memory to cache.
• Associative search of tags is expensive, so this type of mapping is feasible only for very small caches.
• Disadvantage:
The complex circuitry required to examine the tags of all cache lines in parallel.
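The parallel tag comparison can be sketched in software with a sequential loop (real hardware compares all tags at once, which is where the circuit cost comes from). This is an illustrative sketch, not from the source; names and sizes are assumptions.

```python
NUM_LINES = 4
lines = []   # each entry is a (tag, data) pair; the tag is the full block number

def lookup(block_number):
    """Fully associative: compare the tag against every line."""
    for tag, data in lines:
        if tag == block_number:
            return data                 # hit
    return None                         # miss

def fill(block_number, data):
    """Any block may be loaded into any free line (no replacement policy shown)."""
    if len(lines) < NUM_LINES:
        lines.append((block_number, data))
```

Because the tag carries the full block number, there is no restriction on where a block may be placed.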
Set Associative mapping
• Set Associative Mapped Cache combines the
ideas of Direct Mapped and Associative cache.
• Set-associative mapping allows each word that is present in the cache to have two or more words of main memory stored under the same index address.
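The combination of the two ideas can be sketched as follows: the index selects one set (as in direct mapping), and the tags within that set are searched associatively. This is an illustrative 2-way sketch, not from the source; set count, way count, and names are assumptions.

```python
NUM_SETS = 4
WAYS = 2
sets = [[] for _ in range(NUM_SETS)]   # each set holds up to WAYS (tag, data) pairs

def lookup(block_number):
    """Set-associative: index picks the set, then tags in the set are compared."""
    index = block_number % NUM_SETS    # direct-mapped part
    tag = block_number // NUM_SETS
    for t, data in sets[index]:        # associative part, within one set only
        if t == tag:
            return data                # hit
    return None                        # miss

def fill(block_number, data):
    """Place a block in any free line of its one designated set."""
    index = block_number % NUM_SETS
    tag = block_number // NUM_SETS
    if len(sets[index]) < WAYS:
        sets[index].append((tag, data))
```

Blocks 1 and 5 share index 1, yet both can be resident at once in a 2-way set, which is exactly the conflict that pure direct mapping cannot accommodate.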